As with graphs and other objects in math, we can represent DFAs in a different manner than drawing out the automaton. More specifically, we can represent them using tables. A table essentially works like a "map" in a programming language.
For each character, look up the current state and that character in the table, and the associated transition tells you which state to go to next. Example:
This DFA:
Can be represented using this table:
But what about the accepting state and starting state?
Why use this representation? Well, this is how we can define a DFA in a programming language. Additionally, representing tough problems using tabular DFAs instead of drawn-out automata can be useful.
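As a sketch of that idea (the machine below is my own toy example, not one from the notes): a DFA whose transition table is a Python dict, accepting binary strings with an even number of 1s.

```python
# Sketch: a DFA as a transition table (a "map"), per the notes.
# Hypothetical machine: accepts binary strings with an even number of 1s.
DFA_TABLE = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}
START = "even"
ACCEPTING = {"even"}

def dfa_accepts(s: str) -> bool:
    """Run the DFA over s, one table lookup per character."""
    state = START
    for ch in s:
        state = DFA_TABLE[(state, ch)]
    return state in ACCEPTING
```

Note that the table alone only encodes the transitions; the start state and accepting set have to be recorded separately, which is exactly the question raised below.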
So what problems can we solve with a DFA? To work towards such a question, we define regular languages as:
Regular Languages:
A language L is called a regular language if there exists a DFA D such that ℒ(D) = L.
Moreover, if L is a language and ℒ(D) = L, we say that D recognizes the language L.
Distilling this further, L is a regular language if there is some DFA that accepts every string in L and rejects everything else.
So, in an exact sense, since we recall that languages represent problems: if we can build some DFA for a language, that language is regular. Put differently, we are saying that these machines can solve exactly the problems that are regular languages.
Let's probe the properties of regular languages further.
Complement of a Language:
Given a language L, the complement of that language (denoted L̄) is the language of all strings in Σ* that aren't in L.
Formally: L̄ = Σ* \ L = { w ∈ Σ* : w ∉ L }.
A good visualization of this:
Interestingly, you can design a DFA for L̄ (recall that if we have a DFA for L in the first place, then we know L is a regular language). To do so, flip each state to its opposite: if it is an accepting state, make it a normal state, and vice versa. Now here is the cool part: since we know L is regular (we have a DFA for it), we also know that L̄ is regular (we just built a DFA for it). This is an example of a closure property, and so we say that regular languages are closed under complementation.
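The flipping trick is short enough to sketch in code. The representation here (a transition table, a start state, and a set of accepting states) is my own assumption, matching the tabular view from earlier; the toy machine is hypothetical.

```python
# Sketch: complementing a DFA by flipping accepting and non-accepting states.
def complement_dfa(table, start, accepting, all_states):
    # Same transitions, same start state; only the accepting set flips.
    return table, start, all_states - accepting

def run(table, start, accepting, s):
    state = start
    for ch in s:
        state = table[(state, ch)]
    return state in accepting

# Hypothetical DFA over {"0","1"} accepting strings that end in "1".
table = {("a", "0"): "a", ("a", "1"): "b",
         ("b", "0"): "a", ("b", "1"): "b"}
t2, s2, acc2 = complement_dfa(table, "a", {"b"}, {"a", "b"})
```

Any string accepted by the original machine is rejected by the complement, and vice versa.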
Our motivation for regular languages was to figure out what problems we can solve with a computer. Since we defined a regular language as "can you design a DFA for the given language?", this means that all regular-language problems are solvable by DFAs, whereas nonregular languages are not solvable by DFAs. We will eventually get to nonregular languages, but I thought I'd drop in this note to keep in mind.
Now let us switch gears a little bit.
Recall from DFAs that we discussed two problems: what happens when a state has two transitions for the same character, or no transition for a character at all? For DFAs, we just defined away these concerns. But now we get to NFAs, where we make both legal.
Let us talk about the notion of determinism more deeply.
- A model of computation is considered deterministic if at every point in the computation, there is exactly one choice the machine can make.
- A model of computation is considered nondeterministic if at some point the machine has zero or multiple choices it can make.
Now, nondeterminism will lead us to allowing multiple paths (which we'll see in a second), but acceptance works the same way: we consider a string accepted if any of these choices leads to an accepting state. So, basically, try all possible paths and see if any of them yields an acceptance. This is technically referred to as "existential nondeterminism".
Funnily enough, if we encounter a scenario we can't trace any further, due to there being no transition for the current character in the NFA, that path dies and counts as a rejection.
Another interesting fact about NFAs as opposed to DFAs, is that flipping the accept and reject states of an NFA does not always give you an NFA for the complement of the original language.
Also note that every DFA is also an NFA, but not always vice versa.
We now introduce another trait of NFAs: they can have ε-transitions.
I think Keith explains this part well in lecture at 55:00, but briefly speaking:
An NFA may follow any number of ε-transitions at any time without consuming any input.
This means that when tracing an input string, we can choose to hop over to a different state via an ε-transition, if one exists, without moving on to the next character in the string.
Note that you do not have to take the ε-transition; it is simply another option in the nondeterminism repertoire.
Also note that the effective difference between a character transition and an ε-transition is that the character transition eats up one character of the string while tracing through the NFA, whereas the ε-transition consumes nothing.
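To make the "any number of ε-transitions" rule concrete, here is a sketch of the standard ε-closure computation: all states reachable from a given set of states using ε-moves alone. Representing the ε-moves as a dict is my own assumption, not something from the notes.

```python
# Sketch: compute the ε-closure of a set of NFA states.
# `eps` maps each state to the set of states one ε-hop away.
def epsilon_closure(states, eps):
    closure = set(states)
    worklist = list(states)
    while worklist:
        q = worklist.pop()
        for r in eps.get(q, set()):
            if r not in closure:
                closure.add(r)       # reachable via ε-moves only
                worklist.append(r)
    return closure
```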
Why do we have these? Well, one thing we'll use them for is "combining" two machines together. We will be doing this later on.
OK, so we now have a good set of rules about NFAs under our belts. We should now think about those rules intuitively. With that, we introduce two frameworks for thinking about NFAs.
This framework rests on the fact that "we can view nondeterministic machines as having Magic Superpowers that enable them to guess choices that lead to an accepting state."
This is pretty philosophically deep. Essentially, if there exists some accepting path, then the NFA will guess it. Conversely, if the NFA rejects, then no sequence of guesses could have led to an accepting state.
This is quite a convenient trick for tracing through NFAs. It states that, when tracing through an NFA, if you encounter a transition that involves a choice, follow all choices at once (so you are really doing more than one trace here).
Note: see 1:05 of lecture for a much better explanation walking through an example.
Formally intuiting this framework:
Even more rigorously, we can enumerate the steps of tracing through an NFA using parallelism: keep track of the set of all states the NFA could currently be in; on each character, advance every state in that set simultaneously; at the end, accept if any state in the set is an accepting state.
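The parallel-tracing idea can be sketched as follows. The `delta` dict shape and the example machine are my own assumptions, and ε-transitions are omitted for brevity.

```python
# Sketch of the "try everything in parallel" NFA trace.
# `delta` maps (state, char) -> set of possible next states.
def nfa_accepts(s, delta, start, accepting):
    current = {start}                      # all states we could be in
    for ch in s:
        current = {r for q in current
                     for r in delta.get((q, ch), set())}
        if not current:                    # every branch died: reject
            return False
    return bool(current & accepting)       # did any branch accept?

# Hypothetical NFA for binary strings that end in "01".
delta = {("q0", "0"): {"q0", "q1"},
         ("q0", "1"): {"q0"},
         ("q1", "1"): {"q2"}}
```

Notice how the set-of-states bookkeeping captures both nondeterministic choice (the set grows) and dead branches (states silently drop out).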
A tip: embrace the nondeterminism.
Another model to follow is to simply guess-and-check.
More specifically: nondeterministically guess some piece of information about the input, then deterministically check that the guess is correct.
Sometimes it is easier to design an NFA for a specific language than a DFA!
See worked through examples beginning at 1:10 in lecture.