Hashtable/dictionary/map lookup with regular expressions

I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):

>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35

(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)

I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).

Is there a way to accomplish this, without resorting to O(n) efficiency?

Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.

(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)

Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.

I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.


What you want to do is very similar to what is supported by xrdb. They only support a fairly minimal notion of globbing however.

Internally you can implement a larger family of regular languages than theirs by storing your regular expressions as a character trie.

  • Single characters just become trie nodes.
  • .'s become wildcard insertions covering all children of the current trie node.
  • *'s become back links in the trie to the node at the start of the previous item.
  • [a-z] ranges insert the same subsequent child nodes repeatedly under each of the characters in the range.
  • (foo)|(bar) alternations become multiple insertions.

With care, while inserts/updates may be somewhat expensive, the search can be linear in the size of the string. With some placeholder handling, the common combinatorial-explosion cases can be kept under control.

This doesn't handle regexes that occur at arbitrary points in the string, but that can be modeled by wrapping your regex with .* on either side.
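As a minimal sketch of the trie idea in Python, here is a version that handles only literal characters and the '.' wildcard (the '*', range, and alternation cases above are omitted, so this is an illustration, not a full implementation):

```python
class RegexTrie:
    """Minimal trie over patterns built from literal characters and
    the '.' wildcard only; '*', ranges and alternation are omitted."""

    def __init__(self):
        self.children = {}    # literal char -> child node
        self.wildcard = None  # child node reachable by any char ('.')
        self.terminal = False
        self.value = None

    def insert(self, pattern, value):
        node = self
        for ch in pattern:
            if ch == '.':
                if node.wildcard is None:
                    node.wildcard = RegexTrie()
                node = node.wildcard
            else:
                node = node.children.setdefault(ch, RegexTrie())
        node.terminal = True
        node.value = value

    def lookup(self, s):
        # Walk all live nodes in parallel; time is O(len(s) * frontier size).
        nodes = [self]
        for ch in s:
            frontier = []
            for node in nodes:
                if ch in node.children:
                    frontier.append(node.children[ch])
                if node.wildcard is not None:
                    frontier.append(node.wildcard)
            nodes = frontier
        return [n.value for n in nodes if n.terminal]

trie = RegexTrie()
trie.insert('foo.', 12)
print(trie.lookup('food'))  # [12]
```

The search cost depends on the input length and the frontier of simultaneously live trie nodes, not on the total number of keys, which is the point of the structure.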

Perl has a couple of Text::Trie-like modules you can raid for ideas. (Heck, I think I even wrote one of them way back when.)

This is not possible to do with a regular hash table in any language. You'll either have to iterate through the entire keyset, attempting to match the key to your regex, or use a different data structure.

You should choose a data structure that is appropriate to the problem you're trying to solve. If you have to match against any arbitrary regular expression, I don't know of a good solution. If the class of regular expressions you'll be using is more restrictive, you might be able to use a data structure such as a trie or suffix tree.

In the general case, what you need is a lexer generator. It takes a bunch of regular expressions and compiles them into a recognizer. "lex" will work if you are using C. I have never used a lexer generator in Python, but there seem to be a few to choose from. Google shows PLY, PyGgy and PyLexer.

If the regular expressions all resemble each other in some way, then you may be able to take some shortcuts. We would need to know more about the ultimate problem that you are trying to solve in order to come up with any suggestions. Can you share some sample regular expressions and some sample data?

Also, how many regular expressions are you dealing with here? Are you sure that the naive approach won't work? As Rob Pike once said, "Fancy algorithms are slow when n is small, and n is usually small." Unless you have thousands of regular expressions, and thousands of things to match against them, and this is an interactive application where a user is waiting for you, you may be best off just doing it the easy way and looping through the regular expressions.
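For reference, the naive loop being recommended here is only a few lines; as a baseline sketch (patterns and values taken from the question's example):

```python
import re

# Patterns and values from the question's example.
patterns = [(re.compile(r'foo.'), 12), (re.compile(r'^FileN.*$'), 35)]

def lookup(s):
    # O(n) in the number of patterns, but each check is cheap.
    return [value for regex, value in patterns if regex.match(s)]
```

Profile this against real data before reaching for anything fancier.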

This is definitely possible, as long as you're using 'real' regular expressions. A textbook regular expression describes a language that can be recognized by a deterministic finite state machine, which primarily means you can't have back-references in there.

There's a property of regular languages that "the union of two regular languages is regular", meaning that you can recognize an arbitrary number of regular expressions at once with a single state machine. The state machine runs in O(1) time with respect to the number of expressions (it runs in O(n) time with respect to the length of the input string, but hash tables do too).

Once the state machine completes you'll know which expressions matched, and from there it's easy to look up values in O(1) time.
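Python's re module is not a pure DFA engine, so the O(1)-in-number-of-expressions guarantee doesn't strictly hold there, but the union idea carries over: join the patterns with '|' as named groups and ask which group matched. A sketch, assuming the individual patterns contain no capturing groups of their own:

```python
import re

# Hypothetical example keys; group names must be valid identifiers.
values = {'foo_dot': 12, 'file_n': 35}
patterns = {'foo_dot': r'foo.', 'file_n': r'FileN.*'}
combined = re.compile('|'.join('(?P<%s>%s)' % (name, pat)
                               for name, pat in patterns.items()))

def lookup(s):
    # One scan of the input decides which alternative matched.
    m = combined.match(s)
    if m is None:
        raise KeyError(s)
    return values[m.lastgroup]
```

m.lastgroup names the alternative that matched, so the value lookup afterwards is an ordinary O(1) dict access.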

What about the following:

import re

class redict(dict):
    def __init__(self, d):
        dict.__init__(self, d)

    def __getitem__(self, regex):
        r = re.compile(regex)
        mkeys = filter(r.match, self.keys())
        for i in mkeys:
            yield dict.__getitem__(self, i)

It's basically a subclass of the dict type in Python. With this you can supply a regular expression as a key, and the values of all keys that match this regex are returned in an iterable fashion using yield.

With this you can do the following:

>>> keys = ["a", "b", "c", "ab", "ce", "de"]
>>> vals = range(0, len(keys))
>>> red = redict(zip(keys, vals))
>>> for i in red[r"^.e$"]:
...     print(i)

Here's an efficient way to do it by combining the keys into a single compiled regexp, and so not requiring any looping over key patterns. It abuses the lastindex to find out which key matched. (It's a shame regexp libraries don't let you tag the terminal state of the DFA that a regexp is compiled to, or this would be less of a hack.)

The expression is compiled once, and will produce a fast matcher that doesn't have to search sequentially. Common prefixes are compiled together in the DFA, so each character in the key is matched once, not many times, unlike some of the other suggested solutions. You're effectively compiling a mini lexer for your keyspace.

This map isn't extensible (can't define new keys) without recompiling the regexp, but it can be handy for some situations.

# Regular expression map
# Abuses match.lastindex to figure out which key was matched
# (i.e. to emulate extracting the terminal state of the DFA of the regexp engine)
# Mostly for amusement.
# Richard Brooksby, Ravenbrook Limited, 2013-06-01

import re

class ReMap(object):

    def __init__(self, items):
        if not items:
            items = [(r'epsilon^', None)]  # Match nothing
        key_patterns = []
        self.lookup = {}
        index = 1
        for key, value in items:
            # Ensure there are no capturing parens in the key, because
            # that would mess up match.lastindex
            key_patterns.append('(' + re.sub(r'\((?!\?:)', '(?:', key) + ')')
            self.lookup[index] = value
            index += 1
        self.keys_re = re.compile('|'.join(key_patterns))

    def __getitem__(self, key):
        m = self.keys_re.match(key)
        if m:
            return self.lookup[m.lastindex]
        raise KeyError(key)

if __name__ == '__main__':
    remap = ReMap([(r'foo.', 12), (r'FileN.*', 35)])
    print(remap['food'])
    print(remap['foot in my mouth'])
    print(remap['FileNotFoundException: file.x does not exist'])

What happens if you have a dictionary such as

regex_dict = { re.compile("foo.*"): 5, re.compile("f.*"): 6 }

In this case regex_dict["food"] could legitimately return either 5 or 6.

Even ignoring that problem, there's probably no way to do this efficiently with the regex module. Instead, what you'd need is an internal directed graph or tree structure.

There is a Perl module that does just this: Tie::Hash::Regex.

use Tie::Hash::Regex;
my %h;

tie %h, 'Tie::Hash::Regex';

$h{key} = 'value';
$h{key2} = 'another value';
$h{stuff} = 'something else';

print $h{key}; # prints 'value'
print $h{2}; # prints 'another value'
print $h{'^s'}; # prints 'something else'

print tied(%h)->FETCH(k); # prints 'value' and 'another value'

delete $h{k}; # deletes $h{key} and $h{key2};

@rptb1 You don't have to avoid capturing groups, because you can count them with the groups attribute of each compiled pattern. Like this:

# Regular expression map
# Abuses match.lastindex to figure out which key was matched
# (i.e. to emulate extracting the terminal state of the DFA of the regexp engine)
# Mostly for amusement.
# Richard Brooksby, Ravenbrook Limited, 2013-06-01

import re

class ReMap(object):

    def __init__(self, items):
        if not items:
            items = [(r'epsilon^', None)]  # Match nothing
        self.re = re.compile('|'.join('(' + k + ')' for (k, v) in items))
        self.lookup = {}
        index = 1
        for key, value in items:
            self.lookup[index] = value
            index += re.compile(key).groups + 1

    def __getitem__(self, key):
        m = self.re.match(key)
        if m:
            return self.lookup[m.lastindex]
        raise KeyError(key)

def test():
    remap = ReMap([(r'foo.', 12),
                   (r'.*([0-9]+)', 99),
                   (r'FileN.*', 35)])
    print(remap['food'])
    print(remap['foot in my mouth'])
    print(remap['FileNotFoundException: file.x does not exist'])
    print(remap['there were 99 trombones'])
    print(remap['food costs $18'])
    print(remap['bar'])  # raises KeyError: nothing matches

if __name__ == '__main__':
    test()
Sadly very few RE engines actually compile the regexps down to machine code, although it's not especially hard to do. I suspect there's an order of magnitude performance improvement waiting for someone to make a really good RE JIT library.

As other respondents have pointed out, it's not possible to do this with a hash table in constant time.

One approximation that might help is to use a technique called "n-grams". Create an inverted index from n-character chunks of a word to the entire word. When given a pattern, split it into n-character chunks, and use the index to compute a scored list of matching words.

Even if you can't accept an approximation, in most cases this would still provide an accurate filtering mechanism so that you don't have to apply the regex to every key.
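A rough sketch of the n-gram idea using trigrams; the word list and the share-count scoring here are illustrative assumptions, and a real indexer would be more careful about short words and regex metacharacters:

```python
def trigrams(s):
    # All 3-character chunks of s.
    return {s[i:i + 3] for i in range(len(s) - 2)}

# Illustrative word list standing in for the indexed keys.
words = ['food', 'foot', 'FileNotFound']
index = {}
for w in words:
    for g in trigrams(w):
        index.setdefault(g, set()).add(w)

def candidates(query):
    # Score each word by how many trigrams it shares with the query;
    # only these candidates then need a real regex/exact check.
    scores = {}
    for g in trigrams(query):
        for w in index.get(g, ()):
            scores[w] = scores.get(w, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)
```

For example, candidates('foodie') ranks 'food' above 'foot', since it shares more trigrams with the query.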

A special case of this problem came up in the 70s AI languages oriented around deductive databases. The keys in these databases could be patterns with variables -- like regular expressions without the * or | operators. They tended to use fancy extensions of trie structures for indexes. See krep*.lisp in Norvig's Paradigms of AI Programming for the general idea.

If you have a small set of possible inputs, you can cache the matches as they appear in a second dict and get O(1) for the cached values.

If the set of possible inputs is too big to cache but not infinite, either, you can just keep the last N matches in the cache (check Google for "LRU maps" - least recently used).

If you can't do this, you can try to chop down the number of regexps you have to try by checking a prefix or somesuch.
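In modern Python, functools.lru_cache provides exactly this bounded least-recently-used cache; a sketch wrapping the naive scan (patterns assumed from the question's example):

```python
import re
from functools import lru_cache

# Patterns assumed from the question's example.
patterns = [(re.compile(r'foo.'), 12), (re.compile(r'FileN.*'), 35)]

@lru_cache(maxsize=1024)  # keep only the 1024 most recently used inputs
def lookup(s):
    # The O(n) scan runs once per distinct input; repeats are O(1).
    return tuple(v for regex, v in patterns if regex.match(s))
```

lookup.cache_info() reports hits and misses, which makes it easy to check whether the input distribution is cache-friendly before tuning maxsize.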

I created this exact data structure for a project once. I implemented it naively, as you suggested. I did make two immensely helpful optimizations, which may or may not be feasible for you, depending on the size of your data:

  • Memoizing the hash lookups
  • Pre-seeding the memoization table (not sure what to call this... warming up the cache?)

To avoid the problem of multiple keys matching the input, I gave each regex key a priority and the highest priority was used.
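That priority scheme can be sketched as a rule list sorted so the first (highest-priority) match wins; the priorities and patterns below are made up for illustration:

```python
import re

# (priority, pattern, value); priorities are made up for illustration.
rules = [
    (10, re.compile(r'foo\d'), 'foo-digit'),
    (5, re.compile(r'foo.'), 'foo-any'),
]
rules.sort(key=lambda rule: rule[0], reverse=True)  # highest priority first

def lookup(s):
    # First match in priority order wins.
    for priority, regex, value in rules:
        if regex.match(s):
            return value
    raise KeyError(s)
```

Here 'foo7' resolves to the more specific high-priority rule even though both patterns match it.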

The fundamental assumption is flawed, I think: you can't map hashes to regular expressions.

I don't think it's even theoretically possible. What happens if someone passes in a string that matches more than one regular expression?

For example, what would happen if someone did:

>>> regex_dict['FileNfoo']

How can something like that possibly be O(1)?

It may be possible to get the regex compiler to do most of the work for you by concatenating the search expressions into one big regexp, separated by "|". A clever regex compiler might search for commonalities in the alternatives in such a case, and devise a more efficient search strategy than simply checking each one in turn. But I have no idea whether there are compilers which will do that.

It really depends on what these regexes look like. If you don't have a lot of regexes that will match almost anything, like '.*' or '\d+', and instead you have regexes that contain mostly words and phrases, or any fixed patterns longer than 4 characters (e.g. 'a*b*c' in ^\d+a\*b\*c:\s+\w+), as in your examples, you can use this common trick, which scales well to millions of regexes:

Build an inverted index for the regexes (rabin-karp-hash('fixed pattern') -> list of regexes containing 'fixed pattern'). Then at matching time, use Rabin-Karp hashing to compute sliding hashes and look up the inverted index, advancing one character at a time. You now have O(1) look-up for inverted-index non-matches and reasonable O(k) time for matches, where k is the average length of the lists of regexes in the inverted index. k can be quite small (less than 10) for many applications. The quality of the inverted index (a false positive means a bigger k; a false negative means a missed match) depends on how well the indexer understands the regex syntax. If the regexes are written by human experts, they can provide hints about the fixed patterns they contain as well.
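A much-simplified sketch of that inverted-index prefilter, substituting Python's built-in substring search for the Rabin-Karp rolling hash, and assuming each regex is registered with a literal fragment it must contain:

```python
import re

# Each regex registered with a literal fragment it must contain (an
# assumption a real indexer would derive from the regex syntax).
rules = [
    ('FileN', re.compile(r'FileN.*')),
    ('foo', re.compile(r'foo.')),
]
index = {}
for literal, regex in rules:
    index.setdefault(literal, []).append(regex)

def matches(line):
    # Stand-in for the rolling-hash lookup: only regexes whose literal
    # fragment occurs in the line are actually executed.
    hits = []
    for literal, regexes in index.items():
        if literal in line:
            for regex in regexes:
                if regex.search(line):
                    hits.append(regex.pattern)
    return hits
```

Lines containing none of the indexed fragments never touch the regex engine at all, which is where the savings come from.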

OK, I had a very similar requirement: many lines of different syntax, basically remark lines and lines with some codes to use in a smart-card formatting process, as well as descriptor lines for keys and secret codes. In every case, I think the pattern/action "model" is the best approach for recognizing and processing a lot of lines.
I'm using C++/CLI to develop my assembly named LanguageProcessor.dll. The core of this library is a lex_rule class that basically contains:

  • a Regex member
  • an event member

The constructor loads the regex string and calls the necessary code to build the event on the fly using DynamicMethod, Emit and Reflection. Also within the assembly exist other classes, such as meta and object, which construct and instantiate objects by the simple names of the publisher and the receiver class; the receiver class provides the action handlers for each matched rule.

More recently, I added a class named fasterlex_engine that builds a Dictionary<Regex, action_delegate> by loading the definitions from an array to run.

The project is at an advanced point, but I'm still building it today. I will try to enhance the running performance by replacing the sequential pass over every pair for each input line with some mechanism for looking up the dictionary directly by the regexp, like:

map_rule[gcnew Regex("[a-zA-Z]")];

Here are some segments of my code:

public ref class lex_rule : ILexRule
{
    Exception ^m_exception;
    Regex ^m_pattern;

    // BACKSTORAGE delegates; I learned this by tracing through .NET's internals, heh
    yy_lexical_action ^m_yy_lexical_action;
    yy_user_action ^m_yy_user_action;

public:
    virtual property String ^short_id;
    void init(String ^_short_id, String ^well_formed_regex);

    lex_rule(String ^_short_id, String ^well_formed_regex);

    virtual event yy_lexical_action ^YY_RULE_MATCHED
    {
        virtual void add(yy_lexical_action ^_delegateHandle);
        virtual void remove(yy_lexical_action ^);

        virtual long raise(String ^id_rule, String ^input_string, String ^match_string, int index)
        {
            long lReturn = -1L;
            lReturn = m_yy_lexical_action(id_rule, input_string, match_string, index);
            return lReturn;
        }
    }
};

Now the fasterlex_engine class that execute a lot of pattern/action pair:

public ref class fasterlex_engine
{
    Dictionary<String ^, ILexRule ^> ^m_map_rules;

public:
    fasterlex_engine(array<String ^, 2> ^defs);
    Dictionary<String ^, Exception ^> ^load_definitions(array<String ^, 2> ^defs);
    void run();
};

And to round off this topic, some code from my .cpp file:

This code creates a constructor invoker by parameter signature:

inline Exception ^object::builder(ConstructorInfo ^target, array<Type ^> ^args)
{
    try
    {
        DynamicMethod ^dm = gcnew DynamicMethod( /* ... */ );
        ILGenerator ^il = dm->GetILGenerator();
        il->Emit(OpCodes::Call, Object::typeid->GetConstructor(Type::EmptyTypes)); // invoke the base constructor
        il->Emit(OpCodes::Newobj, target); // Newobj creates the object and invokes the constructor defined by target
        method_handler = (method_invoker ^) dm->CreateDelegate(method_invoker::typeid);
    }
    catch (Exception ^e)
    {
        return e;
    }
    return nullptr;
}


This code attaches any handler function (static or not) to deal with a callback raised by the matching of an input string:

Delegate ^connection_point::hook(String ^receiver_namespace, String ^receiver_class_name, String ^handler_name)
{
    Delegate ^d = nullptr;
    if (connection_point::waitfor_hook <= m_state) // if the state is 0, 1, 2 or more, try to hook
    {
        try
        {
            Type ^tmp = meta::_class(receiver_namespace + "." + receiver_class_name);
            // ...
            array<Object ^> ^add_handler_args = {d};
            m_add_handler->Invoke(m_publisher_object, add_handler_args);
        }
        catch (Exception ^e)
        {
            throw gcnew Exception(e->ToString());
        }
    }
    return d;
}

Finally, the code that calls the lexer engine:

array<String ^,2> ^defs=gcnew array<String^,2> {/* shortID  pattern  namespace  class  function */
{"LETRAS", "[A-Za-z]+" ,"prueba", "manejador", "procesa_directriz"},
{"INTS", "[0-9]+" ,"prueba", "manejador", "procesa_comentario"},
{"REM", "--[^\\n]*" ,"prueba", "manejador", "nullptr"}
}; //[3,5]

// I use the special identifier "nullptr" so the system assigns the event a default handler that does nothing
fasterlex_engine ^lex=gcnew fasterlex_engine();
Dictionary<String ^,Exception ^> ^map_error_list=lex->load_definitions(defs);

The problem has nothing to do with regular expressions: you'd have the same problem with a dictionary whose keys are arbitrary functions or lambdas. The problem you face is figuring out whether there is a way of classifying your functions to tell which will return true and which won't, and that isn't a search problem, because f(x) is not known in general beforehand.

Distributed programming, or caching answer sets on the assumption that there are common values of x, may help.

-- DM
