CRAWDAD dartmouth/outdoor

Citation Author(s):
BAE Systems
Dartmouth College
Florida International University
Dartmouth College
Dartmouth College
Last updated:
Tue, 11/28/2006 - 07:00


MANET dataset of outdoor experiments for comparing different routing algorithms.

This dataset contains outdoor runs of MANET (Mobile Ad-hoc network) routing algorithms to compare the performance of four different routing algorithms.

date/time of measurement start: 2003-10-17

date/time of measurement end: 2003-10-17

collection environment: Most comparisons of wireless ad hoc routing algorithms involve simulated or indoor trial runs, or outdoor runs with only a small number of nodes, potentially leading to an incorrect picture of algorithm performance. For an outdoor comparison of four different routing algorithms, APRL, AODV, ODMRP, and STARA, we ran each algorithm on thirty-three 802.11-enabled laptops moving randomly through an athletic field. This comparison provides insight into the behavior of ad hoc routing algorithms at larger real-world scales than have been considered so far.

The outdoor routing experiment took place on a rectangular athletic field measuring approximately 225 (north-south) by 365 (east-west) meters. This field can be roughly divided into four flat, equal-sized sections, three of which are at the same altitude, and one of which (at the southeast corner) is approximately four to six meters lower. There was a short, steep slope between the upper and lower sections. We chose this particular athletic field because it was physically distant from campus and the campus wireless network, reducing potential interference.

network configuration: We configured the 802.11 cards to use wireless channel 9 for maximum separation from the standard channels of 1, 6 and 11, further reducing potential interference. We used 41 laptops, 40 as application laptops, and one as a control laptop.

The routing experiments ran on top of a set of 41 Gateway Solo 9300 laptops, each with a 10GB disk, 128MB of main memory, and a 500MHz Intel Pentium III CPU with 256KB of cache. We used one laptop to control each experiment, leaving 40 laptops to actually run the ad hoc routing algorithms. Each laptop ran Linux kernel version 2.2.19 with PCMCIA card manager version 3.2.4 and had a Lucent (Orinoco) Wavelan Turbo Gold 802.11b wireless card. Although these cards can transmit at different bit rates, can auto-adjust this bit rate depending on the observed signal-to-noise ratio, and can auto-adjust the channel to arrive at a consistent channel for all the nodes in the ad hoc network, we used an ad hoc mode in which the transmission rate was fixed at 2 Mb/s, and in which the channel could be chosen manually but was fixed thereafter. Specifically, we used Lucent (Orinoco) firmware version 4.32 and the proprietary ad hoc "demo" mode originally developed by Lucent.

Although the demo mode has been deprecated in favor of the IEEE-defined IBSS, we used the demo mode to ensure consistency with a series of ad hoc routing experiments of which this outdoor experiment was a culminating event. The fixed rate also made it much easier to analyze the routing results, since we did not need to account for automatic changes in each card's transmission rate. On the other hand, we would expect to see variation in the routing results if we had used IBSS instead, both due to its multi-rate capabilities and its general improvements over the demo mode. The routing results remain representative, however, since demo mode provides sufficient functionality to serve as a reasonable data-link layer. Finally, each laptop had a Garmin eTrex GPS unit attached via the serial port. These GPS units did not have differential GPS capabilities, but were accurate to within thirty feet during the experiment.

data collection methodology: We log the events of the routing algorithms on each laptop. A GPS service runs on each laptop, reading and recording the current laptop position from the attached GPS unit.

disruptions to data collection: During the experiment, seven laptops generated no network traffic due to hardware and configuration issues, and an eighth laptop generated the position beacons only for the first half of the experiment. The seven complete failures left thirty-three laptops actually participating in the ad hoc routing.

hole: During the experiment, seven laptops generated no network traffic due to hardware and configuration issues, and an eighth laptop generated the position beacons only for the first half of the experiment. The seven complete failures left thirty-three laptops actually participating in the ad hoc routing.

last modified: 2006-11-28

reason for most recent change: the initial version

release date: 2006-11-06

Traceset: dartmouth/outdoor/routing

Traceset of outdoor MANET experiments for comparing different routing algorithms.

  • description: This traceset contains outdoor runs of MANET (Mobile Ad-hoc network) routing algorithms to compare the performance of four different routing algorithms.
  • measurement purpose: Network Performance Analysis, Routing Protocol
  • methodology: The outdoor routing experiment took place on a rectangular athletic field measuring approximately 225 (north-south) by 365 (east-west) meters. This field can be roughly divided into four flat, equal-sized sections, three of which are at the same altitude, and one of which (at the southeast corner) is approximately four to six meters lower. There was a short, steep slope between the upper and lower sections. We chose this particular athletic field because it was physically distant from campus and the campus wireless network, reducing potential interference. In addition, we configured the 802.11 cards to use wireless channel 9 for maximum separation from the standard channels of 1, 6 and 11, further reducing potential interference. We used 41 laptops, 40 as application laptops, and one as a control laptop. The GPS service on each laptop recorded the current position (latitude, longitude and altitude) once per second, and synchronized the laptop clock with the GPS clock to provide sub-second, albeit not millisecond, time synchronization. Every three seconds, the GPS service broadcast a beacon containing its own position and any other positions about which it knew. Three seconds is shorter than strictly necessary for displaying accurate positions to soldiers or first responders, but was necessary to build a reasonably accurate connectivity trace. Each beacon contained at most 41 position records of 21 bytes each, and had a maximum payload of 861 bytes. The traffic generator on each laptop generated packet streams with a mean packet size of 1200 bytes (including UDP, IP and Ethernet headers), a mean of approximately 5.5 packets per stream, a mean delay between streams of 15 seconds, and a mean delay between packets of approximately 3 seconds. 
These parameters produced approximately 423 bytes of data traffic (including UDP, IP and Ethernet headers) per laptop per second, a relatively modest traffic volume, but corresponding to the traffic volume observed during trial runs of one of our prototype military applications. Each of the routing algorithms, APRL, AODV, ODMRP and STARA, ran for fifteen minutes with a two-minute period between successive routing algorithms to handle setup and cleanup chores. Fifteen minutes per algorithm leads to an overall experiment time of approximately an hour and a half (given the time needed for initial boot and experiment startup), corresponding to the maximum reliable lifetime of our laptop batteries. The traffic generator ran for thirteen minutes of each fifteen-minute period, starting one minute after the routing algorithm to allow the pro-active routing algorithms to reach a stable state.
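The beacon sizing above can be checked with a quick sketch. Only the arithmetic (41 records of 21 bytes each gives an 861-byte payload) comes from the text; the record field layout below is an invented assumption, since this README does not document the on-wire format.

```python
import struct

# Hypothetical 21-byte position record: node id (1 byte), sequence number
# (4 bytes), latitude and longitude as doubles (8 bytes each). The real
# record layout is not documented in this README.
RECORD = struct.Struct("!BIdd")

def pack_beacon(records):
    """Pack up to 41 (node_id, seq, lat, lon) records into one beacon payload."""
    assert len(records) <= 41
    return b"".join(RECORD.pack(*r) for r in records)

# A full beacon: 41 records of 21 bytes each = 861 bytes, matching the
# stated maximum payload.
payload = pack_beacon([(n, 0, 43.70, -72.29) for n in range(1, 42)])
print(RECORD.size, len(payload))  # 21 861
```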

dartmouth/outdoor/routing 20031017 trace  


Description:  This trace contains outdoor runs of MANET (Mobile Ad-hoc network) routing algorithms to compare the performance of four different routing algorithms.


- Software configuration

To allow as accurate a comparison as possible, we needed the implementations of the four algorithms to be as similar as possible. If one algorithm operated in kernel space and another operated in user space, for example, it would not be possible to attribute any difference in packet latencies solely to the way in which an algorithm finds and uses its routes. For this reason, although we used existing source code as a guide in all four cases, we implemented each algorithm from scratch so that we could minimize any implementation differences. There are four key features that the implementations have in common.

Key Feature 1.

All four algorithms are implemented as user-level applications through the use of a tunnel device. The tunnel device, which we ported from FreeBSD, provides a network interface on one end and a file interface, specifically a /dev entry, on the other end. Each node is assigned two IP addresses, one associated with the physical network device, and one associated with the tunnel or virtual network device. Applications use the virtual IP address, routing algorithms use the physical IP address, and the standard Linux routing tables route all virtual IP addresses to the virtual network interface. Any application-level packets, therefore, are directed through the tunnel, and the routing algorithms read those packets from the file end of the tunnel. 

The tunnel allowed us to avoid any implementation work at the kernel or driver level, and also to switch from one routing algorithm to another (during experiments) simply by stopping one user-level process and starting another. The drawback of our approach is the additional overhead associated with moving packets between kernel and user space. Our laptops, however, had more than enough capacity for our experiments, and thus we chose implementation simplicity over maximum performance.

Key Feature 2. 

All four algorithms use UDP for traffic destined for a specific neighbor and multicast IP for traffic that should reach every neighbor. Multicast IP, as opposed to broadcast IP, allows us to run multiple routing algorithms at the same time without adding filtering code to every algorithm, a useful feature in some of our earlier experiments. Each algorithm simply subscribes to its own multicast address.

Key Feature 3. 

All four algorithms use an event loop that invokes algorithm-specific handlers in response to (1) incoming UDP or multicast IP network traffic or (2) the expiration of route or other timeouts. As with user-level routing, the event-loop approach leads to additional overhead, but allows more straightforward implementations.

Key Feature 4. 

All four algorithms are implemented in C++ and share a core set of classes. These classes include the event loop, as well as unicast and multicast, routing, and logging support.

With these four key features, algorithm-specific code is confined to the packet handler classes that process incoming control and data packets, the timer handler classes that process timed actions (such as route expiration), the logging classes that log algorithm events, and utility classes that serialize and unserialize control packets. Minimizing the algorithm-specific code simplified implementation and debugging, and should make the routing results as independent of a particular implementation choice as possible.

- Traffic generation 

The routing algorithms themselves are not enough for an experiment, of course. A traffic generator runs on each laptop, and sends a sequence of packet streams to randomly selected destination laptops. Each stream contains a random number of packets of a random size. Two Gaussian distributions determine the packet numbers and sizes, two exponential distributions determine the delay between streams and packets, and a uniform distribution determines the destination laptops. A GPS service also runs on each laptop, reading and recording the current laptop position from the attached GPS unit, and broadcasting beacons that contain the laptop's position (as well as sequence-numbered positions that it has received from other laptops). We thus are flooding the GPS beacons through the network, an appropriate choice in our application domain where soldiers and first responders need to see a continuous view of each other's positions. In addition, broadcasting the beacons allows us to build a connectivity graph, independent of any particular routing algorithm, as to which laptops actually can hear which other laptops. Finally, we use a set of Tcl scripts to set up and run the experiments.

- Parameter configuration

The best parameters for the four routing algorithms could not be determined precisely beforehand, since there are no experiments of this size for which data is available. Instead, we used values that gave effective results in published simulation studies or that were set as default values in the sample source code obtained from the developers. We did adjust some values to achieve some degree of consistency between the algorithms, however. Summarizing the major parameters, APRL recorded up to seven routes per destination, one primary and six alternates, broadcast its beacons every 6 seconds, and expired any route that had not been refreshed by a beacon within the last 12 seconds. STARA broadcast a neighborhood probe every 2 seconds, sent a DDP if a path had gone unexplored for 6 seconds, removed a neighbor from a node's neighborhood set if two successive neighborhood probes passed without an acknowledgment, and weighed delay estimates by 0.9 on each update to exponentially forget old delay information as new information became available. AODV broadcast each RREQ twice, expired a route if it had not been used in 12 seconds, sent a HELLO every 6 seconds, and removed a neighbor from a node's neighborhood set if it did not receive two successive HELLOs. ODMRP refreshed an in-use route every 6 seconds, and expired a route (forwarding group) if it had not been used for 12 seconds.

As can be seen, these parameters reduce to 6 seconds between beacons, HELLO messages or route refreshes for APRL, AODV and ODMRP, and 12 seconds for a route to time out for APRL, AODV and ODMRP, either by direct timeout or by failure to receive two successive HELLOs. Using equivalent values for STARA, however, led to unacceptably slow convergence of the delay estimates, particularly given the fifteen-minute window available to us for each algorithm. Reducing the parameters improved convergence, but at the expense of even more control overhead, an effect that we consider below.

- User mobility

The laptops moved continuously during the experiment. At the start of the experiment, the participants were divided into equal-sized groups of ten, each participant was given a laptop, and each group was instructed to randomly disperse in one of the four sections of the field (three upper and one lower). The participants then walked continuously, always picking a section different from the one in which they were currently located, picking a random position within that section, walking to that position in a straight line, and then repeating. This approach was chosen since it was simple, but still provided continuous movement to which the routing algorithms could react, as well as a similar spatial distribution across each algorithm.


The tarball contains both the trace (under the "Trace" directory) and the software that was used to collect the trace (under the "Software" directory). 

1. Trace

The trace consists of 41 directories corresponding to 41 nodes 

(nodes 1-40, and a master control node). 

Each node directory (e.g., 1) has five sub-directories: 

- AODV: traces of AODV experiment

- APRL: traces of APRL experiment

- ODMRP: traces of ODMRP experiment

- STARA : traces of STARA experiment

- gps_tcpdump: traces of GPS readings and tcpdump 


In the gps_tcpdump directory (e.g., under the directory of node 1), 

you can find the following files:

full/beacon.log: beacon logs in binary format

full/position.log: position logs in binary format

full/position.log.parsed: each line in a format of [time] [x-coord] [y-coord] [z-coord]

full/position.log.stripped: with bogus positions removed from position.log.parsed, 

each line in the same format as position.log.parsed 

full/signal.log: signal logs in binary format 

monitor.stderr: stderr file used by monitor app

monitor.stdout: stdout file used by monitor app

tcmpdump.stderr : stderr file used by tcpdump app

tcpdump.stdout: stdout file used by tcpdump app

tcpdump.dat.gz : tcpdump file 
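The position.log.parsed format above lends itself to a short parsing sketch. This assumes whitespace-separated numeric fields, one record per line, as described; the skip-malformed-lines behavior and function name are assumptions.

```python
def parse_position_log(path):
    """Parse full/position.log.parsed lines of the form
    [time] [x-coord] [y-coord] [z-coord] into (t, x, y, z) tuples."""
    records = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) != 4:
                continue  # skip malformed lines (assumption)
            t, x, y, z = map(float, fields)
            records.append((t, x, y, z))
    return records
```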


In each of the PROTO (= AODV, APRL, ODMRP, and STARA) directories (e.g., under the directory of node 1), 

you can find the following files:

- transmission logs in binary format

- user-level (through the use of the tunnel device) transmission logs in text format

- routing-level transmission logs (including control packets) in text format 

- generated traffic logs in binary format

- generated traffic logs in text format

- trafgen.stderr: stderr file used by trafgen app 

- trafgen.stdout: stdout file used by trafgen app

- PROTO.stderr: stderr file used by PROTO app 

- PROTO.stdout: stdout file used by PROTO app 

Among these, the text-format transmission logs have the following format:

There are four types of line entries:

TIN  [size]  [seconds portion of timestamp]  [usecs portion of time stamp]  [src]  [dest]  [seq #]

SIN  [size]  [seconds portion of timestamp]  [usecs portion of time stamp]  [src]  [dest]  [previous_hop_ip]  [seq #]

TOUT  [size]  [seconds portion of timestamp]  [usecs portion of time stamp]  [src]  [dest]  [seq #]

SOUT  [size]  [seconds portion of timestamp]  [usecs portion of time stamp]  [src]  [dest]  [previous_hop_ip]  [seq #]

The TIN (In from Tunnel) entries describe a packet that was generated by this node's

traffic generation process and that is being passed down to the

routing layer to be sent out on its way.

The TOUT (Out over Tunnel) entries describe a packet generated by a traffic generation

process arriving safely at its destination.

The SIN (In from Socket) and SOUT (Out over Socket) entries describe 

the receipt and transmission of a traffic generation packet, respectively, 

at a hop-by-hop level. That is, if a packet is generated by the traffic 

gen program at node 1 bound for node 3, and to get there it bounces 

from 1 to 2 then 2 to 3, then you will see a TIN for this packet at 1, 

a TOUT at 3, and a SOUT at 1, SIN at 2, SOUT at 2, and SIN at 3.
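A minimal Python sketch of a parser for these four entry types. The field layout is taken from the format lines above; the dictionary keys are illustrative.

```python
def parse_event(line):
    """Parse one text-format transmission-log entry.
    TIN/TOUT:  TYPE size secs usecs src dest seq
    SIN/SOUT:  TYPE size secs usecs src dest previous_hop_ip seq"""
    fields = line.split()
    kind = fields[0]
    event = {
        "type": kind,
        "size": int(fields[1]),
        # combine seconds and microseconds portions of the timestamp
        "time": int(fields[2]) + int(fields[3]) / 1e6,
        "src": fields[4],
        "dest": fields[5],
    }
    if kind in ("SIN", "SOUT"):
        event["prev_hop"] = fields[6]
        event["seq"] = int(fields[7])
    else:  # TIN, TOUT
        event["seq"] = int(fields[6])
    return event

print(parse_event("TIN 1200 1066400000 500000 1 3 42"))
```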

2. Software 



The software consists of the following five directories:

- AODV: AODV protocol implementation

- APRL: APRL protocol implementation

- ODMRP: ODMRP protocol implementation

- STARA : STARA protocol implementation

- analysis: scripts used for trace analysis


OVERVIEW of routing protocols

A short overview of the four routing algorithms (AODV, APRL, ODMRP, and STARA)

is available in Section 2 of [gray-outdoor]: 


BUILD - how to `make` each protocol 

To build each protocol (say AODV), first `cd AODV` and edit the makefile's definition

for the "depend" target. Change the path to the standard header include directory used

by your compiler (e.g., "/usr/include/g++-2/" for g++ on actcomm) to whatever it is 

on the machine you're compiling on. Then just run "make depend" and "make" inside

the AODV/ directory. If everything goes well, an executable named AODV will

appear in the directory.


USAGE - how to run each protocol

To run a protocol (say AODV) simply run the command "./AODV <conf_file>" (assuming that 

AODV and <conf_file> are in the current working directory). You can rename the <conf_file> 

to anything you wish, as long as you pass the correct name of the file as the argument to AODV.

If a protocol was compiled with debugging, detailed messages will be printed to the

standard output; you can send them to a file instead by using the ">" redirect

operator. All error messages are output to standard error.

The following parameters and values are specified in <conf_file> 

  • AODV (The required parameters are:)
    • LOCAL_IP = Local IP address. 
    • MAX_DATA_PACKET_QUEUE_SIZE = Maximum size for the data packet queue.
    • MAX_AODV_PACKET_SIZE = Maximum size for AODV packets.
    • MAX_DATA_PACKET_SIZE = Maximum size for data packets.
    • NET_DIAMATER = maximum expected diameter of the MANET. Used in discovery.
    • NODE_TRAVERSAL_TIME = expected time for a packet to transit a -node- 
    • TTL_THRESHOLD = Maximum TTL value for RREQ packets.
    • RREQ_RETRIES_WITH_TTL_AT_NET_DIAMETER = Number of RE-broadcasts for RREQ packets.
    • ACTIVE_ROUTE_TIMEOUT = Lifetimes for routes and cache entries.
    • HELLO_INTERVAL = Broadcast interval for Hello messages.
    • ALLOWED_HELLO_LOSS = lifetime for Hello messages.
  • (The optional parameters are:)
    • NET_TRAVERSAL_TIME = expected time for a packet to transit from one side of the network to another (i.e., across the maximum expected hops). Typically not specified in the config file. Calculated from NET_DIAMETER and NODE_TRAVERSAL_TIME.
    • TTL_START, TTL_INCREMENT = start and delta ttl values.
    • LISTEN_TO_INPUT_FILE = ListenToTable input file
    • POSITION_INPUT_FILE = PositionTable input file


APRL (The required parameters are:)

localIP = the local IP

(The optional parameters are:)

logfile = name of a file for logging info

err_logfile  = name of a file for logging errors

block_size = the maximum segment size of the network

route_cap = the maximum number of routes to any destination that are stored in the routing table

report_interval = the interval, in msec, at which we beacon the routing table

caching = (0 or 1) whether we're caching reverse hop data for PSVNs

buffering = (0 or 1) whether we are buffering packets

delay = the interval, in msec, after which reverse hop data times out

data_logfile = the log file in which to log info about data traffic

data_log_bufsize = the size of the buffer used in our buffered binary logging scheme

ListenTo_config_file = name of a file to set up the ListenToTable with

Position_config_file = name of file to set up the PositionTable with
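The key = value form implied by this listing can be illustrated with a hypothetical <conf_file> sketch. The keys come from the listing above; the values are assumptions chosen to echo the experiment parameters quoted earlier (routing-table beacons every 6 seconds, 12-second timeouts, up to seven routes per destination), and the exact file syntax is not documented here.

```
# Hypothetical conf_file sketch; syntax and values are assumptions.
localIP = 10.0.0.1
route_cap = 7
report_interval = 6000
delay = 12000
caching = 1
buffering = 1
logfile = routing.log
err_logfile = routing.err
```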


ODMRP:

LOCAL_IP = IP address of the local node

MAX_DATA_PACKET_SIZE = maximum size of a data packet in bytes (including IP header)

MAX_DATA_PACKET_QUEUE_SIZE = maximum number of data packets that we will queue. 

We drop data packets if the queue is full

NET_DIAMATER = maximum expected diameter of the MANET. Used in discovery.

NODE_TRAVERSAL_TIME = expected time for a packet to transit a -node- 

(in device to out device). Used in discovery.

NET_TRAVERSAL_TIME = expected time for a packet to transit from one side of 

the network to another (i.e., across the maximum expected hops). Typically not 

specified in the config file. Calculated from NET_DIAMETER and NODE_TRAVERSAL_TIME.

TTL_START, TTL_INCREMENT, TTL_THRESHOLD = start, delta, and max ttl values.  Used in discovery.

JOIN_QUERY_RETRIES = number of times to retry a JOIN query. Typically not

specified in the config file. Instead calculated from the three TTL 

parameters (i.e., increasing TTL values from TTL_START to TTL_THRESHOLD by

TTL_INCREMENT step size; always at least 1 retry no matter the values)


ODMRP will try the JOIN query this many more times with the ttl set to the 

net diameter.

FWD_GROUP_LIFE = timeout on a forwarding group, i.e., node will stop being

a forwarder if the group information is not refreshed within this timeout.

Timeout in milliseconds.

REFRESH_INTERVAL = interval at which we refresh forwarding info. Involves

network traffic. Should not be too small. Must be smaller than FWD_GROUP_LIFE

(typically 2x to 3x smaller) for protocol to be stable. Timeout in milliseconds.

REV_ROUTE_LIFE = timeout on a reverse route. Timeout in milliseconds.


STARA:

NUMBER_OF_HOSTS = The number of hosts in the ad hoc network

LOCAL_IP = The IP address of the host computer 

MAX_ACK_HOPS = The number of hops that a flooded acknowledgement

can travel before it dies. If set to 1, this will require bidirectional links 

to form routes 

PROBE_REPEAT_DELAY = The amount of time, in msec, between the sending of

NeighborhoodProbes to other nodes. 

DELAY_TOO_OLD = The amount of time, in msec, before a route's delay

estimate is considered to be too old. After this time,

a DummyDataPacket will be sent to explore the route

PACKETS_BEFORE_LINK_FAILS = The number of packets that can be sent to a node, without

acknowledgement, before the link is determined to have failed

IP_LIST_FILENAME = The name of the file (must be in same directory as this file)

that contains a list of the IP addresses of all the hosts in the ad hoc network 

MULTICAST_GROUP = The multicast group

MULTICAST_PORT = The port that is used for sending via multicast (for NeighborhoodProbes and ACKs)

CONTROL_PORT = The port that is used for sending via unicast (for all other control packets)

DATA_PORT = The port that is used for sending via unicast (for all data packets)

MTU = the maximum transfer unit for this network

FORGETTING_FACTOR = the exponential forgetting factor for delay updates.

This is a floating-point number between 0 and 1. The

smaller it is, the more weight is given to older

delay estimates when incorporating new measurements

into the expected delay for a route

TTL = Time-To-Live for DataPackets and DummyDataPackets in the

network. This is to prevent looping -- any packet that

has not reached its destination after this many hops

will be dropped.

MIN_DELAY = This value is used for normalizing the delays for probability

calculation purposes. It should be small enough that the

delays maintain differentiation, but large enough that they

are not overly differentiated. 

ListenTo_config_file = The location of the listen-to-table configuration file, 

if one is being used for simulation purposes.
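The FORGETTING_FACTOR update can be illustrated with a one-line sketch. The exact weighting convention is an assumption inferred from the description above (a smaller factor gives more weight to older estimates when incorporating new measurements).

```python
def update_delay(old, sample, forgetting_factor):
    """One exponential-forgetting update of a route's delay estimate.
    Weighting convention assumed from the FORGETTING_FACTOR description:
    the smaller the factor, the more weight older estimates keep."""
    return (1 - forgetting_factor) * old + forgetting_factor * sample

# Two updates toward a new 200 ms measurement, starting from 100 ms,
# with the factor of 0.9 mentioned for the experiment:
d = 100.0
for sample in (200.0, 200.0):
    d = update_delay(d, sample, 0.9)
print(d)  # ~199.0
```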


USAGE - analysis scripts

The stats.tcl program produces summary statistics from a wireless run's

raw event data. stats.tcl uses the parsed (text) files, not the unparsed

binary log files.

Before running stats.tcl, please make sure that the variable c_gnuplotExecutable

in the conf file (e.g., stats-20031017.conf) is set to the correct path of the

gnuplot executable on your system.

stats.tcl takes five arguments, e.g.,

   ./stats.tcl ./stats-20031017.conf AODV 30 ../../Trace ./analysis-20031017

The first argument is the configuration file.

The second argument is the routing protocol for which you want statistics.

The third argument is the size - in seconds - of the data aggregation buckets,

e.g., generated graphs will summarize the data at this granularity.

The fourth argument is the directory that contains the parsed data files

from the run. This directory refers to one of the outdoor datasets that you

downloaded from crawdad.

The fifth argument is the directory in which you want to put the output

files, text and gnuplot graphs. stats.tcl creates this directory if it

does not exist.

The stats-20031017.conf file is an example configuration file. The

modifiable fields in this configuration file are:

c_gnuplotExecutable = the path of your GNUPLOT executable

c_templateDirectory = the directory that contains the .template files (same directory that contains stats.tcl)

c_dummySequence = a starting sequence number for DROP events (for which we did

not record a sequence number in the raw data)

c_algorithms = the valid routing protocols. Used only for additional error checking.


c_excludedLaptops = laptops (i.e., node ids, which correspond to the

subdirectory names in the dataset) to exclude from the summary statistics 


The files in this directory are a CRAWDAD dataset hosted by IEEE DataPort. 

About CRAWDAD: the Community Resource for Archiving Wireless Data At Dartmouth is a data resource for the research community interested in wireless networks and mobile computing. 

CRAWDAD was founded at Dartmouth College in 2004, led by Tristan Henderson, David Kotz, and Chris McDonald. CRAWDAD datasets are hosted by IEEE DataPort as of November 2022. 

Note: Please use the Data in an ethical and responsible way with the aim of doing no harm to any person or entity for the benefit of society at large. Please respect the privacy of any human subjects whose wireless-network activity is captured by the Data and comply with all applicable laws, including without limitation such applicable laws pertaining to the protection of personal information, security of data, and data breaches. Please do not apply, adapt or develop algorithms for the extraction of the true identity of users and other information of a personal nature, which might constitute personally identifiable information or protected health information under any such applicable laws. Do not publish or otherwise disclose to any other person or entity any information that constitutes personally identifiable information or protected health information under any such applicable laws derived from the Data through manual or automated techniques. 

Please acknowledge the source of the Data in any publications or presentations reporting use of this Data. 


Robert S. Gray, David Kotz, Calvin Newport, Nikita Dubrovsky, Aaron Fiske, Jason Liu, Christopher Masone, Susan McGrath, Yougu Yuan, CRAWDAD dataset dartmouth/outdoor, Date: 20061106.

Dataset Files


    File dartmouth-outdoor-readme.txt (1.67 KB)
