This file contains the Supplemental Information cited in:

R. Rafie Borujeny and F. R. Kschischang, "A Signal-Space Distance Measure for Nondispersive Optical Fiber," submitted to IEEE Transactions on Information Theory.

This file is a header file written in C containing the numerical values as well as the function that calculates the approximation of the adversarial distance.


This header file contains a routine that can be used to compute 

the adversarial distance between two complex points according to 

the formulation given in 



      title={A Signal-Space Distance Measure for Nondispersive Optical Fiber}, 

      author={Reza Rafie Borujeny and Frank R. Kschischang},

The above manuscript is available for download at:


The distance calculations are performed based on the approximation 

given in the above manuscript. For the normalized evolution equation


   q'(z) = i * |q(z)| ^ 2 * q(z) + n(z), 0 < z < 1,


for two complex numbers z1 and z2, with polar coordinates 


   z1 = r1 * exp(i * t1),

   z2 = r2 * exp(i * t2),


the approximation of the adversarial distance is given by 


   d(z1, z2) = (1 / 4) * (r1 - r2) ^ 2 + 

               a(r1, r2) * sin^2(( t2 - t1 - \phi(r1, r2) ) / 2).


The function a(r1, r2) is approximated using numerical methods, 

as described in the reference manuscript, and the corresponding 

values are given below in the array a_fit. The function distance 

below calculates the adversarial distance between two points. For

use with the unnormalized equation


   q'(z) = i * GAMMA * |q(z)| ^ 2 * q(z) + n(z), 0 < z < L,


you should change the assigned values to L and GAMMA in the 

associated macros.




This file contains the following:


1- a macro named L, which represents the length of the 

   fiber in [km]

2- a macro named GAMMA, which represents the nonlinearity 

   coefficient in [1/W/km]

3- a macro INDEX(i,j), which converts a 2-D index to the 1-D

   indexing used to access the array a_fit (described below).

4- an array of doubles X, which contains the 31 grid points along the

   x-axis. These grid points are the same along the y-axis as well. That

   is, for any x in X, and any y in X, we have a corresponding 

   value a(x, y) in the array a_fit.

5- a double DX, which is equal to the increment between the grid points 

   in the x-axis, i.e., DX = X[1] - X[0]

6- a double DA, which is equal to the area of a surface element,

   i.e., DA = (X[1] - X[0]) * (X[1] - X[0])

7- a function distance, which takes two complex numbers in polar 

   coordinates and calculates the adversarial distance between them. This

   function uses bilinear interpolation to estimate a(x, y) from the 

   existing numerical values given in a_fit.


This data set contains packet captures (PCAPs) of a 5G campus network.

The corresponding paper is "5G Campus Networks: A First Measurement Study".


The instructions are within the READMEs (.md/.pdf).

In addition, the source code of this project is available on GitHub:


This is the MATLAB .fig file from the paper "Massive User Transmission Enabled Combinatorial Based Non-Orthogonal Multiple Access Wireless Networks With Theoretical Analysis"


A promising technique for realizing augmented reality on future lightweight glasses is to offload computationally intensive rendering tasks to the cloud. This, however, places considerable demands on the network as well as the air interface with respect to latency, reliability, and throughput. For the evaluation of such architectures and for traffic modelling, a dataset is provided that contains realistic payloads of cloud-rendered augmented reality in the form of video files.


Provided are the raw video files after rendering, with a resolution of 7200x6360 pixels. For low-latency encoding, libx264 (ffmpeg version 4.2.4) was used with the flags -preset ultrafast -tune zerolatency at a target bitrate of 8 Mbit/s. Streaming takes place with ffmpeg and the custom nut output muxer; the resulting packetized output is sent to a UDP port on localhost. Both the encoded video files and the captured traffic traces are provided.


In this paper, we propose a novel resource management scheme that jointly allocates the transmit power and computational resources in a centralized radio access network architecture. The network comprises a set of computing nodes to which the requested tasks of different users are offloaded. The optimization problem minimizes the energy consumption of task offloading while taking the end-to-end latency, i.e., the transmission, execution, and propagation latencies of each task, into account.


Notes on the simulation files:

DTO.m simulates the disjoint task offloading (DTO) method in the manuscript. This file receives the following parameters as its inputs:

1.   Number of single-antenna users, which is equal to the number of tasks

2.   Maximum acceptable latency of tasks

3.   Ratio of RAN latency to the maximum acceptable latency

4.   Computational load of each task

5.   Data size of each task

After receiving the parameters, DTO.m executes the disjoint method and returns the following outputs:

1.   Acceptance ratio

2.   Radio transmission latency of all tasks

3.   Propagation latency of all tasks

4.   Execution latency of all tasks


JTO.m simulates the joint task offloading (JTO) method in the manuscript. This file receives the following parameters as its inputs:

1.   Number of single-antenna users, which is equal to the number of tasks

2.   Maximum acceptable latency of tasks

3.   Computational load of each task

4.   Data size of each task

After receiving the parameters, JTO.m executes the joint method and returns the following outputs:

1.   Acceptance ratio

2.   Radio transmission latency of all tasks

3.   Propagation latency of all tasks

4.   Execution latency of all tasks


This repository contains the results of 30 public Internet browsing experiments, from a computer at the campus network of the Public University of Navarre, out of which 20 used plaintext HTTP browsing, while 10 used HTTPS. We present both the original data sources in the form of network packet traces and HAR waterfalls, as well as the processed results formatted as line-based text files.


Each experiment consisted of a Selenium-automated web browser (Google Chrome 80.0) visiting a set of predefined web sites, with all caching options disabled. Both network packet traffic traces and in-browser measurements were collected. The network measurements were collected using tcpdump running at the client, while in-browser measurements were collected through the HAR Export Trigger extension. We have uploaded both sets of files.

The sets of websites for the HTTP and HTTPS experiments are different, as modern web sites usually support HTTPS but not HTTP. The HTTPS set was obtained by collecting the top 2000 web sites from the Alexa Top Ranking. The HTTP set is the subset of these top 2000 websites that supported plain-text HTTP. To increase the number of measurements of plain HTTP traffic, each of these websites was crawled, following the embedded ‘http://’ links.

For each web resource requested by the browser, we computed the time elapsed between the HTTP request being sent and the response being fully received; this is referred to as the resource's response time. Each response time obtained, along with the URL for that resource, and the timestamp at which the request was made, is referred to as a sample. These samples are obtained from the browser measurements and from network traffic. For the HTTPS experiments, the network data was decrypted using the ephemeral per-session encryption keys generated by the web browser. The files containing these keys have also been uploaded.

A number of resources are requested more than once during each test, such as cascading style sheets or images. Although we deactivated the cache, the browser still sometimes reported some resources as requested with a false response time of zero, since the request is never issued to the server but is served from a cache. Also, a small number of requests trigger an exception in the browser, which prevents data from being collected at the client side, although the request and response are present in the network traffic. These behaviours complicate one-to-one comparisons between network and in-browser measurements, because a different number of response times for a specific resource may be found in the network traffic and in the browser report. We exported to text files only the first response time seen for each resource with a unique URL. This filtering removes false measurements reported by the browser. In case this filtering is not desired, all the data can be obtained from the uploaded pcap and HAR files.

The dataset contains the original PCAP and HAR files, and also the post-processed files obtained from them. The raw data is contained in the and files, while the post-processed files are contained in the file. Inside the archive there are two directories, corresponding to the HTTP and HTTPS experiments respectively.

Both raw data archives contain files named X.pcap and subdirectories named X_har (with X being the name of each individual experiment), corresponding to the data gathered from network traces and in-browser measurements respectively. Inside each X_har directory, a .har file is stored for each visited site with the full download waterfall. Additionally, decryption keys for the HTTPS experiments are provided, under the name of X.key.

The archive contains three files for each experiment, amounting to a total of 60 and 30 files for HTTP and HTTPS respectively.

The three files describing each experiment contain line-based text data, and are named X_network_tresp.txt, X_browser_tresp.txt and X_conn_info.txt, with X being the name of each individual experiment. The first two files contain, on each line, space-separated fields describing a single request-response sample. X_network_tresp.txt contains the information gathered from network traces, while X_browser_tresp.txt was obtained from browser instrumentation. On the other hand, X_conn_info.txt contains, on each line, space-separated fields related to each TCP connection present during the experiment, obtained through network traces.

The connections in X_conn_info.txt and the samples in X_network_tresp.txt are associated through a unique connection ID field present in each line in both files. Note that this is a one-to-many relationship, meaning that a connection ID is associated to a single TCP stream (i.e. line in X_conn_info.txt), but one or more samples (i.e. lines in X_network_tresp.txt).

We describe below the line format for each file. This information is included as well in the "format.txt" file, located on the top level directory of the compressed archive.


X_conn_info.txt (one line per TCP connection):

Connection ID

RTT (milliseconds)

Number of retransmissions

Number of sequence holes

Number of data packets, client to server

Number of data packets, server to client


X_network_tresp.txt (one line per sample, from network traces):

Request timestamp (seconds)

Response time (seconds)

Requested URL

Response size (bytes)

Connection ID


X_browser_tresp.txt (one line per sample, from browser instrumentation):

Request timestamp (seconds)

Response time (seconds)

Requested URL
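Assuming the fields of X_network_tresp.txt appear in the order listed above, separated by single spaces, and that URLs contain no whitespace (all assumptions based on this description, not a verified specification), one line can be parsed with a small C routine like the following sketch:

```c
#include <stdio.h>

/* One sample from X_network_tresp.txt, with fields in the order listed
   above (field order and types are assumptions for illustration). */
struct net_sample {
    double request_ts;   /* request timestamp, seconds */
    double resp_time;    /* response time, seconds */
    char   url[2048];    /* requested URL (assumed to contain no spaces) */
    long   resp_size;    /* response size, bytes */
    long   conn_id;      /* connection ID, joins with X_conn_info.txt */
};

/* Parse one space-separated line; returns 1 on success, 0 on failure. */
static int parse_net_sample(const char *line, struct net_sample *s)
{
    return sscanf(line, "%lf %lf %2047s %ld %ld",
                  &s->request_ts, &s->resp_time, s->url,
                  &s->resp_size, &s->conn_id) == 5;
}
```

The connection ID parsed here is what links each sample to its TCP connection line in X_conn_info.txt (a one-to-many relationship, as noted above).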


In the simulation experiment, the OPNET simulation software was used to model the Space-Integrated Ground Information Intelligent Network environment and to obtain the data. A satellite network composed of six satellites was used to simulate the data flow formed by ground communication traffic passing through this network. The simulated time was one week, and the traffic data of one randomly selected node was collected for analysis and learning.


In this paper, security-aware robust resource allocation in energy-harvesting cognitive radio networks is considered with cooperation between two transmitters, while there are uncertainties in the channel gains and the battery energy value. Specifically, the primary access point harvests energy from a green energy source and uses a time-switching protocol to send energy and data towards the secondary access point (SAP).


This dataset is devoted to 1-perfect codes. Currently, it is mainly focused on concatenated ternary 1-perfect codes, but there is also additional content; see (3) and (4) below. (1-2) This dataset contains all inequivalent concatenated ternary 1-perfect codes of length 13. Additionally, it contains some components necessary to obtain such concatenated codes, namely, collections of disjoint ternary distance-3 Reed-Muller-like codes of length 9; see p.(1) below.


(1) Each text file in the dataset archived in keeps collections of disjoint $(9,3^6,3)_3$ codes, coded in the following manner (the file names are coded with the equivalence classes of the corresponding codes, from 0 to 3; some files are empty, for example, "n00123", because two codes from equivalence class 0, one code from class 1, one code from class 2, and one code from class 3 cannot be packed in a disjoint manner). Each line corresponds to one representative of such a collection. If the number of codes is M, then the line contains M records like "N T PPPPPPPPP". "N" denotes the number of a code in the list of seven permutably inequivalent codes (note that "permutably equivalent" is not the same as "equivalent"; only the first 4 are inequivalent in the sense of $Aut(H(9,3))$, and the last 3 are equivalent to code number 3). "T" is the number of a translation vector in the list 0: 000000000, 1: 120000000, 2: 102000000, 3: 100200000, 4: 100020000, 5: 100002000, 6: 100000200, 7: 100000020, 8: 100000002, and "PPPPPPPPP" is a coordinate permutation. To reconstruct the corresponding code, one should take the code number "N" from, apply the coordinate permutation "PPPPPPPPP" in the following manner: $$(x_0,x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8) \to (x_{P^{-1}(0)},x_{P^{-1}(1)},x_{P^{-1}(2)},x_{P^{-1}(3)},x_{P^{-1}(4)},x_{P^{-1}(5)},x_{P^{-1}(6)},x_{P^{-1}(7)},x_{P^{-1}(8)}),$$ and add the translation vector number "T" to all codewords.
After the M records about the codes, each line contains: (ii) the record "-A", where A is the order of the automorphism group of the collection of codes; (iii) the record "+R", where 6+R is the rank of the collection, that is, the dimension of the span of the union of the (non-translated) codes (the minimum rank is 6 and the maximum is 8=9-1 because of the all-parity check, so R is 0, 1, or 2); (iv) the record "~" or "|", where "~" means that the collection can be extended to a collection of M+1 codes and "|" means that no $(9,3^6,3)_3$ code (satisfying the all-parity check) can be added to the complement (this can only happen when M is 6 or 9).
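The reconstruction rule above (apply the coordinate permutation, then add the translation vector componentwise modulo 3) can be sketched in C as follows; the array representation of codewords and of the inverse permutation $P^{-1}$ is an assumption for illustration:

```c
#define N9 9  /* length of the ternary codewords */

/* Transform one codeword: coordinate i of the output is x[Pinv[i]]
   (i.e., the coordinate permutation "PPPPPPPPP" applied as stated
   above), followed by adding the translation vector t modulo 3. */
static void transform_codeword(const int *x, const int *Pinv, const int *t,
                               int *out)
{
    for (int i = 0; i < N9; i++)
        out[i] = (x[Pinv[i]] + t[i]) % 3;
}
```

Applying this to every codeword of the base code number "N" yields the code described by one record "N T PPPPPPPPP".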

(2) The file "concat13" (archived to concat13.xz; the original file is 1.7GiB) contains the list of all inequivalent concatenated ternary 1-perfect codes of length 13 in the following form. Each length-9 partition in the form described in p.(1) (the order of records is (i), (iii), (ii), and the last record contains generators of the group of permutations of 9 codes induced by the automorphism group of the partition) is followed by the list of concatenated codes obtained with this partition. The records for each code are the following:

(i) "A" or "B" indicate the partition of $H(4,3)$ into 1-perfect codes.

(ii) The next 8 numbers show the permutation $p$ of the codes of the partition.

(iii) The record "+R" means that the rank of the code is 10+R (10, 11, or 12).

(iv) The record ":K" means that the dimension of the kernel of the code is K (the kernel is the set of all periods of the code, that is, the translations that send the code to itself).

(v) The record "/S" means that the code can be represented as a concatenated code in S different ways (partitions 9+4 of coordinates); for the rank-10 Hamming code this number is 13; for rank-12 codes it is 1; for rank-11 codes it is from 1 to 4.

(vi) The record "-A*K" means that the order of the automorphism group of the code is A*K, where A is the number of automorphisms that keep the coordinate partition 9+4, i.e., the current concatenation structure (if (v) is "/1", then K necessarily equals 1).

If the record (v) is "/1" or "/13", then there are no more records, because this guarantees that an equivalent code cannot be obtained by concatenation in another way (in particular, from a different partition).

If the record (v) is "/2", "/3", or "/4", then such codes were checked for isomorphism, and one more record is given:

(vii) This record contains a symbol "n" or "o" and some number. "n" means that the code is new, and it is not equivalent to any of the codes above, and the following number is its unique number (among the codes with symbol "n" in the record). "o" means that the code is not new and it is equivalent to one of the codes above, namely, the one with symbol "n" followed by the same number.

So (ATTENTION!!!), some lines of the database correspond to equivalent codes; to read only inequivalent codes, one should ignore the lines with symbol "o".

Additional material:

(3) The file contains the database file (xz-compressed to reduce the size) with the check matrices of the 64864800 different [15,11,3] Hamming codes, the database file with the 232 inequivalent pairs of disjoint Hamming codes, the file readme.txt with instructions, and some scripts.

(4) The file contains examples of ternary 1-perfect codes of length 13 (i.e., [13,59049,3]_3 codes) with rank 13 (i.e., full rank) and kernel dimension from 3 to 7. Each code is kept in a separate file where the codewords are listed in ternary-vector form.


The dataset corresponds to measurements of the control accuracy of a mixed reality application for a digital-twin-based crane.