Commit 2f3dc4c
Author: Martin
Restructuring of transport protocol code and examples for simplicity. Inclusion of TP class into utils.py
1 parent 37e75d4

File tree: 6 files changed, +283 -308 lines changed
2 binary files not shown.

examples/data-processing/README.md

Lines changed: 21 additions & 9 deletions
@@ -16,7 +16,6 @@ Download this folder, enter it, open your command prompt and run below:
 - `process_data.py`: List log files between dates, DBC decode them and perform various processing
 - `process_tp_data.py`: Example of how multiframe data can be handled incl. DBC decoding (Transport Protocol)
 - `utils.py`: Functions/classes used in the above scripts (note: Identical to utils.py from the dashboard-writer repo)
-- `utils_tp.py`: Functions/classes used for Transport Protocol handling
 
 ---

@@ -30,15 +29,31 @@ If you're using AWS S3, your endpoint would e.g. be `https://s3.us-east-2.amazon
 ---
 
 ### Regarding Transport Protocol example
-The example in `process_tp_data.py` should be seen as a very simplistic WIP TP implementation. It can be used as a starting point and will most likely need to be modified for individual use cases. We of course welcome any questions/feedback on this functionality.
+The example in `process_tp_data.py` should be seen as a very simplistic TP implementation. It can be used as a starting point and will most likely need to be modified for individual use cases. We of course welcome any questions/feedback on this functionality.
 
 The basic concept works as follows:
 
-1. You specify a list of 'response IDs', which are the CAN IDs with multiframe responses
-2. The raw data is filtered by the response IDs and the payloads of these frames are combined
+1. You specify the "type" of transport protocol: UDS (`uds`), J1939 (`j1939`) or NMEA 2000 Fast Packets (`nmea`)
+2. The raw data is filtered by the protocol-specific 'TP response IDs' and the payloads of these frames are combined
 3. The original response frames are then replaced by these re-constructed frames with payloads >8 bytes
-4. You can modify how the first/consecutive frames are interpreted (see the UDS and J1939 examples)
-5. The re-constructed data can be decoded using DBC files, optionally using multiplexing as in the sample UDS DBC files
+4. The re-constructed data can be decoded using DBC files, optionally using multiplexing as in the sample UDS DBC files
+
+#### Implementing TP processing in other scripts
+To use the Transport Protocol functionality in other scripts, you need to make minor modifications:
+
+1. Ensure that you import the `MultiFrameDecoder` class from `utils.py`
+2. Specify the type via the `tp_type` variable and ensure you include this in the `extract_phys` function
+
+See below example:
+
+```
+tp_type = "j1939"
+df_raw, device_id = proc.get_raw_data(log_file)
+tp = MultiFrameDecoder(tp_type)
+df_raw = tp.combine_tp_frames(df_raw)
+df_phys = proc.extract_phys(df_raw, tp_type=tp_type)
+```
 
 #### UDS example
 For UDS basics see the [Wikipedia article](https://en.wikipedia.org/wiki/Unified_Diagnostic_Services). The UDS example for device `17BD1DB7` shows UDS response data from a Hyundai Kona EV.
@@ -79,6 +94,3 @@ A UDS DBC file can use extended multiplexing to decode UDS signals, utilizing th
 
 The script merges the reconstructed UDS frames into the original data (removing the original entries of the response ID). The result is a new raw dataframe that can be processed as you would normally do (using a suitable DBC file). The above example has an associated DBC file, `tp_uds_hyundai_soc.dbc`, which lets you extract e.g. State of Charge.
 
------
-### Pending improvements
-- Improve the UDS/J1939 scripts to alternatively trigger the creation of a new combined frame once the payload exceeds the data length
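As background for readers new to ISO-TP: the frame reassembly that the README describes (combine a first frame and subsequent consecutive frames into one payload >8 bytes) can be sketched in a few lines. This is an illustrative standalone sketch with hypothetical frame payloads, not the `MultiFrameDecoder` internals:

```python
# Minimal ISO-TP (UDS) reassembly sketch with hypothetical payloads.
# First frame: high nibble 0x1, low 12 bits carry the total payload length.
# Consecutive frames: high nibble 0x2, byte 0 is a sequence counter.
frames = [
    [0x10, 0x0A, 0x62, 0x11, 0x01, 0xAA, 0xBB, 0xCC],  # first frame, declared length 0x00A = 10
    [0x21, 0xDD, 0xEE, 0xFF, 0x11, 0x22, 0x33, 0x44],  # consecutive frame #1
]

ff_length = ((frames[0][0] & 0x0F) << 8) | frames[0][1]  # 12-bit length from first frame
payload = list(frames[0][2:])                            # payload starts at byte 2 (ff_payload_start)
for frame in frames[1:]:
    payload.extend(frame[1:])                            # skip the sequence-counter byte

payload = payload[:ff_length]  # trim padding beyond the declared length
print(ff_length, payload)
```

The `MultiFrameDecoder` class in `utils.py` classifies frames the same way, via the mask/value pairs `FIRST_FRAME_MASK = 0xF0` / `FIRST_FRAME = 0x10` and `CONSEQ_FRAME_MASK = 0xF0` / `CONSEQ_FRAME = 0x20`.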
Lines changed: 25 additions & 40 deletions
@@ -1,20 +1,15 @@
-import mdf_iter, canedge_browser, can_decoder, os
-import pandas as pd
-from datetime import datetime, timezone
-from utils import setup_fs, load_dbc_files, ProcessData
-from utils_tp import MultiFrameDecoder, nmea_fast_packet_pgns
+import canedge_browser, os
+from utils import setup_fs, load_dbc_files, ProcessData, MultiFrameDecoder
 
-# ---------------------------------------------------
-# initialize DBC converter and file loader
-def process_tp_example(devices, dbc_path, res_id_list_hex, tp_type):
+
+def process_tp_example(devices, dbc_path, tp_type):
     fs = setup_fs(s3=False)
     db_list = load_dbc_files(dbc_paths)
     log_files = canedge_browser.get_log_files(fs, devices)
 
     proc = ProcessData(fs, db_list)
 
     for log_file in log_files:
-        # create output folder
         output_folder = "output" + log_file.replace(".MF4", "")
         if not os.path.exists(output_folder):
             os.makedirs(f"{output_folder}")
@@ -23,11 +18,11 @@ def process_tp_example(devices, dbc_path, res_id_list_hex, tp_type):
         df_raw.to_csv(f"{output_folder}/tp_raw_data.csv")
 
         # replace transport protocol sequences with single frames
-        tp = MultiFrameDecoder(df_raw, res_id_list_hex)
-        df_raw = tp.combine_tp_frames_by_type(tp_type)
+        tp = MultiFrameDecoder(tp_type)
+        df_raw = tp.combine_tp_frames(df_raw)
         df_raw.to_csv(f"{output_folder}/tp_raw_data_combined.csv")
 
-        # extract physical values as normal
+        # extract physical values as normal, but add tp_type
        df_phys = proc.extract_phys(df_raw, tp_type=tp_type)
         df_phys.to_csv(f"{output_folder}/tp_physical_values.csv")
 
@@ -37,37 +32,27 @@ def process_tp_example(devices, dbc_path, res_id_list_hex, tp_type):
 # ----------------------------------------
 # run different TP examples
 
-# # basic UDS example with multiple UDS PIDs on same CAN ID, e.g. 221100, 221101
-# devices = ["LOG_TP/0D2C6546"]
-# dbc_paths = [r"dbc_files/tp_uds_test.dbc"]
-# res_id_list_hex = ["0x7E9"]
-#
-# process_tp_example(devices, dbc_paths, res_id_list_hex, "uds")
-#
-# # UDS data from Hyundai Kona EV (SoC%)
+# UDS data from Hyundai Kona EV (SoC%)
 devices = ["LOG_TP/17BD1DB7"]
 dbc_paths = [r"dbc_files/tp_uds_hyundai_soc.dbc"]
-res_id_list_hex = ["0x7EC", "0x7BB"]
+process_tp_example(devices, dbc_paths, "uds")
 
-process_tp_example(devices, dbc_paths, res_id_list_hex, "uds")
+# J1939 TP data
+devices = ["LOG_TP/FCBF0606"]
+dbc_paths = [r"dbc_files/tp_j1939.dbc"]
+process_tp_example(devices, dbc_paths, "j1939")
 
-# # J1939 TP data
-# devices = ["LOG_TP/FCBF0606"]
-# res_id_list_hex = ["0x1CEBFF00"]
-# dbc_paths = [r"dbc_files/tp_j1939.dbc"]
-#
-# process_tp_example(devices, dbc_paths, res_id_list_hex, "j1939")
+# NMEA 2000 fast packet data (with GNSS position)
+devices = ["LOG_TP/94C49784"]
+dbc_paths = [r"dbc_files/tp_nmea_2.dbc"]
+process_tp_example(devices, dbc_paths, "nmea")
 
 # UDS data across two CAN channels
-# devices = ["LOG_TP/FE34E37D"]
-# dbc_paths = [r"dbc_files/tp_uds_test.dbc"]
-# res_id_list_hex = ["0x7EA"]
-#
-# process_tp_example(devices, dbc_paths, res_id_list_hex, "uds")
-
-# NMEA 2000 TP data (with GNSS position)
-# devices = ["LOG_TP/94C49784"]
-# res_id_list_hex = nmea_fast_packet_pgns
-# dbc_paths = [r"dbc_files/tp_nmea_2.dbc"]
-#
-# process_tp_example(devices, dbc_paths, res_id_list_hex, "nmea")
+devices = ["LOG_TP/FE34E37D"]
+dbc_paths = [r"dbc_files/tp_uds_test.dbc"]
+process_tp_example(devices, dbc_paths, "uds")
+
+# UDS example with multiple UDS PIDs on same CAN ID, e.g. 221100, 221101
+devices = ["LOG_TP/0D2C6546"]
+dbc_paths = [r"dbc_files/tp_uds_test.dbc"]
+process_tp_example(devices, dbc_paths, "uds")
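For context on the J1939 example above: in `combine_tp_frames`, the PGN of the transported data is read from bytes 5-7 of the BAM (TP.CM) first frame, little-endian, and converted back into a 29-bit CAN ID. A standalone sketch of that extraction, using a hypothetical BAM payload:

```python
# Hypothetical J1939 TP.CM (BAM) first frame: control byte 0x20, message size
# 100 bytes, 15 packets, and the data PGN in bytes 5-7 (little-endian).
payload = [0x20, 0x64, 0x00, 0x0F, 0xFF, 0xF1, 0xFE, 0x00]

# read the data PGN the same way MultiFrameDecoder.combine_tp_frames does
pgn_hex = "".join("{:02x}".format(x) for x in reversed(payload[5:8]))
pgn = int(pgn_hex, 16)

# rebuild a 29-bit CAN ID: priority 6, PGN, source address 254
can_id = (6 << 26) | (pgn << 8) | 254

print(hex(pgn), hex(can_id))  # 0xfef1 0x18fef1fe
```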

examples/data-processing/utils.py

Lines changed: 237 additions & 0 deletions
@@ -208,3 +208,240 @@ def print_log_summary(self, device_id, log_file, df_phys):
             "\n---------------",
             f"\nDevice: {device_id} | Log file: {log_file.split(device_id)[-1]} [Extracted {len(df_phys)} decoded frames]\nPeriod: {df_phys.index.min()} - {df_phys.index.max()}\n",
         )
+
+
+# -----------------------------------------------
+class MultiFrameDecoder:
+    """BETA class for handling transport protocol data. For each response ID, identify
+    sequences of subsequent frames and combine the relevant parts of the data payloads
+    into a single payload with the response ID as the ID. The original raw dataframe is
+    then cleansed of the original response ID sequence frames. Instead, the new concatenated
+    frames are inserted. Further, the class supports DBC decoding of the resulting modified raw data
+
+    :param tp_type: the class supports UDS ("uds"), NMEA 2000 Fast Packets ("nmea") and J1939 ("j1939")
+    :param df_raw: dataframe of raw CAN data from the mdf_iter module
+
+    SINGLE_FRAME_MASK: mask used in matching single frames
+    FIRST_FRAME_MASK: mask used in matching first frames
+    CONSEQ_FRAME_MASK: mask used in matching consecutive frames
+    SINGLE_FRAME: frame type reflecting a single frame response
+    FIRST_FRAME: frame type reflecting the first frame in a multi frame response
+    CONSEQ_FRAME: frame type reflecting a consecutive frame in a multi frame response
+    ff_payload_start: the combined payload will start at this byte in the FIRST_FRAME
+    bam_pgn_hex: this is used in J1939 and marks the initial BAM message ID in HEX
+    res_id_list_hex: TP 'response CAN IDs' to process. For nmea/j1939, these are provided by default
+    """
+
+    def __init__(self, tp_type=""):
+        frame_struct_uds = {
+            "SINGLE_FRAME_MASK": 0xF0,
+            "FIRST_FRAME_MASK": 0xF0,
+            "CONSEQ_FRAME_MASK": 0xF0,
+            "SINGLE_FRAME": 0x00,
+            "FIRST_FRAME": 0x10,
+            "CONSEQ_FRAME": 0x20,
+            "ff_payload_start": 2,
+            "bam_pgn_hex": "",
+            "res_id_list_hex": ["0x7E0", "0x7E9", "0x7EA", "0x7EB", "0x7EC", "0x7ED", "0x7EE", "0x7EF", "0x7EA", "0x7BB"],
+        }
+
+        frame_struct_j1939 = {
+            "SINGLE_FRAME_MASK": 0xFF,
+            "FIRST_FRAME_MASK": 0xFF,
+            "CONSEQ_FRAME_MASK": 0x00,
+            "SINGLE_FRAME": 0xFF,
+            "FIRST_FRAME": 0x20,
+            "CONSEQ_FRAME": 0x00,
+            "ff_payload_start": 8,
+            "bam_pgn_hex": "0xEC00",
+            "res_id_list_hex": ["0xEB00"],
+        }
+
+        frame_struct_nmea = {
+            "SINGLE_FRAME_MASK": 0xFF,
+            "FIRST_FRAME_MASK": 0x0F,
+            "CONSEQ_FRAME_MASK": 0x00,
+            "SINGLE_FRAME": 0xFF,
+            "FIRST_FRAME": 0x00,
+            "CONSEQ_FRAME": 0x00,
+            "ff_payload_start": 2,
+            "bam_pgn_hex": "",
+            "res_id_list_hex": [
+                "0xfed8",
+                "0x1f007",
+                "0x1f008",
+                "0x1f009",
+                "0x1f014",
+                "0x1f016",
+                "0x1f101",
+                "0x1f105",
+                "0x1f201",
+                "0x1f208",
+                "0x1f209",
+                "0x1f20a",
+                "0x1f20c",
+                "0x1f20f",
+                "0x1f210",
+                "0x1f212",
+                "0x1f513",
+                "0x1f805",
+                "0x1f80e",
+                "0x1f80f",
+                "0x1f810",
+                "0x1f811",
+                "0x1f814",
+                "0x1f815",
+                "0x1f904",
+                "0x1f905",
+                "0x1fa04",
+                "0x1fb02",
+                "0x1fb03",
+                "0x1fb04",
+                "0x1fb05",
+                "0x1fb11",
+                "0x1fb12",
+                "0x1fd10",
+                "0x1fe07",
+                "0x1fe12",
+                "0x1ff14",
+                "0x1ff15",
+            ],
+        }
+
+        if tp_type == "uds":
+            self.frame_struct = frame_struct_uds
+        elif tp_type == "j1939":
+            self.frame_struct = frame_struct_j1939
+        elif tp_type == "nmea":
+            self.frame_struct = frame_struct_nmea
+        else:
+            self.frame_struct = {}
+
+        self.tp_type = tp_type
+
+        return
+
+    def calculate_pgn(self, frame_id):
+        pgn = (frame_id & 0x03FFFF00) >> 8
+
+        pgn_f = (pgn & 0xFF00) >> 8
+        pgn_s = pgn & 0x00FF
+
+        if pgn_f < 240:
+            pgn &= 0xFFFFFF00
+
+        return pgn
+
+    def construct_new_tp_frame(self, base_frame, payload_concatenated, can_id):
+        new_frame = base_frame
+        new_frame.at["DataBytes"] = payload_concatenated
+        new_frame.at["DLC"] = 0
+        new_frame.at["DataLength"] = len(payload_concatenated)
+
+        if can_id:
+            new_frame.at["ID"] = can_id
+
+        return new_frame
+
+    def combine_tp_frames(self, df_raw):
+        import pandas as pd
+        import sys
+
+        bam_pgn_hex = self.frame_struct["bam_pgn_hex"]
+        res_id_list = [int(res_id, 16) for res_id in self.frame_struct["res_id_list_hex"]]
+
+        df_raw_combined = pd.DataFrame()
+
+        # use PGN matching for J1939 and NMEA
+        if self.tp_type == "nmea" or self.tp_type == "j1939":
+            df_raw_excl_tp = df_raw[~df_raw["ID"].apply(self.calculate_pgn).isin(res_id_list)]
+        else:
+            df_raw_excl_tp = df_raw[~df_raw["ID"].isin(res_id_list)]
+
+        df_raw_combined = df_raw_excl_tp
+
+        for channel, df_raw_channel in df_raw.groupby("BusChannel"):
+            for res_id in res_id_list:
+                # filter raw data for response ID and extract a 'base frame'
+                if bam_pgn_hex == "":
+                    bam_pgn = 0
+                else:
+                    bam_pgn = int(bam_pgn_hex, 16)
+
+                if self.tp_type == "nmea" or self.tp_type == "j1939":
+                    df_raw_filter = df_raw_channel[df_raw_channel["ID"].apply(self.calculate_pgn).isin([res_id, bam_pgn])]
+                else:
+                    df_raw_filter = df_raw_channel[df_raw_channel["ID"].isin([res_id])]
+
+                if df_raw_filter.empty:
+                    continue
+
+                base_frame = df_raw_filter.iloc[0]
+
+                frame_list = []
+                frame_timestamp_list = []
+                payload_concatenated = []
+                ff_length = 0xFFF
+                can_id = None
+                conseq_frame_prev = None
+
+                # iterate through rows in filtered dataframe
+                for index, row in df_raw_filter.iterrows():
+                    payload = row["DataBytes"]
+                    first_byte = payload[0]
+                    row_id = row["ID"]
+                    row_pgn = self.calculate_pgn(row_id)
+
+                    # check if first frame (either for UDS/NMEA or J1939 case)
+                    first_frame_test = (
+                        (first_byte & self.frame_struct["FIRST_FRAME_MASK"] == self.frame_struct["FIRST_FRAME"])
+                        & (bam_pgn_hex == "")
+                    ) or (self.tp_type == "j1939" and bam_pgn == row_pgn)
+
+                    # if single frame, save frame directly (excl. 1st byte)
+                    if first_byte & self.frame_struct["SINGLE_FRAME_MASK"] == self.frame_struct["SINGLE_FRAME"]:
+                        new_frame = self.construct_new_tp_frame(base_frame, payload, row_id)
+                        frame_list.append(new_frame.values.tolist())
+                        frame_timestamp_list.append(index)
+
+                    # if first frame, save info from prior multi frame response sequence,
+                    # then initialize a new sequence incl. the first frame payload
+                    elif first_frame_test:
+                        # create a new frame using information from previous iterations
+                        if len(payload_concatenated) >= ff_length:
+                            new_frame = self.construct_new_tp_frame(base_frame, payload_concatenated, can_id)
+
+                            frame_list.append(new_frame.values.tolist())
+                            frame_timestamp_list.append(frame_timestamp)
+
+                        # reset and start on next frame
+                        payload_concatenated = []
+                        conseq_frame_prev = None
+                        frame_timestamp = index
+
+                        # for J1939, extract PGN and convert to 29 bit CAN ID for use in baseframe
+                        if self.tp_type == "j1939":
+                            pgn_hex = "".join("{:02x}".format(x) for x in reversed(payload[5:8]))
+                            pgn = int(pgn_hex, 16)
+                            can_id = (6 << 26) | (pgn << 8) | 254
+
+                        ff_length = (payload[0] & 0x0F) << 8 | payload[1]
+
+                        for byte in payload[self.frame_struct["ff_payload_start"] :]:
+                            payload_concatenated.append(byte)
+
+                    # if consecutive frame, extend payload with payload excl. 1st byte
+                    elif first_byte & self.frame_struct["CONSEQ_FRAME_MASK"] == self.frame_struct["CONSEQ_FRAME"]:
+                        if (conseq_frame_prev == None) or ((first_byte - conseq_frame_prev) == 1):
+                            conseq_frame_prev = first_byte
+                            for byte in payload[1:]:
+                                payload_concatenated.append(byte)
+
+                df_raw_tp = pd.DataFrame(frame_list, columns=base_frame.index, index=frame_timestamp_list)
+                df_raw_combined = df_raw_combined.append(df_raw_tp)
+
+        df_raw_combined.index.name = "TimeStamp"
+        df_raw_combined = df_raw_combined.sort_index()
+
+        return df_raw_combined
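As a quick sanity check of the PGN masking in `calculate_pgn`, the logic can be exercised standalone. The example CAN IDs are chosen for illustration: the J1939 TP.DT ID `0x1CEBFF00` (matching the default `res_id_list_hex` of `["0xEB00"]`) and a PDU2-format ID:

```python
def calculate_pgn(frame_id):
    # extract the 18-bit PGN field from bits 8-25 of a 29-bit J1939 CAN ID
    pgn = (frame_id & 0x03FFFF00) >> 8
    pgn_f = (pgn & 0xFF00) >> 8
    # PDU1 format (PF < 240): the PS byte is a destination address, not part of the PGN
    if pgn_f < 240:
        pgn &= 0xFFFFFF00
    return pgn

print(hex(calculate_pgn(0x1CEBFF00)))  # 0xeb00 (TP.DT, PDU1 format)
print(hex(calculate_pgn(0x18FEF100)))  # 0xfef1 (PDU2 format, PS byte kept)
```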
