Feed All Stories

Dec 16 2021

micron added a comment to T52: PPPoE: per-interface service-name.

One more thing: maybe one or two more lines need to be added.

Dec 16 2021, 11:58 · accel-ppp 1.x

Dec 15 2021

Dimka88 created T55: No buffer space available.
Dec 15 2021, 12:19 · accel-ppp 1.x

Dec 14 2021

Dimka88 closed T28: Fix the documentation in the radius section as Resolved.
Dec 14 2021, 18:07

Dec 12 2021

Dimka88 renamed T27: Wrong filling radattr files [ppp-compat] from unfinished notes in [ppp-compat] to Wrong filling radattr files [ppp-compat].
Dec 12 2021, 17:46 · accel-ppp 1.x
Dimka88 closed T27: Wrong filling radattr files [ppp-compat] as Resolved.

Looks like it is already fixed; tested on 1.12.0-155-gda51911.

Dec 12 2021, 17:45 · accel-ppp 1.x
Dimka88 added a comment to T21: Add ipset/nft sets support.

As this is a usable feature for multiple connection types, I propose moving it to a separate section or to the [common] section.

Dec 12 2021, 16:44 · accel-ppp 1.x
Dimka88 renamed T21: Add ipset/nft sets support from Add ipset/nff sets support to Add ipset/nft sets support.
Dec 12 2021, 16:42 · accel-ppp 1.x
Dimka88 changed the status of T39: 1.12.0-107 unplanned restart when running IPoE + QinQ from Open to Needs testing.
Dec 12 2021, 16:41 · accel-ppp 1.x
Dimka88 added a comment to T39: 1.12.0-107 unplanned restart when running IPoE + QinQ.

Hello @_Maks_, I think this should already be fixed; try installing the latest version from the master branch.

Dec 12 2021, 16:40 · accel-ppp 1.x
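
For reference, installing from the master branch typically looks like the sketch below; the cmake options shown are only the commonly used ones (an assumption here) and should be checked against the project README:

git clone https://github.com/accel-ppp/accel-ppp.git
mkdir accel-ppp/build && cd accel-ppp/build
cmake -DCMAKE_INSTALL_PREFIX=/usr -DRADIUS=TRUE -DBUILD_IPOE_DRIVER=TRUE -DBUILD_VLAN_MON_DRIVER=TRUE -DKDIR=/usr/src/linux-headers-$(uname -r) ..
make && sudo make install

Then restart the accel-pppd service and re-test the IPoE + QinQ setup.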

Dec 11 2021

micron added a comment to T45: per interface radius configuration.

Hi @amindomao,
vlan_mon does not work with multiple accel instances; it may need to be fixed.

Dec 11 2021, 00:52 · accel-ppp 1.x
Dimka88 added a comment to T54: accel-cmd display wrong counters for sessions in start state.

Patch: https://github.com/DmitriyEshenko/accel-ppp/commit/ef13d4b39a49a55dda3035f6c2447691d506acd6
I am going to create a PR after the VRF patch is merged.

Dec 11 2021, 00:24 · accel-ppp 1.x
Dimka88 changed the status of T54: accel-cmd display wrong counters for sessions in start state from Confirmed to In progress.
Dec 11 2021, 00:07 · accel-ppp 1.x
Dimka88 changed the status of T54: accel-cmd display wrong counters for sessions in start state from Open to Confirmed.
Dec 11 2021, 00:07 · accel-ppp 1.x
Dimka88 closed T40: Show version of running accel-pppd from cli or telnet as Resolved.
Dec 11 2021, 00:00 · accel-ppp 1.x
Dimka88 closed T42: Add possibility to build accel-ppp Debian 11 package as Resolved.
Dec 11 2021, 00:00 · accel-ppp 1.x

Dec 10 2021

Dimka88 closed T53: accel-cmd shutdown does not work properly as Resolved.
Dec 10 2021, 21:43 · accel-ppp 1.x
Dimka88 renamed T53: accel-cmd shutdown does not work properly from Shutdown soft does not work properly to accel-cmd shutdown does not work properly.
Dec 10 2021, 19:54 · accel-ppp 1.x
Dimka88 claimed T53: accel-cmd shutdown does not work properly.
Dec 10 2021, 19:42 · accel-ppp 1.x
Dimka88 changed the status of T53: accel-cmd shutdown does not work properly from Open to Confirmed.
Dec 10 2021, 19:41 · accel-ppp 1.x

Nov 26 2021

Dimka88 closed T43: Segmentation fault (fail_log) as Resolved.
Nov 26 2021, 09:18 · accel-ppp 1.x

Nov 24 2021

Dimka88 created T52: PPPoE: per-interface service-name.
Nov 24 2021, 08:48 · accel-ppp 1.x

Nov 15 2021

vajinadaraltma updated vajinadaraltma.
Nov 15 2021, 07:22

Nov 14 2021

svlobanov added a comment to T51: When a PPPoE session is terminated by LCP timeout, the Acct-* counters always contain the values from the last Interim-Update.

Please enable the maximum debug level and provide the debug log related to the session, from the beginning to the end.

Nov 14 2021, 21:43 · accel-ppp 1.x
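
For reference, "maximum debug level" in accel-ppp usually amounts to something like the sketch below, assuming the stock log_file module; the option names are an assumption and should be checked against the documentation:

[log]
log-file=/var/log/accel-ppp/accel-ppp.log
copy=1
level=5

[pppoe]
verbose=1

level=5 enables debug-level messages, and verbose=1 logs the per-session protocol exchange, which is what is being asked for here.
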
Dimka88 changed the status of T51: When a PPPoE session is terminated by LCP timeout, the Acct-* counters always contain the values from the last Interim-Update from Open to Confirmed.
Nov 14 2021, 12:49 · accel-ppp 1.x
disappointed renamed T51: When a PPPoE session is terminated by LCP timeout, the Acct-* counters always contain the values from the last Interim-Update from Zero Acct-* counters in the Stop packet when a PPPoE session is dropped by LCP timeout before the first Interim-Update arrives to When a PPPoE session is terminated by LCP timeout, the Acct-* counters always contain the values from the last Interim-Update.
Nov 14 2021, 12:49 · accel-ppp 1.x
Dimka88 updated subscribers of T51: When a PPPoE session is terminated by LCP timeout, the Acct-* counters always contain the values from the last Interim-Update.

Reply from @svlobanov in the accel-ppp Telegram chat.

Nov 14 2021, 12:49 · accel-ppp 1.x
disappointed updated the task description for T51: When a PPPoE session is terminated by LCP timeout, the Acct-* counters always contain the values from the last Interim-Update.
Nov 14 2021, 12:30 · accel-ppp 1.x
disappointed updated the task description for T51: When a PPPoE session is terminated by LCP timeout, the Acct-* counters always contain the values from the last Interim-Update.
Nov 14 2021, 12:20 · accel-ppp 1.x
disappointed updated the task description for T51: When a PPPoE session is terminated by LCP timeout, the Acct-* counters always contain the values from the last Interim-Update.
Nov 14 2021, 12:14 · accel-ppp 1.x
disappointed updated the task description for T51: When a PPPoE session is terminated by LCP timeout, the Acct-* counters always contain the values from the last Interim-Update.
Nov 14 2021, 12:13 · accel-ppp 1.x

Nov 13 2021

disappointed updated the task description for T51: When a PPPoE session is terminated by LCP timeout, the Acct-* counters always contain the values from the last Interim-Update.
Nov 13 2021, 23:11 · accel-ppp 1.x
disappointed updated the task description for T51: When a PPPoE session is terminated by LCP timeout, the Acct-* counters always contain the values from the last Interim-Update.
Nov 13 2021, 23:11 · accel-ppp 1.x
disappointed updated the task description for T51: When a PPPoE session is terminated by LCP timeout, the Acct-* counters always contain the values from the last Interim-Update.
Nov 13 2021, 23:10 · accel-ppp 1.x
disappointed updated the task description for T51: When a PPPoE session is terminated by LCP timeout, the Acct-* counters always contain the values from the last Interim-Update.
Nov 13 2021, 23:10 · accel-ppp 1.x
disappointed triaged T51: When a PPPoE session is terminated by LCP timeout, the Acct-* counters always contain the values from the last Interim-Update as Normal priority.
Nov 13 2021, 23:05 · accel-ppp 1.x

Nov 3 2021

svlobanov added a comment to T33: Current config of a running accel.

The proposed solution doesn't solve the described issue, because 'show loaded-config' prints the last loaded config regardless of what is actually applied. Even if accel doesn't apply an option from the loaded config, the 'show loaded-config' command will still print it.

Nov 3 2021, 11:45 · accel-ppp 1.x
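
If the proposed branch is used, the command would presumably be invoked through the normal CLI, along the lines of:

accel-cmd show loaded-config

with the caveat above: it prints the last loaded file, not the options the running daemon actually applied.
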
antonmayko added a comment to T33: Current config of a running accel.

Good afternoon, Dmitriy. We discussed this in the Telegram group. While tuning accel there are parameters that have to be changed "on the fly" via reload. Then, to see whether there is any effect, you have to wait a day or two, and this can go on for a week or two. And when you change the same parameters several times without restarting accel (via reload), it is impossible to remember everything. On top of that, not every parameter can be changed "on the fly". So it would be convenient to see which parameters accel is running with at the moment.
Other people in the Telegram chat also supported this idea, and you gave me access here so I could create the task.
P.S. Even now not everyone knows by heart which parameters can be changed on the fly and which cannot. With this feature those questions disappear.

Nov 3 2021, 08:21 · accel-ppp 1.x

Nov 2 2021

Dimka88 assigned T33: Current config of a running accel to svlobanov.

It looks like it is working, but I'm not sure this will really be a useful thing. @antonmayko, could you describe why you need this?

Nov 2 2021, 18:06 · accel-ppp 1.x
svlobanov added a comment to T33: Current config of a running accel.

please try this branch https://github.com/svlobanov/accel-ppp/tree/show-loaded-config

Nov 2 2021, 09:54 · accel-ppp 1.x

Oct 29 2021

Dimka88 closed T50: Incorrect proxy-arp behavior as Resolved.
Oct 29 2021, 21:31 · accel-ppp 1.x
Dimka88 closed T49: PAP/CHAP Per vlan as Invalid.
Oct 29 2021, 21:29 · accel-ppp 1.x

Oct 20 2021

svlobanov updated subscribers of T50: Incorrect proxy-arp behavior.

@themiron please review PR https://github.com/accel-ppp/accel-ppp/pull/24

Oct 20 2021, 19:07 · accel-ppp 1.x
ProLan added a comment to T50: Incorrect proxy-arp behavior.

@svlobanov: the patch is in production and working well.

Oct 20 2021, 08:20 · accel-ppp 1.x

Oct 18 2021

svlobanov added a comment to T50: Incorrect proxy-arp behavior.

@ProLan, please try to apply the patch below:

Oct 18 2021, 23:19 · accel-ppp 1.x
Dimka88 changed the status of T50: Incorrect proxy-arp behavior from Open to Confirmed.
Oct 18 2021, 22:49 · accel-ppp 1.x
svlobanov added a comment to T50: Incorrect proxy-arp behavior.

I can confirm that the Linux kernel learns from arp.src.hw_mac, not from eth.src. This means the current behaviour for proxy_arp=2 is useless. If there is no L2 isolation, proxy_arp is not required; in case of isolation, Linux clients will not be able to communicate with each other.

Oct 18 2021, 22:47 · accel-ppp 1.x
ProLan created T50: Incorrect proxy-arp behavior.
Oct 18 2021, 19:40 · accel-ppp 1.x

Oct 12 2021

micron added a comment to T49: PAP/CHAP Per vlan.

you are right

Oct 12 2021, 17:45 · accel-ppp 1.x
Dimka88 added a comment to T49: PAP/CHAP Per vlan.

Why do you need such a strange setup?
Try using a couple of daemons, each configured with different interfaces.

Oct 12 2021, 17:33 · accel-ppp 1.x
micron created T49: PAP/CHAP Per vlan.
Oct 12 2021, 15:13 · accel-ppp 1.x

Oct 4 2021

lazyxero triaged T48: stack-buffer-underflow in reload_exec as High priority.
Oct 4 2021, 14:01 · accel-ppp 1.x
micron updated subscribers of T47: Segmentation fault with latest master.
Oct 4 2021, 12:24 · accel-ppp 1.x

Oct 3 2021

micron claimed T47: Segmentation fault with latest master.

I sent a tcpdump capture taken at the time of the problem to @Dimka88.

Oct 3 2021, 21:46 · accel-ppp 1.x

Oct 2 2021

micron triaged T47: Segmentation fault with latest master as High priority.
Oct 2 2021, 16:57 · accel-ppp 1.x

Sep 21 2021

Dimka88 closed T46: Broken per-user-dir log section as Resolved.
Sep 21 2021, 08:02 · accel-ppp 1.x

Sep 4 2021

Dimka88 claimed T46: Broken per-user-dir log section.
Sep 4 2021, 01:45 · accel-ppp 1.x
Dimka88 changed the status of T46: Broken per-user-dir log section from Open to Needs testing.

@makselpr, please try this patch: https://github.com/DmitriyEshenko/accel-ppp/commit/2117fb22a60973e773ccf97c17cd2b48e78f6d76
If all is OK, we can add it to the master branch.

Sep 4 2021, 01:44 · accel-ppp 1.x
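
One way to test such a commit before it reaches master is to apply it as a patch; a sketch relying on GitHub's .patch suffix for commit URLs:

cd accel-ppp
curl -L https://github.com/DmitriyEshenko/accel-ppp/commit/2117fb22a60973e773ccf97c17cd2b48e78f6d76.patch | git am
# rebuild and reinstall as usual, then re-check the [log] per-user-dir behaviour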

Aug 26 2021

makselpr renamed T46: Broken per-user-dir log section (fixing a typo in the original Russian title).
Aug 26 2021, 12:31 · accel-ppp 1.x
makselpr triaged T46: Broken per-user-dir log section as Normal priority.
Aug 26 2021, 12:30 · accel-ppp 1.x

Jul 9 2021

amindomao added a comment to T45: per interface radius configuration.

It might be, or it will be? :)

Jul 9 2021, 17:24 · accel-ppp 1.x
svlobanov added a comment to T45: per interface radius configuration.

It might be an issue with vlan_mon, but the ipoe module will work with multiple accel instances.

Jul 9 2021, 17:13 · accel-ppp 1.x
amindomao added a comment to T45: per interface radius configuration.

How will accel-ppp communicate with kernel modules like vlan_mon or ipoe? Will multiple instances use the same modules?

Jul 9 2021, 17:03 · accel-ppp 1.x
Dimka88 added a comment to T45: per interface radius configuration.

I agree with @svlobanov; run multiple instances in this case.

Jul 9 2021, 16:30 · accel-ppp 1.x
svlobanov added a comment to T45: per interface radius configuration.

You can simply run multiple instances of accel-ppp.

Jul 9 2021, 16:01 · accel-ppp 1.x
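
A minimal sketch of the multi-instance approach suggested here, assuming one config file per instance; the file names and flags follow the usual init-script invocation and are illustrative:

accel-pppd -d -c /etc/accel-ppp/bras1.conf -p /run/accel-pppd-bras1.pid
accel-pppd -d -c /etc/accel-ppp/bras2.conf -p /run/accel-pppd-bras2.pid

Each config would list its own interfaces and its own [radius] settings, plus a distinct [cli] address so accel-cmd can reach each instance separately.
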
amindomao created T45: per interface radius configuration.
Jul 9 2021, 15:57 · accel-ppp 1.x

Jul 8 2021

pilotsanya updated the task description for T44: debian 11 mode=L3 ipoe_nl_cmd_modify.
Jul 8 2021, 13:29 · accel-ppp 1.x
pilotsanya updated the task description for T44: debian 11 mode=L3 ipoe_nl_cmd_modify.
Jul 8 2021, 13:28 · accel-ppp 1.x
pilotsanya created T44: debian 11 mode=L3 ipoe_nl_cmd_modify.
Jul 8 2021, 13:23 · accel-ppp 1.x

Jun 30 2021

Dimka88 changed the status of T43: Segmentation fault (fail_log) from In progress to Needs testing.

PR https://github.com/accel-ppp/accel-ppp/pull/20

Jun 30 2021, 21:38 · accel-ppp 1.x
Dimka88 claimed T43: Segmentation fault (fail_log).
Jun 30 2021, 00:17 · accel-ppp 1.x
Dimka88 changed the status of T43: Segmentation fault (fail_log) from Open to In progress.
Jun 30 2021, 00:17 · accel-ppp 1.x

Jun 29 2021

Dimka88 changed the status of T42: Add possibility to build accel-ppp Debian 11 package from Open to In progress.
Jun 29 2021, 23:56 · accel-ppp 1.x
Dimka88 created T42: Add possibility to build accel-ppp Debian 11 package.
Jun 29 2021, 23:56 · accel-ppp 1.x
Dimka88 closed T23: accel-pppd 1.12.0-72-ged7b287 crashes when 2.5k ipoe users come up simultaneously as Resolved.
Jun 29 2021, 23:51 · accel-ppp 1.x

Jun 16 2021

Dimka88 closed T41: IPoE address range shifting as Resolved.

Fixed by https://github.com/accel-ppp/accel-ppp/commit/7f90e753fcc63b6581e8ca96ea2da693a8de495c

Jun 16 2021, 11:04 · accel-ppp 1.x

Jun 11 2021

Dimka88 triaged T41: IPoE address range shifting as Normal priority.
Jun 11 2021, 19:25 · accel-ppp 1.x

May 8 2021

Dimka88 closed T15: DHCP NAK as Resolved.

Successfully tested. Fixed by https://github.com/accel-ppp/accel-ppp/commit/b1ca6157c6fcd93966e115f113a032a42f77843d

May 8 2021, 11:42 · accel-ppp 1.x

Apr 28 2021

evgen added a comment to T40: Show version of running accel-pppd from cli or telnet.

Created pull request
https://github.com/accel-ppp/accel-ppp/pull/15

Apr 28 2021, 18:20 · accel-ppp 1.x
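
Assuming the pull request exposes this as a CLI command named after the task title (the exact command name is an assumption here), usage would look something like:

accel-cmd show version

over the telnet/tcp CLI configured in the [cli] section.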

Apr 27 2021

Dimka88 changed the status of T40: Show version of running accel-pppd from cli or telnet from Open to In progress.
Apr 27 2021, 13:27 · accel-ppp 1.x
evgen created T40: Show version of running accel-pppd from cli or telnet.
Apr 27 2021, 13:22 · accel-ppp 1.x

Mar 22 2021

anphsw added a comment to T38: Crash on accel-cmd reload without config changes.
==16746== Helgrind, a thread error detector
==16746== Copyright (C) 2007-2017, and GNU GPL'd, by OpenWorks LLP et al.
==16746== Using Valgrind-3.16.1 and LibVEX; rerun with -h for copyright info
==16746== Command: /tmp/debug/usr/sbin/accel-pppd -c /etc/accel-ppp/accel-ppp.conf
==16746== 
***** started
***** start connecting pppoe
==16746== ---Thread-Announcement------------------------------------------
==16746== 
==16746== Thread #6 was created
==16746==    at 0x4C0D915: clone (in /tmp/debug/lib/libc-2.22.so)
==16746==    by 0x486129D: create_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4862AED: pthread_create@@GLIBC_2.1 (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4833DA0: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x48633B6: pthread_create@GLIBC_2.0 (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4833DA0: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x484976D: create_thread (triton.c:320)
==16746==    by 0x484ABC3: triton_run (triton.c:744)
==16746==    by 0x1385E0: main (main.c:407)
==16746== 
==16746== ---Thread-Announcement------------------------------------------
==16746== 
==16746== Thread #7 was created
==16746==    at 0x4C0D915: clone (in /tmp/debug/lib/libc-2.22.so)
==16746==    by 0x486129D: create_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4862AED: pthread_create@@GLIBC_2.1 (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4833DA0: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x48633B6: pthread_create@GLIBC_2.0 (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4833DA0: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x484716B: md_run (md.c:48)
==16746==    by 0x484AC55: triton_run (triton.c:757)
==16746==    by 0x1385E0: main (main.c:407)
==16746== 
==16746== ---Thread-Announcement------------------------------------------
==16746== 
==16746== Thread #1 is the program's root thread
==16746== 
==16746== ----------------------------------------------------------------
==16746== 
==16746==  Lock at 0x4851260 was first observed
==16746==    at 0x4830619: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x484A87C: triton_init (triton.c:666)
==16746==    by 0x1384E2: main (main.c:360)
==16746==  Address 0x4851260 is 0 bytes inside data symbol "threads_lock"
==16746== 
==16746== Possible data race during write of size 4 at 0x4D497F8 by thread #6
==16746== Locks held: 1, at address 0x4851260
==16746==    at 0x4849271: triton_thread (triton.c:201)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746== 
==16746== This conflicts with a previous read of size 4 by thread #7
==16746== Locks held: none
==16746==    at 0x4847455: md_thread (md.c:98)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746==  Address 0x4d497f8 is 56 bytes inside a block of size 156 alloc'd
==16746==    at 0x482BBB9: malloc (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x10CF6E: _md_malloc (memdebug.c:47)
==16746==    by 0x10D096: md_malloc (memdebug.c:68)
==16746==    by 0x484CC89: md_mempool_alloc (mempool.c:201)
==16746==    by 0x48499D8: triton_context_register (triton.c:364)
==16746==    by 0x5111F08: init_net (disc.c:95)
==16746==    by 0x5112198: pppoe_disc_start (disc.c:149)
==16746==    by 0x510D2DB: __pppoe_server_start (pppoe.c:1528)
==16746==    by 0x510CC67: pppoe_server_start (pppoe.c:1406)
==16746==    by 0x510F0A2: load_interfaces (pppoe.c:2075)
==16746==    by 0x510F225: pppoe_init (pppoe.c:2107)
==16746==    by 0x484A9EA: triton_load_modules (triton.c:704)
==16746==  Block was alloc'd by thread #1
==16746== 
==16746== ----------------------------------------------------------------
==16746== 
==16746== Possible data race during read of size 4 at 0x4D6E2E0 by thread #7
==16746== Locks held: none
==16746==    at 0x484734D: md_thread (md.c:82)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746== 
==16746== This conflicts with a previous write of size 4 by thread #6
==16746== Locks held: none
==16746==    at 0x483886B: memset (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x4847588: triton_md_register_handler (md.c:119)
==16746==    by 0x1120CF: establish_ppp (ppp.c:124)
==16746==    by 0x5109650: connect_channel (pppoe.c:463)
==16746==    by 0x4849544: ctx_thread (triton.c:273)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==  Address 0x4d6e2e0 is 80 bytes inside a block of size 92 alloc'd
==16746==    at 0x482BBB9: malloc (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x10CF6E: _md_malloc (memdebug.c:47)
==16746==    by 0x10D096: md_malloc (memdebug.c:68)
==16746==    by 0x484CC89: md_mempool_alloc (mempool.c:201)
==16746==    by 0x4847573: triton_md_register_handler (md.c:118)
==16746==    by 0x1120CF: establish_ppp (ppp.c:124)
==16746==    by 0x5109650: connect_channel (pppoe.c:463)
==16746==    by 0x4849544: ctx_thread (triton.c:273)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==  Block was alloc'd by thread #6
==16746== 
==16746== ----------------------------------------------------------------
==16746== 
==16746== Possible data race during read of size 4 at 0x4D6E2C4 by thread #7
==16746== Locks held: none
==16746==    at 0x484735F: md_thread (md.c:84)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746== 
==16746== This conflicts with a previous write of size 4 by thread #6
==16746== Locks held: none
==16746==    at 0x4838863: memset (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x4847588: triton_md_register_handler (md.c:119)
==16746==    by 0x1120CF: establish_ppp (ppp.c:124)
==16746==    by 0x5109650: connect_channel (pppoe.c:463)
==16746==    by 0x4849544: ctx_thread (triton.c:273)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==  Address 0x4d6e2c4 is 52 bytes inside a block of size 92 alloc'd
==16746==    at 0x482BBB9: malloc (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x10CF6E: _md_malloc (memdebug.c:47)
==16746==    by 0x10D096: md_malloc (memdebug.c:68)
==16746==    by 0x484CC89: md_mempool_alloc (mempool.c:201)
==16746==    by 0x4847573: triton_md_register_handler (md.c:118)
==16746==    by 0x1120CF: establish_ppp (ppp.c:124)
==16746==    by 0x5109650: connect_channel (pppoe.c:463)
==16746==    by 0x4849544: ctx_thread (triton.c:273)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==  Block was alloc'd by thread #6
==16746== 
==16746== ---Thread-Announcement------------------------------------------
==16746== 
==16746== Thread #5 was created
==16746==    at 0x4C0D915: clone (in /tmp/debug/lib/libc-2.22.so)
==16746==    by 0x486129D: create_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4862AED: pthread_create@@GLIBC_2.1 (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4833DA0: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x48633B6: pthread_create@GLIBC_2.0 (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4833DA0: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x484976D: create_thread (triton.c:320)
==16746==    by 0x484ABC3: triton_run (triton.c:744)
==16746==    by 0x1385E0: main (main.c:407)
==16746== 
==16746== ----------------------------------------------------------------
==16746== 
==16746==  Lock at 0x4D4AE5C was first observed
==16746==    at 0x4834228: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x513BE14: __add_server (serv.c:555)
==16746==    by 0x513CC3C: add_server (serv.c:862)
==16746==    by 0x513CE6B: load_config (serv.c:906)
==16746==    by 0x513D11C: init (serv.c:955)
==16746==    by 0x484A9EA: triton_load_modules (triton.c:704)
==16746==    by 0x1385B8: main (main.c:402)
==16746==  Address 0x4d4ae5c is 156 bytes inside a block of size 268 alloc'd
==16746==    at 0x482BBB9: malloc (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x10CF6E: _md_malloc (memdebug.c:47)
==16746==    by 0x10D096: md_malloc (memdebug.c:68)
==16746==    by 0x513CBB7: add_server (serv.c:847)
==16746==    by 0x513CE6B: load_config (serv.c:906)
==16746==    by 0x513D11C: init (serv.c:955)
==16746==    by 0x484A9EA: triton_load_modules (triton.c:704)
==16746==    by 0x1385B8: main (main.c:402)
==16746==  Block was alloc'd by thread #1
==16746== 
==16746== Possible data race during write of size 4 at 0x4D4A518 by thread #5
==16746== Locks held: 1, at address 0x4D4AE5C
==16746==    at 0x1361DC: __list_del (list.h:86)
==16746==    by 0x13622B: list_del (list.h:96)
==16746==    by 0x136D36: _log_free_msg (log.c:276)
==16746==    by 0x1365CF: do_log (log.c:113)
==16746==    by 0x136B2F: log_ppp_debug (log.c:230)
==16746==    by 0x513AD82: rad_server_req_exit (serv.c:259)
==16746==    by 0x5134512: rad_req_read (req.c:423)
==16746==    by 0x4849484: ctx_thread (triton.c:252)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746== 
==16746== This conflicts with a previous read of size 4 by thread #6
==16746== Locks held: none
==16746==    at 0x5102D91: unpack_msg (log_syslog.c:42)
==16746==    by 0x5102FE1: do_syslog (log_syslog.c:81)
==16746==    by 0x4849544: ctx_thread (triton.c:273)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746==  Address 0x4d4a518 is 56 bytes inside a block of size 76 alloc'd
==16746==    at 0x482BBB9: malloc (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x10CF6E: _md_malloc (memdebug.c:47)
==16746==    by 0x10D096: md_malloc (memdebug.c:68)
==16746==    by 0x484CC89: md_mempool_alloc (mempool.c:201)
==16746==    by 0x1363EB: do_log (log.c:87)
==16746==    by 0x136B2F: log_ppp_debug (log.c:230)
==16746==    by 0x513AD82: rad_server_req_exit (serv.c:259)
==16746==    by 0x5134512: rad_req_read (req.c:423)
==16746==    by 0x4849484: ctx_thread (triton.c:252)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==  Block was alloc'd by thread #5
==16746== 
==16746== ----------------------------------------------------------------
==16746== 
==16746==  Lock at 0x4D4AE5C was first observed
==16746==    at 0x4834228: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x513BE14: __add_server (serv.c:555)
==16746==    by 0x513CC3C: add_server (serv.c:862)
==16746==    by 0x513CE6B: load_config (serv.c:906)
==16746==    by 0x513D11C: init (serv.c:955)
==16746==    by 0x484A9EA: triton_load_modules (triton.c:704)
==16746==    by 0x1385B8: main (main.c:402)
==16746==  Address 0x4d4ae5c is 156 bytes inside a block of size 268 alloc'd
==16746==    at 0x482BBB9: malloc (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x10CF6E: _md_malloc (memdebug.c:47)
==16746==    by 0x10D096: md_malloc (memdebug.c:68)
==16746==    by 0x513CBB7: add_server (serv.c:847)
==16746==    by 0x513CE6B: load_config (serv.c:906)
==16746==    by 0x513D11C: init (serv.c:955)
==16746==    by 0x484A9EA: triton_load_modules (triton.c:704)
==16746==    by 0x1385B8: main (main.c:402)
==16746==  Block was alloc'd by thread #1
==16746== 
==16746== Possible data race during write of size 4 at 0x4D4A24C by thread #5
==16746== Locks held: 1, at address 0x4D4AE5C
==16746==    at 0x136232: list_del (list.h:97)
==16746==    by 0x136D36: _log_free_msg (log.c:276)
==16746==    by 0x1365CF: do_log (log.c:113)
==16746==    by 0x136B2F: log_ppp_debug (log.c:230)
==16746==    by 0x513AD82: rad_server_req_exit (serv.c:259)
==16746==    by 0x5134512: rad_req_read (req.c:423)
==16746==    by 0x4849484: ctx_thread (triton.c:252)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746== 
==16746== This conflicts with a previous read of size 4 by thread #6
==16746== Locks held: none
==16746==    at 0x5102D99: unpack_msg (log_syslog.c:42)
==16746==    by 0x5102FE1: do_syslog (log_syslog.c:81)
==16746==    by 0x4849544: ctx_thread (triton.c:273)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746==  Address 0x4d4a24c is 36 bytes inside a block of size 185 alloc'd
==16746==    at 0x482BBB9: malloc (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x10CF6E: _md_malloc (memdebug.c:47)
==16746==    by 0x10D096: md_malloc (memdebug.c:68)
==16746==    by 0x484CC89: md_mempool_alloc (mempool.c:201)
==16746==    by 0x136FF6: add_msg (log.c:332)
==16746==    by 0x1364C7: do_log (log.c:96)
==16746==    by 0x136B2F: log_ppp_debug (log.c:230)
==16746==    by 0x513AD82: rad_server_req_exit (serv.c:259)
==16746==    by 0x5134512: rad_req_read (req.c:423)
==16746==    by 0x4849484: ctx_thread (triton.c:252)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==  Block was alloc'd by thread #5
==16746== 
==16746== ----------------------------------------------------------------
==16746== 
==16746== Possible data race during write of size 4 at 0x4D4A24C by thread #5
==16746== Locks held: none
==16746==    at 0x136232: list_del (list.h:97)
==16746==    by 0x136D36: _log_free_msg (log.c:276)
==16746==    by 0x1365CF: do_log (log.c:113)
==16746==    by 0x136A3D: log_ppp_info1 (log.c:210)
==16746==    by 0x51358DF: rad_packet_print (packet.c:412)
==16746==    by 0x5134563: rad_req_read (req.c:429)
==16746==    by 0x4849484: ctx_thread (triton.c:252)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746== 
==16746== This conflicts with a previous read of size 4 by thread #6
==16746== Locks held: none
==16746==    at 0x5102D99: unpack_msg (log_syslog.c:42)
==16746==    by 0x5102FE1: do_syslog (log_syslog.c:81)
==16746==    by 0x4849544: ctx_thread (triton.c:273)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746==  Address 0x4d4a24c is 36 bytes inside a block of size 185 alloc'd
==16746==    at 0x482BBB9: malloc (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x10CF6E: _md_malloc (memdebug.c:47)
==16746==    by 0x10D096: md_malloc (memdebug.c:68)
==16746==    by 0x484CC89: md_mempool_alloc (mempool.c:201)
==16746==    by 0x136FF6: add_msg (log.c:332)
==16746==    by 0x1364C7: do_log (log.c:96)
==16746==    by 0x136A3D: log_ppp_info1 (log.c:210)
==16746==    by 0x5134543: rad_req_read (req.c:428)
==16746==    by 0x4849484: ctx_thread (triton.c:252)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==  Block was alloc'd by thread #5
==16746== 
***** pppoe connected
==16746== ---Thread-Announcement------------------------------------------
==16746== 
==16746== Thread #8 was created
==16746==    at 0x4C0D915: clone (in /tmp/debug/lib/libc-2.22.so)
==16746==    by 0x486129D: create_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4862AED: pthread_create@@GLIBC_2.1 (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4833DA0: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x48633B6: pthread_create@GLIBC_2.0 (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4833DA0: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x4847F7E: timer_run (timer.c:57)
==16746==    by 0x484AC5A: triton_run (triton.c:758)
==16746==    by 0x1385E0: main (main.c:407)
==16746== 
==16746== ----------------------------------------------------------------
==16746== 
==16746== Possible data race during read of size 4 at 0x4D73F4C by thread #8
==16746== Locks held: none
==16746==    at 0x484815F: timer_thread (timer.c:91)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746== 
==16746== This conflicts with a previous write of size 4 by thread #6
==16746== Locks held: none
==16746==    at 0x4838893: memset (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x4848374: triton_timer_add (timer.c:129)
==16746==    by 0x117B9B: start_echo (ppp_lcp.c:669)
==16746==    by 0x116234: lcp_layer_up (ppp_lcp.c:170)
==16746==    by 0x114A2C: ppp_fsm_recv_conf_req_ack (ppp_fsm.c:232)
==16746==    by 0x1180FA: lcp_recv (ppp_lcp.c:776)
==16746==    by 0x112F65: ppp_chan_read (ppp.c:424)
==16746==    by 0x4849484: ctx_thread (triton.c:252)
==16746==  Address 0x4d73f4c is 76 bytes inside a block of size 88 alloc'd
==16746==    at 0x482BBB9: malloc (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x10CF6E: _md_malloc (memdebug.c:47)
==16746==    by 0x10D096: md_malloc (memdebug.c:68)
==16746==    by 0x484CC89: md_mempool_alloc (mempool.c:201)
==16746==    by 0x484835F: triton_timer_add (timer.c:127)
==16746==    by 0x117B9B: start_echo (ppp_lcp.c:669)
==16746==    by 0x116234: lcp_layer_up (ppp_lcp.c:170)
==16746==    by 0x114A2C: ppp_fsm_recv_conf_req_ack (ppp_fsm.c:232)
==16746==    by 0x1180FA: lcp_recv (ppp_lcp.c:776)
==16746==    by 0x112F65: ppp_chan_read (ppp.c:424)
==16746==    by 0x4849484: ctx_thread (triton.c:252)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==  Block was alloc'd by thread #6
==16746== 
==16746== ----------------------------------------------------------------
==16746== 
==16746== Possible data race during read of size 4 at 0x4D73F40 by thread #8
==16746== Locks held: none
==16746==    at 0x4848171: timer_thread (timer.c:93)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746== 
==16746== This conflicts with a previous write of size 4 by thread #6
==16746== Locks held: none
==16746==    at 0x483886B: memset (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x4848374: triton_timer_add (timer.c:129)
==16746==    by 0x117B9B: start_echo (ppp_lcp.c:669)
==16746==    by 0x116234: lcp_layer_up (ppp_lcp.c:170)
==16746==    by 0x114A2C: ppp_fsm_recv_conf_req_ack (ppp_fsm.c:232)
==16746==    by 0x1180FA: lcp_recv (ppp_lcp.c:776)
==16746==    by 0x112F65: ppp_chan_read (ppp.c:424)
==16746==    by 0x4849484: ctx_thread (triton.c:252)
==16746==  Address 0x4d73f40 is 64 bytes inside a block of size 88 alloc'd
==16746==    at 0x482BBB9: malloc (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x10CF6E: _md_malloc (memdebug.c:47)
==16746==    by 0x10D096: md_malloc (memdebug.c:68)
==16746==    by 0x484CC89: md_mempool_alloc (mempool.c:201)
==16746==    by 0x484835F: triton_timer_add (timer.c:127)
==16746==    by 0x117B9B: start_echo (ppp_lcp.c:669)
==16746==    by 0x116234: lcp_layer_up (ppp_lcp.c:170)
==16746==    by 0x114A2C: ppp_fsm_recv_conf_req_ack (ppp_fsm.c:232)
==16746==    by 0x1180FA: lcp_recv (ppp_lcp.c:776)
==16746==    by 0x112F65: ppp_chan_read (ppp.c:424)
==16746==    by 0x4849484: ctx_thread (triton.c:252)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==  Block was alloc'd by thread #6
==16746== 
***** accel-cmd show sessions
***** accel-cmd reload
==16746== ---Thread-Announcement------------------------------------------
==16746== 
==16746== Thread #2 was created
==16746==    at 0x4C0D915: clone (in /tmp/debug/lib/libc-2.22.so)
==16746==    by 0x486129D: create_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4862AED: pthread_create@@GLIBC_2.1 (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4833DA0: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x114227: init (ppp.c:768)
==16746==    by 0x484A9EA: triton_load_modules (triton.c:704)
==16746==    by 0x1385B8: main (main.c:402)
==16746== 
==16746== ----------------------------------------------------------------
==16746== 
==16746==  Lock at 0x14686C was first observed
==16746==    at 0x4831F07: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x112BAD: uc_thread (ppp.c:337)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746==  Address 0x14686c is 0 bytes inside data symbol "uc_lock"
==16746== 
==16746== Possible data race during write of size 4 at 0x146B1C by thread #5
==16746== Locks held: none
==16746==    at 0x114139: load_config (ppp.c:749)
==16746==    by 0x484D452: triton_event_fire (event.c:103)
==16746==    by 0x1274EE: conf_reload_notify (std_cmd.c:319)
==16746==    by 0x4848B79: __config_reload (triton.c:73)
==16746==    by 0x4849095: triton_thread (triton.c:159)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746== 
==16746== This conflicts with a previous read of size 4 by thread #2
==16746== Locks held: 1, at address 0x14686C
==16746==    at 0x112BB7: uc_thread (ppp.c:338)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746==  Address 0x146b1c is 0 bytes inside data symbol "conf_unit_cache"
==16746== 
***** disconnecting pppoe
==16746== ---Thread-Announcement------------------------------------------
==16746== 
==16746== Thread #3 was created
==16746==    at 0x4C0D915: clone (in /tmp/debug/lib/libc-2.22.so)
==16746==    by 0x486129D: create_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4862AED: pthread_create@@GLIBC_2.1 (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4833DA0: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x48633B6: pthread_create@GLIBC_2.0 (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4833DA0: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x484976D: create_thread (triton.c:320)
==16746==    by 0x484ABC3: triton_run (triton.c:744)
==16746==    by 0x1385E0: main (main.c:407)
==16746== 
==16746== ----------------------------------------------------------------
==16746== 
==16746==  Lock at 0x4851260 was first observed
==16746==    at 0x4830619: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x484A87C: triton_init (triton.c:666)
==16746==    by 0x1384E2: main (main.c:360)
==16746==  Address 0x4851260 is 0 bytes inside data symbol "threads_lock"
==16746== 
==16746==  Lock at 0x4D4C4A4 was first observed
==16746==    at 0x4830619: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x4849A41: triton_context_register (triton.c:376)
==16746==    by 0x510941B: allocate_channel (pppoe.c:420)
==16746==    by 0x510C2F6: pppoe_recv_PADR (pppoe.c:1219)
==16746==    by 0x510C5B4: pppoe_serv_read (pppoe.c:1268)
==16746==    by 0x4849544: ctx_thread (triton.c:273)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746==  Address 0x4d4c4a4 is 52 bytes inside a block of size 156 alloc'd
==16746==    at 0x482BBB9: malloc (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x10CF6E: _md_malloc (memdebug.c:47)
==16746==    by 0x10D096: md_malloc (memdebug.c:68)
==16746==    by 0x484CC89: md_mempool_alloc (mempool.c:201)
==16746==    by 0x48499D8: triton_context_register (triton.c:364)
==16746==    by 0x510941B: allocate_channel (pppoe.c:420)
==16746==    by 0x510C2F6: pppoe_recv_PADR (pppoe.c:1219)
==16746==    by 0x510C5B4: pppoe_serv_read (pppoe.c:1268)
==16746==    by 0x4849544: ctx_thread (triton.c:273)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==  Block was alloc'd by thread #6
==16746== 
==16746== Possible data race during write of size 4 at 0x4D4C4A8 by thread #7
==16746== Locks held: 2, at addresses 0x4851260 0x4D4C4A4
==16746==    at 0x48498B8: triton_queue_ctx (triton.c:347)
==16746==    by 0x4847409: md_thread (md.c:91)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746== 
==16746== This conflicts with a previous write of size 4 by thread #3
==16746== Locks held: 1, at address 0x4851260
==16746==    at 0x4849271: triton_thread (triton.c:201)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746==  Address 0x4d4c4a8 is 56 bytes inside a block of size 156 alloc'd
==16746==    at 0x482BBB9: malloc (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x10CF6E: _md_malloc (memdebug.c:47)
==16746==    by 0x10D096: md_malloc (memdebug.c:68)
==16746==    by 0x484CC89: md_mempool_alloc (mempool.c:201)
==16746==    by 0x48499D8: triton_context_register (triton.c:364)
==16746==    by 0x510941B: allocate_channel (pppoe.c:420)
==16746==    by 0x510C2F6: pppoe_recv_PADR (pppoe.c:1219)
==16746==    by 0x510C5B4: pppoe_serv_read (pppoe.c:1268)
==16746==    by 0x4849544: ctx_thread (triton.c:273)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==  Block was alloc'd by thread #6
==16746== 
***** pppoe disconnected
***** stopping
^C==16746== ----------------------------------------------------------------
==16746== 
==16746== Possible data race during read of size 4 at 0x146970 by thread #1
==16746== Locks held: none
==16746==    at 0x136706: log_info1 (log.c:139)
==16746==    by 0x13880C: main (main.c:448)
==16746== 
==16746== This conflicts with a previous write of size 4 by thread #5
==16746== Locks held: none
==16746==    at 0x13741B: load_config (log.c:497)
==16746==    by 0x484D452: triton_event_fire (event.c:103)
==16746==    by 0x1274EE: conf_reload_notify (std_cmd.c:319)
==16746==    by 0x4848B79: __config_reload (triton.c:73)
==16746==    by 0x4849095: triton_thread (triton.c:159)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==  Address 0x146970 is 0 bytes inside data symbol "log_level"
==16746== 
==16746== ---Thread-Announcement------------------------------------------
==16746== 
==16746== Thread #4 was created
==16746==    at 0x4C0D915: clone (in /tmp/debug/lib/libc-2.22.so)
==16746==    by 0x486129D: create_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4862AED: pthread_create@@GLIBC_2.1 (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4833DA0: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x48633B6: pthread_create@GLIBC_2.0 (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4833DA0: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x484976D: create_thread (triton.c:320)
==16746==    by 0x484ABC3: triton_run (triton.c:744)
==16746==    by 0x1385E0: main (main.c:407)
==16746== 
==16746== ----------------------------------------------------------------
==16746== 
==16746==  Lock at 0x48512A0 was first observed
==16746==    at 0x4830619: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x484A890: triton_init (triton.c:667)
==16746==    by 0x1384E2: main (main.c:360)
==16746==  Address 0x48512a0 is 0 bytes inside data symbol "ctx_list_lock"
==16746== 
==16746== Possible data race during write of size 4 at 0x48512A4 by thread #4
==16746== Locks held: 1, at address 0x48512A0
==16746==    at 0x4849D28: triton_context_unregister (triton.c:442)
==16746==    by 0x510D680: pppoe_server_free (pppoe.c:1600)
==16746==    by 0x510C6B0: pppoe_serv_close (pppoe.c:1291)
==16746==    by 0x48495E1: ctx_thread (triton.c:287)
==16746==    by 0x484921E: triton_thread (triton.c:192)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746== 
==16746== This conflicts with a previous read of size 4 by thread #5
==16746== Locks held: none
==16746==    at 0x48490AD: triton_thread (triton.c:163)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x482F40F: ??? (in /usr/lib/valgrind/vgpreload_helgrind-x86-linux.so)
==16746==    by 0x486241A: start_thread (in /tmp/debug/lib/libpthread-2.22.so)
==16746==    by 0x4C0D92D: clone (in /tmp/debug/lib/libc-2.22.so)
==16746==  Address 0x48512a4 is 0 bytes inside data symbol "terminate"
==16746== 
==16746== 
==16746== Use --history-level=approx or =none to gain increased speed, at
==16746== the cost of reduced accuracy of conflicting-access information
==16746== For lists of detected and suppressed errors, rerun with: -s
==16746== ERROR SUMMARY: 36 errors from 12 contexts (suppressed: 2536 from 398)
Mar 22 2021, 13:08 · accel-ppp 1.x

Mar 18 2021

_Maks_ created T39: 1.12.0-107 unplanned restart when running IPoE + QinQ.
Mar 18 2021, 11:48 · accel-ppp 1.x

Mar 17 2021

georgebody updated georgebody.
Mar 17 2021, 16:24

Mar 16 2021

anphsw triaged T38: Crash on accel-cmd reload without config changes as High priority.
Mar 16 2021, 09:42 · accel-ppp 1.x
Dimka88 added a comment to T23: accel-pppd 1.12.0-72-ged7b287 crashes when 2.5k ipoe users come up simultaneously.

@triar is this issue fixed?

Mar 16 2021, 09:16 · accel-ppp 1.x

Jan 24 2021

Dimka88 changed the status of T15: DHCP NAK from Open to In progress.

Added DHCP options 56 (Message) and 54 (Server Identifier):
https://github.com/accel-ppp/accel-ppp/commit/49ef6cf969f662c44f4be2b82b101273c8c6de71
https://github.com/accel-ppp/accel-ppp/commit/588965eaf3fa90531482c5bcf1c145bce0e9a169

Jan 24 2021, 13:15 · accel-ppp 1.x

Jan 13 2021

ev created T37: Add ipset information to the logs.
Jan 13 2021, 23:04 · accel-ppp 1.x
ev created T36: Weight-based balancing across 3 or more servers.
Jan 13 2021, 22:58 · accel-ppp 1.x

Jan 8 2021

slima created T35: Accel Segmentation fault when mass pppoe user req disconnects and ip-down script is defined.
Jan 8 2021, 19:18 · accel-ppp 1.x

Nov 18 2020

antonmayko created T34: Extended description of the reload command.
Nov 18 2020, 16:50 · accel-ppp 1.x
antonmayko created T33: Current config of a running accel.
Nov 18 2020, 16:41 · accel-ppp 1.x

Oct 28 2020

svlobanov added a comment to T32: Accel-ppp does not inform RADIUS server about delegation prefix.

Your accel-pppd version is too old (142c943721615020bca80de4c69e6bbf574529aa = Mon Oct 22 12:00:02 2018 +0200)

Oct 28 2020, 22:42 · accel-ppp 1.x

Oct 27 2020

umka created T32: Accel-ppp does not inform RADIUS server about delegation prefix.
Oct 27 2020, 11:02 · accel-ppp 1.x

Oct 26 2020

hashbang added a comment to T3: ip-pool reload implementation.

+1

Oct 26 2020, 10:11
ProLan created T31: Incorrect parsing of interfaces.
Oct 26 2020, 08:54 · accel-ppp 1.x

Oct 21 2020

shumbor closed T29: Crash in telnet.c when the history file is corrupted (garbage: zeros) as Resolved.

The patch has been merged into the main code.

Oct 21 2020, 11:57 · accel-ppp 1.x
triar created T30: accel-pppd 1.12.0-92-g38b6104 crash after 48 days of stable operation.
Oct 21 2020, 07:01 · accel-ppp 1.x

Oct 15 2020

shumbor triaged T29: Crash in telnet.c when the history file is corrupted (garbage: zeros) as Normal priority.
Oct 15 2020, 11:14 · accel-ppp 1.x
umka triaged T28: Fix the documentation in the radius section as High priority.
Oct 15 2020, 09:32