Search results for: arbitrary triangular-z node
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 667

157 Unveiling the Indonesian Identity through Proverbial Expressions: The Relation of Meaning between Authority and Globalization

Authors: Prima Gusti Yanti, Fairul Zabadi

Abstract:

The purpose of this study is to examine the relation between the moral messages of proverbs and the matters of authority and globalization. The proverb is one of the many forms of cultural identity of the Indonesian/Malay people and is filled with moral values. The values contained within these proverbs are beneficial not only to society, but also to those who hold power amid this era of globalization. The method used is qualitative research employing content analysis, carried out by describing and uncovering the forms and meanings of proverbs used within the Indonesian Minangkabau community. The data for this study were collected from a Minangkabau native speaker in the subdistrict of Tanah Abang, Jakarta, through a series of interviews with this speaker, whose speech is still adorned with idiomatic expressions. The findings show that there are 30 proverbs or idiomatic expressions in the Minangkabau language that are often used by its indigenous people. These thirty data items contain moral values that are closely interwoven with the matters of power and globalization. The analysis shows that fourteen moral values contained within the proverbs reflect a firm connection between rule and power in globalization: responsibility, bravery, togetherness and consensus, tolerance, politeness, thoroughness and meticulousness, honesty and keeping promises, ingenuity and learning, care, self-correction, fairness, alertness, restraint of arbitrariness, and self-awareness. Structurally, proverbs possess an unchangeably formal construction; symbolically, proverbs possess meanings that are clearly determined through ethnographic communicative factors along with situational and cultural contexts. The values contained within proverbs may be used as a guide in social management, be it between fellow men, between men and nature, or between men and their Creator. Therefore, the meanings and moral values of proverbs could also serve as counsel for those who rule and are in charge of power, in order to stem the tides of globalization that have already spread into the sectoral, territorial, and educational continuums.

Keywords: continuum, globalization, identity, proverb, rule-power

Procedia PDF Downloads 376
156 Localization of Buried People Using Received Signal Strength Indication Measurement of Wireless Sensor

Authors: Feng Tao, Han Ye, Shaoyi Liao

Abstract:

City constructions collapse after earthquakes, and people are buried under the ruins. Search and rescue should be conducted as soon as possible to save them. Therefore, given the complicated environment, irregular aftershocks, and the fact that rescue allows no delay, a target localization method based on RSSI (Received Signal Strength Indication) is proposed in this article. Target localization technology based on RSSI, with its features of low cost and low complexity, has been widely applied to node localization in WSNs (Wireless Sensor Networks). Based on the theory of RSSI transmission and the impact of the environment on RSSI, this article conducts experiments in five scenes, and multiple filtering algorithms are applied to the original RSSI values in order to establish the signal propagation model with the minimum test error in each scene. The target location can then be calculated, through an improved centroid algorithm, from the distances estimated with the signal propagation model. The results show that localization technology based on RSSI is suitable for large-scale node localization. Among the filtering algorithms, the mixed filtering algorithm (the average of average, median, and Gaussian filtering) performs better than any single filtering algorithm, and by using the signal propagation model, the minimum error of the distance between known nodes and the target node across the five scenes is about 3.06 m.
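As a concrete illustration, the distance-estimation and centroid steps described above can be sketched in Python. The log-distance path-loss model and inverse-distance weighting used here are common choices, not necessarily the exact propagation model or "improved centroid" variant the authors calibrated; all parameter values are hypothetical.

```python
import math

def distance_from_rssi(rssi, rssi_d0=-40.0, n=2.5, d0=1.0):
    # Invert the log-distance path-loss model:
    #   RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0)
    # rssi_d0 is the RSSI at reference distance d0; n is the path-loss exponent.
    return d0 * 10 ** ((rssi_d0 - rssi) / (10 * n))

def weighted_centroid(anchors, rssi_values, **model):
    # Weight each known node by the inverse of its estimated distance,
    # so nearer anchors pull the position estimate more strongly.
    dists = [distance_from_rssi(r, **model) for r in rssi_values]
    weights = [1.0 / d for d in dists]
    total = sum(weights)
    x = sum(w * ax for w, (ax, _) in zip(weights, anchors)) / total
    y = sum(w * ay for w, (_, ay) in zip(weights, anchors)) / total
    return x, y
```

With equal RSSI readings from four corner anchors, the estimate falls at the geometric center, as expected.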

Keywords: signal propagation model, centroid algorithm, localization, mixed filtering, RSSI

Procedia PDF Downloads 281
155 A Highly Efficient Broadcast Algorithm for Computer Networks

Authors: Ganesh Nandakumaran, Mehmet Karaata

Abstract:

A wave is a distributed execution, often made up of a broadcast phase followed by a feedback phase, requiring the participation of all the system processes before a particular event called the decision is taken. Wave algorithms with one initiator, such as the 1-wave algorithm, have been shown to be very efficient for broadcasting messages in tree networks. Extensions of this algorithm broadcasting a sequence of waves using a single initiator have been implemented in algorithms such as the m-wave algorithm. However, as the network size increases, having a single initiator adversely affects the message delivery times to nodes further away from the initiator. As a remedy, broadcast waves can be initiated by multiple initiator nodes distributed across the network to reduce the completion time of broadcasts. These waves initiated by one or more initiator processes form a collection of waves covering the entire network. Solutions to global snapshots, distributed broadcast, and various synchronization problems can be obtained efficiently using waves with multiple concurrent initiators. In this paper, we propose the first stabilizing multi-wave sequence algorithm implementing waves started by multiple initiator processes such that every process in the network receives at least one sequence of broadcasts. Being stabilizing, the proposed algorithm can withstand transient faults and does not require initialization. We view a fault as transient if it perturbs the configuration of the system but not its program.
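A minimal sketch of a single broadcast-and-feedback wave on a tree, the building block that the m-wave and multi-initiator algorithms generalize. The recursive traversal stands in for message passing between processes and is purely illustrative; it is not the paper's stabilizing algorithm.

```python
def wave(tree, root):
    # tree maps each process to its list of children; the wave broadcasts
    # down the tree, then feedback (acknowledgement) flows back to the root.
    acked = []

    def broadcast(node):
        for child in tree.get(node, []):
            broadcast(child)      # broadcast phase: push the wave downward
        acked.append(node)        # feedback phase: ack once all children acked

    broadcast(root)
    return acked                  # the root's ack is the "decision" event
```

The root acknowledges last, which is exactly the property that lets its acknowledgement serve as the decision event.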

Keywords: distributed computing, multi-node broadcast, propagation of information with feedback and cleaning (PFC), stabilization, wave algorithms

Procedia PDF Downloads 485
154 Development and Validation of a Carbon Dioxide TDLAS Sensor for Studies on Fermented Dairy Products

Authors: Lorenzo Cocola, Massimo Fedel, Dragiša Savić, Bojana Danilović, Luca Poletto

Abstract:

An instrument for the detection and evaluation of gaseous carbon dioxide in the headspace of closed containers has been developed in the context of the Packsensor Italian-Serbian joint project. The device is based on Tunable Diode Laser Absorption Spectroscopy (TDLAS) with a Wavelength Modulation Spectroscopy (WMS) technique in order to accomplish a non-invasive measurement inside closed containers of fermented dairy products (yogurts and fermented cheese in cups and bottles). The purpose of this instrument is the continuous monitoring of carbon dioxide concentration during incubation and storage of products, over a time span covering the whole shelf life of the product, in the presence of different microorganisms. The instrument's optical front end has been designed to be integrated in a thermally stabilized incubator. An embedded computer provides processing of spectral artifacts and storage of an arbitrary set of calibration data, allowing a properly calibrated measurement on many samples (cups and bottles) of the different shapes and sizes commonly found in retail distribution. A calibration protocol has been developed in order to be able to calibrate the instrument in the field, even on containers which are notoriously difficult to seal properly. This calibration protocol is described and evaluated against reference measurements obtained through an industry-standard (sampling) carbon dioxide metering technique. Some sets of validation test measurements on different containers are reported. Two test recordings of carbon dioxide concentration evolution are shown as examples of instrument operation. The first demonstrates the ability to monitor rapid yeast growth in a contaminated sample through the increase of headspace carbon dioxide. The second shows the dissolution transient with a non-saturated liquid medium in the presence of a carbon-dioxide-rich headspace atmosphere.

Keywords: TDLAS, carbon dioxide, cups, headspace, measurement

Procedia PDF Downloads 303
153 Efficient Mercury Sorbent: Activated Carbon and Metal Organic Framework Hybrid

Authors: Yongseok Hong, Kurt Louis Solis

Abstract:

In the present study, a hybrid sorbent combining the metal organic framework (MOF) UiO-66 and powdered activated carbon (pAC) is synthesized to remove cationic and anionic metals simultaneously. UiO-66 is an octahedron-shaped MOF with a Zr₆O₄(OH)₄ metal node and a 1,4-benzenedicarboxylic acid (BDC) organic linker. Zr-based MOFs are attractive for trace element remediation in wastewaters because Zr is relatively non-toxic compared to other classes of MOF and therefore will not cause secondary pollution. Most remediation studies with UiO-66 target anions such as fluoride, but trace element oxyanions such as arsenic, selenium, and antimony have also been investigated. There have also been studies involving mercury removal by UiO-66 derivatives; however, these require post-synthetic modifications or have lower effective surface areas. Activated carbon is known for being a readily available, well-studied, effective adsorbent for metal contaminants. A solvothermal method was employed to prepare the hybrid sorbent from UiO-66 and activated carbon, which could be used to remove mercury and selenium simultaneously. The hybrid sorbent was characterized using FSEM-EDS, FT-IR, XRD, and TGA. The results showed that UiO-66 and activated carbon were successfully composited. From BET studies, the hybrid sorbent has an SBET of 1051 m² g⁻¹. Adsorption studies were performed, in which the hybrid showed maximum adsorption of 204.63 mg g⁻¹ and 168 mg g⁻¹ for Hg(II) and selenite, respectively, following the Langmuir model for both species. Kinetics studies revealed that the Hg uptake of the hybrid is pseudo-second order with a rate constant of 5.6 × 10⁻⁵ g mg⁻¹ min⁻¹, and that the selenite uptake follows the simplified Elovich model with α = 2.99 mg g⁻¹ min⁻¹ and β = 0.032 g mg⁻¹.
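The isotherm and kinetic models named above have standard closed forms; a quick sketch using the parameter values reported in the abstract where available (the Langmuir constant K is not reported and is hypothetical here):

```python
import math

def langmuir(C, q_max, K):
    # Langmuir isotherm: q = q_max * K * C / (1 + K * C)
    return q_max * K * C / (1 + K * C)

def pseudo_second_order(t, q_e, k):
    # Integrated pseudo-second-order kinetics: q(t) = k*q_e^2*t / (1 + k*q_e*t)
    return k * q_e ** 2 * t / (1 + k * q_e * t)

def elovich(t, alpha, beta):
    # Simplified Elovich model: q(t) = (1/beta) * ln(1 + alpha*beta*t)
    return math.log(1 + alpha * beta * t) / beta

# Hg(II) uptake approaches the reported maximum of 204.63 mg/g at high C
q_hg = langmuir(1e4, q_max=204.63, K=0.01)   # K is a hypothetical value
```

At high equilibrium concentration the Langmuir expression saturates at q_max, and both kinetic models start from zero uptake at t = 0.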

Keywords: adsorption, flue gas wastewater, mercury, selenite, metal organic framework

Procedia PDF Downloads 161
152 Diagnostic Accuracy Of Core Biopsy In Patients Presenting With Axillary Lymphadenopathy And Suspected Non-Breast Malignancy

Authors: Monisha Edirisooriya, Wilma Jack, Dominique Twelves, Jennifer Royds, Fiona Scott, Nicola Mason, Arran Turnbull, J. Michael Dixon

Abstract:

Introduction: Excision biopsy has been the investigation of choice for patients presenting with pathological axillary lymphadenopathy without a breast abnormality. Core biopsy of nodes can provide sufficient tissue for diagnosis and has advantages in terms of morbidity and speed of diagnosis. This study evaluates the diagnostic accuracy of core biopsy in patients presenting with axillary lymphadenopathy. Methods: Between 2009 and 2019, 165 patients referred to the Edinburgh Breast Unit had a total of 179 axillary lymph node core biopsies. Results: 152 (92%) of the 165 initial core biopsies were deemed to contain adequate nodal tissue. Core biopsy correctly established malignancy in 75 of the 78 patients with haematological malignancy (96%) and in all 28 patients with metastatic carcinoma (100%), and correctly diagnosed benign changes in 49 of 57 (86%) patients with benign conditions. There were no false positives and no false negatives. In 67 (85.9%) of the 78 patients with haematological malignancy, there was sufficient material in the first core biopsy to allow the pathologist to make an actionable diagnosis without asking for further tissue sampling prior to treatment. There were no complications of core biopsy. On follow-up, none of the patients with benign cores has been shown to have malignancy in the axilla, and none with lymphoma had their initial disease incorrectly classified. Conclusions: This study shows that core biopsy is now the investigation of choice for patients presenting with axillary lymphadenopathy, even in those suspected of having lymphoma.
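The percentages quoted above follow directly from the reported counts; a simple arithmetic check (all figures taken from the abstract):

```python
def pct(successes, total):
    # Percentage of successes, rounded to the nearest whole number
    return round(100.0 * successes / total)

adequacy   = pct(152, 165)  # cores with adequate nodal tissue
haem       = pct(75, 78)    # haematological malignancy correctly established
metastatic = pct(28, 28)    # metastatic carcinoma correctly established
benign     = pct(49, 57)    # benign changes correctly diagnosed
actionable = pct(67, 78)    # first core sufficient for an actionable diagnosis
```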

Keywords: core biopsy, excision biopsy, axillary lymphadenopathy, non-breast malignancy

Procedia PDF Downloads 225
151 Measurement Technologies for Advanced Characterization of Magnetic Materials Used in Electric Drives and Automotive Applications

Authors: Lukasz Mierczak, Patrick Denke, Piotr Klimczyk, Stefan Siebert

Abstract:

Due to the high complexity of the magnetization in electrical machines and the influence of manufacturing processes on the magnetic properties of their components, the assessment and prediction of hysteresis and eddy current losses has remained a challenge. In the design process of electric motors and generators, the power losses of stators and rotors are calculated based on the material supplier's data from standard magnetic measurements. This type of data includes neither the additional loss from non-sinusoidal, multi-harmonic motor excitation nor the detrimental effects of residual stress remaining in the motor laminations after manufacturing processes such as punching, housing shrink fitting, and winding. Moreover, in production, considerable attention is given to the measurement of the mechanical dimensions of stator and rotor cores, whereas verification of their magnetic properties is typically neglected, which can lead to inconsistent efficiency of assembled motors. Therefore, to enable a comprehensive characterization of motor materials and components, Brockhaus Measurements developed a range of in-line and offline measurement technologies for testing their magnetic properties under actual motor operating conditions. Multiple sets of experimental data were obtained to evaluate the influence of various factors, such as elevated temperature, applied and residual stress, and arbitrary magnetization, on the magnetic properties of different grades of non-oriented steel. The measured power loss for the tested samples and stator cores varied significantly, by more than 100%, compared to standard measurement conditions. The quantitative effect of each of the applied measurement conditions was analyzed. This research and the applied Brockhaus measurement methodologies emphasize the need for advanced characterization of magnetic materials used in electric drives and automotive applications.

Keywords: magnetic materials, measurement technologies, permanent magnets, stator and rotor cores

Procedia PDF Downloads 129
150 Investigation of Influence of Maize Stover Components and Urea Treatment on Dry Matter Digestibility and Fermentation Kinetics Using in vitro Gas Techniques

Authors: Anon Paserakung, Chaloemphon Muangyen, Suban Foiklang, Yanin Opatpatanakit

Abstract:

Improving the nutritive value and digestibility of maize stover is an alternative way to increase its utilization in ruminants and reduce air pollution from the open burning of maize stover in northern Thailand. In the present study, a 2×3 factorial arrangement in a completely randomized design was conducted to investigate the effects of maize stover components (whole and upper stover; cut above the 5th node) and urea treatment at levels of 0, 3, and 6% DM on the dry matter digestibility and fermentation kinetics of maize stover using in vitro gas production. After 21 days of urea treatment, the results illustrated that there was no interaction between maize stover components and urea treatment on 48-h in vitro dry matter digestibility (IVDMD). IVDMD was unaffected by maize stover components (P > 0.05); the average IVDMD was 55%. However, using whole maize stover gave higher cumulative gas and gas kinetic parameters than upper stover (P < 0.05). Treating maize stover by ensiling with urea resulted in a significant linear increase in IVDMD (P < 0.05). IVDMD increased from 42.6% to 53.9% when the urea concentration was increased from 0 to 3%, and the maximum IVDMD (65.1%) was observed when maize stover was ensiled with 6% urea. Maize stover treated with urea at levels of 0, 3, and 6% linearly increased cumulative gas production at 96 h (31.1 vs. 50.5 and 59.1 ml, respectively) and all gas kinetic parameters except the gas production from the immediately soluble fraction (P < 0.05). The results indicate that maize stover treated with 6% urea enhances in vitro dry matter digestibility and fermentation kinetics. This study provides a practical approach to increasing the utilization of maize stover in feeding ruminant animals.
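Cumulative gas production curves of this kind are commonly fitted with the Ørskov-McDonald exponential model, whose parameters match the quantities the abstract refers to (the abstract does not state which kinetic model was fitted, and the parameter values below are hypothetical):

```python
import math

def gas_production(t, a, b, c):
    # Orskov-McDonald model: G(t) = a + b * (1 - exp(-c * t)), where
    #   a = gas from the immediately soluble fraction (ml)
    #   b = gas from the insoluble but fermentable fraction (ml)
    #   c = fractional rate of gas production (per hour)
    return a + b * (1 - math.exp(-c * t))
```

At t = 0 the model returns a, and as incubation time grows it saturates at a + b, the asymptotic cumulative gas production.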

Keywords: maize stover, urea treatment, ruminant feed, gas production

Procedia PDF Downloads 204
149 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark

Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos

Abstract:

This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: the algorithm's performance depends on multiple factors, and knowing the effects of each factor beforehand becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: to explain the functional relationship between the factors and performance, and to develop linear predictor models for time and cost. Methods: the solid statistical principles of Design of Experiments (DoE) were applied, particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measuring each factor's impact. Results: our findings include prediction models and show some non-intuitive results: the small influence of cores, the neutrality of memory and disks with respect to total execution time, and the non-significant impact of input data scale on costs, although it notably impacts execution time.
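The screening analysis described above fits a linear model to coded (-1/+1) factor levels; a minimal sketch with five factors and synthetic response values (the design matrix and effect sizes are invented for illustration, not the paper's data):

```python
import numpy as np

# Coded levels for an 8-run two-level fractional factorial:
# columns = data size, nodes, cores, memory, disks
design = np.array([
    [-1, -1, -1, -1, -1],
    [ 1, -1, -1,  1,  1],
    [-1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1],
    [-1, -1,  1,  1,  1],
    [ 1, -1,  1, -1, -1],
    [-1,  1,  1, -1,  1],
    [ 1,  1,  1,  1, -1],
])

# Synthetic times: base 100 s, data size adds +/-30 s, nodes +/-20 s,
# cores/memory/disks contribute nothing (mimicking the reported neutrality)
times = 100 + 30 * design[:, 0] + 20 * design[:, 1]

X = np.hstack([np.ones((len(design), 1)), design])   # intercept + main effects
beta, *_ = np.linalg.lstsq(X, times, rcond=None)
# beta[0] recovers the mean; beta[1:] rank each factor's estimated impact
```

Because the factor columns are mutually orthogonal, least squares recovers the planted effects exactly, which is what makes this design efficient for screening.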

Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark

Procedia PDF Downloads 96
148 Robotic Lingulectomy for Primary Lung Cancer: A Video Presentation

Authors: Abraham J. Rizkalla, Joanne F. Irons, Christopher Q. Cao

Abstract:

Purpose: Lobectomy was considered the standard of care for early-stage non-small cell lung cancer (NSCLC) after the Lung Cancer Study Group trial demonstrated increased locoregional recurrence for sublobar resections. However, there has been heightened interest in segmentectomies for selected patients with peripheral lesions ≤2 cm, as investigated by the JCOG0802 and CALGB140503 trials. Minimally invasive robotic surgery facilitates segmentectomies, with improved maneuverability and visualization of intersegmental planes using indocyanine green. We hereby present a patient who underwent robotic lingulectomy for an undiagnosed ground-glass opacity. Methodology: This video demonstrates a robotic portal lingulectomy using three 8 mm ports and a 12 mm port. Stereoscopic direct vision facilitated the identification of the lingular artery and vein, and intra-operative bronchoscopy was performed to confirm the lingular bronchus. The intersegmental plane was identified by indocyanine green and a near-infrared camera. Thorough lymph node sampling was performed in accordance with international standards. Results: The 18 mm lesion was successfully excised with clear margins to achieve R0 resection, with no evidence of malignancy in the 8 lymph nodes sampled. Histopathological examination revealed lepidic-predominant adenocarcinoma, pathological stage IA. Conclusion: This video presentation exemplifies the standard approach for robotic portal lingulectomy in appropriately selected patients.

Keywords: lung cancer, robotic segmentectomy, indocyanine green, lingulectomy

Procedia PDF Downloads 46
147 Using Cyclic Structure to Improve Inference on Network Community Structure

Authors: Behnaz Moradijamei, Michael Higgins

Abstract:

Identifying community structure is a critical task in analyzing social media data sets, which are often modeled by networks. Statistical models such as the stochastic block model have proven to explain the structure of communities in real-world network data. In this work, we develop a goodness-of-fit test to examine the existence of community structure by using a distinguishing property of networks: cyclic structures are more prevalent within communities than across them. To better understand how communities are shaped by the cyclic structure of the network rather than just the number of edges, we introduce a novel method for deciding on the existence of communities. We utilize these structures by incorporating the renewal non-backtracking random walk (RNBRW) into the existing goodness-of-fit test. RNBRW is an important variant of the random walk in which the walk is prohibited from returning to a node in exactly two steps, and terminates and restarts once it completes a cycle. We investigate the use of RNBRW to improve the performance of existing goodness-of-fit tests for community detection algorithms based on the spectral properties of the adjacency matrix. Our proposed test for community structure is based on the probability distribution of the eigenvalues of the normalized retracing probability matrix derived from RNBRW. We attempt to make the best use of asymptotic results on such a distribution when there is no community structure, i.e., the asymptotic distribution under the null hypothesis. Moreover, we provide a theoretical foundation for our statistic by obtaining the true mean and a tight lower bound for the variance of RNBRW edge weights.
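The RNBRW edge weights that feed the retracing probability matrix can be estimated by simulation; a simplified sketch (the exact renewal and weighting conventions in the paper may differ):

```python
import random

def rnbrw_edge_weights(adj, n_walks=2000, seed=42):
    # adj: dict mapping each node to a list of neighbours (undirected graph).
    # Each walk never immediately backtracks; when it revisits a node it has
    # completed a cycle: the closing edge's weight is incremented and the
    # walk renews (restarts from a fresh random node).
    rng = random.Random(seed)
    weights = {}
    nodes = list(adj)
    for _ in range(n_walks):
        u = rng.choice(nodes)
        prev, visited = None, {u}
        while True:
            choices = [v for v in adj[u] if v != prev]   # no backtracking
            if not choices:
                break                                    # dead end: abandon walk
            v = rng.choice(choices)
            if v in visited:                             # cycle completed
                edge = (min(u, v), max(u, v))
                weights[edge] = weights.get(edge, 0) + 1
                break
            visited.add(v)
            prev, u = u, v
    return weights
```

Edges inside densely connected communities close cycles more often and thus accumulate larger weights, which is the signal the goodness-of-fit test exploits.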

Keywords: hypothesis testing, RNBRW, network inference, community structure

Procedia PDF Downloads 134
146 Research on “Three Ports in One” Comprehensive Transportation System of Sea, Land and Airport in Nantong City under the Background of a New Round of Territorial Space Planning

Authors: Ying Sun, Yuxuan Lei

Abstract:

Based on an analysis of the current situation of Nantong's comprehensive transportation system, the interactive relationship between the transportation system and the economy and society is clarified, and a development strategy for the planning and implementation of the "three ports in one" comprehensive transportation system of sea, land, and airport is then proposed for this round of territorial spatial planning. The research findings are as follows: (1) The comprehensive transportation network system of Nantong City is beginning to take shape, but the lack of a unified and complete system plan makes it difficult to establish a "multi-port integration" pattern around transportation hubs. (2) At both the Yangtze River Delta level and the Nantong City level, a connected transport node integrating sea, land, and airport should be built into the transportation construction planning to effectively follow the guidance of the overall territorial spatial planning of Nantong City. (3) Nantong's comprehensive transportation system and its economy and society have passed through three interactive development relations at different stages: mutual promotion, geographical separation, and high-level driving. Therefore, the current planning of Nantong's comprehensive transportation system needs to be optimized. The four levels of Nantong City, the Shanghai metropolitan area, the Yangtze River Delta, and each district, county, and city should be comprehensively considered, and the four development strategies of accelerating construction, dislocation development, active docking, and innovative implementation should be adopted.

Keywords: master plan for territorial space, integrated transportation system, Nantong, sea, land and air, "three ports in one"

Procedia PDF Downloads 124
145 Laparoscopic Proximal Gastrectomy in Gastroesophageal Junction Tumours

Authors: Ihab Saad Ahmed

Abstract:

Background: For Siewert type I and II gastroesophageal junction (GEJ) tumours, laparoscopic proximal gastrectomy can be performed. It is associated with several perioperative benefits compared with open proximal gastrectomy. Laparoscopic proximal gastrectomy (LPG) has become an increasingly popular approach for selected tumours. Methods: We describe our technique for LPG, including the preoperative work-up, illustrated images of the main steps of the surgery, and our postoperative course. Results: Thirteen patients (nine males, four females) with type I or II GEJ adenocarcinoma had laparoscopic radical proximal gastrectomy and D2 lymphadenectomy. All of our patients received neoadjuvant chemotherapy. Eleven patients had an intrathoracic anastomosis through a mini thoracotomy (two hand-sewn end-to-end anastomoses; the other nine patients had an end-to-side anastomosis using a circular stapler), and two of the patients with an intrathoracic anastomosis had a flap-and-wrap technique. Two patients had thoracoscopic esophageal and mediastinal lymph node dissection with a cervical anastomosis. The mean blood loss was 80 ml, and no cases were converted to open surgery. The mean operative time was 250 minutes, and the average lymph node retrieval was 19-25 nodes. No severe complications, such as leakage, stenosis, pancreatic fistula, or intra-abdominal abscess, were reported. Only one patient presented with empyema 1.5 months after discharge, which was managed conservatively. Conclusion: For carefully selected patients, LPG for GEJ tumours of type I and II is a safe and reasonable alternative to the open technique, associated with similar oncologic outcomes and low morbidity. It showed less blood loss and fewer respiratory infections, with similar 1- and 3-year survival rates.

Keywords: LPG (laparoscopic proximal gastrectomy), GEJ (gastroesophageal junction tumour), D2 lymphadenectomy, neoadjuvant chemotherapy

Procedia PDF Downloads 103
144 150 kVA Multifunction Laboratory Test Unit Based on Power-Frequency Converter

Authors: Bartosz Kedra, Robert Malkowski

Abstract:

This paper provides a description and presentation of a laboratory test unit built on a 150 kVA power-frequency converter and the Simulink Real-Time platform. The assumptions, based on criteria determining which load and generator types may be simulated using the discussed device, are presented, as well as the control algorithm structure. As the laboratory setup contains a transformer with a thyristor-controlled tap changer, a wider scope of setup capabilities is presented. Information about the communication interface, the data maintenance and storage solution, and the Simulink Real-Time features used is provided. A list and description of all measurements are given, and the potential for modifications of the laboratory setup is evaluated. For the purposes of Rapid Control Prototyping, a dedicated environment, Simulink Real-Time, was used. The load model Functional Unit Controller is therefore based on a PC with I/O cards and the Simulink Real-Time software. Simulink Real-Time was used to create real-time applications directly from Simulink models. In the next step, the applications were loaded on a target computer connected to physical devices, which provided the opportunity to perform Hardware-in-the-Loop (HIL) tests as well as the mentioned Rapid Control Prototyping process. With Simulink Real-Time, Simulink models were extended with I/O card driver blocks that made possible the automatic generation of real-time applications and interactive or automated runs on a dedicated target computer equipped with a real-time kernel, a multicore CPU, and I/O cards. Results of the laboratory tests performed are presented. Different load configurations are described, and experimental results are presented, including simulation of under-frequency load shedding, frequency- and voltage-dependent characteristics of groups of load units, time characteristics of groups of different load units in a chosen area, and arbitrary active and reactive power regulation based on a defined schedule.

Keywords: MATLAB, power converter, Simulink Real-Time, thyristor-controlled tap changer

Procedia PDF Downloads 305
143 A Prediction Model for Dynamic Responses of Building from Earthquake Based on Evolutionary Learning

Authors: Kyu Jin Kim, Byung Kwan Oh, Hyo Seon Park

Abstract:

Seismic-response-based structural health monitoring systems have been deployed to prevent seismic damage. Structural seismic damage to a building is caused by instantaneous stress concentration, which is related to the dynamic characteristics of the earthquake. Meanwhile, seismic response analysis to estimate the dynamic responses of a building demands significantly high computational cost. To prevent the failure of structural members due to the characteristics of the earthquake, and to avoid the significantly high computational cost of seismic response analysis, this paper presents an artificial neural network (ANN)-based prediction model for the dynamic responses of a building over a specific time length. From the measured dynamic responses, the input and output nodes of the ANN are formed according to the specific time length and adopted for training. In the model, an evolutionary radial basis function neural network (ERBFNN), in which a radial basis function network (RBFN) is integrated with an evolutionary optimization algorithm to find the RBF variables, is implemented. The effectiveness of the proposed model is verified through an analytical study applying responses from dynamic analysis of a multi-degree-of-freedom system as training data for the ERBFNN.
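A plain RBFN with fixed Gaussian centers, fitting only the output weights by least squares, illustrates the base model; in the ERBFNN described above, the centers and widths would instead be searched by the evolutionary algorithm. Everything below is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def rbf_design(X, centers, width):
    # Gaussian radial basis activations: phi_ij = exp(-||x_i - c_j||^2 / (2 w^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_output_weights(X, y, centers, width):
    # Solve the linear least-squares problem for the output layer only
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict(X, centers, width, w):
    return rbf_design(X, centers, width) @ w
```

For a smooth 1-D response such as a sinusoid, a handful of evenly spaced centers already gives a close fit, which is why the harder part of ERBFNN is tuning centers and widths rather than the output weights.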

Keywords: structural health monitoring, dynamic response, artificial neural network, radial basis function network, genetic algorithm

Procedia PDF Downloads 286
142 Application of Finite Volume Method for Numerical Simulation of Contaminant Transfer in a Two-Dimensional Reservoir

Authors: Atousa Ataieyan, Salvador A. Gomez-Lopera, Gennaro Sepede

Abstract:

Today, due to the growing urban population and, consequently, the increasing water demand in cities, the amount of contaminants entering water resources is increasing. This can impose harmful effects on the quality of the downstream water. Therefore, predicting the concentration of discharged pollutants at different times and distances in the area of interest is of high importance in order to carry out preventive and control measures, as well as to avoid consuming contaminated water. In this paper, the concentration distribution of an injected conservative pollutant in a square reservoir containing four symmetric blocks and three sources is simulated using the Finite Volume Method (FVM). For this purpose, after estimating the flow velocity, the classical Advection-Diffusion Equation (ADE) was discretized over the study domain by the Backward Time-Backward Space (BTBS) scheme. Then, the discretized equations for each node were derived according to the initial condition, the boundary conditions, and the point contaminant sources. Finally, taking into account the appropriate time step and space step, a computational code was set up in MATLAB, and the contaminant concentration was obtained at different times and distances. The simulation results show that using the BTBS differencing scheme and FVM as a numerical method for solving the partial differential equation of transport is an appropriate approach in the case of two-dimensional contaminant transfer in an advective-diffusive flow.
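For the 1-D analogue, the BTBS (implicit-in-time, upwind-in-space) discretization of the ADE leads to a linear system at each time step; a compact sketch with Dirichlet boundaries (the paper's 2-D, multi-source setup adds more terms, but the stencil follows the same idea):

```python
import numpy as np

def btbs_step(c, u, D, dx, dt):
    # One backward-time step of dc/dt + u*dc/dx = D*d2c/dx2 with
    # backward (upwind) differencing for advection and c = 0 at both ends.
    n = len(c)
    adv, dif = u * dt / dx, D * dt / dx ** 2
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1] = -(adv + dif)
        A[i, i] = 1.0 + adv + 2.0 * dif
        A[i, i + 1] = -dif
    A[0, 0] = A[-1, -1] = 1.0          # Dirichlet boundary rows
    b = c.copy()
    b[0] = b[-1] = 0.0
    return np.linalg.solve(A, b)
```

Because the system matrix is diagonally dominant with non-positive off-diagonal entries, the implicit scheme stays stable and keeps concentrations non-negative even for large time steps, which is one reason backward-time schemes suit transport problems.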

Keywords: BTBS differentiating scheme, contaminant concentration, finite volume, mass transfer, water pollution

Procedia PDF Downloads 121
141 Investigating Visual Statistical Learning during Aging Using the Eye-Tracking Method

Authors: Zahra Kazemi Saleh, Bénédicte Poulin-Charronnat, Annie Vinter

Abstract:

This study examines the effects of aging on visual statistical learning, using eye-tracking techniques to investigate this cognitive phenomenon. Visual statistical learning is a fundamental brain function that enables the automatic and implicit recognition, processing, and internalization of environmental patterns over time. Some previous research has suggested the robustness of this learning mechanism throughout the aging process, underscoring its importance in the context of education and rehabilitation for the elderly. The study included three distinct groups of participants, including 21 young adults (Mage: 19.73), 20 young-old adults (Mage: 67.22), and 17 old-old adults (Mage: 79.34). Participants were exposed to a series of 12 arbitrary black shapes organized into 6 pairs, each with different spatial configurations and orientations (horizontal, vertical, and oblique). These pairs were not explicitly revealed to the participants, who were instructed to passively observe 144 grids presented sequentially on the screen for a total duration of 7 min. In the subsequent test phase, participants performed a two-alternative forced-choice task in which they had to identify the most familiar pair from 48 trials, each consisting of a base pair and a non-base pair. Behavioral analysis using t-tests revealed notable findings. The mean score for the first group was significantly above chance, indicating the presence of visual statistical learning. Similarly, the second group also performed significantly above chance, confirming the persistence of visual statistical learning in young-old adults. Conversely, the third group, consisting of old-old adults, showed a mean score that was not significantly above chance. This lack of statistical learning in the old-old adult group suggests a decline in this cognitive ability with age. Preliminary eye-tracking results showed a decrease in the number and duration of fixations during the exposure phase for all groups. 
The main difference was that older participants fixated more often on empty cells of the grid than younger participants did, likely reflecting a declining ability to ignore irrelevant information, which in turn lowered statistical learning performance.
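The above-chance comparison used in the behavioral analysis can be sketched as a one-sample t-test against the 0.5 chance level of the two-alternative forced-choice task; the scores below are illustrative values, not the study's data.

```python
import math
import statistics

def one_sample_t(scores, chance=0.5):
    """t statistic for H0: mean 2AFC accuracy equals the chance level."""
    n = len(scores)
    mean = statistics.fmean(scores)
    sd = statistics.stdev(scores)               # sample standard deviation
    return mean, (mean - chance) / (sd / math.sqrt(n))

# Illustrative familiarity-test proportions for one hypothetical group
scores = [0.65, 0.58, 0.71, 0.60, 0.55, 0.68, 0.62, 0.59]
mean, t = one_sample_t(scores)
```

A positive, sufficiently large t (compared against the critical value for n-1 degrees of freedom) indicates learning above chance.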

Keywords: aging, eye tracking, implicit learning, visual statistical learning

Procedia PDF Downloads 60
140 Internet of Things Networks: Denial of Service Detection in Constrained Application Protocol Using Machine Learning Algorithm

Authors: Adamu Abdullahi, On Francisca, Saidu Isah Rambo, G. N. Obunadike, D. T. Chinyio

Abstract:

The paper discusses the potential threat of Denial of Service (DoS) attacks on the Constrained Application Protocol (CoAP) in Internet of Things (IoT) networks. As billions of IoT devices are expected to be connected to the internet in the coming years, these devices are vulnerable to attacks that disrupt their functioning. This research tackles the issue by applying mixed qualitative and quantitative methods for feature selection, feature extraction, and clustering to detect DoS attacks on CoAP using machine learning algorithms (MLAs). The main objective is to enhance the security scheme for CoAP in the IoT environment by analyzing the nature of DoS attacks and identifying a new set of features for detecting them. The aim is to demonstrate the effectiveness of the MLA in detecting DoS attacks and to compare it with conventional intrusion detection systems for securing CoAP. Findings: the research identifies the appropriate node at which to detect DoS attacks in the IoT network environment and demonstrates how to detect the attacks with the MLA. In both the classification and network simulation environments, the k-means algorithm scored the highest accuracy in training and testing, with the network simulation platform reaching an overall accuracy of 99.93%. The work also reviews conventional intrusion detection systems for securing CoAP in the IoT environment and discusses the DoS security issues associated with the protocol.
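The clustering step can be sketched with a plain Lloyd's k-means over per-node traffic features; the features chosen here (request rate, mean payload size) and all values are illustrative assumptions, not the paper's dataset or feature set.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means on equal-length feature vectors."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        # Recompute each center as its cluster mean; keep old center if empty
        centers = [
            [sum(dim) / len(cl) for dim in zip(*cl)] if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Illustrative per-node CoAP traffic features: (requests/s, mean payload bytes).
# Benign nodes send few, larger requests; flooding nodes send many tiny ones.
gen = random.Random(1)
normal = [(5 + gen.random(), 60 + gen.random() * 5) for _ in range(20)]
flood = [(400 + gen.random() * 50, 12 + gen.random()) for _ in range(5)]
centers, clusters = kmeans(normal + flood, k=2)
```

With well-separated traffic profiles, the two resulting centers land near the benign and flooding behavior respectively, and nodes in the flood cluster can be flagged for inspection.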

Keywords: algorithm, CoAP, DoS, IoT, machine learning

Procedia PDF Downloads 53
139 Bluetooth Communication Protocol Study for Multi-Sensor Applications

Authors: Joao Garretto, R. J. Yarwood, Vamsi Borra, Frank Li

Abstract:

Bluetooth Low Energy (BLE) has emerged as one of the main wireless communication technologies used in low-power electronics, such as wearables, beacons, and Internet of Things (IoT) devices. BLE's energy efficiency, interoperability with smartphones, and Over-the-Air (OTA) capabilities are essential features for ultralow-power devices, which are usually designed under size and cost constraints. Most current research on the power analysis of BLE devices focuses on the theoretical aspects of the advertising and scanning cycles, with results presented mostly as mathematical models and software simulations. Such modeling and simulation are important for comprehending the technology, but hardware measurement is essential for understanding how BLE devices behave in real operation. In addition, recent literature focuses mostly on the BLE technology itself, leaving possible applications and their analysis out of scope. In this paper, a coin-cell-battery-powered BLE data acquisition device, with a 4-in-1 sensor and one accelerometer, is proposed and evaluated with respect to its power consumption. The device is first evaluated in advertising mode with the sensors turned off completely, then with each sensor individually turned on and transmitting data, and finally with both sensors on and broadcasting their data to a mobile phone. The results presented in this paper are real-time measurements of the electrical current consumption of the BLE device, with the measured energy levels matched to BLE behavior and sensor activity.
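The bookkeeping behind such current measurements can be sketched as numerical integration of a sampled current trace; the 3 V coin-cell voltage, sample rate, and current levels below are illustrative assumptions, not the paper's measurements.

```python
def energy_mj(current_ma, dt_s, voltage_v=3.0):
    """Approximate energy in millijoules from an evenly sampled current trace."""
    charge_mc = sum(current_ma) * dt_s      # mA * s = millicoulombs
    return charge_mc * voltage_v            # mC * V = mJ

# Illustrative 100 ms trace sampled at 1 kHz: 0.05 mA sleep, one 8 mA TX burst
trace = [0.05] * 90 + [8.0] * 10
e = energy_mj(trace, dt_s=0.001)
```

Dividing the battery's rated capacity (in mAh, converted to mC) by the average charge drawn per advertising cycle gives a rough battery-life estimate under the same assumptions.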

Keywords: bluetooth low energy, power analysis, BLE advertising cycle, wireless sensor node

Procedia PDF Downloads 75
138 Dynamic Response around Inclusions in Infinitely Inhomogeneous Media

Authors: Jinlai Bian, Zailin Yang, Guanxixi Jiang, Xinzhu Li

Abstract:

The propagation of elastic waves in inhomogeneous media is a classic problem. Because earthquakes occur frequently and cause heavy economic losses and casualties, this paper studies the dynamic response around a circular inclusion in a whole space with an inhomogeneous modulus, in order to help prevent and reduce earthquake damage. The inhomogeneity of the medium is reflected in a shear modulus that varies with spatial position while the density remains constant; the method can be applied to problems such as underground buried pipelines. Stress concentration phenomena are common in aerospace and earthquake engineering, and the dynamic stress concentration factor (DSCF) is one of the main factors leading to material damage. One of the important applications of elastodynamics is determining the stress concentration in bodies with discontinuities such as cracks, holes, and inclusions; current methods include the wave function expansion method, the integral transformation method, and the integral equation method. Based on the complex function method, the Helmholtz equation with variable coefficients is standardized using the conformal transformation method and the wave function expansion method, and the displacement and stress fields in the whole space with a circular inclusion are solved in the complex coordinate system. The unknown coefficients are determined from the boundary conditions, and the correctness of the method is verified by comparison with existing results. Owing to the suitability of complex variable function theory for conformal transformation, the method can be extended to inclusions of arbitrary shape. By solving for the dynamic stress concentration factor around the inclusion, the influence of the inhomogeneity parameters of the medium and of the wavenumber ratio between the inclusion and the matrix on the DSCF is analyzed. The research results can provide reference value for the evaluation of nondestructive testing (NDT), oil exploration, seismic monitoring, and soil-structure interaction.
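The setup can be summarized with a short sketch. The abstract does not state the equations explicitly, so the forms below are assumptions following common practice for anti-plane (SH) wave problems with a position-dependent shear modulus:

```latex
% Assumed governing equation: anti-plane displacement w(x, y), shear
% modulus \mu(x, y) varying with position, constant density \rho:
\[
  \nabla \cdot \bigl( \mu(x, y)\, \nabla w \bigr) + \rho\, \omega^{2} w = 0
\]
% A substitution such as w = \mu^{-1/2} W reduces this to a standard
% Helmholtz equation with an effective wavenumber, solvable by wave
% function expansion in the conformally mapped complex plane.
% A common definition of the dynamic stress concentration factor:
\[
  \mathrm{DSCF} = \frac{\lvert \tau_{\theta z} \rvert}{\tau_{0}}
\]
% where \tau_{0} is the stress amplitude of the incident wave.
```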

Keywords: circular inclusions, complex variable function, dynamic stress concentration factor (DSCF), inhomogeneous medium

Procedia PDF Downloads 123
137 Load Balancing Technique for Energy-Efficiency in Cloud Computing

Authors: Rani Danavath, V. B. Narsimha

Abstract:

Cloud computing is emerging as a new paradigm of large-scale distributed computing. It is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models. Load balancing is one of the main challenges in cloud computing: the dynamic workload must be distributed across multiple nodes so that no single node is overloaded. It helps in the optimal utilization of resources and enhances system performance. The goal of load balancing here is to minimize resource consumption and the carbon emission rate, a direct need of cloud computing, and this motivates new metrics, energy consumption and carbon emission, for energy-efficient load balancing techniques. Existing load balancing techniques mainly focus on reducing overhead and response time and on improving performance, but none of them have considered energy consumption and carbon emission. Therefore, this paper introduces a load balancing technique oriented toward energy efficiency. This technique can improve the performance of cloud computing by balancing the workload across all nodes in the cloud with minimum resource utilization, in turn reducing energy consumption and carbon emission to an extent that helps achieve green computing.
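The core balancing idea can be sketched as a greedy least-loaded placement, which keeps every node's utilization low and even; the abstract does not specify the paper's algorithm, so this is a generic illustration, with task names and load units invented for the example.

```python
import heapq

def place_tasks(tasks, node_count):
    """Greedy load balancing: each task goes to the currently least-loaded node."""
    heap = [(0.0, n) for n in range(node_count)]   # (current load, node id)
    heapq.heapify(heap)
    assignment = {}
    for task_id, load in tasks:
        node_load, node = heapq.heappop(heap)      # least-loaded node
        assignment[task_id] = node
        heapq.heappush(heap, (node_load + load, node))
    return assignment

# Illustrative workload: (task id, load units)
tasks = [("t1", 4.0), ("t2", 2.0), ("t3", 2.0), ("t4", 1.0)]
placement = place_tasks(tasks, node_count=2)
```

An energy-aware variant would weight the heap key by each node's power draw per unit load, steering work toward the most energy-efficient nodes first.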

Keywords: cloud computing, distributed computing, energy efficiency, green computing, load balancing, energy consumption, carbon emission

Procedia PDF Downloads 429
136 Eco-Friendly Control of Bacterial Speck on Solanum lycopersicum by Azadirachta indica Extract

Authors: Navodit Goel, Prabir K. Paul

Abstract:

Tomato (Solanum lycopersicum) is attacked by Pseudomonas syringae pv. tomato, which causes speck lesions on the leaves and leads to severe economic losses. In the present study, aqueous fruit extracts of Azadirachta indica (neem) were sprayed on a single node of tomato plants grown under controlled, contamination-free conditions. Plants were treated with neem fruit extract either alone or together with the pathogen. The parameters observed were the activities of polyphenol oxidase (PPO) and lysozyme, and isoform analysis of PPO, both in the treated leaves and in untreated leaves away from the site of extract application. Polyphenol oxidase initiates the phenylpropanoid pathway, resulting in the synthesis of quinones from cytoplasmic phenols and the production of reactive oxygen species toxic to a broad spectrum of microbes. Lysozyme is responsible for the breakdown of bacterial cell walls. The results indicate the upregulation of PPO and lysozyme activities in both treated and untreated leaves, along with de novo expression of new PPO isoenzymes that were absent in control samples. The appearance of additional PPO isoenzymes in bioelicitor-treated plants indicates that these isoenzymes were either expressed after bioelicitor application or already expressed but inactive and activated by it. Lysozyme activity increased significantly in plants treated with the bioelicitor or the pathogen alone; however, no new lysozyme isoenzymes were expressed upon application of the extract. Induction of resistance by neem fruit extract could be a potent weapon in eco-friendly plant protection strategies.

Keywords: Azadirachta indica, lysozyme, polyphenol oxidase, Solanum lycopersicum

Procedia PDF Downloads 266
135 Learning Dynamic Representations of Nodes in Temporally Variant Graphs

Authors: Sandra Mitrovic, Gaurav Singh

Abstract:

In many industries, including telecommunications, churn prediction has been a topic of active research. A lot of attention has been devoted to devising the most informative features, and this area has gained even more focus with the spread of (social) network analytics. Call detail records (CDRs) have been used to construct customer networks and extract potentially useful features. However, to the best of our knowledge, no studies including network features have yet proposed a generic way of representing network information; instead, ad-hoc and dataset-dependent solutions have been suggested. In this work, we build upon a recently presented method (node2vec) to obtain representations for nodes in the observed network. The proposed approach is generic and applicable to any network and domain. Unlike node2vec, which assumes a static network, we consider a dynamic, time-evolving network. To account for this, we construct the feature representation of each node by generating its node2vec representations at different timestamps, concatenating them, and finally compressing them with an auto-encoder-like method in order to retain reasonably long and informative feature vectors. We test the proposed method on a churn prediction task in the telco domain. To predict churners at timestamp ts+1, we construct training and testing datasets consisting of feature vectors from the time intervals [t1, ts-1] and [t2, ts], respectively, and use traditional supervised classification models such as SVM and logistic regression. The observed results show the effectiveness of the proposed approach compared to ad-hoc feature-selection-based approaches and static node2vec.
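The concatenate-then-compress pipeline can be sketched as follows; for self-containment, the auto-encoder is replaced here by a fixed random projection, and the per-timestamp embeddings are toy values rather than trained node2vec output.

```python
import random

def dynamic_embedding(snapshots, out_dim, seed=0):
    """Concatenate a node's per-timestamp embedding vectors, then compress
    with a fixed random projection (a stand-in for the auto-encoder)."""
    concat = [x for emb in snapshots for x in emb]
    rng = random.Random(seed)                      # fixed seed: same map for all nodes
    proj = [[rng.gauss(0, 1) for _ in concat] for _ in range(out_dim)]
    return [sum(w * x for w, x in zip(row, concat)) for row in proj]

# Illustrative: three timestamps of 4-dim embeddings for a single node
snapshots = [[0.1, 0.2, 0.0, 0.3], [0.2, 0.1, 0.1, 0.3], [0.0, 0.4, 0.2, 0.1]]
vec = dynamic_embedding(snapshots, out_dim=4)
```

Because the projection is seeded identically for every node, all nodes share one linear map, so the compressed vectors remain comparable across the network, which is the property the downstream SVM or logistic regression relies on.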

Keywords: churn prediction, dynamic networks, node2vec, auto-encoders

Procedia PDF Downloads 300
134 Code Embedding for Software Vulnerability Discovery Based on Semantic Information

Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson

Abstract:

Deep learning methods have seen increasing application to the long-standing security research goal of automatic vulnerability detection in source code. Attention, however, must still be paid to producing vector representations of source code (code embeddings) as input for these deep learning models. Graphical representations of code, most prominently Abstract Syntax Trees and Code Property Graphs, have recently received some use in this task; however, for the very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information, but little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in the graph's nodes. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification. It uses information from the nodes as well as the structure of the code graph to select the features most indicative of the presence or absence of vulnerabilities. The model is implemented and experimentally tested on the SARD Juliet vulnerability test suite to determine its efficacy. It improves on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.
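The pruning idea can be sketched as keeping only nodes within k hops of security-relevant seed nodes in the code graph; the seed criterion and the toy graph below are illustrative assumptions, not SCEVD's actual selection mechanism.

```python
from collections import deque

def prune_graph(adj, seeds, k):
    """Keep only nodes within k hops (treated as undirected) of any seed node."""
    dist = {s: 0 for s in seeds}
    queue = deque(seeds)
    while queue:                      # breadth-first search from all seeds at once
        u = queue.popleft()
        if dist[u] == k:
            continue
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    kept = set(dist)
    return {u: [v for v in adj.get(u, []) if v in kept] for u in kept}

# Toy code graph: the node for an unsafe strcpy call is the relevant seed
adj = {"a": ["b"], "b": ["a", "call_strcpy"], "call_strcpy": ["b", "c"],
       "c": ["call_strcpy", "d"], "d": ["c"]}
pruned = prune_graph(adj, seeds=["call_strcpy"], k=1)
```

Nodes more than k hops from any seed ("a" and "d" here) are dropped, shrinking the graph the embedding model must process.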

Keywords: code representation, deep learning, source code semantics, vulnerability discovery

Procedia PDF Downloads 138
133 hsa-miR-1204 and hsa-miR-639 Prominent Role in Tamoxifen's Molecular Mechanisms on the EMT Phenomenon in Breast Cancer Patients

Authors: Mahsa Taghavi

Abstract:

Tamoxifen is a regularly prescribed medication in the treatment of breast cancer. This study examined the effect of tamoxifen on the EMT pathways of breast cancer patients, to see whether it affects the cancer cells' resistance to tamoxifen, and to look for specific miRNAs associated with EMT. We used continuous and integrated bioinformatics analysis to choose the optimal GEO datasets. After sorting the gene expression profiles, we examined the signaling mechanisms, gene ontology, and protein interactions of each gene, used the GEPIA database to confirm the candidate genes, and then investigated the critical miRNAs related to those genes. The gene expression profiles fell into two distinct groups. The first group was examined using the expression profile of genes downregulated in the EMT pathway; the second group represented the opposite. A total of 253 genes from the first group and 302 genes from the second group were found to be common. Several genes in the first group were linked to cell death, focal adhesion, and cellular aging, while genes in the second group were linked to distinct cell cycle stages. Finally, proteins such as MYLK, SOCS3, and STAT5B from the first group and BIRC5, PLK1, and RAPGAP1 from the second group were selected as potential candidates linked to tamoxifen's influence on the EMT pathway. According to node degree and betweenness, hsa-miR-1204 and hsa-miR-639 have a very close relationship with the candidate genes. These results give a better understanding of the action of tamoxifen on the EMT pathway; learning more about how tamoxifen's target genes and proteins work will help us better understand the drug.

Keywords: tamoxifen, breast cancer, bioinformatics analysis, EMT, miRNAs

Procedia PDF Downloads 116
132 Physiological Normoxia and Cellular Adhesion of Diffuse Large B-Cell Lymphoma Primary Cells: Real-Time PCR and Immunohistochemistry Study

Authors: Kamila Duś-Szachniewicz, Kinga M. Walaszek, Paweł Skiba, Paweł Kołodziej, Piotr Ziółkowski

Abstract:

Cell adhesion is of fundamental importance in cell communication, signaling, and motility, and its dysfunction occurs prevalently during cancer progression. Knowledge of the molecular and cellular processes involved in abnormal cancer cell adhesion has greatly increased, focusing mainly on cellular adhesion molecules (CAMs) and the tumor microenvironment. Unfortunately, most of the data regarding CAM expression come from studies of cells maintained at the standard oxygen condition of 21%, while emerging evidence suggests that culturing cells in ambient air is far from physiological: oxygen in human tissues in fact ranges from 1 to 11%. The aim of this study was to compare the effects of physiological lymph node normoxia (5% O2) and hyperoxia (21% O2) on the expression of cellular adhesion molecules in primary diffuse large B-cell lymphoma (DLBCL) cells isolated from 10 lymphoma patients. Quantitative RT-PCR and immunohistochemistry confirmed the differential expression of several CAMs, including ICAM, CD83, CD81, and CD44, depending on the oxygen level. Our findings also suggest that DLBCL cells maintained at ambient O2 (21%) exhibit a reduced growth rate and migration ability compared to cells grown under normoxic conditions. Taking all the observations into account, we emphasize the need to identify optimal human cell culture conditions that mimic the physiological aspects of tumor growth and differentiation.

Keywords: adhesion molecules, diffuse large B-cell lymphoma, physiological normoxia, quantitative RT-PCR

Procedia PDF Downloads 263
131 Analysis of Network Connectivity for Ship-To-Ship Maritime Communication Using IEEE 802.11 in the Maritime Environment of Tanjung Perak, Indonesia

Authors: Ahmad Fauzi Makarim, Okkie Puspitorini, Hani'ah Mahmudah, Nur Adi Siswandari, Ari Wijayanti

Abstract:

As a maritime country, Indonesia needs a maritime connectivity solution that can support the maritime communication system, including communication from harbor to ship and from ship to ship. Many application services for maritime communication, from safety to voyage services supporting voyage activities, require a high-bandwidth connection. To support the government's efforts to handle this problem, research on maritime communication was conducted by applying a technology newly deployed in Indonesia, namely IEEE 802.11. In this research, three outdoor WiFi devices operating at a frequency of 5.8 GHz are used. The maritime area from Tanjung Perak harbor in Surabaya to Karang Jamuang Island is used as the research location, with ship node placement permitted by Navigation District Class 1. This maritime area is formed by the state 1 and state 2 areas, narrow waters with an average wave height of 0.7 m according to data from BMKG Surabaya. Wave height is then used as one of the parameters in analyzing the characteristics of signal propagation over the sea surface, so that the coverage area of the transmitter system can be determined. For the three outdoor WiFi devices sampled in this research, the coverage was determined to be about 2256 m for device A, 4000 m for device B, and 1174 m for device C. Ship-to-ship network connectivity is then analyzed using the AODV routing algorithm, based on the smallest transmit power value among all nodes within the transmitter coverage.
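Coverage figures of this kind can be sketched with a free-space link budget; the 5.8 GHz frequency matches the paper, but the transmit power, antenna gains, and receiver sensitivity below are assumptions, and real sea-surface propagation (two-ray reflection, wave height) shortens the range compared to this ideal model.

```python
import math

def fspl_db(d_m, f_hz):
    """Free-space path loss in dB at distance d_m (meters), frequency f_hz (Hz)."""
    return 20 * math.log10(d_m) + 20 * math.log10(f_hz) - 147.55

def max_range_m(tx_dbm, rx_sens_dbm, gains_dbi, f_hz):
    """Largest distance whose free-space loss still fits the link budget."""
    budget_db = tx_dbm + gains_dbi - rx_sens_dbm
    # Invert FSPL: 20*log10(d) = budget + 147.55 - 20*log10(f)
    return 10 ** ((budget_db + 147.55 - 20 * math.log10(f_hz)) / 20)

# Assumed radio parameters for an illustrative 5.8 GHz outdoor link
r = max_range_m(tx_dbm=23, rx_sens_dbm=-90, gains_dbi=10, f_hz=5.8e9)
```

Under these assumed parameters the free-space range comes out in the kilometers, the same order of magnitude as the measured device coverages in the paper.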

Keywords: maritime of Indonesia, maritime communications, outdoor wifi, coverage, AODV

Procedia PDF Downloads 333
130 Data Compression in Ultrasonic Network Communication via Sparse Signal Processing

Authors: Beata Zima, Octavio A. Márquez Reyes, Masoud Mohammadgholiha, Jochen Moll, Luca de Marchi

Abstract:

This document presents an approach that uses compressed sensing for signal encoding and information transfer within a guided wave sensor network comprised of specially designed frequency steerable acoustic transducers (FSATs). Wave propagation in a damaged plate was simulated using the commercial FEM-based software COMSOL. Guided waves were excited by means of FSATs, characterized by the special shape of their electrodes and modeled using PIC255 piezoelectric material. The special shape of the FSAT allows wave energy to be focused in a certain direction according to the frequency components of its actuation signal, which makes a larger monitored area available. The process begins when an FSAT detects and records a reflection from damage in the structure; this signal is then encoded and prepared for transmission using a combined approach based on Compressed Sensing Matching Pursuit and Quadrature Amplitude Modulation (QAM). After the signal is encoded into binary characters, the information is transmitted between the nodes in the network. The message reaches the last node, where it is finally decoded and processed to be used for damage detection and localization. The main aim of the investigation is to determine the location of detected damage using the reconstructed signals. The study demonstrates that the special steering capabilities of FSATs not only facilitate the detection of damage but also permit transmitting the damage information to a chosen area in a specific direction of the investigated structure.
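The encoding step can be sketched as greedy matching pursuit over a fixed dictionary of unit-norm atoms: only the (index, coefficient) pairs need to be transmitted. The dictionary and "signal" below are toy stand-ins for the recorded guided-wave reflections.

```python
def matching_pursuit(signal, atoms, n_picks):
    """Greedy matching pursuit: repeatedly pick the unit-norm atom most
    correlated with the residual, store (index, coefficient), subtract."""
    residual = list(signal)
    picks = []
    for _ in range(n_picks):
        corr = [sum(a * r for a, r in zip(atom, residual)) for atom in atoms]
        best = max(range(len(atoms)), key=lambda i: abs(corr[i]))
        picks.append((best, corr[best]))
        residual = [r - corr[best] * a for r, a in zip(residual, atoms[best])]
    return picks, residual

# Toy dictionary of unit-norm atoms (standard basis of R^4) and a toy signal
atoms = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
picks, residual = matching_pursuit([0.0, 3.0, 0.0, 1.0], atoms, n_picks=2)
```

The receiver rebuilds an approximation of the signal as the sum of the picked atoms scaled by their coefficients; the (index, coefficient) pairs are what would be mapped onto QAM symbols for transmission.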

Keywords: data compression, ultrasonic communication, guided waves, FEM analysis

Procedia PDF Downloads 110
129 The Analysis of Personalized Low-Dose Computed Tomography Protocol Based on Cumulative Effective Radiation Dose and Cumulative Organ Dose for Patients with Breast Cancer with Regular Chest Computed Tomography Follow up

Authors: Okhee Woo

Abstract:

Purpose: The aim of this study is to evaluate the 2-year cumulative effective radiation dose and cumulative organ dose from regular follow-up computed tomography (CT) scans in patients with breast cancer, and to establish a personalized low-dose CT protocol. Methods and Materials: A retrospective study was performed on patients with breast cancer who were diagnosed and managed consistently on the basis of the routine breast cancer follow-up protocol between 2012-01 and 2016-06. Based on ICRP (International Commission on Radiological Protection) Publication 103, the cumulative effective radiation dose of each patient over the 2-year follow-up was analyzed using commercial radiation management software (Radimetrics, Bayer Healthcare). The personalized dose to each organ was analyzed in detail using the Monte Carlo simulation provided by the software. Results: A total of 3822 CT scans in 490 patients was evaluated (age: 52.32±10.69). The mean number of scans per patient was 7.8±4.54, and each patient was exposed to 95.54±63.24 mSv of radiation over 2 years. The cumulative CT radiation dose was significantly higher in patients with lymph node metastasis (p = 0.00). HER-2-positive patients were exposed to more radiation than estrogen or progesterone receptor-positive patients (p = 0.00). There was no difference in the cumulative effective radiation dose between age groups. Conclusion: Knowing how much radiation a patient has been exposed to is the starting point for managing radiation exposure in patients with long-term CT follow-up. A precise and personalized protocol, as well as iterative reconstruction, may reduce hazards from unnecessary radiation exposure.
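The cumulative-dose bookkeeping can be sketched as summing per-scan effective doses (mSv) for each patient over the follow-up window; the scan log below is illustrative, not the study's data.

```python
from collections import defaultdict

def cumulative_dose(scans):
    """Sum per-scan effective dose (mSv) for each patient."""
    totals = defaultdict(float)
    for patient_id, dose_msv in scans:
        totals[patient_id] += dose_msv
    return dict(totals)

# Illustrative scan log: (patient id, effective dose in mSv per CT scan)
scans = [("p1", 12.4), ("p1", 11.8), ("p2", 13.1), ("p1", 12.0), ("p2", 12.7)]
doses = cumulative_dose(scans)
```

The same aggregation, keyed by (patient, organ) instead of patient alone, gives the cumulative organ doses the study reports.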

Keywords: computed tomography, breast cancer, effective radiation dose, cumulative organ dose

Procedia PDF Downloads 171
128 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings

Authors: Jude K. Safo

Abstract:

Knowledge Graphs (KGs) and their relation to Graph Embeddings (GEs) represent a unique data structure in the landscape of machine learning, relative to image, text, and acoustic data. Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we also observe in Large Language Models. Notable attempts such as TransE, TransR, and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications; they are also limited in scope to next node/link prediction. Traditional linear methods like Tucker, CP, PARAFAC, and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for linear mappings between concepts in KG space and GE space that preserve cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures, and we demonstrate the model's performance on the WN18 benchmark. The model does not rely on Large Language Models (LLMs), though the applications are certainly relevant there as well.
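The TransE baseline mentioned above scores a triple (h, r, t) by how closely the relation embedding translates the head onto the tail; a minimal sketch follows, with toy embedding values standing in for trained vectors.

```python
import math

def transe_score(h, r, t):
    """TransE plausibility score: L2 distance of h + r from t (lower is better)."""
    return math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

# Toy 2-dim embeddings: (paris) + (capital_of) should land near (france)
paris, france = [0.9, 0.1], [1.0, 0.9]
capital_of = [0.1, 0.8]
good = transe_score(paris, capital_of, france)   # plausible triple
bad = transe_score(france, capital_of, paris)    # implausible triple
```

Link prediction under TransE ranks all candidate tails by this score, which is why its expressiveness caps out at next node/link prediction, the limitation the paper's topological framework targets.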

Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics

Procedia PDF Downloads 53