Search results for: threshold graphs

910 Implementation of Edge Detection Based on Autofluorescence Endoscopic Image of Field Programmable Gate Array

Authors: Hao Cheng, Zhiwu Wang, Guozheng Yan, Pingping Jiang, Shijia Qin, Shuai Kuang

Abstract:

Autofluorescence Imaging (AFI) is a technology developed in recent years for detecting early carcinogenesis of the gastrointestinal tract. Compared with traditional white light endoscopy (WLE), this technology greatly improves the detection accuracy of early carcinogenesis, because the colors of normal tissues differ from those of cancerous tissues, so edge detection can distinguish them in grayscale images. In this paper, the traditional Sobel edge detection method is optimized for the gastrointestinal environment with an adaptive threshold and morphological processing. All of the processing is implemented on our self-designed system based on the OV6930 image sensor and a Field Programmable Gate Array (FPGA). The system captures the gastrointestinal image taken by the lens in real time and detects edges. The final experiments verified the feasibility of our system and the effectiveness and accuracy of the edge detection algorithm.
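The abstract does not publish the FPGA pipeline itself; as a rough software illustration of the processing chain it describes (Sobel gradients, an adaptive threshold, morphological clean-up), a minimal NumPy/SciPy sketch might look as follows. The global mean-plus-k·std rule here stands in for the paper's unspecified adaptive threshold.

```python
import numpy as np
from scipy import ndimage

def sobel_edges_adaptive(gray, k=1.0):
    """Sobel gradient magnitude, thresholded adaptively, then cleaned
    with a morphological opening. `gray` is a 2-D float array."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    gx = ndimage.convolve(gray, kx)
    gy = ndimage.convolve(gray, ky)
    mag = np.hypot(gx, gy)
    # Adaptive threshold: mean + k * std of the gradient magnitude
    # (a stand-in for the paper's unspecified adaptive rule)
    edges = mag > mag.mean() + k * mag.std()
    # Morphological opening removes isolated speckle responses
    return ndimage.binary_opening(edges, structure=np.ones((3, 3)))
```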

Keywords: AFI, edge detection, adaptive threshold, morphological processing, OV6930, FPGA

909 Two-Dimensional Material-Based Negative Differential Resistance Device with High Peak-to-Valley Current Ratio for Multi-Valued Logic Circuits

Authors: Kwan-Ho Kim, Jin-Hong Park

Abstract:

Multi-valued logic (MVL) circuits, which can handle more than two logic states, are one of the promising solutions to overcome the bit-density limitations of conventional binary logic systems. Recently, tunneling devices such as the Esaki diode and the resonant tunneling diode (RTD) have been extensively explored to construct MVL circuits. These tunneling devices present a negative differential resistance (NDR) phenomenon, in which the current decreases as the voltage increases in a specific applied voltage region. Due to this non-monotonic current behavior, the tunneling devices have more than two threshold voltages, consequently enabling the construction of MVL circuits. Recently, the emergence of two-dimensional (2D) van der Waals (vdW) crystals has opened up the possibility of fabricating such tunneling devices easily. Owing to the defect-free surface of the 2D crystals, a very abrupt junction interface can be formed through a simple stacking process, which subsequently allows the implementation of a high-performance tunneling device. Here, we report a vdW heterostructure-based tunneling device with multiple threshold voltages, fabricated with black phosphorus (BP) and hafnium diselenide (HfSe₂). First, we exfoliated BP on the SiO₂ substrate and then transferred HfSe₂ onto the BP using a dry transfer method. The BP and HfSe₂ form a type-III heterojunction, so the highly doped n+/p+ interface can be easily implemented without an additional electrical or chemical doping process. Owing to the high natural doping at the junction, a record-high peak-to-valley current ratio (PVCR) of 16 was observed, to the best of our knowledge, for 2D material-based NDR devices. Furthermore, based on this, we demonstrate for the first time the feasibility of a ternary latch by connecting two multi-threshold-voltage devices in series.

Keywords: two dimensional van der Waals crystal, multi-valued logic, negative differential resistance, tunneling device

908 PVMODREL© Development Based on Reliability Evaluation of a PV Module Using Accelerated Degradation Testing

Authors: Abderafi Charki, David Bigaud

Abstract:

The aim of this oral presentation is to present PVMODREL© (PhotoVoltaic MODule RELiability), new software developed at the University of Angers. This new tool permits us to evaluate the lifetime and reliability of a PV module whatever its geographical location and environmental conditions. The electrical power output of a PV module decreases with time, mainly as a result of the effects of corrosion, encapsulation discoloration, and solder bond failure. The failure of a PV module is defined as the point where the electrical power degradation reaches a given threshold value. Accelerated life tests (ALTs) are commonly used to assess the reliability of a PV module. However, ALTs provide limited data on the failure of a module, and these tests are expensive to carry out. One possible solution is to conduct accelerated degradation tests. The Wiener process in conjunction with the accelerated failure time model makes it possible to carry out numerous simulations and thus to determine the failure time distribution based on the aforementioned threshold value. By this means, the failure time distribution and the lifetime (mean and uncertainty) can be evaluated. An example using the damp heat test is shown to demonstrate the usefulness of PVMODREL©.
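PVMODREL©'s internals are not given in the abstract; the sketch below only illustrates the underlying idea it describes, simulating Wiener-process degradation paths and recording the first passage through a failure threshold. All numerical values (drift, diffusion, threshold) are illustrative assumptions.

```python
import numpy as np

def wiener_failure_times(mu, sigma, threshold, t_max, dt, n_paths, seed=0):
    """Monte Carlo first-passage times of a drifted Wiener degradation
    process D(t) = mu*t + sigma*W(t) through a failure threshold."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    times = np.full(n_paths, np.inf)
    d = np.zeros(n_paths)
    for i in range(1, n_steps + 1):
        d += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        newly_failed = (d >= threshold) & np.isinf(times)
        times[newly_failed] = i * dt
    return times

# Illustrative numbers only: degradation threshold of 20 (e.g. % power loss)
ft = wiener_failure_times(mu=0.8, sigma=0.5, threshold=20.0,
                          t_max=50.0, dt=0.05, n_paths=5000)
print("mean lifetime (failed paths):", ft[np.isfinite(ft)].mean())
```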

Keywords: lifetime, reliability, PV Module, accelerated life testing, accelerated degradation testing

907 Study of Storms on the Javits Center Green Roof

Authors: Alexander Cho, Harsho Sanyal, Joseph Cataldo

Abstract:

A quantitative analysis of the different variables on both the South and North green roofs of the Jacob K. Javits Convention Center was undertaken to find mathematical relationships between net radiation and evapotranspiration (ET), average outside temperature, and lysimeter weight. Groups of datasets were analyzed, and the relationships were plotted on linear and semi-log graphs to find consistent relationships. Antecedent conditions for each rainstorm were also recorded and plotted against the volumetric water difference within the lysimeter. The first relation was the inverse parabolic relationship between the lysimeter weight and the net radiation and ET. The peaks and valleys of the lysimeter weight corresponded to valleys and peaks in the net radiation and ET, respectively, with the 8/22/15 and 1/22/16 datasets showing this trend. The U-shaped and inverse U-shaped plots of the two variables coincided, indicating an inverse relationship between them. Cross-variable relationships were examined through graphs with lysimeter weight as the dependent variable on the y-axis. Ten out of sixteen of the lysimeter weight vs. outside temperature plots had R² values > 0.9. Antecedent conditions were also recorded for rainstorms, categorized by the amount of precipitation accumulating during the storm. Plotted against the change in the volumetric water weight difference within the lysimeter, a logarithmic regression was found with large R² values. The datasets were compared using the Mann-Whitney U-test, at a significance level of 5%, to check whether they were statistically different; all compared datasets yielded U-test statistic values consistent with the datasets being different.
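For reference, the Mann-Whitney U comparison described above can be reproduced with SciPy; the arrays below are random placeholders, not the Javits Center measurements.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
lysimeter_a = rng.normal(50.0, 5.0, size=30)   # placeholder dataset A
lysimeter_b = rng.normal(47.0, 5.0, size=30)   # placeholder dataset B

u_stat, p_value = mannwhitneyu(lysimeter_a, lysimeter_b,
                               alternative="two-sided")
# Reject the null of identical distributions at the 5% level
print(u_stat, p_value,
      "different" if p_value < 0.05 else "not different")
```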

Keywords: green roof, green infrastructure, Javits Center, evapotranspiration, net radiation, lysimeter

906 A Guide to User-Friendly Bash Prompt: Adding Natural Language Processing Plus Bash Explanation to the Command Interface

Authors: Teh Kean Kheng, Low Soon Yee, Burra Venkata Durga Kumar

Abstract:

In 2022, as the world becomes increasingly computer-related, more individuals are attempting to study coding on their own or in school, because they have discovered the value of learning to code and the benefits it will provide them. But learning to code is difficult for most people. Even senior programmers with a decade of experience still need help from online sources while coding. The reason is that coding is not like talking to other people; it has a specific syntax that must be used for the computer to understand what we want it to do, so coding is hard for people who have had no prior contact with the field. If a user wants to learn Bash through the Bash prompt, it is harder still: the Bash prompt is just an empty box waiting for the user to tell the computer what to do, and without referring to the internet, a new user will not know what can be done with the prompt. From this, we can conclude that the Bash prompt is not user-friendly for new users who are learning Bash. Our goal in writing this paper is to propose a user-friendly Bash prompt in Ubuntu OS using Artificial Intelligence (AI) to lower the threshold of learning Bash, letting users write and learn Bash code with their own words and concepts.

Keywords: user-friendly, bash code, artificial intelligence, threshold, semantic similarity, lexical similarity

905 Identification of Damage Mechanisms in Interlock Reinforced Composites Using a Pattern Recognition Approach of Acoustic Emission Data

Authors: M. Kharrat, G. Moreau, Z. Aboura

Abstract:

The latest advances in the weaving industry, combined with increasingly sophisticated means of materials processing, have made it possible to produce complex 3D composite structures. Mainly used in aeronautics, composite materials with 3D architecture offer better mechanical properties than 2D reinforced composites. Nevertheless, these materials require a good understanding of their behavior. Because of the complexity of such materials, the damage mechanisms are multiple, and the scenario of their appearance and evolution depends on the nature of the exerted solicitations. The AE technique is a well-established tool for discriminating between damage mechanisms. Suitable sensors are used during the mechanical test to monitor the structural health of the material. Relevant AE features are then extracted from the recorded signals, followed by data analysis using pattern recognition techniques. In order to better understand the damage scenarios of interlock composite materials, a multi-instrumentation setup was deployed in this work for tracking damage initiation and development, especially in the vicinity of the first significant damage, called macro-damage. The deployed instrumentation includes video-microscopy, Digital Image Correlation, Acoustic Emission (AE), and micro-tomography. In this study, a multi-variable AE data analysis approach was developed for discriminating between the different signal classes representing the different emission sources during testing. An unsupervised classification technique was adopted to perform AE data clustering without a priori knowledge. The multi-instrumentation and the clustered data served to label the different signal families and to build a learning database. The latter is useful for constructing a supervised classifier that can be used for automatic recognition of the AE signals. Several materials with different ingredients were tested under various solicitations in order to feed and enrich the learning database. The methodology presented in this work was useful to refine the damage threshold for the new-generation materials. The damage mechanisms around this threshold were highlighted, and the obtained signal classes were assigned to the different mechanisms. The isolation of a 'noise' class makes it possible to discriminate between the signals emitted by damage without resorting to spatial filtering or increasing the AE detection threshold. The approach was validated on different material configurations. For the same material and the same type of solicitation, the identified classes are reproducible and little disturbed. The supervised classifier constructed from the learning database was able to predict the labels of the classified signals.
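As a sketch of the unsupervised clustering step described above, k-means on standardized AE features is one common choice; the feature set and cluster count below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Placeholder AE feature matrix: one row per hit, with columns such as
# amplitude, energy, duration, counts, peak frequency (illustrative).
rng = np.random.default_rng(2)
features = rng.random((500, 5))

# Standardize so no single feature dominates the distance metric
X = StandardScaler().fit_transform(features)

# Cluster without a priori labels; each cluster is afterwards assigned
# to a damage mechanism (or 'noise') using the other instrumentation
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
```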

Keywords: acoustic emission, classifier, damage mechanisms, first damage threshold, interlock composite materials, pattern recognition

904 Flood Monitoring in the Vietnamese Mekong Delta Using Sentinel-1 SAR with Global Flood Mapper

Authors: Ahmed S. Afifi, Ahmed Magdy

Abstract:

Satellite monitoring is an essential tool to study, understand, and map large-scale environmental changes that affect humans, climate, and biodiversity. The Sentinel-1 Synthetic Aperture Radar (SAR) instrument provides a rich collection of data with all-weather capability, short revisit time, and high spatial resolution that can be used effectively in flood management. Floods occur when an overflow of water submerges dry land, and flooded areas must be distinguished from dry ones. In this study, we use the Global Flood Mapper (GFM), a new Google Earth Engine application that allows users to quickly map floods using Sentinel-1 SAR. The GFM enables users to manually adjust the flood map parameters, e.g., the Z-value thresholds for the VV and VH bands and the elevation and slope mask thresholds. The composite R:G:B image obtained by coupling the bands of Sentinel-1 (VH:VV:VH) reduces false classification to a large extent compared to using one band alone (e.g., the VH polarization band). The flood mapping algorithm in the GFM and Otsu thresholding are compared with Sentinel-2 optical data, and the results show that the GFM algorithm can overcome the misclassification of a flooded area in An Giang, Vietnam.
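A toy version of the Z-value flood threshold the GFM exposes might look like the sketch below; it uses a single VH band and illustrative threshold values, whereas the GFM also handles the VV band and an elevation mask.

```python
import numpy as np

def flood_mask(vh_db, slope_deg, z_thresh=-2.0, slope_max=5.0):
    """Toy Z-value flood threshold on a Sentinel-1 VH band (in dB):
    pixels whose backscatter is an unusually low outlier are flagged
    as water, and a slope mask removes steep terrain where standing
    water is implausible. Threshold values are illustrative."""
    z = (vh_db - vh_db.mean()) / vh_db.std()
    return (z < z_thresh) & (slope_deg < slope_max)
```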

Keywords: SAR backscattering, Sentinel-1, flood mapping, disaster

903 GIS-Based Identification of Overloaded Distribution Transformers and Calculation of Technical Electric Power Losses

Authors: Awais Ahmed, Javed Iqbal

Abstract:

Pakistan has for many years faced extreme challenges from an energy deficit due to the shortage of power generation compared to increasing demand. A part of this energy deficit is also contributed by the power lost in the transmission and distribution network. Unfortunately, distribution companies are not equipped with modern technologies and methods to identify and eliminate these losses. According to estimates, total energy lost in the early 2000s was between 20 and 26 percent. To address this issue, the present research study was designed with the objective of developing a standalone GIS application for distribution companies capable of loss calculation as well as identification of overloaded transformers. For this purpose, the Hilal Road feeder in Faisalabad Electric Supply Company (FESCO) was selected as the study area. An extensive GPS survey was conducted to identify each consumer, linking it to the secondary pole of the transformer, geo-referencing equipment, and documenting conductor sizes. To identify overloaded transformers, the accumulated kWh readings of the consumers on each transformer were compared with a threshold kWh value. Technical losses of the 11kV and 220V lines were calculated using the data from the substation and the resistance of the network calculated from the geo-database. To automate the process, a standalone GIS application was developed using ArcObjects with engineering analysis capabilities. The application uses the GIS database developed for the 11kV and 220V lines to display and query spatial data and presents results in the form of graphs. The results show a technical loss of about 14% on both the high-tension (HT) and low-tension (LT) networks, while 4 out of 15 general-duty transformers were found to be overloaded. The study shows that GIS can be a very effective tool for distribution companies in the management and planning of their distribution network.
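The two computations named above, overload flagging and I²R technical losses, reduce to simple arithmetic once the GIS supplies currents, conductor resistances, and lengths; the sketch below is a minimal illustration with assumed values, not the ArcObjects implementation.

```python
def segment_loss_kw(current_a, resistance_ohm_per_km, length_km, phases=3):
    """Technical (copper) loss of one line segment: P = phases * I^2 * R."""
    r = resistance_ohm_per_km * length_km
    return phases * current_a**2 * r / 1000.0  # kW

def is_overloaded(consumer_kwh_sum, threshold_kwh):
    """Transformer flag used in the study: accumulated consumer kWh
    readings compared against a rating-derived threshold."""
    return consumer_kwh_sum > threshold_kwh

# Illustrative 11 kV segment: 120 A over 2.5 km of 0.3 ohm/km conductor
print(segment_loss_kw(120, 0.3, 2.5))   # -> 32.4 kW
print(is_overloaded(consumer_kwh_sum=61_000, threshold_kwh=50_000))
```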

Keywords: geographical information system, GIS, power distribution, distribution transformers, technical losses, GPS, SDSS, spatial decision support system

902 Statistical Inferences for GQARCH-Itô-Jumps Model Based on the Realized Range Volatility

Authors: Fu Jinyu, Lin Jinguan

Abstract:

This paper introduces a novel approach that unifies two types of models: the continuous-time jump-diffusion used to model high-frequency data, and the discrete-time GQARCH employed to model low-frequency financial data, by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the “GQARCH-Itô-Jumps model.” We adopt realized range-based threshold estimation for the high-frequency financial data rather than realized return-based volatility estimators, which entail the loss of intra-day information on the price movement. Meanwhile, a quasi-likelihood function for the low-frequency GQARCH structure with jumps is developed for the parametric estimates. The asymptotic theories are mainly established for the proposed estimators in the case of finite-activity jumps. Moreover, simulation studies are implemented to check the finite-sample performance of the proposed methodology. Specifically, it is demonstrated how our proposed approaches can be practically used on some financial data.
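For orientation, the realized range-based variance for one day, without the paper's threshold truncation, reduces to the Parkinson-scaled sum of squared log ranges; a minimal sketch, with the jump-truncation step deliberately omitted:

```python
import numpy as np

def realized_range_variance(high, low):
    """Realized range-based variance over one day: the sum of squared
    log intraday ranges, scaled by 4*ln(2) (the Parkinson constant).
    `high`/`low` are arrays of per-interval price extremes. The paper's
    threshold estimator additionally truncates intervals flagged as
    containing jumps; that step is not shown here."""
    log_range = np.log(np.asarray(high) / np.asarray(low))
    return np.sum(log_range**2) / (4.0 * np.log(2.0))
```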

Keywords: Itô process, GQARCH, leverage effects, threshold, realized range-based volatility estimator, quasi-maximum likelihood estimate

901 Optimizing Power in Sequential Circuits by Reducing Leakage Current Using Enhanced Multi Threshold CMOS

Authors: Patikineti Sreenivasulu, K. Srinivasa Rao, A. Vinaya Babu

Abstract:

The demand for portability, performance, and high functional integration density of digital devices makes the scaling of complementary metal oxide semiconductor (CMOS) devices inevitable. The increase in power consumption, coupled with the increasing demand for portable/hand-held electronics, has made power consumption a dominant concern in the design of VLSI circuits today. MTCMOS technology provides low-leakage and high-performance operation by utilizing high-speed, low-Vt (LVT) transistors for logic cells and low-leakage, high-Vt (HVT) devices as sleep transistors. Sleep transistors disconnect logic cells from the supply and/or ground to reduce leakage in sleep mode. In this technology, the energy consumed during mode transitions and the minimum time required to turn the circuit on upon receiving the wake-up signal are issues to be considered, because they can adversely impact the performance of a VLSI circuit. In this paper, we introduce an enhancement of MTCMOS technology to optimize the power of MTCMOS sequential circuits.

Keywords: power consumption, ultra-low power, leakage, sub threshold, MTCMOS

900 Automated Ultrasound Carotid Artery Image Segmentation Using Curvelet Threshold Decomposition

Authors: Latha Subbiah, Dhanalakshmi Samiappan

Abstract:

In this paper, we propose denoising Common Carotid Artery (CCA) B-mode ultrasound images by a decomposition approach to curvelet thresholding, with automatic segmentation of the intima-media thickness and adventitia boundary. Through the decomposition, the local geometry of the image and its directions of gradients are well preserved. The components are combined into a single vector-valued function, which removes noise patches. A double threshold is applied to inherently remove speckle noise in the image. The denoised image is segmented by active contours without specifying seed points; combined with level set theory, they provide subregions with continuous boundaries. The deformable contours match the shapes and motion of objects in the images. A curve or a surface under constraints is developed from the image with the goal that it is pulled into the necessary features of the image, and region-based and boundary-based information are integrated to achieve the contour. The method treats the multiplicative speckle noise in objective and subjective quality measurements and thus leads to better-segmented results. The proposed denoising method gives better performance metrics compared with other state-of-the-art denoising algorithms.
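Curvelet toolboxes are less commonly installed, so the sketch below uses a wavelet transform (PyWavelets) as a stand-in to show the same decompose-threshold-reconstruct structure the paper relies on; the wavelet, level, and threshold value are illustrative assumptions.

```python
import pywt

def denoise_soft(img, wavelet="db4", level=3, thresh=0.05):
    """Transform-domain soft thresholding. A wavelet transform stands
    in here for the paper's curvelet decomposition; the pipeline shape
    (decompose, threshold the detail coefficients, reconstruct) is the
    same. `img` is a 2-D float array."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Keep the approximation, soft-threshold every detail sub-band
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thresh, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)
```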

Keywords: curvelet, decomposition, levelset, ultrasound

899 Code Embedding for Software Vulnerability Discovery Based on Semantic Information

Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson

Abstract:

Deep learning methods have been seeing increasing application to the long-standing security research goal of automatic vulnerability detection for source code. Attention, however, must still be paid to the task of producing vector representations of source code (code embeddings) as input for these deep learning models. Graphical representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have received some use in this task of late; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in the graph's nodes. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph in order to select features which are most indicative of the presence or absence of vulnerabilities. The model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It is able to improve on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.

Keywords: code representation, deep learning, source code semantics, vulnerability discovery

898 Estimation of the Mean of the Selected Population

Authors: Kalu Ram Meena, Aditi Kar Gangopadhyay, Satrajit Mandal

Abstract:

Two normal populations with different means and the same known variance are considered. The population with the smaller sample mean is selected. Various estimators are constructed for the mean of the selected normal population. Finally, they are compared with respect to bias and MSE risks by the method of Monte Carlo simulation, and their performances are analysed with the help of graphs.
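A minimal Monte Carlo sketch of this setup, estimating the bias and MSE of the naive estimator (the sample mean of the selected population), might look as follows; the parameter values are illustrative.

```python
import numpy as np

def selection_bias_mse(mu1, mu2, sigma, n, n_rep=100_000, seed=3):
    """Monte Carlo bias and MSE of the naive estimator (sample mean of
    the selected population) when the population with the SMALLER
    sample mean is selected, as in the paper's setup."""
    rng = np.random.default_rng(seed)
    x1 = rng.normal(mu1, sigma, (n_rep, n)).mean(axis=1)
    x2 = rng.normal(mu2, sigma, (n_rep, n)).mean(axis=1)
    est = np.minimum(x1, x2)             # sample mean of selected population
    true = np.where(x1 <= x2, mu1, mu2)  # mean of whichever was selected
    err = est - true
    return err.mean(), (err**2).mean()   # bias, MSE

print(selection_bias_mse(0.0, 0.5, 1.0, n=10))
```

The selection step is what makes the naive estimator biased (typically downward here), which is why the paper constructs and compares alternative estimators.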

Keywords: estimation after selection, Brewster-Zidek technique, estimators, selected populations

897 Automatic Integrated Inverter Type Smart Device for Safe Kitchen

Authors: K. M. Jananni, R. Nandini

Abstract:

The proposed wireless, inverter-type design of an LPG leakage monitoring system aims to provide a smart and safe kitchen. The system detects an LPG leak using nano-sensors and alerts the concerned individual through a GSM system. The system uses two sensors, one attached to the chimney and the other to the regulator of the LPG cylinder. Upon a leakage being detected, the sensor at the regulator actuates the system to cut off the gas supply immediately using a solenoid control valve. The sensor at the chimney checks the level of the LPG mix in the air, and when the level exceeds the threshold, the system sends an automatic SMS to the saved numbers. Further, within 20 seconds of a leakage, the sensor actuates the mini suction system fixed at the chimney to suck out the gas until the level falls well below the threshold. As a safety measure, an automatic window opening and alarm feature is also incorporated into the system. The key feature of this design is that the system is provided with a special inverter designed to keep the device functioning effectively even during power failures. In this paper, the utilization of sensors in the kitchen area is discussed, and the proposed architecture for real-time field monitoring with a PIC microcontroller is given.

Keywords: nano sensors, global system for mobile communication, GSM, micro controller, inverter

896 Test of Moisture Sensor Activation Speed

Authors: I. Parkova, A. Vališevskis, A. Viļumsone

Abstract:

Nocturnal enuresis, or bed-wetting, is intermittent incontinence during sleep in children after age 5 that may precipitate a wide range of behavioural and developmental problems. One of the non-pharmacological treatment methods is the use of a bed-wetting alarm system. In order to improve the comfort of a nocturnal enuresis alarm system, the modular moisture sensor should be replaced by a textile sensor. In this study, the behaviour and moisture detection speed of woven and sewn sensors were compared by analysing the change in electrical resistance after a solution (salt water) was dripped on the sensor samples. The sample materials have different structures and yarn locations, which affect the solution detection rate. The sensor system circuit was designed, and two sensor tests were performed: a system activation test and a false alarm test, to determine the sensitivity of the system and the activation threshold. The sewn sensor performed better in the system activation test (faster reaction), while the woven sensor performed better in the false alarm test (it was less sensitive to perspiration simulation). The experiments showed that the optimum switching threshold is 3 V for a 5 V input voltage, which provides protection against false alarms, for example during intensive sweating.

Keywords: conductive yarns, moisture textile sensor, industry, material

895 Epidemiological Analysis of Measles Outbreak in North-Kazakhstan Region of the Republic of Kazakhstan

Authors: Fatima Meirkhankyzy Shaizadina, Alua Oralovna Omarova, Praskovya Mikhailovna Britskaya, Nessipkul Oryntayevna Alysheva

Abstract:

In recent years, outbreaks of measles among the population have been registered in the Republic of Kazakhstan. The objective of this work was to analyse the 2014 measles outbreak among the population of the North-Kazakhstan region of the Republic of Kazakhstan. For the analysis of the outbreak, descriptive and analytical research techniques were used, and threshold levels of morbidity were calculated. An increase in incidence was noted from March to July. The peak was registered in May, at 9.0 per 100,000 population. High rates were also registered in April (5.7 per 100,000) and in June and July (5.7 and 3.1, respectively). The period of increase lasted 5 months. The analysis of monthly measles incidence revealed spring and summer seasonality. Across the territory, 69.2% of cases were registered in the city, 29.1% in rural areas, and 1.7% were brought in from other regions of Kazakhstan. Comparing the registered cases against the threshold values of measles during the outbreak revealed that from weeks 12 to 24, and also during week 40, the cases exceeded the threshold levels. For example, in week 1 of the analysis, the number of detected patients was 4, which exceeds the calculated threshold value (3) by 33.3%. Data exceeding the threshold values confirm the emergence of a disease outbreak or the beginning of an epidemic rise in morbidity. An epidemic rise in incidence in the North-Kazakhstan region was observed throughout 2014. The risk group includes children aged 0-4, who made up 22.7% of cases, 15-19-year-olds (25.6%), and 20-24-year-olds (20.9%). The analysis of registered measles cases by gender revealed that women were registered 1.1 times more often than men; the ratio of women to men was 1:0.87. Among social and professional groups, those most often ill were unorganized children (23.3%) and students (19.8%). Studying the clinical manifestations of measles in hospitalized patients, the typical onset of the disease with expressed intoxication symptoms (weakness, sickliness) was established. In individual cases, expressed intoxication symptoms, hemorrhagic and dyspeptic syndromes, and complications in the form of a secondary bacterial infection, which defined the high severity of the illness, were registered in both adults and children. The average hospital stay was 6.9 days, and the average time between disease onset and delivery of health care was 3.6 days. Thus, the analysis of monthly measles incidence revealed spring-summer seasonality, peaking in May. Urban dwellers were ill more often (69.2%) than rural residents (29.1%). Throughout 2014, an epidemic rise in incidence was observed in the North-Kazakhstan region. The risk groups were children under 4 (22.7%), 15-19-year-olds (25.6%), and 20-24-year-olds (20.9%). The ratio of women to men was 1:0.87. The typical onset of the disease in all hospitalized patients, with expressed intoxication symptoms (weakness, sickliness), was established.

Keywords: epidemiological analysis, measles, morbidity, outbreak

894 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings

Authors: Jude K. Safo

Abstract:

Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text, and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing hierarchically dense, semantic information needed for use-cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., page-rank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we observe in Large Language Models. Notable attempts by TransE, TransR, and other prominent industry standards have shown a peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited, in scope, to next node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC, and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures, and we demonstrate the performance this model achieves on the WN18 benchmark. This model does not rely on Large Language Models (LLM), though the applications are certainly relevant there as well.

Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics

893 Investigation of Various Physical and Physiological Properties of Ethiopian Elite Men Distance Runners

Authors: Getaye Fisseha Gelaw

Abstract:

The purpose of this study was to investigate the key physical and physiological characteristics of 16 elite male Ethiopian national team distance runners, who have an average age of 28.1±4.3 years, height of 175.0±5.6 cm, weight of 59.1±3.9 kg, BMI of 19.6±1.5, and training age of 10.1±5.1 years. The average weekly distance is 196.3±13.8 km, the average 10,000 m time is 27:14±0.5 min:sec, the average half marathon time is 59:30±0.6 min:sec, and the average marathon time is 2 hr 03 min 39 sec ± 0.02. In addition, the average Cooper test (12-minute run test) distance is 4525.4±139.7 meters, and the average VO2max is 90.8±3.1 ml/kg/min. All athletes have a high profile and compete at the international level; according to the World Athletics ranking system in 2021, 56.3% of the 16 participants had platinum label status, while the remaining 43.7% had gold label status. The athletes completed an incremental treadmill test for the assessment of VO2peak, submaximal running, and the lactate threshold, during which they ran continuously at up to 21 km/h. The laboratory-determined VO2peak was 91.4±1.7 mL/kg/min, with an anaerobic threshold (AT) of 74.2±1.6 mL/min/kg, i.e., 81% of VO2max. The speed at the AT is 15.9±0.6 km/h at a 4.0% incline. The respiratory compensation (RC) point was reached at 88.7±1.1 mL/min/kg, 97% of VO2max; at the RC point, the speed is 17.6±0.4 km/h at a 5.5% incline, and the speed at maximum effort is 19.5±1.5 km/h at a 6.0% incline. The data also suggest that elite Ethiopian distance athletes have considerably higher VO2max values than those found in earlier research.

Keywords: long-distance running, Ethiopians, VO2 max, world athletics, anthropometric

892 Understanding the Impact of Spatial Light Distribution on Object Identification in Low Vision: A Pilot Psychophysical Study

Authors: Alexandre Faure, Yoko Mizokami, Éric Dinet

Abstract:

In recent years, the potential of light in assisting visually impaired people in their indoor mobility has been demonstrated by different studies. Implementing smart lighting systems for selective visual enhancement, especially designed for low-vision people, is an approach that breaks with existing visual aids. The appearance of the surface of an object is significantly influenced by the lighting conditions and the constituent materials of the object, so objects may appear different from expectation. Lighting conditions therefore play an important part in accurate material recognition. The main objective of this work was to investigate the effect of the spatial distribution of light on object identification in the context of low vision, to determine whether, and which, specific lighting approaches should be preferred for visually impaired people. A psychophysical experiment was designed to study the ability of individuals to identify the smaller cube of a pair under different lighting diffusion conditions. Participants were divided into two distinct groups: a reference group of observers with normal or corrected-to-normal visual acuity, and a test group, in which observers were required to wear visual impairment simulation glasses. All participants were presented with pairs of cubes in a "miniature room" and were instructed to estimate the relative size of the two cubes. The miniature room replicates real-life settings, adorned with decorations and separated from external light sources by black curtains. The correlated color temperature was set to 6000 K, and the horizontal illuminance at the object level to approximately 240 lux. The objects presented for comparison consisted of 11 white cubes and 11 black cubes of different sizes manufactured with a 3D printer. Participants were seated 60 cm away from the objects. Two different levels of light diffuseness were implemented. After receiving instructions, participants were asked to judge whether the two presented cubes were the same size or whether one was smaller, providing one of five possible answers: "Left one is smaller," "Left one is smaller but unsure," "Same size," "Right one is smaller," or "Right one is smaller but unsure." The method of constant stimuli was used, presenting stimulus pairs in random order to prevent learning and expectation biases. Each pair consisted of a comparison stimulus and a reference cube. A psychometric function was constructed to link stimulus value with the frequency of correct detection, aiming to determine the 50% correct detection threshold. Collected data were analyzed through graphs illustrating participants' responses to stimuli, with accuracy increasing as the size difference between cubes grew. Statistical analyses, including two-way ANOVA tests, showed that light diffuseness had no significant impact on the difference threshold, whereas object color had a significant influence in low-vision scenarios. The first results and trends derived from this pilot experiment strongly suggest that future investigations should explore extreme diffusion conditions to comprehensively assess the impact of diffusion on object identification: the first findings related to light diffuseness may be attributable to the limited range of manipulation, emphasizing the need to explore how other lighting-related factors interact with diffuseness.
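A common way to extract the 50% threshold from constant-stimuli data is to fit a logistic psychometric function; the sketch below does this with SciPy on placeholder responses, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, alpha, beta):
    """Logistic psychometric function; alpha is the 50% point
    (the detection threshold), beta sets the slope."""
    return 1.0 / (1.0 + np.exp(-(x - alpha) / beta))

# Placeholder data: cube size difference (mm) vs. proportion correct
size_diff = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
p_correct = np.array([0.15, 0.30, 0.55, 0.70, 0.90, 0.97])

(alpha, beta), _ = curve_fit(psychometric, size_diff, p_correct,
                             p0=[1.5, 0.5])
print("50% detection threshold ~", alpha)
```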

Keywords: lighting, low vision, visual aid, object identification, psychophysical experiment

891 The Reliability Analysis of Concrete Chimneys Due to Random Vortex Shedding

Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta

Abstract:

Chimneys are generally tall and slender structures with circular cross-sections, which makes them highly prone to wind forces. Wind exerts pressure on the wall of a chimney, which produces unwanted forces. Vortex-induced oscillation is one such excitation and can lead to the failure of chimneys. Therefore, vortex-induced oscillation of chimneys is of great concern to researchers and practitioners, since many failures of chimneys due to vortex shedding have occurred in the past. As a consequence, extensive research has taken place on the subject over decades. Many laboratory experiments have been performed to verify the theoretical models proposed to predict vortex-induced forces, including aero-elastic effects. Comparatively, very few prototype measurement data have been recorded to verify the proposed theoretical models. For this reason, the theoretical models developed with the help of experimental laboratory data are utilized for analyzing chimneys for vortex-induced forces. This calls for a reliability analysis of the predicted responses of chimneys to the vortex shedding phenomenon. Although several works of literature exist on the vortex-induced oscillation of chimneys, including code provisions, reliability analysis of chimneys against failure caused by vortex shedding is scanty. In the present study, the reliability analysis of chimneys against vortex shedding failure is presented, assuming the uncertainty in the vortex shedding phenomenon to be significantly greater than the other uncertainties, which are hence ignored. The vortex shedding is modeled as a stationary random process and is represented by a power spectral density function (PSDF). It is assumed that the vortex shedding forces are perfectly correlated and act over the top one-third height of the chimney. The PSDF of the tip displacement of the chimney is obtained by performing a frequency-domain spectral analysis using a matrix approach. For this purpose, both the chimney and the random wind forces are discretized over a number of points along the height of the chimney. The method of analysis duly accounts for the aero-elastic effects. The double-barrier threshold crossing level, as proposed by Vanmarcke, is used for determining the probability of crossing different threshold levels of the tip displacement of the chimney. Assuming the annual distribution of the mean wind velocity to be a Gumbel type-I distribution, the fragility curve denoting the variation of the annual probability of threshold crossing against different threshold levels of the tip displacement is determined. The reliability estimate is derived from the fragility curve. A 210 m tall concrete chimney with a base diameter of 35 m, a top diameter of 21 m, and a thickness of 0.3 m has been taken as an illustrative example. The terrain condition is assumed to be that corresponding to a city center. The expression for the PSDF of the vortex shedding force is taken from Vickery and Basu. The results of the study show that the threshold crossing reliability of the tip displacement of the chimney is significantly influenced by the assumed structural damping and the Gumbel distribution parameters. Further, the aero-elastic effect influences the reliability estimate to a great extent for small structural damping.
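Vanmarcke's double-barrier formula is involved; the simpler Poisson-crossing sketch below is built from the same ingredients (spectral moments of the tip-displacement PSDF and Rice's up-crossing rate, doubled for the two barriers) and is an approximation, not the paper's exact procedure.

```python
import numpy as np

def crossing_probability(freq_hz, psd, barrier, duration_s):
    """Poisson estimate of the probability that a zero-mean stationary
    response with one-sided PSD `psd(freq_hz)` crosses +/- `barrier`
    within `duration_s` seconds (a simplification of Vanmarcke's
    double-barrier result)."""
    m0 = np.trapz(psd, freq_hz)                          # variance
    m2 = np.trapz((2 * np.pi * freq_hz) ** 2 * psd, freq_hz)
    nu0 = np.sqrt(m2 / m0) / (2 * np.pi)                 # Rice up-crossing rate
    rate = 2 * nu0 * np.exp(-barrier**2 / (2 * m0))      # two barriers
    return 1.0 - np.exp(-rate * duration_s)
```

Sweeping `barrier` over a range of displacement levels, with the wind-speed distribution entering through the PSD, is what traces out a fragility curve of the kind described above.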

Keywords: chimney, fragility curve, reliability analysis, vortex-induced vibration

890 Development of Trigger Tool to Identify Adverse Drug Events From Warfarin Administered to Patients Admitted in Medical Wards of Chumphae Hospital

Authors: Puntarikorn Rungrattanakasin

Abstract:

Objectives: To develop a trigger tool to warn of the risk of bleeding as an adverse event of warfarin use during admission to the Medical Wards of Chumphae Hospital. Methods: A retrospective study was performed by reviewing the medical records of patients admitted between June 1st, 2020 and May 31st, 2021. ADEs were evaluated by Naranjo's algorithm. The international normalized ratio (INR) and bleeding events during admissions were collected. Statistical analyses, including the Chi-square test and a Receiver Operating Characteristic (ROC) curve for the optimal INR threshold, were used for the study. Results: Among the 139 admissions, the INR ranged between 0.86 and 14.91; there was a total of 15 bleeding events, of which 9 were mild and 6 were severe. The occurrence of bleeding started whenever the INR was greater than 2.5 and reached statistical significance (p < 0.05), in concordance with the ROC curve, which yielded 100% sensitivity and 60% specificity in the detection of a bleeding event. In this regard, an INR greater than 2.5 was considered the optimal threshold for a prompt alert of bleeding tendency. Conclusions: An INR value greater than 2.5 would be an appropriate trigger tool to warn of the risk of bleeding for patients taking warfarin in Chumphae Hospital.
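The ROC-based threshold selection described above can be reproduced as follows; the INR/outcome arrays are placeholders, and Youden's J is one standard way to pick the operating point.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Placeholder data: INR values and bleeding outcome (1 = bleeding event)
inr  = np.array([1.2, 1.8, 2.1, 2.4, 2.6, 2.8, 3.5, 4.2, 5.0, 6.1])
bled = np.array([0,   0,   0,   0,   1,   0,   1,   1,   1,   1  ])

fpr, tpr, thresholds = roc_curve(bled, inr)
youden = tpr - fpr                        # Youden's J = sensitivity + specificity - 1
best = thresholds[np.argmax(youden)]
print("optimal INR trigger ~", best)
```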

Keywords: trigger tool, warfarin, risk of bleeding, medical wards

889 Safe Limits Concentration of Ammonia at Work Environments through CD8 Expression in Rats

Authors: Abdul Rohim Tualeka, Erick Caravan K. Betekeneng, Ramdhoni Zuhro, Reko Triyono, M. Sahri

Abstract:

Incidents caused by the acute and chronic effects of exposure to ammonia in the working environment have been widely reported in Indonesia, even though the ammonia concentrations were found to be below the threshold value. The purpose of this study was to determine the safe limit concentration of ammonia in the working environment through the expression of CD8, as a reference for determining the threshold value of ammonia in the working environment. This research was a laboratory experiment with a post-test-only control group design using experimental animals as subjects. The homogeneity test results indicated that the weights of the white rats in the exposed and control groups had homogeneous variance, with a significance level of p (0.701) > α (0.05). The average breathing rate was 0.0013 m³/h, and the average weight of rats in the exposure groups was 0.1405 kg. From the CD8 IRS calculation, the highest CD8 score was found at a dose of 0.0154, the highest dose of ammonia without any effect on the lungs of the rats being 0.0154 mg/kg of body weight. The Safe Human Dose (SHD) of ammonia is 0.002 mg/kg of worker body weight. The conclusion of this study is a safe limit concentration of ammonia gas in the working environment of 0.025 ppm.

Keywords: ammonia, CD8, rats, safe limits concentration

888 Evaluating Reliability Indices in 3 Critical Feeders at Lorestan Electric Power Distribution Company

Authors: Atefeh Pourshafie, Homayoun Bakhtiari

Abstract:

The main task of power distribution companies is to supply the power required by customers at an acceptable level of quality and reliability. Some key performance indicators for electric power distribution companies are those evaluating the continuity of supply within the network. More than other problems, power outages (due to lightning, flood, fire, earthquake, etc.) challenge the economy and business, and end users expect a reliable power supply. Reliability indices are evaluated on an annual basis by the specialized holding company Tavanir (the power production, transmission and distribution company of Iran). Evaluation of reliability indices is essential for distribution companies, and with regard to their privatization, it will be of particular importance to evaluate these indices and to plan for their improvement in the not-too-distant future. The IEEE 1366 standard defines many indices; however, the most common reliability indices are SAIFI, SAIDI, and CAIDI, which describe the duration and frequency of blackouts in the reporting period (annual or any desired timeframe). This paper calculates reliability indices for three sample feeders in Lorestan Electric Power Distribution Company and defines threshold values for a ten-month period. Finally, strategies are introduced to reach the threshold values in order to increase customers' satisfaction.
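For reference, the three IEEE 1366 indices named above are simple ratios over the outage records; a minimal sketch with illustrative numbers:

```python
def reliability_indices(interruptions, customers_served):
    """IEEE 1366 indices from a list of (customers_affected,
    duration_hours) outage records for the reporting period."""
    total_ci = sum(n for n, _ in interruptions)       # customer interruptions
    total_cmi = sum(n * d for n, d in interruptions)  # customer-hours lost
    saifi = total_ci / customers_served               # interruptions / customer
    saidi = total_cmi / customers_served              # hours / customer
    caidi = saidi / saifi if saifi else 0.0           # hours / interruption
    return saifi, saidi, caidi

# Illustrative feeder: three outages over the reporting period
print(reliability_indices([(400, 1.5), (120, 0.5), (800, 3.0)], 5000))
```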

Keywords: power, distribution network, reliability, outage

887 5-[Aryloxypyridyl (or Nitrophenyl)]-4H-1,2,4-Triazoles as Flexible Benzodiazepine Analogs: Synthesis, Receptor Binding Affinity and the Lipophilicity-Dependent Anti-Seizure Onset of Action

Authors: Latifeh Navidpour, Shabnam Shabani, Alireza Heidari, Manouchehr Bashiri, Azadeh Ebrahim-Habibi, Soraya Shahhosseini, Hamed Shafaroodi, Sayyed Abbas Tabatabai, Mahsa Toolabi

Abstract:

A new series of 5-(2-aryloxy-4-nitrophenyl)-4H-1,2,4-triazoles and 5-(2-aryloxy-3-pyridyl)-4H-1,2,4-triazoles, possessing C-3 thio or alkylthio substituents, was synthesized and evaluated for benzodiazepine receptor affinity and anti-seizure activity. These analogues revealed affinities for the GABAA/benzodiazepine receptor complex ranging from similar to significantly superior (IC50 values of 0.04-4.1 nM) relative to diazepam as the reference drug (IC50 value of 2.4 nM). To determine the onset of anti-seizure activity, the time-dependent effect of i.p. administration of the compounds on the pentylenetetrazole-induced seizure threshold was studied, and a very good relationship was observed between the lipophilicity (cLogP) and the onset of action of the studied analogues (r² = 0.964). The minimum effective dose of the compounds, determined at the time the analogues showed their highest activity, was 0.025-0.1 mg/kg, relative to diazepam (0.025 mg/kg).

Keywords: 1,2,4-triazole, flexible benzodiazepines, GABAA/benzodiazepine receptor complex, onset of action, PTZ-induced seizure threshold

886 The Long-Run Impact of Financial Development on Greenhouse Gas Emissions in India: An Application of Regime Shift Based Cointegration Approach

Authors: Javaid Ahmad Dar, Mohammad Asif

Abstract:

The present study investigates the long-run impact of financial development, energy consumption, and economic growth on greenhouse gas emissions for India, in the presence of endogenous structural breaks, over the period 1971-2013. The autoregressive distributed lag (ARDL) bounds testing procedure and the Hatemi-J threshold cointegration technique were used to test the variables for cointegration. The ARDL bounds test did not confirm any cointegrating relationship between the variables, whereas the threshold cointegration test establishes the presence of a long-run impact of financial development, energy use, and economic growth on greenhouse gas emissions in India. The results reveal that the long-run relationship between the variables has witnessed two regime shifts, in 1978 and 2002. The empirical evidence shows that financial sector development and energy consumption in India degrade the environment. Unlike previous studies, this paper finds no statistical evidence of a long-run relationship between economic growth and environmental deterioration. The study also challenges the existence of an environmental Kuznets curve in India.

Keywords: cointegration, financial development, global warming, greenhouse gas emissions, regime shift, unit root

885 Application of Simulated Annealing to Threshold Optimization in Distributed OS-CFAR System

Authors: L. Abdou, O. Taibaoui, A. Moumen, A. Talib Ahmed

Abstract:

This paper proposes an application of simulated annealing to optimize the detection threshold in an ordered-statistics constant false alarm rate (OS-CFAR) system. Using conventional optimization methods, such as the conjugate gradient, can lead to a local optimum and miss the global optimum, and for a system with three or more sensors, it is difficult or impossible to find this optimum; hence the need for other methods, such as meta-heuristics. Among the variety of meta-heuristic techniques is the simulated annealing (SA) method, inspired by a process used in metallurgy. This technique is based on the selection of an initial solution and the random generation of a nearby solution, in order to improve the criterion to be optimized. In this work, two parameters are subject to such optimisation: the statistical order (k) and the scaling factor (T). Two fusion rules, "AND" and "OR", were considered in the case where the signals are independent from sensor to sensor. The results showed that the proposed method is efficient for such optimisation problems in a distributed system. The advantage of this method is that it browses the entire solution space and, theoretically, avoids stagnation of the optimization process in an area of local minima.
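A generic sketch of the SA loop described above, minimising an arbitrary objective over a parameter pair such as (k, T); the geometric cooling schedule and Metropolis acceptance rule shown are the textbook choices, and the objective and neighbour move are left to the caller since the paper's exact formulation is not given in the abstract.

```python
import numpy as np

def simulated_annealing(objective, x0, neighbour, t0=1.0, cooling=0.95,
                        iters=2000, seed=4):
    """Generic SA minimiser: accept worse neighbours with probability
    exp(-delta/temperature), so the search can escape local minima.
    For the OS-CFAR problem, x would hold the statistical order k and
    the scaling factor T, and `objective` would score the detection
    performance; both are problem-specific and supplied by the caller."""
    rng = np.random.default_rng(seed)
    x, fx, temp = x0, objective(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = neighbour(x, rng)              # random nearby solution
        fy = objective(y)
        if fy < fx or rng.random() < np.exp(-(fy - fx) / temp):
            x, fx = y, fy                  # Metropolis acceptance
            if fx < fbest:
                best, fbest = x, fx
        temp *= cooling                    # geometric cooling schedule
    return best, fbest
```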

Keywords: distributed system, OS-CFAR system, independent sensors, simulated annealing

884 Developing Artificial Neural Networks (ANN) for Falls Detection

Authors: Nantakrit Yodpijit, Teppakorn Sittiwanchai

Abstract:

The number of older adults is rising rapidly, and the world's population is aging. Falls are one of the common and major health problems of the elderly; they may lead to acute and chronic injuries and deaths. Fall-prone individuals are at greater risk of decreased quality of life, lowered productivity and poverty, social problems, and additional health problems. A number of studies on falls prevention using fall detection systems have been conducted. Many available technologies for fall detection systems are laboratory-based and can incur substantial costs for falls prevention; the utilization of alternative technologies can potentially reduce costs. This paper presents the design and development of a new wearable fall detection system using an accelerometer and gyroscope as motion sensors for the detection of body orientation and movement. Algorithms are developed to differentiate between Activities of Daily Living (ADL) and falls by comparing threshold-based values with Artificial Neural Networks (ANN). Results indicate the possibility of using the new threshold-based method with a neural network algorithm to reduce the number of false positives (false alarms) and improve the accuracy of the fall detection system.
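One plausible reading of the two-stage scheme, a cheap acceleration-magnitude threshold gate followed by an ANN on window features, is sketched below; the threshold value, feature set, and training data are illustrative placeholders, not the paper's.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def acc_magnitude(ax, ay, az):
    """Resultant acceleration from tri-axial accelerometer samples."""
    return np.sqrt(ax**2 + ay**2 + az**2)

def candidate_fall(window_mag, g_thresh=2.5):
    """Stage 1: threshold gate on peak acceleration (value illustrative,
    in units of g); only gated windows reach the classifier."""
    return window_mag.max() > g_thresh

# Stage 2: ANN classifies gated windows from summary features
# (random placeholder training data; real features would come from
# labelled ADL/fall recordings of the accelerometer and gyroscope)
rng = np.random.default_rng(5)
X_train = rng.random((200, 4))      # e.g. peak, mean, std, tilt change
y_train = rng.integers(0, 2, 200)   # 1 = fall, 0 = ADL
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
clf.fit(X_train, y_train)
```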

Keywords: aging, algorithm, artificial neural networks (ANN), fall detection system, motion sensors, threshold

883 Detecting and Thwarting Interest Flooding Attack in Information Centric Network

Authors: Vimala Rani P, Narasimha Malikarjunan, Mercy Shalinie S

Abstract:

Named Data Networking was brought forth as an instantiation of information-centric networking. Attackers can send a colossal number of spoofed Interests to take hold of the Pending Interest Table (PIT), an attack named the Interest Flooding Attack (IFA), since incoming Interests are recorded in the PITs of the intermediate routers until the corresponding Data Packets are received or the time limit is exceeded. These attacks can be detrimental to network performance. Traditional IFA detection techniques are concerned with criteria such as the PIT expiration rate or the Interest satisfaction rate, which cannot differentiate an IFA from other attacks, and traditional threshold-based methods are casually affected by the choice of threshold values. This article proposes an accurate IFA detection mechanism based on a Multiple Feature-based Extreme Learning Machine (MF-ELM). The accuracy of attack detection can be increased by presenting the entropy of Interest names, the Interest satisfaction rate, and PIT usage as features extracted for the MF-ELM classifier. Furthermore, we deploy a queue-based hostile Interest prefix mitigation mechanism. The inference from this real-time test bed is that the mechanism can help the network resist IFA with higher accuracy and efficiency.
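An ELM itself is compact: a random, untrained hidden layer with output weights solved in closed form by least squares. The sketch below is a generic ELM, not the paper's MF-ELM variant; the features named in the abstract (name entropy, Interest satisfaction rate, PIT usage) would form the columns of X.

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine: random hidden layer, output
    weights solved by least squares."""
    def __init__(self, n_hidden=64, seed=6):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        # Hidden weights and biases are random and never trained
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        # Output weights: least-squares solution via pseudoinverse
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        # Continuous scores; threshold at 0.5 for 0/1 attack labels
        return np.tanh(X @ self.W + self.b) @ self.beta
```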

Keywords: information-centric network, pending interest table, interest flooding attack, MF-ELM classifier, queue-based mitigation strategy

882 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs

Authors: Michela Quadrini

Abstract:

Chord diagrams occur in mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite type invariants, whereas in molecular biology one important motivation to study chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one interaction, a Watson-Crick base pair, between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which are attached a number of chords with distinct endpoints. There is a natural fattening on any linear chord diagram: the backbone lies on the real axis, while all the chords are in the upper half-plane, and each linear chord diagram has a natural genus of its associated surface. To each chord diagram and linear chord diagram, it is possible to associate the intersection graph. It consists of a graph whose vertices correspond to the chords of the diagram, whereas the chord intersections are represented by connections between the vertices. Such an intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of identity of intersection graphs, on which many chord diagram invariants depend. For studying these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permits modeling LCDs in terms of the relations among chords. This set is composed of crossing, nesting, and concatenation. The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, starting from a linear chord diagram and adding a chord for each production of the grammar. In other words, it allows associating a unique algebraic term with each linear chord diagram, while the remaining operators allow rewriting the term through a set of appropriate rewriting rules. Such rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modelled RNA molecule and its linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately than existing ones. Such an LCD equivalence class could also be useful to obtain a more accurate estimate of the link between the crossing number and the topological genus and to study the relations among other invariants.
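The intersection graph construction described above is easy to make concrete for a linear chord diagram: two chords cross exactly when their endpoints interleave on the backbone. A minimal sketch:

```python
def intersection_graph(chords):
    """Build the intersection graph of a linear chord diagram given as
    a list of (left, right) endpoint pairs on the backbone. Two chords
    cross iff their endpoints interleave: a < c < b < d."""
    edges = set()
    for i, (a, b) in enumerate(chords):
        for j, (c, d) in enumerate(chords):
            if i < j and (a < c < b < d or c < a < d < b):
                edges.add((i, j))
    return edges

# Example: chords 0 and 1 cross; chord 2 nests inside chord 1 (no edge)
print(intersection_graph([(0, 4), (2, 7), (5, 6)]))   # -> {(0, 1)}
```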

Keywords: chord diagrams, linear chord diagram, equivalence class, topological language

881 Paddy/Rice Singulation for Determination of Husking Efficiency and Damage Using Machine Vision

Authors: M. Shaker, S. Minaei, M. H. Khoshtaghaza, A. Banakar, A. Jafari

Abstract:

In this study, a system of machine vision and singulation was developed to separate paddy from rice and determine the paddy husking and rice breakage percentages. The machine vision system consists of three main components: an imaging chamber, a digital camera, and a computer equipped with image processing software. The singulation device consists of a kernel holding surface, a motor with a vacuum fan, and a dimmer. For separation of paddy from rice (in the image), it was necessary to set a threshold. Therefore, some images of paddy and rice were sampled, the RGB values of the images were extracted using MATLAB software, and the mean and standard deviation of the data were determined. An image processing algorithm was developed in MATLAB to determine paddy/rice separation and the rice breakage and paddy husking percentages using the blue-to-red ratio. Tests showed that a threshold of 0.75 is suitable for separating paddy from rice kernels. Results from the evaluation of the image processing algorithm showed accuracies of 98.36% and 91.81% for the paddy husking and rice breakage percentages, respectively. Analysis also showed that a suction of 45 mmHg to 50 mmHg, yielding 81.3% separation efficiency, is appropriate for operation of the kernel singulation system.
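The blue-to-red ratio rule with the reported 0.75 threshold can be expressed compactly; which kernel class falls on which side of the threshold is an assumption here, as the abstract does not say.

```python
import numpy as np

def classify_kernels(rgb_patches, ratio_thresh=0.75):
    """Separate kernel classes by the blue-to-red ratio used in the
    study (0.75 reported as a suitable threshold). `rgb_patches` is an
    (n, 3) array of mean R, G, B values per singulated kernel; the
    assignment of 'rice' vs 'paddy' to each side of the threshold is
    an illustrative assumption."""
    ratio = rgb_patches[:, 2] / rgb_patches[:, 0]   # B / R per kernel
    return np.where(ratio > ratio_thresh, "rice", "paddy")
```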

Keywords: breakage, computer vision, husking, rice kernel
