Search results for: threshold graphs
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1046

866 Study of Storms on the Javits Center Green Roof

Authors: Alexander Cho, Harsho Sanyal, Joseph Cataldo

Abstract:

A quantitative analysis of the different variables on both the South and North green roofs of the Jacob K. Javits Convention Center was undertaken to find mathematical relationships between net radiation and evapotranspiration (ET), average outside temperature, and the lysimeter weight. Groups of datasets were analyzed, and the relationships were plotted on linear and semi-log graphs to find consistent relationships. Antecedent conditions for each rainstorm were also recorded and plotted against the volumetric water difference within the lysimeter. The first relation was the inverse parabolic relationship between the lysimeter weight and the net radiation and ET. The peaks and valleys of the lysimeter weight corresponded to valleys and peaks in the net radiation and ET, respectively, with the 8/22/15 and 1/22/16 datasets showing this trend. The U-shaped and inverse U-shaped plots of the two variables coincided, indicating an inverse relationship between them. Cross-variable relationships were examined through graphs with lysimeter weight as the dependent variable on the y-axis. 10 out of 16 of the plots of lysimeter weight vs. outside temperature had R² values > 0.9. Antecedent conditions were also recorded for rainstorms, categorized by the amount of precipitation accumulating during the storm. Plotted against the change in the volumetric water weight difference within the lysimeter, a logarithmic regression was found with large R² values. The datasets were compared using the Mann-Whitney U-test at a significance level of 5% to check whether they were statistically different; the resulting U statistics did not reject the null hypothesis, indicating that the compared datasets were not statistically different.
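
As an illustration of the comparison step described above, a minimal sketch using SciPy's Mann-Whitney U test at the 5% significance level is given below; the two weight series are hypothetical placeholders, not the Javits Center data.

```python
# Minimal sketch: comparing two green-roof datasets with the
# Mann-Whitney U test at a 5% significance level (hypothetical data).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
weights_aug = rng.normal(100.0, 5.0, size=48)   # e.g. an 8/22/15-style lysimeter series
weights_jan = rng.normal(102.0, 5.0, size=48)   # e.g. a 1/22/16-style lysimeter series

u_stat, p_value = mannwhitneyu(weights_aug, weights_jan, alternative="two-sided")
alpha = 0.05
if p_value < alpha:
    print(f"U={u_stat:.1f}, p={p_value:.3f}: datasets differ significantly")
else:
    print(f"U={u_stat:.1f}, p={p_value:.3f}: no significant difference")
```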

Keywords: green roof, green infrastructure, Javits Center, evapotranspiration, net radiation, lysimeter

Procedia PDF Downloads 84
865 GIS-Based Identification of Overloaded Distribution Transformers and Calculation of Technical Electric Power Losses

Authors: Awais Ahmed, Javed Iqbal

Abstract:

Pakistan has for many years been facing extreme challenges from an energy deficit due to the shortage of power generation compared to increasing demand. A part of this energy deficit is also contributed by the power lost in the transmission and distribution network. Unfortunately, distribution companies are not equipped with modern technologies and methods to identify and eliminate these losses. According to estimates, total energy lost in the early 2000s was between 20 and 26 percent. To address this issue, the present research study was designed with the objectives of developing a standalone GIS application for distribution companies having the capability of loss calculation as well as identification of overloaded transformers. For this purpose, the Hilal Road feeder in Faisalabad Electric Supply Company (FESCO) was selected as the study area. An extensive GPS survey was conducted to identify each consumer, linking it to the secondary pole of the transformer, geo-referencing equipment, and documenting conductor sizes. To identify overloaded transformers, the accumulated kWh reading of consumers on a transformer was compared with a threshold kWh. Technical losses of the 11 kV and 220 V lines were calculated using the data from the substation and the resistance of the network calculated from the geo-database. To automate the process, a standalone GIS application was developed using ArcObjects with engineering analysis capabilities. The application uses the GIS database developed for the 11 kV and 220 V lines to display and query spatial data and present results in the form of graphs. The results show a technical loss of about 14% on both the high-tension (HT) and low-tension (LT) networks, while 4 out of 15 general-duty transformers were found overloaded. The study shows that GIS can be a very effective tool for distribution companies in the management and planning of their distribution network.
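
The two core calculations described above can be sketched as follows; the transformer rating, power factor, load factor, and meter readings are hypothetical assumptions, and the loss estimate is the standard I²R calculation from conductor resistance.

```python
# Sketch of the two core calculations: flagging overloaded transformers by
# comparing accumulated consumer kWh with a threshold kWh derived from the
# transformer rating, and estimating technical (I^2 * R) line losses.
# All numbers, and the load-factor threshold rule, are assumptions.

def overloaded(consumer_kwh, rating_kva, hours=720, pf=0.9, load_factor=0.8):
    """Compare monthly accumulated consumer kWh with the threshold kWh
    the transformer can supply at the assumed power and load factors."""
    threshold_kwh = rating_kva * pf * load_factor * hours
    return sum(consumer_kwh) > threshold_kwh

def line_loss_kw(current_a, resistance_ohm_per_km, length_km, phases=3):
    """Technical loss of a feeder segment as I^2 * R per phase, in kW."""
    r = resistance_ohm_per_km * length_km
    return phases * current_a**2 * r / 1000.0

print(overloaded([900.0, 1200.0, 750.0], rating_kva=25))
print(f"{line_loss_kw(120.0, 0.55, 2.4):.1f} kW")
```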

Keywords: geographical information system, GIS, power distribution, distribution transformers, technical losses, GPS, SDSS, spatial decision support system

Procedia PDF Downloads 350
864 Statistical Inferences for GQARCH-Itô-Jumps Model Based on the Realized Range Volatility

Authors: Fu Jinyu, Lin Jinguan

Abstract:

This paper introduces a novel approach that unifies two types of models: the continuous-time jump-diffusion used to model high-frequency data, and the discrete-time GQARCH employed to model low-frequency financial data, by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the “GQARCH-Itô-Jumps model.” We adopt realized range-based threshold estimation for high-frequency financial data rather than realized return-based volatility estimators, which entail the loss of intra-day information about the price movement. Meanwhile, a quasi-likelihood function for the low-frequency GQARCH structure with jumps is developed for the parametric estimation. The asymptotic theories are mainly established for the proposed estimators in the case of finite-activity jumps. Moreover, simulation studies are implemented to check the finite-sample performance of the proposed methodology. Specifically, it is demonstrated how the proposed approaches can be practically used on financial data.
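
A minimal sketch of the realized range-based estimator mentioned above: the Parkinson-scaled sum of squared intraday log ranges, with a simple truncation rule discarding intervals whose range exceeds a threshold. This is a stand-in for the paper's threshold estimation, and the data and threshold are hypothetical.

```python
# Sketch of a realized range-based volatility estimator with a threshold
# rule for jumps: sum Parkinson-scaled squared log ranges over intraday
# intervals, discarding intervals whose squared range exceeds a threshold.
import numpy as np

def realized_range(high, low, threshold=np.inf):
    log_range_sq = np.log(np.asarray(high) / np.asarray(low)) ** 2
    kept = log_range_sq[log_range_sq <= threshold]   # truncate jump intervals
    return kept.sum() / (4.0 * np.log(2.0))          # Parkinson scaling

# Hypothetical 5-minute highs/lows for one trading day (78 intervals)
rng = np.random.default_rng(1)
mid = 100 * np.exp(np.cumsum(rng.normal(0, 0.001, 78)))
high, low = mid * 1.0008, mid * 0.9992
print(f"daily realized range variance: {realized_range(high, low):.2e}")
```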

Keywords: Itô process, GQARCH, leverage effects, threshold, realized range-based volatility estimator, quasi-maximum likelihood estimate

Procedia PDF Downloads 130
863 Optimizing Power in Sequential Circuits by Reducing Leakage Current Using Enhanced Multi Threshold CMOS

Authors: Patikineti Sreenivasulu, K. Srinivasa Rao, A. Vinaya Babu

Abstract:

The demand for portability, performance, and high functional integration density of digital devices makes the scaling of complementary metal oxide semiconductor (CMOS) devices inevitable. The increase in power consumption, coupled with the increasing demand for portable/hand-held electronics, has made power consumption a dominant concern in the design of VLSI circuits today. MTCMOS technology provides low-leakage and high-performance operation by utilizing high-speed, low-Vt (LVT) transistors for logic cells and low-leakage, high-Vt (HVT) devices as sleep transistors. Sleep transistors disconnect logic cells from the supply and/or ground to reduce the leakage in sleep mode. In this technology, the energy consumed during mode transitions and the minimum time required to turn on the circuit upon receiving the wake-up signal are issues to be considered, because these can adversely impact the performance of a VLSI circuit. In this paper, we introduce an enhancement of MTCMOS technology to optimize the power in MTCMOS sequential circuits.

Keywords: power consumption, ultra-low power, leakage, sub threshold, MTCMOS

Procedia PDF Downloads 380
862 Automated Ultrasound Carotid Artery Image Segmentation Using Curvelet Threshold Decomposition

Authors: Latha Subbiah, Dhanalakshmi Samiappan

Abstract:

In this paper, we propose denoising Common Carotid Artery (CCA) B-mode ultrasound images by a curvelet thresholding decomposition approach, followed by automatic segmentation of the intima-media thickness and adventitia boundary. Through decomposition, the local geometry of the image and its gradient directions are well preserved. The components are combined into a single vector-valued function, which removes noise patches. A double threshold is applied to inherently remove speckle noise in the image. The denoised image is segmented by active contours without specifying seed points. Combined with level set theory, they provide sub-regions with continuous boundaries. The deformable contours match the shapes and motion of objects in the images. A curve or a surface under constraints is evolved from the image with the goal that it is pulled onto the necessary features of the image. Region-based and boundary-based information are integrated to achieve the contour. The method treats the multiplicative speckle noise in objective and subjective quality measurements and thus leads to better-segmented results. The proposed denoising method gives better performance metrics compared with other state-of-the-art denoising algorithms.

Keywords: curvelet, decomposition, levelset, ultrasound

Procedia PDF Downloads 312
861 Automatic Integrated Inverter Type Smart Device for Safe Kitchen

Authors: K. M. Jananni, R. Nandini

Abstract:

The proposed wireless, inverter-type design of an LPG leakage monitoring system aims to provide a smart and safe kitchen. The system detects an LPG gas leak using nano-sensors and alerts the concerned individual through a GSM system. The system uses two sensors, one attached to the chimney and the other to the regulator of the LPG cylinder. Upon a leakage being detected, the sensor at the regulator actuates the system to cut off the gas supply immediately using a solenoid control valve. The sensor at the chimney checks for the permissible level of LPG mixed in the air, and when the level exceeds the threshold, the system sends an automatic SMS to the saved numbers. Further, the sensor actuates the mini suction system fixed at the chimney within 20 seconds of a leakage to suck out the gas until the level falls well below the threshold. As a safety measure, an automatic window-opening and alarm feature is also incorporated into the system. The key feature of this design is that the system is provided with a special inverter designed to make the device function effectively even during power failures. In this paper, the utilization of sensors in the kitchen area is discussed, and the proposed architecture for real-time field monitoring with a PIC microcontroller is given.
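
The control flow described above can be sketched as follows; the threshold value and all hardware calls are hypothetical stubs standing in for the PIC/GSM/solenoid hardware.

```python
# Sketch of the described control flow: on a regulator-side leak, cut
# the gas supply; when the chimney-side LPG level exceeds the threshold,
# send an SMS, run the suction fan, open the window, and sound the alarm.
# The threshold and every hardware method are hypothetical stubs.
class HardwareStub:
    def __getattr__(self, name):
        return lambda *args: print(f"[hw] {name}{args}")

LPG_THRESHOLD_PPM = 1000   # assumed permissible-level threshold

def control_step(regulator_leak, chimney_ppm, hw):
    if regulator_leak:
        hw.close_solenoid_valve()            # cut off the gas supply
    if chimney_ppm > LPG_THRESHOLD_PPM:
        hw.send_sms("LPG leak detected!")    # automatic SMS via GSM
        hw.start_suction_fan()               # suck out gas at the chimney
        hw.open_window()
        hw.sound_alarm()
    else:
        hw.stop_suction_fan()                # level back below threshold

control_step(regulator_leak=True, chimney_ppm=1500, hw=HardwareStub())
```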

Keywords: nano sensors, global system for mobile communication, GSM, micro controller, inverter

Procedia PDF Downloads 443
860 Test of Moisture Sensor Activation Speed

Authors: I. Parkova, A. Vališevskis, A. Viļumsone

Abstract:

Nocturnal enuresis, or bed-wetting, is intermittent incontinence during sleep of children after age 5 that may precipitate a wide range of behavioural and developmental problems. One of the non-pharmacological treatment methods is the use of a bed-wetting alarm system. In order to improve the comfort conditions of a nocturnal enuresis alarm system, the modular moisture sensor should be replaced by a textile sensor. In this study, the behaviour and moisture detection speed of woven and sewn sensors were compared by analysing the change in electrical resistance after a solution (salt water) was dripped on sensor samples. The material of the samples has a different structure and yarn location, which affects the solution detection rate. A sensor system circuit was designed, and two sensor tests were performed: a system activation test and a false alarm test, to determine the sensitivity of the system and the activation threshold. The sewn sensor performed better in the system activation test (faster reaction), but the woven sensor performed better in the false alarm test, being less sensitive to perspiration simulation. After the experiments, it was found that the optimum switching threshold is 3 V in the case of a 5 V input voltage, which provides protection against false alarms, for example, during intensive sweating.

Keywords: conductive yarns, moisture textile sensor, industry, material

Procedia PDF Downloads 225
859 Code Embedding for Software Vulnerability Discovery Based on Semantic Information

Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson

Abstract:

Deep learning methods have been seeing increasing application to the long-standing security research goal of automatic vulnerability detection for source code. Attention, however, must still be paid to the task of producing vector representations of source code (code embeddings) as input for these deep learning models. Graphical representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have received some use in this task of late; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning this input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in the nodes of the graph. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph in order to select features which are most indicative of the presence or absence of vulnerabilities. This model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It is able to improve on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.

Keywords: code representation, deep learning, source code semantics, vulnerability discovery

Procedia PDF Downloads 131
858 Estimation of the Mean of the Selected Population

Authors: Kalu Ram Meena, Aditi Kar Gangopadhyay, Satrajit Mandal

Abstract:

Two normal populations with different means and the same known variance are considered. The population with the smaller sample mean is selected. Various estimators are constructed for the mean of the selected normal population. Finally, they are compared with respect to their bias and MSE risks by the method of Monte-Carlo simulation, and their performances are analysed with the help of graphs.
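
A minimal Monte-Carlo sketch of this setup: select the population with the smaller sample mean, then measure the bias and MSE of the naive estimator (the selected sample mean) for the selected population's true mean. All parameters are hypothetical.

```python
# Monte-Carlo sketch: bias and MSE of the naive estimator (the selected
# sample mean) for the mean of the population selected as having the
# smaller sample mean. Known common variance; hypothetical parameters.
import numpy as np

mu1, mu2, sigma, n, reps = 0.0, 0.5, 1.0, 20, 100_000
rng = np.random.default_rng(42)

errors = np.empty(reps)
for i in range(reps):
    xbar1 = rng.normal(mu1, sigma / np.sqrt(n))   # sample mean of pop. 1
    xbar2 = rng.normal(mu2, sigma / np.sqrt(n))   # sample mean of pop. 2
    if xbar1 <= xbar2:                            # population 1 selected
        errors[i] = xbar1 - mu1
    else:                                         # population 2 selected
        errors[i] = xbar2 - mu2

print(f"bias = {errors.mean():+.4f}, MSE = {(errors**2).mean():.4f}")
```

The negative bias that this loop exhibits is exactly the selection effect that motivates constructing corrected estimators.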

Keywords: estimation after selection, Brewster-Zidek technique, estimators, selected populations

Procedia PDF Downloads 478
857 Epidemiological Analysis of Measles Outbreak in North-Kazakhstan Region of the Republic of Kazakhstan

Authors: Fatima Meirkhankyzy Shaizadina, Alua Oralovna Omarova, Praskovya Mikhailovna Britskaya, Nessipkul Oryntayevna Alysheva

Abstract:

In recent years, outbreaks of measles among the population have been registered in the Republic of Kazakhstan. The objective of this work was the analysis of the 2014 measles outbreak among the population of the North-Kazakhstan region of the Republic of Kazakhstan. For the analysis of the measles outbreak, descriptive and analytical research techniques were used, and threshold levels of morbidity were calculated. An increase in incidence was noted from March to July. The peak was registered in May and reached 9.0 per 100,000 population. High rates were registered in April (5.7 per 100,000 population), and in June and July they were 5.7 and 3.1, respectively. The period of increased incidence lasted 5 months. The analysis of monthly measles incidence revealed spring-summer seasonality. Across the territory, it was established that 69.2% of cases were registered in the city, 29.1% in rural areas, and 1.7% of cases were brought in from other regions of Kazakhstan. Comparison of the registered cases with the threshold values of measles during the outbreak revealed that from week 12 to week 24, and also during week 40, the cases exceeded the threshold levels. Thus, for example, in the first analyzed week the number of detected patients was 4, which exceeds the calculated threshold value (3) by 33.3%. Data exceeding the threshold values confirm the emergence of a disease outbreak or the beginning of an epidemic rise in morbidity. An epidemic rise in incidence among the population of the North-Kazakhstan region was observed throughout 2014. The risk groups include 0-4 year-old children, who accounted for 22.7% of cases, 15-19 year-olds (25.6%), and 20-24 year-olds (20.9%). The analysis of measles case registration by gender revealed that women were registered 1.1 times more often than men; the ratio of women to men was 1:0.87. Among social and occupational groups, those most often ill were unorganized children (23.3%) and students (19.8%). In studying the clinical manifestations of measles in hospitalized patients, a typical onset of the disease with pronounced intoxication symptoms (weakness, malaise) was established. In individual cases, pronounced intoxication symptoms, hemorrhagic and dyspeptic syndromes, and complications in the form of a secondary bacterial infection, which defined the high severity of the illness, were registered both in adults and in children. The average duration of hospital stay was 6.9 days. The average time between disease onset and delivery of health care was 3.6 days. Thus, the analysis of monthly measles incidence revealed spring-summer seasonality, with the peak registered in May. Urban dwellers fell ill more often (69.2%) than people in rural areas (29.1%). Throughout 2014, an epidemic rise in incidence among the population of the North-Kazakhstan region was observed. Risk groups included children under 4 (22.7%), 15-19 year-olds (25.6%), and 20-24 year-olds (20.9%). The ratio of women to men was 1:0.87. A typical onset of the disease, with pronounced intoxication symptoms (weakness, malaise), was established in all hospitalized patients.
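
The threshold-exceedance arithmetic used above (4 cases against a threshold of 3, a 33.3% excess) can be reproduced with a short sketch; only the week-1 figures are taken from the abstract, the rest of the series being hypothetical.

```python
# Sketch of the weekly threshold comparison: percentage by which observed
# measles cases exceed the calculated epidemic threshold. Only week 1
# (4 cases vs. a threshold of 3) is taken from the abstract; the rest
# of the series is hypothetical.
cases      = [4, 5, 7, 9, 12, 8]   # observed cases per week
thresholds = [3, 3, 4, 4, 5, 5]    # calculated epidemic threshold per week

for week, (obs, thr) in enumerate(zip(cases, thresholds), start=1):
    if obs > thr:
        excess_pct = 100.0 * (obs - thr) / thr
        print(f"week {week}: {obs} cases exceed threshold {thr} by {excess_pct:.1f}%")
```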

Keywords: epidemiological analysis, measles, morbidity, outbreak

Procedia PDF Downloads 198
856 Investigation of Various Physical and Physiological Properties of Ethiopian Elite Men Distance Runners

Authors: Getaye Fisseha Gelaw

Abstract:

The purpose of this study was to investigate the key physical and physiological characteristics of 16 elite male Ethiopian national team distance runners, who have an average age of 28.1±4.3 years, height of 175.0±5.6 cm, weight of 59.1±3.9 kg, BMI of 19.6±1.5, and training age of 10.1±5.1 years. The average weekly distance is 196.3±13.8 km, the average 10,000 m time is 27:14±0.5 min:sec, the average half marathon time is 59:30±0.6 min:sec, and the average marathon time is 2 hr 03 min 39 sec ± 0.02. In addition, the average Cooper test (12-minute run test) distance is 4525.4±139.7 meters, and the average VO2max is 90.8±3.1 mL/kg/min. All athletes have a high profile and compete at the international level; according to the World Athletics ranking system in 2021, 56.3% of the 16 participants had platinum label status, while the remaining 43.7% had gold label status. The athletes completed an incremental treadmill test for the assessment of VO2peak, submaximal running, and lactate threshold, during which they ran continuously at 21 km/h. The laboratory-determined VO2peak was 91.4±1.7 mL/kg/min, with an anaerobic threshold (AT) of 74.2±1.6 mL/kg/min, i.e., 81% of VO2max. The speed at the AT is 15.9±0.6 km/h at a gradient of 4.0%. The respiratory compensation (RC) point was reached at 88.7±1.1 mL/kg/min, 97% of VO2max; at the RC point, the speed is 17.6±0.4 km/h at a gradient of 5.5%. The speed at maximum effort is 19.5±1.5 km/h at a gradient of 6.0%. The data also suggest that elite Ethiopian distance athletes have considerably higher VO2max values than those found in earlier research.

Keywords: long-distance running, Ethiopians, VO2 max, world athletics, anthropometric

Procedia PDF Downloads 102
855 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings

Authors: Jude K. Safo

Abstract:

Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text, and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use-cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we observe in Large Language Models. Notable attempts by TransE, TransR, and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited in scope to next node/link prediction. Traditional linear methods like Tucker, CP, PARAFAC, and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for linear mappings between concepts in KG space and GE space that preserve cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures, and we demonstrate the performance this model achieves on the WN18 benchmark. This model does not rely on Large Language Models (LLM), though the applications are certainly relevant there as well.
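
Since the abstract benchmarks against TransE-style models, a minimal sketch of the TransE scoring rule (a triple (h, r, t) scores high when h + r lies close to t) may help situate the discussion; the embedding dimensions and the query triple are hypothetical, and the embeddings are untrained.

```python
# Minimal sketch of TransE-style link scoring on a knowledge graph:
# a triple (head, relation, tail) is plausible when the translated head
# embedding h + r lies close to the tail embedding t. Embeddings here
# are random (untrained) placeholders.
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 50, 1000, 20
E = rng.normal(size=(n_entities, dim))   # entity embeddings
R = rng.normal(size=(n_relations, dim))  # relation embeddings

def score(h, r, t):
    """Negative L2 distance ||h + r - t||: higher means more plausible."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

# Rank all candidate tails for a hypothetical (head=3, relation=7, ?) query
all_scores = -np.linalg.norm(E[3] + R[7] - E, axis=1)
top5 = np.argsort(all_scores)[::-1][:5]
print("top-5 candidate tails:", top5, "best score:", round(all_scores[top5[0]], 3))
```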

Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics

Procedia PDF Downloads 46
854 Understanding the Impact of Spatial Light Distribution on Object Identification in Low Vision: A Pilot Psychophysical Study

Authors: Alexandre Faure, Yoko Mizokami, Éric Dinet

Abstract:

In recent years, the potential of light in assisting visually impaired people in their indoor mobility has been demonstrated by different studies. Implementing smart lighting systems for selective visual enhancement, especially designed for low-vision people, is an approach that breaks with existing visual aids. The appearance of the surface of an object is significantly influenced by the lighting conditions and the constituent materials of the object, and may differ from expectations. Therefore, lighting conditions play an important part in accurate material recognition. The main objective of this work was to investigate the effect of the spatial distribution of light on object identification in the context of low vision. The purpose was to determine whether, and which, specific lighting approaches should be preferred for visually impaired people. A psychophysical experiment was designed to study the ability of individuals to identify the smaller cube of a pair under different lighting diffusion conditions. Participants were divided into two distinct groups: a reference group of observers with normal or corrected-to-normal visual acuity, and a test group, in which observers were required to wear visual impairment simulation glasses. All participants were presented with pairs of cubes in a "miniature room" and were instructed to estimate the relative size of the two cubes. The miniature room replicates real-life settings, adorned with decorations and separated from external light sources by black curtains. The correlated color temperature was set to 6000 K, and the horizontal illuminance at the object level to approximately 240 lux. The objects presented for comparison consisted of 11 white cubes and 11 black cubes of different sizes, manufactured with a 3D printer. Participants were seated 60 cm away from the objects. Two different levels of light diffuseness were implemented. After receiving instructions, participants were asked to judge whether the two presented cubes were the same size or whether one was smaller. They provided one of five possible answers: "Left one is smaller," "Left one is smaller but unsure," "Same size," "Right one is smaller," or "Right one is smaller but unsure." The method of constant stimuli was used, presenting stimulus pairs in a random order to prevent learning and expectation biases. Each pair consisted of a comparison stimulus and a reference cube. A psychometric function was constructed to link stimulus value with the frequency of correct detection, aiming to determine the 50% correct detection threshold. The collected data were analyzed through graphs illustrating participants' responses to stimuli, with accuracy increasing as the size difference between cubes grew. Statistical analyses, including two-way ANOVA tests, showed that light diffuseness had no significant impact on the difference threshold, whereas object color had a significant influence in low-vision scenarios. The first results and trends derived from this pilot experiment strongly suggest that future investigations could explore extreme diffusion conditions to comprehensively assess the impact of diffusion on object identification. For example, the findings related to light diffuseness may be attributed to the limited range of manipulation, emphasizing the need to explore how other lighting-related factors interact with diffuseness.
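
A sketch of the psychometric-function step described above: fit a logistic curve to the proportion of correct size judgments versus cube size difference and read off the 50% threshold. The response data below are hypothetical.

```python
# Sketch: fit a logistic psychometric function to constant-stimuli data
# and extract the 50% correct-detection threshold. Data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

size_diff_mm = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0])    # stimulus levels
p_correct    = np.array([0.12, 0.25, 0.48, 0.70, 0.86, 0.97])

def logistic(x, x0, k):
    # x0 is the 50% point; k controls the slope of the curve
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, size_diff_mm, p_correct, p0=[2.0, 1.0])
print(f"50% detection threshold ~ {x0:.2f} mm (slope k = {k:.2f})")
```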

Keywords: lighting, low vision, visual aid, object identification, psychophysical experiment

Procedia PDF Downloads 40
853 The Reliability Analysis of Concrete Chimneys Due to Random Vortex Shedding

Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta

Abstract:

Chimneys are generally tall and slender structures with circular cross-sections, due to which they are highly prone to wind forces. Wind exerts pressure on the wall of a chimney, which produces unwanted forces. Vortex-induced oscillation is one such excitation, which can lead to the failure of the chimney. Therefore, vortex-induced oscillation of chimneys is of great concern to researchers and practitioners, since many failures of chimneys due to vortex shedding have occurred in the past. As a consequence, extensive research has taken place on the subject over the decades. Many laboratory experiments have been performed to verify the theoretical models proposed to predict vortex-induced forces, including aero-elastic effects. Comparatively few prototype measurement data have been recorded to verify the proposed theoretical models. For this reason, the theoretical models developed with the help of experimental laboratory data are utilized for analyzing chimneys for vortex-induced forces. This calls for reliability analysis of the predicted responses of chimneys produced by the vortex shedding phenomenon. Although several works of literature exist on the vortex-induced oscillation of chimneys, including code provisions, reliability analysis of chimneys against failure caused by vortex shedding is scanty. In the present study, the reliability analysis of chimneys against vortex shedding failure is presented, assuming the uncertainty in the vortex shedding phenomenon to be significantly greater than the other uncertainties, which are hence ignored. The vortex shedding is modeled as a stationary random process and is represented by a power spectral density function (PSDF). It is assumed that the vortex shedding forces are perfectly correlated and act over the top one-third height of the chimney. The PSDF of the tip displacement of the chimney is obtained by performing a frequency-domain spectral analysis using a matrix approach. For this purpose, both the chimney and the random wind forces are discretized over a number of points along the height of the chimney. The method of analysis duly accounts for the aero-elastic effects. The double-barrier threshold crossing level, as proposed by Vanmarcke, is used for determining the probability of crossing different threshold levels of the tip displacement of the chimney. Assuming the annual distribution of the mean wind velocity to be a Gumbel type-I distribution, the fragility curve denoting the variation of the annual probability of threshold crossing against different threshold levels of the tip displacement of the chimney is determined. The reliability estimate is derived from the fragility curve. A 210 m tall concrete chimney with a base diameter of 35 m, a top diameter of 21 m, and a wall thickness of 0.3 m is taken as an illustrative example. The terrain condition is assumed to be that corresponding to a city center. The expression for the PSDF of the vortex shedding force is taken from Vickery and Basu. The results of the study show that the threshold crossing reliability of the tip displacement of the chimney is significantly influenced by the assumed structural damping and the Gumbel distribution parameters. Further, the aero-elastic effect influences the reliability estimate to a great extent for small structural damping.

Keywords: chimney, fragility curve, reliability analysis, vortex-induced vibration

Procedia PDF Downloads 136
852 Development of Trigger Tool to Identify Adverse Drug Events From Warfarin Administered to Patients Admitted in Medical Wards of Chumphae Hospital

Authors: Puntarikorn Rungrattanakasin

Abstract:

Objectives: To develop a trigger tool to warn of the risk of bleeding as an adverse event of warfarin use during admission to the Medical Wards of Chumphae Hospital. Methods: A retrospective study was performed by reviewing the medical records of patients admitted between June 1st, 2020 and May 31st, 2021. ADEs were evaluated by Naranjo's algorithm. The international normalized ratio (INR) and bleeding events during admissions were collected. Statistical analyses, including the Chi-square test and a Receiver Operating Characteristic (ROC) curve for the optimal INR threshold, were used in the study. Results: Among the 139 admissions, the INR ranged between 0.86 and 14.91; there was a total of 15 bleeding events, of which 9 were mild and 6 were severe. The occurrence of bleeding began whenever the INR was greater than 2.5 and reached statistical significance (p < 0.05), which was in concordance with the ROC curve and yielded 100% sensitivity and 60% specificity in the detection of a bleeding event. In this regard, an INR greater than 2.5 was considered the optimal threshold to promptly alert for bleeding tendency. Conclusions: An INR value greater than 2.5 would be an appropriate trigger tool to warn of the risk of bleeding for patients taking warfarin in Chumphae Hospital.
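
A sketch of the ROC step described above, here locating the cut-off that maximizes Youden's J (sensitivity + specificity - 1), one common way to read an optimal threshold off a ROC curve; the patient data are hypothetical stand-ins, not the Chumphae Hospital records.

```python
# Sketch: choosing an INR cut-off from a ROC curve by maximizing
# Youden's J = sensitivity + specificity - 1. Hypothetical data only.
import numpy as np
from sklearn.metrics import roc_curve

inr  = np.array([1.2, 1.8, 2.1, 2.4, 2.6, 2.8, 3.1, 3.6, 4.2, 5.0])
bled = np.array([0,   0,   0,   0,   1,   0,   1,   1,   1,   1  ])

fpr, tpr, thresholds = roc_curve(bled, inr)
j = tpr - fpr                       # Youden's J at each candidate cut-off
best = np.argmax(j)
print(f"optimal INR cut-off ~ {thresholds[best]:.1f} "
      f"(sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%})")
```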

Keywords: trigger tool, warfarin, risk of bleeding, medical wards

Procedia PDF Downloads 122
851 Safe Limits Concentration of Ammonia at Work Environments through CD8 Expression in Rats

Authors: Abdul Rohim Tualeka, Erick Caravan K. Betekeneng, Ramdhoni Zuhro, Reko Triyono, M. Sahri

Abstract:

Incidents caused by acute and chronic effects of exposure to ammonia in the working environment have been widely reported in Indonesia, even though ammonia concentrations were found to be below the threshold value. The purpose of this study was to determine the safe limit concentration of ammonia in the working environment through the expression of CD8, as a reference for determining the threshold value of ammonia in the working environment. This research was a laboratory experiment with a post-test-only control group design, using experimental animals as subjects. Homogeneity testing indicated that the weights of white rats in the exposed and control groups had homogeneous variance, with a significance level of p (0.701) > α (0.05). The average breathing rate was 0.0013 m³/h, and the average weight of rats in the exposure groups was 0.1405 kg. From the CD8 IRS calculation, the highest dose of ammonia without any effect on the lungs of rats was 0.0154 mg/kg body weight. The Safe Human Dose (SHD) of ammonia is 0.002 mg/kg of worker body weight. The conclusion of this study is that the safe limit concentration of ammonia gas in the working environment is 0.025 ppm.
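
The dose-to-air-concentration arithmetic implied above can be sketched as follows; the worker body weight, shift inhalation volume, and the 24.45 L/mol molar volume at 25 °C are standard industrial-hygiene assumptions, not values given in the abstract.

```python
# Sketch of converting a safe human dose (mg/kg body weight) into an
# equivalent workplace air concentration in ppm. Body weight, shift
# inhalation volume and the 25 C molar volume are assumptions.
SHD_MG_PER_KG = 0.002        # safe human dose from the study
BODY_WEIGHT_KG = 70.0        # assumed worker body weight
INHALED_M3_PER_SHIFT = 10.0  # assumed air inhaled over an 8-hour shift
MW_NH3 = 17.03               # molar mass of ammonia, g/mol
MOLAR_VOLUME = 24.45         # L/mol at 25 C, 1 atm

daily_dose_mg = SHD_MG_PER_KG * BODY_WEIGHT_KG        # mg per shift
conc_mg_m3 = daily_dose_mg / INHALED_M3_PER_SHIFT     # mg/m^3
conc_ppm = conc_mg_m3 * MOLAR_VOLUME / MW_NH3
print(f"{conc_mg_m3:.4f} mg/m^3  ~  {conc_ppm:.3f} ppm")
```

Under these assumptions the result lands near 0.02 ppm, the same order as the 0.025 ppm limit concluded in the abstract.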

Keywords: ammonia, CD8, rats, safe limits concentration

Procedia PDF Downloads 190
850 Evaluating Reliability Indices in 3 Critical Feeders at Lorestan Electric Power Distribution Company

Authors: Atefeh Pourshafie, Homayoun Bakhtiari

Abstract:

The main task of power distribution companies is to supply the power required by customers at an acceptable level of quality and reliability. Key performance indicators for electric power distribution companies include those evaluating the continuity of supply within the network. More than other problems, power outages (due to lightning, flood, fire, earthquake, etc.) challenge the economy and business; in addition, end users expect a reliable power supply. Reliability indices are evaluated on an annual basis by the specialized holding company Tavanir (the power production, transmission, and distribution company of Iran). Evaluation of reliability indices is essential for distribution companies, and with regard to their privatization, it will be of particular importance to evaluate these indices and to plan for their improvement in the not-too-distant future. The IEEE 1366 standard defines many indices; however, the most common reliability indices are SAIFI, SAIDI, and CAIDI. These indices describe the duration and frequency of blackouts in the reporting period (annual or any desired timeframe). This paper calculates reliability indices for three sample feeders in Lorestan Electric Power Distribution Company and defines threshold values over a ten-month period. At the end, strategies are introduced to reach the threshold values in order to increase customer satisfaction.
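
The three indices named above have standard IEEE 1366 definitions, sketched below on hypothetical outage records for a single feeder.

```python
# Sketch of the IEEE 1366 reliability indices on hypothetical outage data:
#   SAIFI = total customer interruptions / total customers served
#   SAIDI = total customer interruption duration / total customers served
#   CAIDI = SAIDI / SAIFI (average duration per interrupted customer)
outages = [  # (customers interrupted, duration in minutes)
    (1200, 45),
    (300, 120),
    (2500, 15),
]
total_customers = 10_000

saifi = sum(n for n, _ in outages) / total_customers
saidi = sum(n * d for n, d in outages) / total_customers   # minutes/customer
caidi = saidi / saifi
print(f"SAIFI = {saifi:.2f} int./cust., SAIDI = {saidi:.1f} min/cust., "
      f"CAIDI = {caidi:.1f} min/interruption")
```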

Keywords: power, distribution network, reliability, outage

Procedia PDF Downloads 445
849 5-[Aryloxypyridyl (or Nitrophenyl)]-4H-1,2,4-Triazoles as Flexible Benzodiazepine Analogs: Synthesis, Receptor Binding Affinity and the Lipophilicity-Dependent Anti-Seizure Onset of Action

Authors: Latifeh Navidpour, Shabnam Shabani, Alireza Heidari, Manouchehr Bashiri, Azadeh Ebrahim-Habibi, Soraya Shahhosseini, Hamed Shafaroodi, Sayyed Abbas Tabatabai, Mahsa Toolabi

Abstract:

A new series of 5-(2-aryloxy-4-nitrophenyl)-4H-1,2,4-triazoles and 5-(2-aryloxy-3-pyridyl)-4H-1,2,4-triazoles, possessing C-3 thio or alkylthio substituents, was synthesized and evaluated for benzodiazepine receptor affinity and anti-seizure activity. These analogues revealed affinities for the GABAA/benzodiazepine receptor complex ranging from similar to significantly superior (IC50 values of 0.04–4.1 nM), relative to diazepam as the reference drug (IC50 value of 2.4 nM). To determine the onset of anti-seizure activity, the time-dependent effectiveness of i.p. administration of the compounds on the pentylenetetrazole-induced seizure threshold was studied, and a very good relationship was observed between the lipophilicity (cLogP) and the onset of action of the studied analogues (r² = 0.964). The minimum effective dose of the compounds, determined at the time the analogues showed their highest activity, was demonstrated to be 0.025–0.1 mg/kg, relative to diazepam (0.025 mg/kg).

Keywords: 1,2,4-triazole, flexible benzodiazepines, GABAA/benzodiazepine receptor complex, onset of action, PTZ-induced seizure threshold

Procedia PDF Downloads 76
848 The Long-Run Impact of Financial Development on Greenhouse Gas Emissions in India: An Application of Regime Shift Based Cointegration Approach

Authors: Javaid Ahmad Dar, Mohammad Asif

Abstract:

The present study investigates the long-run impact of financial development, energy consumption, and economic growth on greenhouse gas emissions for India, in the presence of endogenous structural breaks, over the period 1971-2013. The autoregressive distributed lag (ARDL) bounds testing procedure and the Hatemi-J threshold cointegration technique have been used to test the variables for cointegration. The ARDL bounds test did not confirm any cointegrating relationship between the variables. The threshold cointegration test establishes the presence of a long-run impact of financial development, energy use, and economic growth on greenhouse gas emissions in India. The results reveal that the long-run relationship between the variables has witnessed two regime shifts, in 1978 and 2002. The empirical evidence shows that financial sector development and energy consumption in India degrade the environment. Unlike previous studies, this paper finds no statistical evidence of a long-run relationship between economic growth and environmental deterioration. The study also challenges the existence of an environmental Kuznets curve in India.

Keywords: cointegration, financial development, global warming, greenhouse gas emissions, regime shift, unit root

Procedia PDF Downloads 359
847 Application of Simulated Annealing to Threshold Optimization in Distributed OS-CFAR System

Authors: L. Abdou, O. Taibaoui, A. Moumen, A. Talib Ahmed

Abstract:

This paper proposes an application of simulated annealing to optimize the detection threshold in an ordered statistics constant false alarm rate (OS-CFAR) system. Using conventional optimization methods, such as the conjugate gradient, can lead to a local optimum and miss the global optimum. Also, for a system with three or more sensors, it is difficult or impossible to find this optimum; hence the need to use other methods, such as meta-heuristics. Among the variety of meta-heuristic techniques is the simulated annealing (SA) method, inspired by a process used in metallurgy. This technique is based on the selection of an initial solution and the random generation of a nearby solution, in order to improve the criterion to be optimized. In this work, two parameters are subject to such optimization: the statistical order (k) and the scaling factor (T). Two fusion rules, "AND" and "OR", were considered in the case where the signals are independent from sensor to sensor. The results showed that the proposed method is effective for such optimization problems in a distributed system. The advantage of this method is that it browses the entire solution space and theoretically avoids stagnation of the optimization process in an area of a local minimum.
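
A generic sketch of the SA loop described above, applied to a two-parameter search over the statistical order k and the scaling factor T; the objective function is a smooth hypothetical placeholder for the actual detection criterion evaluated under the chosen fusion rule.

```python
# Generic simulated-annealing sketch over the two OS-CFAR parameters:
# the statistical order k (integer) and the scaling factor T (real).
# The objective is a hypothetical placeholder for the real detection
# criterion evaluated under the chosen fusion rule ("AND"/"OR").
import math, random

def objective(k, T):
    # Placeholder surrogate with a maximum near (k=18, T=7).
    return -((k - 18) ** 2) / 50.0 - ((T - 7.0) ** 2) / 4.0

def simulated_annealing(k0=12, T0=4.0, temp=1.0, cooling=0.995, steps=5000):
    k, T = k0, T0
    best = (objective(k, T), k, T)
    for _ in range(steps):
        k_new = max(1, min(24, k + random.choice([-1, 0, 1])))  # neighbor move
        T_new = max(0.1, T + random.uniform(-0.2, 0.2))
        delta = objective(k_new, T_new) - objective(k, T)
        # accept improving moves, and occasionally worse moves early on
        if delta > 0 or random.random() < math.exp(delta / temp):
            k, T = k_new, T_new
            best = max(best, (objective(k, T), k, T))
        temp *= cooling          # geometric cooling schedule
    return best

score, k, T = simulated_annealing()
print(f"best found: k = {k}, T = {T:.2f} (objective {score:.3f})")
```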

Keywords: distributed system, OS-CFAR system, independent sensors, simulated annealing

Procedia PDF Downloads 481
846 Developing Artificial Neural Networks (ANN) for Falls Detection

Authors: Nantakrit Yodpijit, Teppakorn Sittiwanchai

Abstract:

The number of older adults is rising rapidly, and the world's population is aging. Falls are one of the most common and major health problems in the elderly. Falls may lead to acute and chronic injuries and deaths. Fall-prone individuals are at greater risk for decreased quality of life, lowered productivity and poverty, social problems, and additional health problems. A number of studies on falls prevention using fall detection systems have been conducted. Many available fall detection technologies are laboratory-based and can incur substantial costs for falls prevention. The utilization of alternative technologies can potentially reduce these costs. This paper presents the design and development of a new wearable fall detection system using an accelerometer and gyroscope as motion sensors for the detection of body orientation and movement. Algorithms are developed to differentiate between Activities of Daily Living (ADL) and falls by comparing threshold-based values with Artificial Neural Networks (ANN). Results indicate the possibility of using the new threshold-based method with a neural network algorithm to reduce the number of false positives (false alarms) and improve the accuracy of the fall detection system.
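
The threshold-based branch of such a system can be sketched as follows: compute the acceleration magnitude from the three accelerometer axes and flag a fall candidate when it exceeds an impact threshold. The window, sampling rate, and the 2.5 g threshold are hypothetical; in the full system the same features would also feed the ANN classifier.

```python
# Sketch of the threshold-based fall check: flag a fall candidate when
# the acceleration magnitude exceeds an impact threshold. The window and
# the 2.5 g threshold are hypothetical assumptions.
import numpy as np

def fall_candidate(ax, ay, az, impact_g=2.5):
    magnitude = np.sqrt(ax**2 + ay**2 + az**2)   # in units of g
    return bool(np.any(magnitude > impact_g)), magnitude.max()

# Hypothetical 1-second window at 50 Hz: quiet motion, then an impact spike
rng = np.random.default_rng(3)
ax, ay = rng.normal(0, 0.05, 50), rng.normal(0, 0.05, 50)
az = np.concatenate([rng.normal(1.0, 0.05, 40), [3.1, 2.8],
                     rng.normal(1.0, 0.05, 8)])
flagged, peak = fall_candidate(ax, ay, az)
print(f"fall candidate: {flagged} (peak {peak:.2f} g)")
```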

Keywords: aging, algorithm, artificial neural networks (ANN), fall detection system, motion sensors, threshold

Procedia PDF Downloads 471
845 Detecting and Thwarting Interest Flooding Attack in Information Centric Network

Authors: Vimala Rani P, Narasimha Malikarjunan, Mercy Shalinie S

Abstract:

Named Data Networking was brought forth as an instantiation of information-centric networking. Attackers can send a colossal number of spoofed Interests to take hold of the Pending Interest Table (PIT), an attack named an Interest Flooding Attack (IFA), since Interests are recorded in the PITs of the intermediate routers until the corresponding Data packets arrive or the time limit is exceeded. These attacks can be detrimental to network performance. Traditional IFA detection techniques are concerned with criteria such as the PIT expiration rate or the Interest satisfaction rate, which cannot by themselves reliably identify an IFA, and traditional threshold-based methods can be casually affected by the choice of threshold values. This article proposes an accurate IFA detection mechanism based on a Multiple Feature-based Extreme Learning Machine (MF-ELM). The accuracy of attack detection can be increased by presenting the entropy of Interest names, the Interest satisfaction rate, and PIT usage as features extracted for the MF-ELM classifier. Furthermore, we deploy a queue-based hostile Interest prefix mitigation mechanism. The inference from this real-time test bed is that the mechanism can help the network resist IFA with higher accuracy and efficiency.
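
One of the three features named above, the entropy of Interest names, is concrete enough to sketch directly: a flood of randomly generated names drives the Shannon entropy of the name distribution up relative to normal, repetitive traffic. The names below are hypothetical.

```python
# Sketch of one MF-ELM input feature: Shannon entropy of the Interest
# name distribution observed in a time window. Spoofed floods of
# all-distinct names push the entropy up relative to normal traffic.
from collections import Counter
import math

def name_entropy(interest_names):
    counts = Counter(interest_names)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

normal = ["/video/seg1", "/video/seg2", "/video/seg1", "/news/top", "/video/seg2"]
flood  = [f"/junk/{i}" for i in range(1000)]    # hypothetical spoofed names
print(f"normal window: {name_entropy(normal):.2f} bits")
print(f"attack window: {name_entropy(flood):.2f} bits")
```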

Keywords: information-centric network, pending interest table, interest flooding attack, MF-ELM classifier, queue-based mitigation strategy

Procedia PDF Downloads 181
844 Paddy/Rice Singulation for Determination of Husking Efficiency and Damage Using Machine Vision

Authors: M. Shaker, S. Minaei, M. H. Khoshtaghaza, A. Banakar, A. Jafari

Abstract:

In this study, a system combining machine vision and singulation was developed to separate paddy from rice and determine paddy husking and rice breakage percentages. The machine vision system consists of three main components: an imaging chamber, a digital camera, and a computer equipped with image processing software. The singulation device consists of a kernel holding surface, a motor with a vacuum fan, and a dimmer. For separation of paddy from rice (in the image), it was necessary to set a threshold. Therefore, some images of paddy and rice were sampled, and the RGB values of the images were extracted using MATLAB software; then the mean and standard deviation of the data were determined. An image processing algorithm was developed using MATLAB to determine paddy/rice separation and rice breakage and paddy husking percentages, using the blue-to-red ratio. Tests showed that a threshold of 0.75 is suitable for separating paddy from rice kernels. Results from the evaluation of the image processing algorithm showed that the accuracies obtained with the algorithm were 98.36% and 91.81% for paddy husking and rice breakage percentage, respectively. Analysis also showed that a suction of 45 mmHg to 50 mmHg, yielding 81.3% separation efficiency, is appropriate for operation of the kernel singulation system.
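
The blue-to-red thresholding step can be sketched as below; the 0.75 threshold is the value reported in the abstract, while the image array and the direction of the comparison (which side of the threshold is paddy) are assumptions for illustration.

```python
# Sketch of the paddy/rice separation rule: classify pixels by their
# blue-to-red ratio against the reported 0.75 threshold. The image is a
# random placeholder, and the comparison direction is assumed.
import numpy as np

def paddy_mask(rgb_image, threshold=0.75):
    r = rgb_image[..., 0].astype(float) + 1e-9   # avoid division by zero
    b = rgb_image[..., 2].astype(float)
    return (b / r) < threshold                   # assumed: paddy below 0.75

img = np.random.default_rng(7).integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
mask = paddy_mask(img)
print(f"paddy-classified pixels: {mask.mean():.1%}")
```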

Keywords: breakage, computer vision, husking, rice kernel

Procedia PDF Downloads 344
843 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs

Authors: Michela Quadrini

Abstract:

Chord diagrams occur throughout mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite type invariants, whereas in molecular biology one important motivation to study chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one interaction, a Watson-Crick base pair, between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which are attached a number of chords with distinct endpoints. There is a natural fattening on any linear chord diagram: the backbone lies on the real axis, while all the chords are in the upper half-plane. Each linear chord diagram has a natural genus of its associated surface. To each chord diagram and linear chord diagram it is possible to associate an intersection graph. This is a graph whose vertices correspond to the chords of the diagram, and whose edges represent the chord intersections. Such an intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of identity of intersection graphs, on which many chord diagram invariants depend. For studying these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permits modeling LCDs in terms of the relations among chords. This set is composed of crossing, nesting, and concatenation. The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, starting from a linear chord diagram and adding a chord for each production of the grammar. In other words, it allows associating a unique algebraic term to each linear chord diagram, while the remaining operators allow rewriting the term through a set of appropriate rewriting rules. Such rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modelled RNA molecule and its linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately than existing ones. Such an LCD equivalence class could also be useful for obtaining a more accurate estimate of the link between the crossing number and the topological genus, and for studying the relations among other invariants.
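
The intersection-graph construction described above is concrete enough to sketch: chords of an LCD are endpoint pairs on the backbone, and two chords (a, b) and (c, d) with a < b and c < d cross exactly when a < c < b < d or c < a < d < b. The example diagram is hypothetical.

```python
# Sketch: build the intersection graph of a linear chord diagram.
# Chords are endpoint pairs on the backbone; chords (a, b) and (c, d)
# with a < b and c < d cross iff a < c < b < d (or c < a < d < b).
from itertools import combinations

def intersection_graph(chords):
    """Return adjacency as a dict: chord index -> set of crossing chords."""
    adj = {i: set() for i in range(len(chords))}
    for (i, (a, b)), (j, (c, d)) in combinations(enumerate(chords), 2):
        if a < c < b < d or c < a < d < b:   # the two chords cross
            adj[i].add(j)
            adj[j].add(i)
    return adj

# Hypothetical LCD with 4 chords on backbone points 1..8
chords = [(1, 5), (2, 6), (3, 4), (7, 8)]
print(intersection_graph(chords))
# (1,5) and (2,6) cross; (3,4) is nested in both; (7,8) is isolated
```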

Keywords: chord diagrams, linear chord diagram, equivalence class, topological language

Procedia PDF Downloads 175
842 Institutional Quality and Tax Compliance: A Cross-Country Regression Evidence

Authors: Debi Konukcu Onal, Tarkan Cavusoglu

Abstract:

In modern societies, the costs of public goods and services are shared through taxes paid by citizens. However, taxation has always been a frictional issue, as tax obligations are perceived as a financial burden for taxpayers rather than a merit that fulfills the redistribution, regulation, and stabilization functions of the welfare state. The tax compliance literature has evolved into discussing why people still pay taxes in systems with low costs of legal enforcement. Related empirical and theoretical works show that a wide range of socially oriented behavioral factors can stimulate voluntary compliance, as well as subversive effects. These behavioral motivations are argued to be driven by the self-enforcing rules of informal institutions, either independently or through interactions with the legal orders set by formal institutions. The main focus of this study is to investigate empirically whether institutional particularities have a significant role in explaining cross-country differences in tax noncompliance levels. A part of the controversy about the driving forces behind tax noncompliance may be attributed to the lack of empirical evidence. Thus, this study aims to fill this gap through regression estimates, which help to trace the link between institutional quality and noncompliance on a cross-country basis. The tax evasion estimates of Buehn and Schneider are used as the proxy measure for tax noncompliance levels. Institutional quality is quantified by three different indicators (percentile ranks of the Worldwide Governance Indicators, ratings of the International Country Risk Guide, and the country ratings of Freedom in the World). Robust Least Squares and Threshold Regression estimates based on a sample of Organization for Economic Co-operation and Development (OECD) countries imply that tax compliance increases with institutional quality. Moreover, a threshold-based asymmetry is detected in the effect of institutional quality on tax noncompliance. That is, the negative effects of tax burdens on compliance are found to be more pronounced in countries with institutional quality below a certain threshold. These findings are robust to all alternative indicators of institutional quality, supporting the significant interaction of societal values with individual taxpayer decisions.

Keywords: institutional quality, OECD economies, tax compliance, tax evasion

Procedia PDF Downloads 107
841 Dynamic Distribution Calibration for Improved Few-Shot Image Classification

Authors: Majid Habib Khan, Jinwei Zhao, Xinhong Hei, Liu Jiedong, Rana Shahzad Noor, Muhammad Imran

Abstract:

Deep learning is increasingly employed in image classification, yet the scarcity and high cost of labeled data for training remain a challenge. Limited samples often lead to overfitting due to biased sample distribution. This paper introduces a dynamic distribution calibration method for few-shot learning. Initially, base and new class samples undergo normalization to mitigate disparate feature magnitudes. A pre-trained model then extracts feature vectors from both classes. The method dynamically selects distribution characteristics from base classes (both adjacent and remote) in the embedding space, using a threshold value approach for new class samples. Given the propensity of similar classes to share feature distributions like mean and variance, this research assumes a Gaussian distribution for feature vectors. Subsequently, distributional features of new class samples are calibrated using a corrected hyperparameter, derived from the distribution features of both adjacent and distant base classes. This calibration augments the new class sample set. The technique demonstrates significant improvements, with up to 4% accuracy gains in few-shot classification challenges, as evidenced by tests on miniImagenet and CUB datasets.
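
A minimal sketch in the spirit of the calibration step described above: borrow Gaussian statistics (mean and covariance) from the nearest base classes, blend them with a new-class support sample, and draw augmented features. The number of donor classes and the blending hyperparameter are assumptions, not the paper's exact corrected-hyperparameter rule.

```python
# Sketch of Gaussian distribution calibration for a few-shot sample:
# borrow mean/covariance statistics from nearby base classes, blend them
# with the support sample, and draw synthetic features. The number of
# donor classes k and the hyperparameter alpha are assumptions.
import numpy as np

def calibrate(support_x, base_means, base_covs, k=2, alpha=0.2, n_aug=100):
    # pick the k base classes whose means are closest in embedding space
    dists = np.linalg.norm(base_means - support_x, axis=1)
    nearest = np.argsort(dists)[:k]
    mean = (base_means[nearest].sum(0) + support_x) / (k + 1)
    cov = base_covs[nearest].mean(0) + alpha * np.eye(len(support_x))
    return np.random.default_rng(0).multivariate_normal(mean, cov, size=n_aug)

dim, n_base = 16, 10
rng = np.random.default_rng(1)
base_means = rng.normal(size=(n_base, dim))
base_covs = np.stack([np.eye(dim) * rng.uniform(0.5, 1.5) for _ in range(n_base)])
augmented = calibrate(rng.normal(size=dim), base_means, base_covs)
print(augmented.shape)   # (100, 16) synthetic features for the new class
```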

Keywords: deep learning, computer vision, image classification, few-shot learning, threshold

Procedia PDF Downloads 36
840 Numerical Study of Pile Installation Disturbance Zone Effects on Excess Pore Pressure Dissipation

Authors: Kang Liu, Meng Liu, Meng-Long Wu, Da-Chang Yue, Hong-Yi Pan

Abstract:

Soil setup is an important factor affecting pile bearing capacity; many factors influence it, all of which are closely related to pile construction disturbances. During pile installation in soil, a significant amount of excess pore pressure is generated, creating disturbance zones around the pile. The dissipation rate of the excess pore pressure is an important factor influencing pile setup. This paper aims to examine how alterations in parameters within the disturbance zones affect the dissipation of excess pore pressure. An axisymmetric FE model is used to simulate pile installation in clay, followed by consolidation, using Plaxis 3D. The influence of the disturbed zone on setup is verified by comparing parametric studies in a uniform field and a non-uniform field. Three types of consolidation are employed: consolidation in three directions, vertical consolidation, and horizontal consolidation. The results of the parametric study show that as the permeability coefficient decreases, soil stiffness decreases, and reference pressure increases in the disturbance zone, the dissipation time of excess pore pressure increases and exhibits a noticeable threshold phenomenon, which has commonly been overlooked in previous literature. The research in this paper suggests that significant thresholds occur when the coefficient of permeability decreases to half of the original site's value for three-directional and horizontal consolidation within the disturbed zone; similarly, the threshold for vertical consolidation is observed when the coefficient of permeability decreases to one-fourth of the original site's value. In pile setup research especially, consolidation is assumed to be horizontal; the study findings suggest that horizontal consolidation is notably altered by the presence of disturbed zones. Furthermore, the selection of pile installation methods proves to be critical. A nonlinear excess pore pressure formula is proposed based on cavity expansion theory, which includes the distribution of soil profile modulus with depth.

Keywords: pile setup, threshold value effect, installation effects, uniform field, non-uniform field

Procedia PDF Downloads 11
839 Effects of Duct Geometry, Thickness and Types of Liners on Transmission Loss for Absorptive Silencers

Authors: M. Kashfi, K. Jahani

Abstract:

Sound attenuation in absorptive silencers is analyzed in this paper. The structure of such devices is as follows: when the rigid duct of an expansion chamber is lined with a packed absorptive material under a perforated membrane, incident sound waves are dissipated by the absorptive liners. This kind of silencer is usually applicable to medium-to-high frequency ranges. Several conditions for different absorptive materials, variety in their thicknesses, and different shapes of the expansion chambers have been studied in this paper. Graphs of sound attenuation are also compared between an empty expansion chamber and the silencer duct with the liner applied. Plane waves are assumed in the inlet and outlet regions of the silencer. The presented results, achieved by applying the finite element method (FEM), show the dependence of the sound attenuation spectrum on the flow resistivity and the thicknesses of the absorptive materials, and on the geometry of the cross section (the configuration of the silencer). As flow resistivity and thickness of the absorptive materials increase, sound attenuation improves. In this paper, diagrams of the transmission loss (TL) for absorptive silencers with five different cross sections (rectangle, circle, ellipse, square, and rounded rectangle as the main geometry) are presented. TL graphs are also exhibited for silencers using different absorptive materials (glass wool, wood fiber, and a kind of spongy material) as the liner, with three different thicknesses of 5 mm, 15 mm, and 30 mm for the glass wool liner. First the effect of the absorptive materials, with their specific flow resistivities and densities, on the TL spectrum is investigated, then the effect of the thickness of the glass wool, and finally the effect of the shape of the cross section of the silencer.
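
As a baseline for the lined configurations studied above, the classical plane-wave transmission loss of an empty expansion chamber has a closed form, sketched below; the chamber dimensions are hypothetical, and the lined-silencer TL itself requires the FEM treatment used in the paper.

```python
# Baseline sketch: plane-wave transmission loss of an *empty* expansion
# chamber, TL = 10*log10(1 + 0.25*(m - 1/m)^2 * sin^2(k*L)), where m is
# the expansion area ratio, k the wavenumber, and L the chamber length.
# Dimensions are hypothetical placeholders.
import numpy as np

c = 343.0          # speed of sound in air, m/s
m = 9.0            # chamber-to-duct cross-sectional area ratio
L = 0.30           # chamber length, m

freq = np.linspace(50, 3000, 6)
k = 2 * np.pi * freq / c
tl = 10 * np.log10(1 + 0.25 * (m - 1 / m) ** 2 * np.sin(k * L) ** 2)
for f, t in zip(freq, tl):
    print(f"{f:7.0f} Hz : TL = {t:5.1f} dB")
```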

Keywords: transmission loss, absorptive material, flow resistivity, thickness, frequency

Procedia PDF Downloads 226
838 High Precision 65nm CMOS Rectifier for Energy Harvesting using Threshold Voltage Minimization in Telemedicine Embedded System

Authors: Hafez Fouad

Abstract:

Telemedicine applications involve very low voltages, which require a high-precision rectifier design with high sensitivity to operate at a minimum input voltage. In this work, we target a 0.2 V input voltage using a 65 nm CMOS rectifier for an energy harvesting telemedicine application. The proposed rectifier, designed at 2.4 GHz using a two-stage structure, is found to perform better in that its minimum operating voltage is lower than in previously published work, and the rectifier can work over a wide range of low input voltage amplitudes. Performance summary of the full-wave fully gate cross-coupled rectifier (FWFR) at f = 2.4 GHz: the minimum and maximum output voltages generated using an input voltage amplitude of 2 V are 490.9 mV and 1.997 V, with a maximum VCE of 99.85% and a maximum PCE of 46.86%. Performance summary of the differential-drive CMOS rectifier with external bootstrapping circuit at f = 2.4 GHz: the minimum and maximum output voltages generated using an input voltage amplitude of 2 V are 265.5 mV and 1.467 V, respectively, with a maximum VCE of 93.9% and a maximum PCE of 15.8%.

Keywords: energy harvesting, embedded system, IoT telemedicine system, threshold voltage minimization, differential drive cmos rectifier, full-wave fully gate cross-coupled rectifiers CMOS rectifier

Procedia PDF Downloads 123
837 Probabilistic Graphical Model for the Web

Authors: M. Nekri, A. Khelladi

Abstract:

The World Wide Web is a network with a complex topology, the main properties of which are a power-law degree distribution, a low clustering coefficient, and a small average distance. Modeling the web as a graph allows locating information quickly and consequently helps in the construction of search engines. Here, we present a model based on already existing probabilistic graphs with all the aforesaid characteristics. This work consists in studying the web in order to understand its structure; this will enable us to model it more easily and to propose a possible algorithm for its exploration.
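
The power-law degree distribution mentioned above is the signature of preferential attachment (one of the listed keywords), which a short sketch with networkx can illustrate; the graph size and parameters are arbitrary.

```python
# Sketch: a Barabasi-Albert preferential-attachment graph reproduces the
# web's heavy-tailed (power-law) degree distribution. Sizes are arbitrary.
import networkx as nx
from collections import Counter

g = nx.barabasi_albert_graph(n=10_000, m=3, seed=0)
degree_counts = Counter(dict(g.degree()).values())
print("clustering coefficient:", round(nx.average_clustering(g), 4))
print("5 most common degrees :", degree_counts.most_common(5))
```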

Keywords: clustering coefficient, preferential attachment, small world, web community

Procedia PDF Downloads 239