Search results for: preposition error detection
1233 Severity Index Level in Effectively Managing Medium Voltage Underground Power Cable
Authors: Mohd Azraei Pangah Pa'at, Mohd Ruzlin Mohd Mokhtar, Norhidayu Rameli, Tashia Marie Anthony, Huzainie Shafi Abd Halim
Abstract:
Partial Discharge (PD) diagnostic mapping is one of the main diagnostic testing techniques widely used in field or onsite testing of medium-voltage underground power cables. PD activity is an early indication of insulation weakness, so its early detection provides an initial prediction of the condition of the cable. To manage the results of PD mapping tests effectively, acceptance criteria are needed to facilitate prioritization of mitigation actions. Tenaga Nasional Berhad (TNB), through its Distribution Network (DN) division, has operated a PD severity model named the Severity Index (SI) for offline PD mapping tests since 2007, based on onsite test experience. However, the recommended actions associated with the severity index have never been revised since its establishment. At present, PD measurement data have grown substantially, so the severity level indications and the effectiveness of the recommended actions can be analyzed and verified again. Based on the new revision, the recommended actions better reflect the actual defect condition, enabling more accurate prioritization of preventive action plans and lower maintenance expenditure.
Keywords: partial discharge, severity index, diagnostic testing, medium voltage, power cable
Procedia PDF Downloads 190
1232 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption and GDP for Israel: Time Series Analysis, 1980-2010
Authors: Jinhoa Lee
Abstract:
The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical role of CO2 emissions and energy use in economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To this end, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, electricity), carbon dioxide (CO2) emissions and gross domestic product (GDP) for Israel, using time series analysis for the years 1980-2010. To investigate the relationships between the variables, the paper employs the Phillips-Perron (PP) test for stationarity, the Johansen maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables. The long-run equilibrium in the VECM suggests significant positive impacts of coal and natural gas consumption on GDP in Israel. In the short run, GDP positively affects coal consumption. While there is a positive unidirectional causality running from coal consumption to consumption of petroleum products and the direct combustion of crude oil, there is a negative unidirectional causality running from natural gas consumption to consumption of petroleum products and the direct combustion of crude oil in the short run. Overall, the results support arguments that environmental quality, energy use and economic output are related, but that the associations differ by energy source in the case of Israel over the period 1980-2010.
Keywords: CO2 emissions, energy consumption, GDP, Israel, time series analysis
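A minimal sketch of the stationarity, cointegration and VECM steps described in this abstract is given below; it is not the authors' code, and the file name, column names and lag choices are assumptions (the Phillips-Perron test comes from the arch package, the Johansen test and VECM from statsmodels).

```python
# Minimal sketch of the PP -> Johansen -> VECM pipeline; data layout is assumed.
import pandas as pd
from arch.unitroot import PhillipsPerron                      # Phillips-Perron stationarity test
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

df = pd.read_csv("israel_energy_1980_2010.csv", index_col="year")  # columns assumed: gdp, co2, oil, coal, gas, electricity

# 1. Phillips-Perron test on each series in levels
for col in df.columns:
    pp = PhillipsPerron(df[col])
    print(col, round(pp.stat, 3), round(pp.pvalue, 3))

# 2. Johansen cointegration test (constant term, one lag in differences)
joh = coint_johansen(df, det_order=0, k_ar_diff=1)
print("trace statistics:", joh.lr1)

# 3. VECM capturing short-run dynamics and the long-run equilibrium
vecm_res = VECM(df, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(vecm_res.summary())
```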
Procedia PDF Downloads 654
1231 Enhancing Information Technologies with AI: Unlocking Efficiency, Scalability, and Innovation
Authors: Abdal-Hafeez Alhussein
Abstract:
Artificial Intelligence (AI) has become a transformative force in the field of information technologies, reshaping how data is processed, analyzed, and utilized across various domains. This paper explores the multifaceted applications of AI within information technology, focusing on three key areas: automation, scalability, and data-driven decision-making. We delve into how AI-powered automation is optimizing operational efficiency in IT infrastructures, from automated network management to self-healing systems that reduce downtime and enhance performance. Scalability, another critical aspect, is addressed through AI’s role in cloud computing and distributed systems, enabling the seamless handling of increasing data loads and user demands. Additionally, the paper highlights the use of AI in cybersecurity, where real-time threat detection and adaptive response mechanisms significantly improve resilience against sophisticated cyberattacks. In the realm of data analytics, AI models—especially machine learning and natural language processing—are driving innovation by enabling more precise predictions, automated insights extraction, and enhanced user experiences. The paper concludes with a discussion on the ethical implications of AI in information technologies, underscoring the importance of transparency, fairness, and responsible AI use. It also offers insights into future trends, emphasizing the potential of AI to further revolutionize the IT landscape by integrating with emerging technologies like quantum computing and IoT.
Keywords: artificial intelligence, information technology, automation, scalability
Procedia PDF Downloads 21
1230 Fuzzy Inference-Assisted Saliency-Aware Convolution Neural Networks for Multi-View Summarization
Authors: Tanveer Hussain, Khan Muhammad, Amin Ullah, Mi Young Lee, Sung Wook Baik
Abstract:
The Big Data generated by distributed vision sensors installed on a large scale in smart cities creates hurdles for its efficient and beneficial exploration for browsing, retrieval, and indexing. This paper presents a three-fold framework for effective video summarization of such data, providing a compact and representative format of Big Video Data. In the first fold, the framework acquires input video data from the installed cameras and collects clues such as the type and count of objects and the clarity of the view from a chunk of a pre-defined number of frames of each view. The decision on representative view selection for a particular interval is based on a fuzzy inference system, yielding a precise and human-like decision reinforced by the known clues, as part of the second fold. In the third fold, the framework forwards the selected view frames to the summary generation mechanism, which is supported by a saliency-aware convolutional neural network (CNN) model. The combination of fuzzy rules for view selection followed by a CNN architecture for saliency computation makes the multi-view video summarization (MVS) framework a suitable candidate for real-world practice in smart cities.
Keywords: big video data analysis, fuzzy logic, multi-view video summarization, saliency detection
Procedia PDF Downloads 193
1229 Crop Classification using Unmanned Aerial Vehicle Images
Authors: Iqra Yaseen
Abstract:
One of the well-known areas of computer science and engineering, image processing in the context of computer vision has been essential to automation. In remote sensing, medical science, and many other fields, it has made it easier to uncover previously undiscovered facts. Grading of diverse items is now possible because of neural network algorithms, categorization, and digital image processing. Its use in the classification of agricultural products, particularly in the grading of seeds or grains and their cultivars, is widely recognized. A grading and sorting system enables the preservation of time, consistency, and uniformity. Global population growth has led to an increase in demand for food staples, biofuel, and other agricultural products. To meet this demand, available resources must be used and managed more effectively. Image processing is growing rapidly in the field of agriculture. Many applications have been developed using this approach for crop identification and classification, land and disease detection, and for measuring other crop parameters. Vegetation localization is the basis for performing these tasks, as it identifies the areas where crops are present. The productivity of the agriculture industry can be increased via image processing based on Unmanned Aerial Vehicle (UAV) and satellite photography. In this paper we apply machine learning techniques such as Convolutional Neural Networks, deep learning, image processing, classification, and You Only Look Once (YOLO) to a UAV imaging dataset to divide the crops into distinct groups and choose the best way to use them.
Keywords: image processing, UAV, YOLO, CNN, deep learning, classification
Procedia PDF Downloads 115
1228 Odor-Color Association Stroop-Task and the Importance of an Odorant in an Odor-Imagery Task
Authors: Jonathan Ham, Christopher Koch
Abstract:
There are consistently observed associations between certain odors and colors, and there is an association between the ability to imagine vivid visual objects and the ability to imagine vivid odors. However, little has been done to investigate how the associations between odors and visual information affect visual processes. This study seeks to understand the relationship between odor imaging, color associations, and visual attention by utilizing a Stroop task based on common odor-color associations. The Stroop task was designed using three fruits with distinct odors that are associated with the color of the fruit: lime with green, strawberry with red, and lemon with yellow. Each possible word-color combination was presented in the experimental trials. When the word matched the associated color (lime written in green) it was considered congruent; if it did not, it was considered incongruent (lime written in red or yellow). In experiment I (n = 34) participants were asked both to imagine the odor of the fruit on the screen and to identify which fruit it was, and each word-color combination was presented 20 times (a total of 180 trials, with 60 congruent and 120 incongruent instances). Response time and error rate of the participant responses were recorded. There was no significant difference in either measure between the congruent and incongruent trials. In experiment II, participants (n = 18) followed the identical procedure as in the previous experiment with the addition of an odorant in the room. The odorant (orange) was not a fruit or color used in the experimental trials. With a fruit-based odorant in the room, the response times (measured in milliseconds) between congruent and incongruent trials were significantly different, with incongruent trials (M = 755.919, SD = 239.854) having significantly longer response times than congruent trials (M = 690.626, SD = 198.822), t(1, 17) = 4.154, p < 0.01. This suggests that odor imagery does affect visual attention to colors and the ability to inhibit odor-color associations; however, odor imagery is difficult and appears to be facilitated in the presence of a related odorant.
Keywords: odor-color associations, odor imagery, visual attention, inhibition
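For readers who want to see the form of the congruency comparison reported above, a minimal sketch of a paired t-test on per-participant mean response times is shown below; the numbers are simulated placeholders, not the study's data.

```python
# Minimal sketch of the paired comparison of congruent vs. incongruent response times;
# the values below are simulated placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
congruent_rt = rng.normal(690, 199, 18)      # mean RT (ms) per participant, congruent trials
incongruent_rt = rng.normal(756, 240, 18)    # mean RT (ms) per participant, incongruent trials

t_stat, p_val = stats.ttest_rel(incongruent_rt, congruent_rt)
print(f"t({len(congruent_rt) - 1}) = {t_stat:.3f}, p = {p_val:.4f}")
```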
Procedia PDF Downloads 180
1227 Ground Deformation Module for the New Laboratory Methods
Authors: O. Giorgishvili
Abstract:
One of the important characteristics for the calculation of foundations is the modulus of deformation (E0). The main goal of calculating building foundations for deformation is to keep base settlement, and differences in settlement, within limits that do not cause cracking or changes in design levels that would endanger the normal operation of buildings and their individual structures. As is known from the literature and from practical application, the modulus of deformation is determined by two basic methods: the laboratory method, a soil compression test (without lateral widening), and soil testing in field conditions. The deformation modulus determined by the field method is closer to the actual modulus of deformation of the soil, but the complexity and cost of the tests often prevent its use, so the ground modulus of deformation is usually determined by the compression method without lateral widening. In this regard, we introduce a new laboratory procedure for determining the ground modulus of deformation that allows lateral widening and therefore reflects the actual modulus of deformation more accurately, giving values closer to those determined by the field method. The tests and their results showed that the proposed method yields a ground deformation modulus closer to the results obtained in the field, and thus reflects the real behaviour of the foundation more accurately than the standard compression test.
Keywords: build, deformation modulus, foundations, ground, laboratory research
Procedia PDF Downloads 372
1226 Intelligent Chemistry Approach to Improvement of Oxygenates Analytical Method in Light Hydrocarbon by Multidimensional Gas Chromatography - FID and MS
Authors: Ahmed Aboforn
Abstract:
Butene-1 is an important raw material in polyethylene production; however, oxygenate impurities affect ethylene/butene-1 copolymers synthesized with titanium-magnesium-supported Ziegler-Natta catalysts. Petrochemical industries also face challenges from the poor quality of butene-1 and other C4-mix feedstocks, which translates into business impact and production losses. In addition, propylene product can be contaminated by oxygenate components, causing production losses and plant upsets in polypropylene process plants. Multidimensional gas chromatography (MDGC) is an innovative analytical methodology used to separate complex samples, such as mixtures of different functional groups (hydrocarbons and oxygenates) with similar retention factors, by running the eluent through two or more columns instead of the customary single column. This analytical study strives to enhance the quality of the oxygenates analytical method by monitoring oxygenate concentrations with an accurate and precise method that uses multidimensional GC supported by a backflush technique and a flame ionization detector, which provide high-performance separation of hydrocarbons and oxygenates, and by improving the minimum detection limit (MDL) to detect concentrations below 1.0 ppm. Different types of oxygenates (alcohols, aldehydes, ketones, esters and ethers) may be determined in other hydrocarbon streams such as C3 and C4-mix up to C12 mixtures, supported by a liquid-injection auto-sampler.
Keywords: analytical chemistry, gas chromatography, petrochemicals, oxygenates
Procedia PDF Downloads 86
1225 Instant Location Detection of Objects Moving at High Speed in C-OTDR Monitoring Systems
Authors: Andrey V. Timofeev
Abstract:
A practical and efficient approach is suggested for estimating the instantaneous bounds of high-speed objects in C-OTDR monitoring systems. For super-dynamic objects (trains, cars) it is difficult to obtain an adequate estimate of the instantaneous object localization because of estimation lag. In other words, reliable estimation of the coordinates of a monitored object requires time to collect observation data with the C-OTDR system, and only after the required sample volume has been collected can a final decision be issued. This is contrary to the requirements of many real applications. For example, in rail traffic management systems we need localization data for dynamic objects in real time. The way to solve this problem is to use a set of statistically independent parameters of C-OTDR signals to obtain the most reliable solution in real time. Parameters of this type can be called 'signaling parameters' (SPs). There are several SPs that carry information about the instantaneous localization of dynamic objects for each C-OTDR channel. The problem is that some of these parameters are very sensitive to the dynamics of seismoacoustic emission sources but are not stable, while, as a rule, an SP that is very stable becomes insensitive. This report describes a method for co-processing SPs that is designed to obtain the most effective dynamic object localization estimates within the C-OTDR monitoring system framework.
Keywords: C-OTDR-system, co-processing of signaling parameters, high-speed objects localization, multichannel monitoring systems
Procedia PDF Downloads 475
1224 Examining the Development of Complexity, Accuracy and Fluency in L2 Learners' Writing after L2 Instruction
Authors: Khaled Barkaoui
Abstract:
Research on second-language (L2) learning tends to focus on comparing students with different levels of proficiency at one point in time. However, to understand L2 development, we need more longitudinal research. In this study, we adopt a longitudinal approach to examine changes in three indicators of L2 ability, complexity, accuracy, and fluency (CAF), as reflected in the writing of L2 learners on different tasks before and after a period of L2 instruction. Each of 85 Chinese learners of English at three levels of English language proficiency responded to two writing tasks (independent and integrated) before and after nine months of English-language study in China. Each essay (N = 276) was analyzed in terms of numerous CAF indices using both computer coding and human rating: number of words written, number of errors per 100 words, ratings of error severity, global syntactic complexity (MLS), complexity by coordination (T/S), complexity by subordination (C/T), clausal complexity (MLC), phrasal complexity (NP density), syntactic variety, lexical density, lexical variation, lexical sophistication, and lexical bundles. Results were then compared statistically across tasks, L2 proficiency levels, and time. Overall, task type had significant effects on fluency, on some syntactic complexity indices (complexity by coordination, structural variety, clausal complexity, phrasal complexity), and on lexical density, sophistication, and bundles, but not on accuracy. L2 proficiency had significant effects on fluency, accuracy, and lexical variation, but not on syntactic complexity. Finally, fluency and frequency of errors, but not accuracy ratings, syntactic complexity indices (clausal complexity, global complexity, complexity by subordination, phrasal complexity, structural variety), or lexical complexity (lexical density, variation, and sophistication), exhibited significant changes after instruction, particularly for the independent task. We discuss the findings and their implications for assessment, instruction, and research on CAF in the context of L2 writing.
Keywords: second language writing, fluency, accuracy, complexity, longitudinal
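As a toy illustration of two of the simpler indices listed above (fluency as number of words written, and global syntactic complexity as mean length of sentence), a sketch is given below; the actual study used specialised coding tools and human rating for most indices, so this is only an approximation of the idea.

```python
# Toy sketch of two CAF indices: word count (fluency) and mean length of sentence (MLS).
# Real CAF coding in the study relied on specialised tools and human raters.
import re

def word_count_and_mls(essay: str):
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = re.findall(r"[A-Za-z']+", essay)
    mls = len(words) / len(sentences) if sentences else 0.0
    return len(words), mls

print(word_count_and_mls("The cat sat on the mat. It was warm, and it purred quietly."))
```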
Procedia PDF Downloads 156
1223 Phthalates Exposure in Children with Central Precocious Puberty (CPP) or Constitutional Delays in Growth
Authors: Yen-An Tsai, Ching-Ling Lin, Jia-Woei Hou, Mei-Lien Chen
Abstract:
Endocrine-disrupting chemicals (EDCs) adversely affect the endocrine system. Phthalates, also called phthalic acid esters (PAEs), are manmade chemicals used as stabilizing agents in personal care products such as perfumes, lotions, and cosmetics. The aim was to explore whether PAE exposure is associated with central precocious puberty (CPP) or constitutional delays in growth (CDGP). This case-control study included 48 females with CPP, 37 males with constitutional delays in growth, and 127 normal children, and was conducted from December 2011 to August 2014. All participants completed a structured questionnaire regarding socio-demographic characteristics, lifestyle, and secondary sexual characteristics. The analytical method was based on ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) with isotope dilution for the quantitative detection of several phthalate metabolites in human urine. In model 2, the risk of CPP was higher among children with mep, mnbp, and LMW levels above the 50th percentile than among those below the 50th percentile. In model 1, a higher CDGP risk was found only for mep, mnbp, and ΣPAEs, which indicates that high phthalate exposure may be associated with CDGP. In this case-control study, we found that PAE exposure was associated with central precocious puberty (CPP) and with constitutional delays in growth.
Keywords: phthalates, puberty, delays, growth
Procedia PDF Downloads 182
1222 Modelling of Exothermic Reactions during Carbon Fibre Manufacturing and Coupling to Surrounding Airflow
Authors: Musa Akdere, Gunnar Seide, Thomas Gries
Abstract:
Carbon fibres are fibrous materials with a carbon content of more than 90%. They combine excellent mechanical properties with a very low density, and thus carbon fibre reinforced plastics (CFRP) are very often used in lightweight design and construction. The precursor material is usually polyacrylonitrile (PAN) based and wet-spun. During the production of carbon fibre, the precursor has to be stabilized thermally to withstand the high temperatures of up to 1500 °C which occur during carbonization. Even though carbon fibre has been used in aerospace applications since the late 1970s, there is still no general method available to find the optimal production parameters, and the trial-and-error approach is most often the only resolution. To gain better insight into the process, the chemical reactions during stabilization have to be analyzed in detail. Therefore, a model of the chemical reactions (cyclization, dehydration, and oxidation) based on the research of Dunham and Edie has been developed. With the presented model, it is possible to perform a complete simulation of the fibre undergoing all zones of stabilization. The fibre bundle is modeled as several circular fibres with a layer of air in between. Two thermal mechanisms are considered to be the most important: the exothermic reactions inside the fibre and the convective heat transfer between the fibre and the air. The exothermic reactions inside the fibres are modeled as a heat source, and differential scanning calorimetry measurements were performed to estimate the heat of the reactions. To shorten the required simulation time, the number of fibres is reduced using similitude theory. Experiments were conducted on a pilot-scale stabilization oven to validate the simulated fibre temperatures during stabilization, and a new measuring method was developed to measure the fibre bundle temperature. The comparison of the results shows that the developed simulation model gives good approximations of the temperature profile of the fibre bundle during the stabilization process.
Keywords: carbon fibre, coupled simulation, exothermic reactions, fibre-air-interface
Procedia PDF Downloads 279
1221 Corruption, Institutional Quality and Economic Growth in Nigeria
Authors: Ogunlana Olarewaju Fatai, Kelani Fatai Adeshina
Abstract:
The interplay of corruption and institutional quality determines how effectively and efficiently an economy progresses. Sound institutional quality is a key requirement for economic stability. Institutional quality has in most cases been used interchangeably with governance, which has given room for proxies that legitimize governance indicators as measures of institutional quality. Poorly tailored institutional quality has a penalizing effect on corruption and economic growth, while defective institutional quality breeds corruption. Corruption is a hydra-headed phenomenon, as it manifests in different forms. The most celebrated definition of corruption is "the use or abuse of public office for private benefits or gains". It also denotes an arrangement between two mutual parties in the determination and allocation of state resources for pecuniary benefits, circumventing state efficiency. This study employed a Barro (1990)-type augmented model to analyze the nexus among corruption, institutional quality and economic growth in Nigeria using annual time series data spanning the period 1996-2019. Within the analytical framework of the Johansen cointegration technique, an Error Correction Mechanism (ECM) and Granger causality tests, findings revealed a long-run relationship between economic growth, corruption and the selected measures of institutional quality. The long-run results suggested that all the measures of institutional quality except voice & accountability and regulatory quality are positively disposed to economic growth. Moreover, the short-run estimation indicated a reconciliation of the divergent views on corruption, which point at "sand the wheel" and "grease the wheel" effects on growth. In addition, regulatory quality and the rule of law indicated a negative influence on economic growth in Nigeria, whereas government effectiveness and voice & accountability indicated a positive influence. The Granger causality test results suggested a one-way causality between GDP and corruption and also between corruption and institutional quality. Policy implications from this study point at checking corruption and streamlining the institutional quality framework for better and sustained economic development.
Keywords: institutional quality, corruption, economic growth, public policy
Procedia PDF Downloads 176
1220 Evaluation of Medication Errors in Outpatient Pharmacies: Electronic Prescription System vs. Paper System
Authors: Mera Ababneh, Sayer Al-Azzam, Karem Alzoubi, Abeer Rababa'h
Abstract:
Background: Medication errors are among the most common medical errors. Their occurrence results in patient mortality, morbidity, and additional healthcare costs, so continuous monitoring and detection are required. Objectives: The aim of this study was to compare medication errors in outpatient prescriptions in two different hospitals (paper system vs. electronic system). Methods: This was a cross-sectional observational study conducted in two major hospitals, King Abdullah University Hospital (KAUH) and Princess Bassma Teaching Hospital (PBTH), over a three-month period. During the study period, medication prescriptions and dispensing procedures were screened for medication errors in both participating centers by two trained pharmacists at each site. Results: In the hospital with electronic prescriptions, 2500 prescriptions were screened, in which 631 medication errors were detected; prescription errors accounted for 231 (36.6%) and dispensing errors for 400 (63.4%) of all errors. In the paper-based hospital, analysis of 2500 prescriptions revealed 3714 medication errors, of which 288 (7.8%) were prescription errors and 3426 (92.2%) were dispensing errors; a substantial number, 2496 (67.2%), were inadequately and/or inappropriately labeled. Conclusion: This study provides insight for healthcare policy makers, professionals, and administrators to invest in advanced technology systems, education, and epidemiological surveillance programs to minimize medication errors.
Keywords: medication errors, prescription errors, dispensing errors, electronic prescription, handwritten prescription
Procedia PDF Downloads 285
1219 Modern Seismic Design Approach for Buildings with Hysteretic Dampers
Authors: Vanessa A. Segovia, Sonia E. Ruiz
Abstract:
The use of energy dissipation systems for seismic applications has increased worldwide; thus it is necessary to develop practical and modern criteria for their optimal design. Here, a direct displacement-based seismic design approach for frame buildings with hysteretic energy dissipation systems (HEDS) is applied. The building is constituted by two individual structural systems: 1) a main elastic structural frame designed for service loads, and 2) a secondary system, corresponding to the HEDS, that controls the effects of lateral loads. The procedure involves controlling two design parameters: A) the stiffness ratio (α = K_frame/K_(total system)), and B) the strength ratio (γ = V_damper/V_(total system)). The proposed damage-controlled approach contributes to the design of a more sustainable and resilient building because the structural damage is concentrated in the HEDS. The reduction of the design displacement spectrum is done by means of a damping factor (recently published) for elastic structural systems with HEDS located in Mexico City. Two limit states are verified: serviceability and near collapse. Instead of the traditional trial-and-error approach, a procedure that allows the designer to establish the preliminary sizes of the structural elements of both systems is proposed. The design methodology is applied to an 8-story steel building with buckling-restrained braces located in soft soil of Mexico City. With the aim of choosing the optimal design parameters, a parametric study is developed considering different values of α and γ. The simplified methodology is intended for preliminary sizing, design, and evaluation of the effectiveness of HEDS, and it constitutes a modern and practical tool that enables the structural designer to select the best design parameters.
Keywords: damage-controlled buildings, direct displacement-based seismic design, optimal hysteretic energy dissipation systems, hysteretic dampers
Procedia PDF Downloads 486
1218 Destination Port Detection For Vessels: An Analytic Tool For Optimizing Port Authorities Resources
Authors: Lubna Eljabu, Mohammad Etemad, Stan Matwin
Abstract:
Port authorities face many challenges in congested ports when allocating their resources to provide a safe and secure loading/unloading procedure for cargo vessels. Selecting a destination port is the decision of a vessel master based on many factors such as weather, wavelength and changes of priorities. Having access to a tool that leverages AIS messages to monitor vessels' movements and accurately predict their next destination port promotes an effective resource allocation process for port authorities. In this research, we propose a method, namely Reference Route of Trajectory (RRoT), to assist port authorities in predicting inflow and outflow traffic in their local environment by monitoring Automatic Identification System (AIS) messages. Our RRoT method creates a reference route based on historical AIS messages and uses trajectory similarity measures to identify the destination of a vessel from its recent movement. We evaluated five different similarity measures: Discrete Fréchet Distance (DFD), Dynamic Time Warping (DTW), Partial Curve Mapping (PCM), Area between two curves (Area) and Curve Length (CL). Our experiments show that our method identifies the destination port with an accuracy of 98.97% and an F-measure of 99.08% using the Dynamic Time Warping (DTW) similarity measure.
Keywords: spatial temporal data mining, trajectory mining, trajectory similarity, resource optimization
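A simplified sketch of the core idea, matching a vessel's recent AIS track against stored reference routes with Dynamic Time Warping and predicting the port of the closest route, is shown below; the implementation, data layout and any port names are illustrative assumptions, not the authors' RRoT code.

```python
# Simplified sketch: predict the destination port whose reference route is most similar
# (by DTW) to the vessel's recent track. Data layout and port names are assumptions.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic Time Warping distance between two (n, 2) arrays of lat/lon points."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def predict_destination(recent_track: np.ndarray, reference_routes: dict) -> str:
    """reference_routes maps a port name to its (n, 2) reference trajectory."""
    return min(reference_routes, key=lambda port: dtw_distance(recent_track, reference_routes[port]))
```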
Procedia PDF Downloads 125
1217 The Challenge of Characterising Drought Risk in Data Scarce Regions: The Case of the South of Angola
Authors: Natalia Limones, Javier Marzo, Marcus Wijnen, Aleix Serrat-Capdevila
Abstract:
In this research we developed a structured approach for the detection of areas under the highest levels of drought risk that is suitable for data-scarce environments. The methodology is based on recent scientific outcomes and methods and can be easily adapted to different contexts in successive exercises. The research reviews the history of drought in the south of Angola and characterizes the hazard experienced in the episode starting in 2012, focusing on the meteorological and hydrological drought types. Only global open data from modeling or remote sensing were used for the description of the hydroclimatological variables, since there is almost no ground data in this part of the country. The study also portrays the socioeconomic vulnerabilities and the exposure to the phenomenon in the region in order to fully understand the risk. As a result, a map of the areas under the highest risk in the south of the country is produced, which is one of the main outputs of this work. It was also possible to confirm that the set of indicators used revealed different drought vulnerability profiles in the south of Angola and, as a result, several varieties of priority areas prone to distinctive impacts were recognized. The results demonstrated that most of the region experienced a severe multi-year meteorological drought that triggered an unprecedented exhaustion of the surface water resources, and that the majority of the socioeconomic impacts started soon after the identified onset of these processes.
Keywords: drought risk, exposure, hazard, vulnerability
Procedia PDF Downloads 196
1216 Cartel's Little Helpers: A Comparative Study of the Case Law Regarding the Facilitators of Collusion in Latin America Competition Law and Policy
Authors: Andres Calderon
Abstract:
In order to avoid detection and punishment, cartels have recruited the help of third parties to organize, execute and disguise the anticompetitive practices cartel members have agreed upon. These third parties may take the form of consultancy firms, guilds or professional advisors that do not perform an economic activity in the market where the collusion takes place. This paper looks into how national competition authorities and national legislators have dealt with the emergence of cartel facilitators in Latin America. Following the practice of other jurisdictions such as the United States (Toys R' Us, Apple), the European Union (AC Treuhand), the United Kingdom (Replica Kits, Hasbro) and Spain (Urban, Snap-On), some countries in Latin America (e.g. Argentina, Chile) have started to conduct investigations and find antitrust liability in cartel facilitators for helping others to violate national competition laws. Some countries (e.g. Peru and Colombia) have also amended their legislation to broaden the subjective scope of application in order to include cartel facilitators. The Latin American case is one of special relevance because public officials are often prone to promote or indulge agreements between competitors in sectors of political interest. A broad definition of cartel facilitator, consequently, could lead to the prosecution or punishment of public officials who may hinder the competitive process.
Keywords: anticompetitive practices, cartel, collusion, competition, facilitator, hub and spoke
Procedia PDF Downloads 169
1215 Encoded Fiber Optic Sensors for Simultaneous Multipoint Sensing
Authors: C. Babu Rao, Pandian Chelliah
Abstract:
Owing to their reliability, a number of fluorescence-spectrum-based fiber optic sensors have been developed for the detection and identification of hazardous chemicals such as explosives and narcotics. In high-security areas, such as airports, it is important to monitor multiple locations simultaneously. This calls for deployment of a portable sensor at each location. However, the selectivity and sensitivity of these techniques depend on the spectral resolution of the spectral analyzer: the better the resolution, the larger the repertoire of chemicals that can be detected. A portable unit will have limitations in meeting these requirements. Optical fibers can be employed to collect and transmit the spectral signal from the portable sensor head to a sensitive central spectral analyzer (CSA). For multipoint sensing, optical multiplexing of multiple sensor heads with the CSA has to be adopted. However, with multiplexing, while one sensor head is connected to the CSA, the rest may remain unconnected for the turn-around period, and the larger the number of sensor heads, the longer this turn-around time will be. To circumvent this limitation, we propose in this paper an optical encoding methodology that allows multiple portable sensor heads to be connected to a single CSA. Each portable sensor head is assigned a unique address, and the spectra of every chemical detected through this sensor head are encoded with its unique address and can be identified at the CSA end. The proposed methodology is demonstrated through a simulation using Matlab SIMULINK.
Keywords: optical encoding, fluorescence, multipoint sensing
Procedia PDF Downloads 712
1214 Sider Bee Honey: Antitumor Effect in Some Experimental Tumor Cell Lines
Authors: Aliaa M. Issa, Mahmoud N. ElRouby, Sahar A. S. Ahmad, Mahmoud M. El-Merzabani
Abstract:
Sider honey is a type of honey produced by bees feeding on the nectar of the Sider tree, Ziziphus spina-christi (L) Desf. Honey is an effective agent for preventing, inhibiting and treating the growth of human and animal cancer cell lines in vitro and in vivo. The aim of the present study was to evaluate the impact of different dilutions of crude Sider honey and different durations of exposure on the growth of six tumor cell lines (human cervical cancer cell line, HeLa; human hepatocellular carcinoma cell line, HepG-2; human larynx carcinoma cell line, Hep-2; brain tumor cell line, U251), as well as one animal cancer cell line (Ehrlich ascites carcinoma cell line, EAC) and one normal cell line, Homo sapiens, human, (WISH) CCL-25. Different concentrations of, and treatment durations with, Sider honey were tested on the growth of these cancer cell lines. Histopathological changes in the tumor masses, animal survival, and apoptosis and necrosis of the cancer cell lines (using flow cytometry) were evaluated. Sider honey was administered either into the tumor mass itself by intratumoral injection or via drinking water. A one-way ANOVA test was used for the analysis of the means ± standard error of the optical density obtained from the ELISA reader and flow cytometry. The study revealed that different concentrations of Sider honey affected the growth patterns and histopathological changes of all the studied cancer cell lines, depending on the nature of the cell line and the honey concentration used. The relative animal survival percentage (of animals bearing Ehrlich ascites carcinoma, EAC cells) increased proportionally with the honey concentrations used. The study of apoptosis and necrosis using flow cytometry corroborated the viability results. In conclusion, Sider honey was effective as an antitumor agent at the concentrations used.
Keywords: antitumor, honey, sider, tumor cell lines
Procedia PDF Downloads 540
1213 I Don't Want to Have to Wait: A Study Into the Origins of Rule Violations at Rail Pedestrian Level Crossings
Authors: James Freeman, Andry Rakotonirainy
Abstract:
Train-pedestrian collisions are common and are the most likely to result in severe injuries and fatalities when compared to other types of rail crossing accidents. However, there is limited research focused on understanding the reasons why some pedestrians break level crossing rules, which limits the development of effective countermeasures. As a result, this study undertook a deeper exploration of the origins of risky pedestrian behaviour through structured interviews. A total of 40 pedestrians who admitted to either intentionally breaking crossing rules or making crossing errors participated in an in-depth telephone interview. Qualitative analysis was undertaken via thematic analysis, which revealed that participants were more likely to report deliberately breaking rules (rather than making errors), particularly after the train had passed the crossing as compared to before it arrived. Predominant reasons for such behaviours were identified to be: calculated risk taking, impatience, poor knowledge of the rules and a low likelihood of detection. The findings have direct implications for the development of effective countermeasures to improve crossing safety (and manage risk), such as increasing surveillance and transit officer presence, as well as installing appropriate barriers that either deter or incapacitate pedestrians from violating crossing rules. This paper further outlines the study findings with regard to the development of countermeasures and provides direction for future research efforts in this area.
Keywords: crossings, mistakes, risk, violations
Procedia PDF Downloads 416
1212 Biosensor Design through Molecular Dynamics Simulation
Authors: Wenjun Zhang, Yunqing Du, Steven W. Cranford, Ming L. Wang
Abstract:
The beginning of the 21st century has witnessed new advancements in the design and use of new materials for biosensing applications, from nano to macro, protein to tissue. Traditional analytical methods lack a complete toolset to describe the complexities introduced by living systems, pathological relations, discrete hierarchical materials, cross-phase interactions, and structure-property dependencies. Materiomics, via systematic molecular dynamics (MD) simulation, can provide structure-process-property relations by using a materials science approach linking mechanisms across scales, and enables oriented biosensor design. With this approach, DNA biosensors can be utilized to detect disease biomarkers present in individuals' breath, such as acetone for diabetes. Our wireless sensor array based on single-stranded DNA (ssDNA)-decorated single-walled carbon nanotubes (SWNT) has successfully detected trace amounts of various chemicals in vapor, differentiated by pattern recognition. Here, we present how MD simulation can revolutionize the design and screening of DNA aptamers for targeting biomarkers related to oral diseases and oral health monitoring. It demonstrates great potential for building a library of DNA sequences for reliable detection of several biomarkers of one specific disease, and also provides a new methodology for creating, designing, and applying biosensors.
Keywords: biosensor, DNA, biomarker, molecular dynamics simulation
Procedia PDF Downloads 470
1211 Dynamic Modeling of Advanced Wastewater Treatment Plants Using BioWin
Authors: Komal Rathore, Aydin Sunol, Gita Iranipour, Luke Mulford
Abstract:
Advanced wastewater treatment plants have complex biological kinetics, time-variant influent flow rates and long processing times. Due to these factors, the modeling and operational control of advanced wastewater treatment plants become complicated. However, development of a robust model for advanced wastewater treatment plants has become necessary in order to increase the efficiency of the plants, reduce energy costs and meet the discharge limits set by the government. A dynamic model was designed using BioWin, the platform software from EnviroSim (Canada), for several wastewater treatment plants in Hillsborough County, Florida. Proper control strategies for various parameters such as mixed liquor suspended solids, recycle activated sludge and waste activated sludge were developed for the models to match plant performance. The models were tuned using both influent and effluent data from the plants and their laboratories. The plant SCADA was used to predict the influent wastewater rates and concentration profiles as a function of time. The kinetic parameters were tuned based on sensitivity analysis and trial-and-error methods. The dynamic models were validated using experimental data for influent and effluent parameters, and dissolved oxygen measurements were taken to validate the model by coupling them with Computational Fluid Dynamics (CFD) models. The BioWin models were able to exactly mimic plant performance and predict effluent behavior for extended periods. The models are useful for plant engineers and operators, as they can take decisions beforehand by predicting plant performance with the BioWin models. One of the important findings from the model was the effect of recycle and wastage ratios on the mixed liquor suspended solids. The model was also useful in determining the significant kinetic parameters for biological wastewater treatment systems.
Keywords: BioWin, kinetic modeling, flowsheet simulation, dynamic modeling
Procedia PDF Downloads 157
1210 Automated Digital Mammogram Segmentation Using Dispersed Region Growing and Pectoral Muscle Sliding Window Algorithm
Authors: Ayush Shrivastava, Arpit Chaudhary, Devang Kulshreshtha, Vibhav Prakash Singh, Rajeev Srivastava
Abstract:
Early diagnosis of breast cancer can improve the survival rate by detecting cancer at an early stage. Breast region segmentation is an essential step in the analysis of digital mammograms, and accurate image segmentation leads to better detection of cancer. It aims at separating the Region of Interest (ROI) from the rest of the image. The procedure begins with the removal of labels, annotations and tags from the mammographic image using a morphological opening method. The Pectoral Muscle Sliding Window Algorithm (PMSWA) is then used to remove the pectoral muscle from the mammograms; this is necessary because the intensity values of the pectoral muscle are similar to those of the ROI, which makes it difficult to separate. After removing the pectoral muscle, the Dispersed Region Growing Algorithm (DRGA) is used for segmentation of the mammogram; it disperses seeds in different regions instead of using a single bright region. To demonstrate the validity of our segmentation method, 322 mammographic images from the Mammographic Image Analysis Society (MIAS) database are used. The dataset contains medio-lateral oblique (MLO) views of mammograms. Experimental results on the MIAS dataset show the effectiveness of our proposed method.
Keywords: CAD, dispersed region growing algorithm (DRGA), image segmentation, mammography, pectoral muscle sliding window algorithm (PMSWA)
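A toy sketch of seeded region growing on a grayscale image is given below to illustrate the general principle behind dispersed-seed growing; it is not the paper's DRGA/PMSWA implementation, and the tolerance-based membership rule is an assumption.

```python
# Toy sketch of seeded region growing on a grayscale mammogram array; illustrates the
# principle behind dispersed-seed growing, not the paper's DRGA/PMSWA implementation.
import numpy as np
from collections import deque

def region_grow(img: np.ndarray, seeds, tol: float = 10.0) -> np.ndarray:
    """Grow regions from seed pixels; a 4-neighbour joins if its intensity lies within
    `tol` of the originating seed's intensity. Returns a boolean mask."""
    mask = np.zeros(img.shape, dtype=bool)
    for seed in seeds:
        seed_val = float(img[seed])
        queue = deque([seed])
        while queue:
            y, x = queue.popleft()
            if mask[y, x]:
                continue
            mask[y, x] = True
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and not mask[ny, nx]
                        and abs(float(img[ny, nx]) - seed_val) <= tol):
                    queue.append((ny, nx))
    return mask
```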
Procedia PDF Downloads 317
1209 Fluorescence Quenching as an Efficient Tool for Sensing Application: Study on the Fluorescence Quenching of Naphthalimide Dye by Graphene Oxide
Authors: Sanaz Seraj, Shohre Rouhani
Abstract:
Recently, graphene has gained much attention because of its unique optical, mechanical, electrical, and thermal properties. Graphene has been used as a key material in technological applications in various areas such as sensors, drug delivery, supercapacitors, transparent conductors, and solar cells. It has a superior quenching efficiency for various fluorophores. Based on these unique properties, optical sensors with graphene materials as the energy acceptors have demonstrated great success in recent years. During quenching, the emission of a fluorophore is perturbed by a quencher, which can be a substrate or biomolecule, and owing to this phenomenon, fluorophore-quencher pairs have been used for selective detection of target molecules. Among fluorescent dyes, 1,8-naphthalimide is well known as a typical intramolecular charge transfer (ICT) and photo-induced charge transfer (PET) fluorophore, with strong absorption and emission in the visible region, high photostability, and a large Stokes shift. Derivatives of 1,8-naphthalimide have found applications in several areas, especially fluorescence sensors. Herein, the fluorescence quenching by graphene oxide has been studied on a naphthalimide dye as a fluorescent probe model. The quenching ability of graphene oxide on the naphthalimide dye was studied by UV-VIS and fluorescence spectroscopy. This study showed that graphene is an efficient quencher for fluorescent dyes and can therefore be used as a suitable candidate sensing platform. To the best of our knowledge, studies on the quenching and absorption of naphthalimide dyes by graphene oxide are rare.
Keywords: fluorescence, graphene oxide, naphthalimide dye, quenching
Procedia PDF Downloads 593
1208 Profiles of Physical Fitness and Enjoyment among Children: Associations with Sport Participation
Authors: Norjali Wazir M. R. W., Pion P., Mostaert M., De Meester A., Lenoir M., Bardid F.
Abstract:
Background and study aim: Most people assume that someone will perform well at something they like. A tool evaluating how much an individual likes an activity can therefore provide guidance for talent detection and for keeping youngsters engaged in what they like as a recreational sport. The purpose of this study was to identify the relationship between physical performance and enjoyment. Material and methods: In this cross-sectional study, 558 pupils aged between 8 and 11 years were tested using a test battery containing 7 physical performance tests (I Do) and a pictorial scale containing 7 pictures (I Like) referring to the physical performance tests. Pearson correlations were computed to investigate the relation between actual performance and enjoyment. Results: Moderate significant correlations between each of the respective I Do and I Like components were found. The correlation between the endurance items was higher than for the other six characteristics. Rerunning the analysis for age and sex groups separately resulted in only one significant correlation across all age groups, namely between the evaluations of cardiovascular endurance. Conclusions: Information on enjoyment appears to be a useful and cost-effective addition to current multidimensional test batteries in sport. By providing a clear picture of the activities a young child or athlete likes or dislikes, dropout can be reduced if a child starts his or her 'career' in a sport that alludes to skills or tasks he or she likes. This enjoyment will increase intrinsic motivation, which is beneficial for sustained sports participation as well as for avoiding dropout in promising young athletes.
Keywords: I Do, I Like, physical performance, enjoyment
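As a minimal illustration of the correlation step, relating one I Do test result to the corresponding I Like rating might look like the sketch below; the values are made up and scipy's pearsonr is used as a stand-in for whatever statistics package the study used.

```python
# Minimal sketch of correlating an "I Do" test score with the matching "I Like" rating;
# the values are fabricated for illustration only.
import numpy as np
from scipy.stats import pearsonr

endurance_score = np.array([42, 55, 38, 61, 47, 52, 35, 58])   # "I Do" endurance result
endurance_liking = np.array([3, 5, 2, 5, 4, 4, 2, 5])          # "I Like" rating, same item

r, p = pearsonr(endurance_score, endurance_liking)
print(f"r = {r:.2f}, p = {p:.3f}")
```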
Procedia PDF Downloads 154
1207 Photo Electrical Response in Graphene Based Resistive Sensor
Authors: H. C. Woo, F. Bouanis, C. S. Cojocaur
Abstract:
Graphene, which consists of a single layer of carbon atoms in a honeycomb lattice, is an interesting potential optoelectronic material because of its high carrier mobility, zero bandgap, and electron-hole symmetry. Graphene can absorb light and convert it into a photocurrent over a wide range of the electromagnetic spectrum, from the ultraviolet to the visible and infrared regimes. Over the last several years, a variety of graphene-based photodetectors have been reported, such as graphene transistors, graphene-semiconductor heterojunction photodetectors, and graphene-based bolometers. Several physical mechanisms enabling photodetection have also been reported: the photovoltaic effect, the photo-thermoelectric effect, the bolometric effect, the photogating effect, and so on. In this work, we report a simple approach for the realization of graphene-based resistive photodetection devices and the measurement of their photoelectrical response. The graphene was synthesized directly on the glass substrate by a novel growth method patented in our lab. The metal electrodes were then deposited by thermal evaporation, with an electrode length and width of 1.5 mm and 300 μm respectively, using Co, to fabricate a simple graphene-based resistive photosensor. The measurements show that the graphene resistive devices exhibit a photoresponse to the illumination of visible light. The observed resistance response was reproducible and similar after many cycles of on and off operations. This photoelectrical response may be attributed not only to the direct photocurrent process but also to the desorption of oxygen. Our work shows that simple graphene resistive devices have potential in photodetection applications.
Keywords: graphene, resistive sensor, optoelectronics, photoresponse
Procedia PDF Downloads 287
1206 Comparison of the Positive and Indeterminate Rates of QuantiFERON-TB Gold In-Tube and T-SPOT.TB According to Age-group
Authors: Kina Kim
Abstract:
Background: There are two types of interferon-gamma release assays (IGRAs) in use for the detection of latent tuberculosis infection (LTBI): QuantiFERON-TB Gold In-Tube (QFT-GIT) and T-SPOT.TB. There are some reports that IGRA results are affected by the patient's age. This study aims to compare the results of both IGRA tests across age groups. Methods: We reviewed 54,882 samples referred to an independent reference laboratory (Seegene Medical Foundation, Seoul, Korea) for the diagnosis of LTBI from January 1, 2021, to December 31, 2021. This retrospective study enrolled 955 patients tested using QFT-GIT and 53,927 patients tested using T-SPOT.TB. The results of both IGRAs were divided into three age groups (0-9, 10-17, and ≥18 years old). The positive and indeterminate rates of QFT-GIT and T-SPOT.TB were compared, and the differences in positive and indeterminate rates by age group were also evaluated. Results: The positive rate of QFT-GIT was 20.1% (192/955) and that of T-SPOT.TB was 8.7% (4704/53927) in all patients. The positive rates of QFT-GIT in individuals aged 0-9, 10-17, and over 18 years were 15.4%, 13.3%, and 22.0%, respectively; the positive rates of T-SPOT.TB were 8.9%, 2.0% and 8.8% in these age groups, respectively. The overall prevalence of indeterminate results was 2.1% (20/955) for QFT-GIT and 0.5% (270/53927) for T-SPOT.TB. The indeterminate rates of QFT-GIT in individuals aged 0-9, 10-17, and over 18 years were 0.4%, 6.7%, and 2.6%, respectively; the indeterminate rates of T-SPOT.TB were 0.5%, 0.7% and 0.5% in these age groups, respectively. Conclusion: Our findings suggest that T-SPOT.TB has a lower rate of positive results in all patients and a lower rate of indeterminate results than QFT-GIT. The highest positive rate was found in the over-18 group for QFT-GIT, but the positive rates of T-SPOT.TB did not differ significantly among age groups. QFT-GIT showed variable and higher indeterminate rates across age groups, whereas T-SPOT.TB showed lower rates in all age groups (<1%).
Keywords: LTBI, IGRA, QFT-GIT, T-SPOT.TB
Procedia PDF Downloads 125
1205 A Micro-Scale of Electromechanical System Micro-Sensor Resonator Based on UNO-Microcontroller for Low Magnetic Field Detection
Authors: Waddah Abdelbagi Talha, Mohammed Abdullah Elmaleeh, John Ojur Dennis
Abstract:
This paper focuses on the simulation and implementation of a resonator micro-sensor for low magnetic field sensing based on a U-shaped cantilever in a piezoresistive configuration, which works on the basis of the Lorentz force. The resonance frequency is an important parameter: it marks the highest response and sensitivity in the frequency response of any vibrating micro-electromechanical system (MEMS) device, and it is important for determining the direction of the detected magnetic field. The deflection of the cantilever is considered for the vibration mode at different frequencies in the range of 0 Hz to 7000 Hz for the purpose of observing the frequency response. A simple electronic circuit based on polysilicon piezoresistors in a Wheatstone bridge configuration is used to transduce the response of the cantilever into electrical measurements at various voltages. An Arduino UNO microcontroller program and the PROTEUS electronics software are used to analyze the output signals from the sensor. The highest output voltage amplitude, of about 4.7 mV, is observed at about 3 kHz in the frequency domain, indicating the highest sensitivity, which can be called the resonant sensitivity. Based on the resonant frequency value, the mode of vibration is determined (up-down vibration), and based on that, the vector of the magnetic field is also determined.
Keywords: resonant frequency, sensitivity, Wheatstone bridge, UNO-microcontroller
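A small post-processing sketch of how the resonant frequency could be located from the swept-frequency bridge output is given below; the file name, sweep grid and array contents are assumptions for illustration, not the paper's data or code.

```python
# Hypothetical post-processing step: locate the resonant frequency as the peak of the
# Wheatstone bridge output amplitude over the swept drive frequencies (data assumed).
import numpy as np

freqs_hz = np.linspace(0, 7000, 141)                  # swept drive frequencies (Hz), assumed grid
amplitude_mv = np.loadtxt("bridge_output_mv.txt")     # bridge output (mV), one value per frequency

peak = int(np.argmax(amplitude_mv))
print(f"resonant frequency ~ {freqs_hz[peak]:.0f} Hz, peak output ~ {amplitude_mv[peak]:.2f} mV")
```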
Procedia PDF Downloads 130
1204 Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence
Authors: Getaneh Berie Tarekegn
Abstract:
A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, numerical results proved that, in comparison to traditional methods, the proposed SRCLoc method can significantly improve positioning performance and reduce radio map construction costs.
Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
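A rough sketch of the fingerprint feature-extraction step with t-SNE is shown below; it uses scikit-learn as a stand-in, and the array shape and file name are assumptions rather than the authors' S-DCGAN pipeline.

```python
# Rough sketch of t-SNE feature extraction on hybrid WLAN/LTE fingerprints; scikit-learn
# is used as a stand-in, and the file name and array layout are assumptions.
import numpy as np
from sklearn.manifold import TSNE

# rows = reference points, columns = RSSI values from WLAN APs and LTE cells
fingerprints = np.load("hybrid_wlan_lte_rssi.npy")

tsne = TSNE(n_components=2, perplexity=30, random_state=0)
features = tsne.fit_transform(fingerprints)           # low-dimensional fingerprint features
print(features.shape)
```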
Procedia PDF Downloads 76