Search results for: measurement accuracy
2001 A Review of Protocols and Guidelines Addressing the Exposure of Occupants to Electromagnetic Field (EMF) Radiation in Buildings
Authors: Shabnam Monadizadeh, Charles Kibert, Jiaxuan Li, Janghoon Woo, Ashish Asutosh, Samira Roostaei, Maryam Kouhirostami
Abstract:
A significant share of the technology that has emerged over the past several decades produces electromagnetic field (EMF) radiation. Communications devices, household appliances, industrial equipment, and medical devices all produce EMF radiation with a variety of frequencies, strengths, and ranges. Some EMF radiation, such as Extremely Low Frequency (ELF), Radio Frequency (RF), and the ionizing range, has been shown to have harmful effects on human health. Depending on the frequency and strength of the radiation, EMF radiation can have health effects at the cellular level as well as at the brain, nervous, and cardiovascular levels. Health authorities have enacted regulations locally and globally to set critical values to limit the adverse effects of EMF radiation. By introducing a more comprehensive field of EMF radiation study and practice, architects and designers can design for a safer electromagnetic (EM) indoor environment and, as building and construction specialists, will be able to monitor and reduce EM radiation. This paper identifies the nature of EMF radiation in the built environment, the various EMF radiation sources, and their human health effects. It addresses European and US regulations for EMF radiation in buildings and provides a preliminary action plan. The challenges of developing measurement protocols for the various EMF radiation frequency ranges and determining the effects of EMF radiation on building occupants are discussed. This paper argues that future research should develop a mature method for measuring EMF radiation in building environments and linking these measurements to occupant health impacts, in order to provide adequate safeguards for human occupants of buildings.
Keywords: biological affection, electromagnetic field, building regulation, human health, healthy building, clean construction
Procedia PDF Downloads 187
2000 Biopsy or Biomarkers: Which Is the Sample of Choice in Assessment of Liver Fibrosis?
Authors: S. H. Atef, N. H. Mahmoud, S. Abdrahman, A. Fattoh
Abstract:
Background: The aim of the study is to assess the diagnostic value of fibrotest and hyaluronic acid in discriminating between insignificant and significant fibrosis, and to find out whether these parameters could replace liver biopsy, which is currently used for selection of chronic hepatitis C patients eligible for antiviral therapy. Study design: This study was conducted on 52 patients with HCV RNA detected by polymerase chain reaction (PCR) who had undergone liver biopsy and were attending the internal medicine clinic at Ain Shams University Hospital. Liver fibrosis was evaluated according to the METAVIR scoring system on a scale of F0 to F4. The biochemical markers assessed were: alpha-2 macroglobulin (α2-MG), apolipoprotein A1 (Apo-A1), haptoglobin, gamma-glutamyl transferase (GGT), total bilirubin (TB), and hyaluronic acid (HA). The fibrotest score was computed after adjusting for age and gender. Predictive values and ROC curves were used to assess the accuracy of fibrotest and HA results. Results: For fibrotest, the observed area under the curve for the discrimination between minimal or no fibrosis (F0-F1) and significant fibrosis (F2-F4) was 0.6736 at a cutoff value of 0.19, with a sensitivity of 84.2% and a specificity of 85.7%. For HA, the sensitivity was 89.5%, the specificity was 85.7%, and the area under the curve was 0.540 at the best cutoff value of 71 mg/dL. Combined use of both parameters, HA at 71 mg/dL with a fibrotest score of 0.22, gives a sensitivity of 89.5%, a specificity of 100%, and an efficacy of 92.3% (AUC 0.895). Conclusion: The use of both the fibrotest score and HA could serve as an alternative to biopsy in most patients with chronic hepatitis C, taking into consideration some limitations of the proposed markers in evaluating liver fibrosis.
Keywords: fibrotest, liver fibrosis, HCV RNA, biochemical markers
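As an illustration of the diagnostic statistics reported in the abstract above (not the authors' code, and using invented example data), a minimal sketch of sensitivity, specificity, and efficacy (overall accuracy) at a fixed cutoff:

```python
def diagnostic_stats(scores, truth, cutoff):
    """Sensitivity, specificity and efficacy (overall accuracy) of a marker,
    labelling a sample positive when its score >= cutoff."""
    pred = [s >= cutoff for s in scores]
    tp = sum(p and t for p, t in zip(pred, truth))
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    fp = sum(p and (not t) for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    efficacy = (tp + tn) / len(truth)
    return sensitivity, specificity, efficacy
```

Sweeping the cutoff and plotting sensitivity against (1 − specificity) yields the ROC curve whose area the abstract reports.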
Procedia PDF Downloads 289
1999 Efficiency of Grover’s Search Algorithm Implemented on Open Quantum System in the Presence of Drive-Induced Dissipation
Authors: Nilanjana Chanda, Rangeet Bhattacharyya
Abstract:
Grover’s search algorithm is the fastest possible quantum mechanical algorithm to search for a certain element in an unstructured set of N data items. The algorithm can determine the desired result in only O(√N) steps. It was demonstrated theoretically and experimentally on two-qubit systems long ago. In this work, we investigate the fidelity of Grover’s search algorithm by implementing it on an open quantum system. In particular, we study with what accuracy one can estimate that the algorithm would deliver the searched state. In reality, every system has some influence on its environment. We include the environmental effects on the system dynamics by using a recently reported fluctuation-regulated quantum master equation (FRQME). We consider that the environment experiences thermal fluctuations, which leave their signature in the second-order term of the master equation through their appearance as a regulator. The FRQME indicates that in addition to the regular relaxation due to system-environment coupling, the applied drive also causes dissipation in the system dynamics. As a result, the fidelity is found to depend on both the drive-induced dissipative terms and the relaxation terms, and we find that there exists a competition between them, leading to an optimum drive amplitude for which the fidelity becomes maximum. For efficient implementation of the search algorithm, precise knowledge of this optimum drive amplitude is essential.
Keywords: dissipation, fidelity, quantum master equation, relaxation, system-environment coupling
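For reference, the O(√N) step count can be checked with a small classical simulation of the ideal (closed-system) Grover iteration; this textbook sketch is unrelated to the open-system FRQME treatment in the paper:

```python
import math

def grover(n_items, marked):
    """Classical statevector simulation of Grover's search over n_items entries.
    Returns the optimal iteration count and the success probability."""
    amp = [1.0 / math.sqrt(n_items)] * n_items
    k = math.floor((math.pi / 4) * math.sqrt(n_items))  # optimal O(sqrt(N)) count
    for _ in range(k):
        amp[marked] *= -1.0                   # oracle: flip the marked amplitude
        mean = sum(amp) / n_items
        amp = [2.0 * mean - a for a in amp]   # diffusion: inversion about the mean
    return k, amp[marked] ** 2
```

For N = 16 the optimal count is 3 iterations, with a success probability above 96% in the noiseless case; drive-induced dissipation of the kind studied above degrades this figure.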
Procedia PDF Downloads 108
1998 RV-YOLOX: Object Detection on Inland Waterways Based on Optimized YOLOX Through Fusion of Vision and 3+1D Millimeter Wave Radar
Authors: Zixian Zhang, Shanliang Yao, Zile Huang, Zhaodong Wu, Xiaohui Zhu, Yong Yue, Jieming Ma
Abstract:
Unmanned Surface Vehicles (USVs) are valuable due to their ability to perform dangerous and time-consuming tasks on the water. Object detection tasks are significant in these applications. However, inherent challenges, such as the complex distribution of obstacles, reflections from shore structures, water surface fog, etc., hinder the object detection performance of USVs. To address these problems, this paper provides a fusion method for USVs to effectively detect objects in the inland surface environment, utilizing vision sensors and 3+1D millimeter-wave (MMW) radar. MMW radar is complementary to vision sensors, providing robust environmental information. The radar 3D point cloud is transformed into a 2D radar pseudo-image by utilizing a point transformer, unifying the radar and vision information formats. We propose a multi-source object detection network (RV-YOLOX) based on radar-vision fusion for the inland waterways environment. The performance is evaluated on our self-recorded waterways dataset. Compared with the YOLOX network, our fusion network significantly improves detection accuracy, especially for objects under poor lighting conditions.
Keywords: inland waterways, YOLO, sensor fusion, self-attention
Procedia PDF Downloads 134
1997 Development of Lipid Architectonics for Improving Efficacy and Ameliorating the Oral Bioavailability of Elvitegravir
Authors: Bushra Nabi, Saleha Rehman, Sanjula Baboota, Javed Ali
Abstract:
Aim: The objective of the research undertaken is analytical method validation (an HPLC method) for the anti-HIV drug Elvitegravir (EVG), together with forced degradation studies of the drug under different stress conditions to determine its stability. This is envisaged in order to determine the most suitable technique for drug estimation, which would be employed in further research. Furthermore, comparative pharmacokinetic profiles of the drug from the lipid architectonics and a drug suspension would be obtained post oral administration. Method: Lipid architectonics (LA) of EVG were formulated using the probe sonication technique and optimized using QbD (Box-Behnken design). For the estimation of the drug during further analysis, the HPLC method was validated on the parameters of linearity, precision, accuracy, and robustness, and the limit of detection (LOD) and limit of quantification (LOQ) were determined. Furthermore, HPLC quantification of forced degradation studies was carried out under different stress conditions (acid-induced, base-induced, oxidative, photolytic, and thermal). For the pharmacokinetic (PK) study, albino Wistar rats weighing between 200-250 g were used. Different formulations were given by the oral route, and blood was collected at designated time intervals. A plasma concentration-time profile was plotted, from which the following parameters were determined.
Keywords: AIDS, Elvitegravir, HPLC, nanostructured lipid carriers, pharmacokinetics
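Typical non-compartmental parameters derived from such a plasma concentration-time profile (Cmax, Tmax, and AUC by the linear trapezoidal rule) reduce to short calculations; a sketch with invented data, not the study's measurements:

```python
def pk_parameters(times, conc):
    """Cmax, Tmax and AUC(0-t) from a plasma concentration-time profile,
    using the linear trapezoidal rule for the AUC."""
    cmax = max(conc)
    tmax = times[conc.index(cmax)]
    auc = sum((conc[i] + conc[i + 1]) / 2.0 * (times[i + 1] - times[i])
              for i in range(len(times) - 1))
    return cmax, tmax, auc
```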
Procedia PDF Downloads 141
1996 Generalized Vortex Lattice Method for Predicting Characteristics of Wings with Flap and Aileron Deflection
Authors: Mondher Yahyaoui
Abstract:
A generalized vortex lattice method for complex lifting surfaces with flap and aileron deflection is formulated. The method is not restricted by the linearized theory assumption and accounts for all standard geometric lifting surface parameters: camber, taper, sweep, washout, and dihedral, in addition to flap and aileron deflection. Thickness is not accounted for, since the physical lifting body is replaced by a lattice of panels located on the mean camber surface. This panel lattice setup and the treatment of different wake geometries are what distinguish the present work from the overwhelming majority of previous solutions based on the vortex lattice method. A MATLAB code implementing the proposed formulation is developed and validated by comparing our results to existing experimental and numerical ones, and good agreement is demonstrated. It is then used to study the accuracy of the widely used classical vortex lattice method. It is shown that the classical approach gives good agreement in the clean configuration but is off by as much as 30% when a flap or aileron deflection of 30° is imposed. This discrepancy is mainly due to the linearized theory assumption associated with the conventional method. A comparison of the effect of four different wake geometries on the values of the aerodynamic coefficients was also carried out, and it was found that the choice of the wake shape had very little effect on the results.
Keywords: aileron deflection, camber-surface-bound vortices, classical VLM, generalized VLM, flap deflection
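The basic building block of any vortex lattice method is the Biot-Savart induced velocity of a straight vortex segment; a hedged Python sketch of that formula follows (the paper's own implementation is in MATLAB and is not reproduced here):

```python
import math

def segment_induced_velocity(a, b, p, gamma):
    """Velocity induced at point p by a straight vortex segment a->b of
    circulation gamma (Biot-Savart law, the core VLM building block)."""
    r1 = [pi - ai for pi, ai in zip(p, a)]
    r2 = [pi - bi for pi, bi in zip(p, b)]
    r0 = [bi - ai for bi, ai in zip(b, a)]
    cross = [r1[1] * r2[2] - r1[2] * r2[1],
             r1[2] * r2[0] - r1[0] * r2[2],
             r1[0] * r2[1] - r1[1] * r2[0]]
    c2 = sum(c * c for c in cross)
    n1 = math.sqrt(sum(x * x for x in r1))
    n2 = math.sqrt(sum(x * x for x in r2))
    dot = sum(r0i * (x1 / n1 - x2 / n2) for r0i, x1, x2 in zip(r0, r1, r2))
    k = gamma / (4.0 * math.pi * c2) * dot
    return [k * c for c in cross]
```

For a very long segment the result approaches the 2D line-vortex value Γ/(2πh) at distance h, a convenient sanity check.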
Procedia PDF Downloads 437
1995 Enhanced Acquisition Time of a Quantum Holography Scheme within a Nonlinear Interferometer
Authors: Sergio Tovar-Pérez, Sebastian Töpfer, Markus Gräfe
Abstract:
The work proposes a technique that decreases the detection acquisition time of quantum holography schemes down to one-third, which opens up the possibility of imaging moving objects. Since its invention, quantum holography with undetected photon schemes has gained interest in the scientific community. This is mainly due to its ability to tailor the detected wavelengths according to the needs of the scheme implementation. While this wavelength flexibility grants the scheme a wide range of possible applications, an important matter had yet to be addressed. Since the scheme uses digital phase-shifting techniques to retrieve the information of the object out of the interference pattern, it is necessary to acquire a set of at least four images of the interference pattern with well-defined phase steps to recover the full object information. Hence, the imaging method requires larger acquisition times to produce well-resolved images. As a consequence, the measurement of moving objects remained out of the reach of the imaging scheme. This work presents the use and implementation of a spatial light modulator along with a digital holographic technique called quasi-parallel phase-shifting. This technique uses the spatial light modulator to build a structured phase image consisting of a chessboard pattern containing the different phase steps for digitally calculating the object information. Depending on the reduction in the number of needed frames, the acquisition time is reduced by a significant factor. This technique opens the door to the implementation of the scheme for moving objects. In particular, the application of this scheme to imaging live specimens comes one step closer.
Keywords: quasi-parallel phase shifting, quantum imaging, quantum holography, quantum metrology
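For context, conventional four-step phase shifting (the sequential baseline that the quasi-parallel technique accelerates) recovers the object phase from four frames shifted by π/2; a minimal sketch with synthetic intensities:

```python
import math

def retrieve_phase(i0, i1, i2, i3):
    """Four-step phase-shifting retrieval: intensities recorded at
    reference-phase shifts of 0, pi/2, pi and 3*pi/2."""
    return math.atan2(i3 - i1, i0 - i2)
```

Quasi-parallel phase shifting interleaves these phase steps spatially in a chessboard pattern, so fewer sequential frames are needed per reconstruction.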
Procedia PDF Downloads 115
1994 Application of GA Optimization in Analysis of Variable Stiffness Composites
Authors: Nasim Fallahi, Erasmo Carrera, Alfonso Pagani
Abstract:
Variable angle tow (VAT) describes fibres that are curvilinearly steered in a composite lamina, which significantly enlarges the stiffness-tailoring freedom of the composite laminate. Composite structures with curvilinear fibres have been shown to improve the buckling load carrying capability in contrast with straight-fibre laminate composites. However, the optimal design and analysis of VAT are faced with high computational effort due to the increasing number of variables. In this article, an efficient optimum solution has been used in combination with the 1D Carrera Unified Formulation (CUF) to investigate the optimum fibre orientation angles for buckling analysis. Particular emphasis is placed on the LE-based CUF models, which employ Lagrange expansions to provide a layerwise description of the problem unknowns. The first critical buckling load has been considered under simply supported boundary conditions. Special attention is paid to the sensitivity of the buckling load to the fibre orientation angle, in comparison with the results obtained through a Genetic Algorithm (GA) optimization framework; an Artificial Neural Network (ANN) is then applied to investigate the accuracy of the optimized model. As a result, the numerical CUF approach with an optimal solution demonstrates the robustness and computational efficiency of the proposed optimization methodology.
Keywords: beam structures, layerwise, optimization, variable stiffness
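A genetic algorithm of the general kind used in such an optimization framework can be sketched as follows; this is an illustrative toy with a one-variable fitness function standing in for the fibre-angle design vector, not the authors' GA or their CUF buckling objective:

```python
import random

def genetic_optimize(fitness, bounds, pop=30, gens=60, seed=1):
    """Elitist GA over one real variable: keep the fittest half,
    breed children by averaging crossover plus Gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    population = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2.0 + rng.gauss(0.0, 0.1 * (hi - lo))
            children.append(min(hi, max(lo, child)))
        population = parents + children
    return max(population, key=fitness)
```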
Procedia PDF Downloads 148
1993 Infilling Strategies for Surrogate Model Based Multi-disciplinary Analysis and Applications to Velocity Prediction Programs
Authors: Malo Pocheau-Lesteven, Olivier Le Maître
Abstract:
Engineering and optimisation of complex systems is often achieved through multi-disciplinary analysis of the system, where each subsystem is modeled and interacts with other subsystems to model the complete system. The coherence of the outputs of the different subsystems is achieved through the use of compatibility constraints, which enforce the coupling between the different subsystems. Due to the complexity of some subsystems and the computational cost of evaluating their respective models, it is often necessary to build surrogate models of these subsystems to allow repeated evaluation of these subsystems at a relatively low computational cost. In this paper, Gaussian processes are used, as their probabilistic nature is leveraged to evaluate the likelihood of satisfying the compatibility constraints. This paper presents infilling strategies to build accurate surrogate models of the subsystems in areas where they are likely to meet the compatibility constraint. It is shown that these infilling strategies can reduce the computational cost of building surrogate models for a given level of accuracy. An application of these methods to velocity prediction programs used in offshore racing naval architecture further demonstrates the methods' applicability in a real engineering context. Some examples of the application of uncertainty quantification to the field of naval architecture are also presented.
Keywords: infilling strategy, gaussian process, multi disciplinary analysis, velocity prediction program
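The probabilistic ingredient described above can be sketched as follows: if the Gaussian process gives the compatibility residual g(x) a posterior N(mu, sigma²) at a candidate point, the probability of satisfying the constraint within a tolerance is a difference of normal CDFs, and an infill strategy can favour candidates where this probability is high. This is an illustrative sketch under that assumption, not the paper's actual criterion:

```python
import math

def constraint_likelihood(mu, sigma, tol):
    """P(|g(x)| <= tol) when the GP posterior of the compatibility
    residual g at x is N(mu, sigma^2)."""
    cdf = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return cdf((tol - mu) / sigma) - cdf((-tol - mu) / sigma)
```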
Procedia PDF Downloads 161
1992 Strategic Mine Planning: A SWOT Analysis Applied to KOV Open Pit Mine in the Democratic Republic of Congo
Authors: Patrick May Mukonki
Abstract:
KOV pit (Kamoto Oliveira Virgule) is located 10 km from Kolwezi town, one of the mineral-rich towns in the Lualaba province of the Democratic Republic of Congo. The KOV pit is currently operating under Katanga Mining Limited (KML), a Glencore-Gecamines (a state-owned company) joint venture. Recently, the mine optimization process provided a life of mine of approximately 10 years with nine pushbacks using the Datamine NPV Scheduler software. In previous KOV pit studies, we outlined the impact of the accuracy of the geological information on a long-term mine plan for a big copper mine such as KOV pit. The approach taken discussed three main scenarios and outlined some weaknesses on the geological information side; in this paper, we highlight, as an overview, those weaknesses, strengths, and opportunities in a global SWOT analysis. The approach we take here is essentially descriptive in terms of the steps taken to optimize KOV pit and, at every step, we categorized the challenges we faced to obtain a better tradeoff between what we called strengths and what we called weaknesses. The same logic is applied in terms of the opportunities and threats. The SWOT analysis conducted in this paper demonstrates that, despite a generally poor ore body definition and very harsh groundwater conditions, there is room for improvement for such a high-grade ore body.
Keywords: mine planning, mine optimization, mine scheduling, SWOT analysis
Procedia PDF Downloads 226
1991 Photoelectrochemical Water Splitting from Earth-Abundant CuO Thin Film Photocathode: Enhancing Performance and Photo-Stability through Deposition of Overlayers
Authors: Wilman Septina, Rajiv R. Prabhakar, Thomas Moehl, David Tilley
Abstract:
Cupric oxide (CuO) is a promising absorber material for the fabrication of scalable, low cost solar energy conversion devices, due to the high abundance and low toxicity of copper. It is a p-type semiconductor with a band gap of around 1.5 eV, absorbing a significant portion of the solar spectrum. One of the main challenges in using CuO as a solar absorber in an aqueous system is its tendency towards photocorrosion, generating Cu2O and metallic Cu. Although there have been several reports of CuO as a photocathode for hydrogen production, it is unclear how much of the observed current actually corresponds to H2 evolution, as the inevitability of photocorrosion is usually not addressed. In this research, we investigated the effect of the deposition of overlayers onto CuO thin films for the purpose of enhancing their photostability as well as their performance for water splitting applications. The CuO thin film was fabricated by galvanic electrodeposition of metallic copper onto gold-coated FTO substrates, followed by annealing in air at 600 °C. Photoelectrochemical measurement of the bare CuO film using 1 M phosphate buffer (pH 6.9) under simulated AM 1.5 sunlight showed a current density of ca. 1.5 mA cm-2 (at 0.4 VRHE), which photocorroded to Cu metal upon prolonged illumination. This photocorrosion could be suppressed by a 50 nm-thick TiO2 layer deposited by atomic layer deposition. In addition, we found that insertion of an n-type CdS layer, deposited by chemical bath deposition, between the CuO and TiO2 layers was able to enhance the photocurrent significantly compared to without the CdS layer. A photocurrent of over 2 mA cm-2 (at 0 VRHE) was observed using the photocathode stack FTO/Au/CuO/CdS/TiO2/Pt. Structural, electrochemical, and photostability characterizations of the photocathode as well as results on various overlayers will be presented.
Keywords: CuO, hydrogen, photoelectrochemical, photostability, water splitting
Procedia PDF Downloads 227
1990 Trace Analysis of Genotoxic Impurity Pyridine in Sitagliptin Drug Material Using UHPLC-MS
Authors: Bashar Al-Sabti, Jehad Harbali
Abstract:
Background: Pyridine is a reactive base that might be used in preparing sitagliptin. The International Agency for Research on Cancer classifies pyridine in group 2B; this classification means that pyridine is possibly carcinogenic to humans. Therefore, pyridine should be monitored at the allowed limit in sitagliptin pharmaceutical ingredients. Objective: The aim of this study was to develop a novel ultra-high-performance liquid chromatography-mass spectrometry (UHPLC-MS) method to estimate the quantity of the pyridine impurity in sitagliptin pharmaceutical ingredients. Methods: The separation was performed on a C8 Shim-pack column (150 mm × 4.6 mm, 5 µm) in reversed-phase mode using a mobile phase of water-methanol-acetonitrile containing 4 mM ammonium acetate in gradient mode. Pyridine was detected by a mass spectrometer using selected ion monitoring mode at m/z = 80. The flow rate of the method was 0.75 mL/min. Results: The method showed excellent sensitivity, with a quantitation limit of 1.5 ppm of pyridine relative to sitagliptin. The linearity of the method was excellent over the range of 1.5-22.5 ppm, with a correlation coefficient of 0.9996. Recovery values were between 93.59% and 103.55%. Conclusions: The results showed good linearity, precision, accuracy, sensitivity, selectivity, and robustness. The studied method was applied to test three batches of sitagliptin raw materials. Highlights: This method is useful for monitoring pyridine in sitagliptin during its synthesis and for testing sitagliptin raw materials before using them in the production of pharmaceutical products.
Keywords: genotoxic impurity, pyridine, sitagliptin, UHPLC-MS
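Two of the validation figures quoted above, the linearity correlation coefficient and the percent recovery, reduce to short calculations; a sketch with invented calibration data, not the study's measurements:

```python
import math

def correlation_coefficient(x, y):
    """Pearson r for a calibration line (the linearity check)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def percent_recovery(spiked, found):
    """Recovery (%) of each spiked amount."""
    return [100.0 * f / s for s, f in zip(spiked, found)]
```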
Procedia PDF Downloads 97
1989 Resilient Machine Learning in the Nuclear Industry: Crack Detection as a Case Study
Authors: Anita Khadka, Gregory Epiphaniou, Carsten Maple
Abstract:
There is a dramatic surge in the adoption of machine learning (ML) techniques in many areas, including the nuclear industry (such as fault diagnosis and fuel management in nuclear power plants), autonomous systems (including self-driving vehicles), space systems (space debris recovery, for example), medical surgery, network intrusion detection, malware detection, to name a few. With the application of learning methods in such diverse domains, artificial intelligence (AI) has become a part of everyday modern human life. To date, the predominant focus has been on developing underpinning ML algorithms that can improve accuracy, while factors such as resiliency and robustness of algorithms have been largely overlooked. If an adversarial attack is able to compromise the learning method or data, the consequences can be fatal, especially but not exclusively in safety-critical applications. In this paper, we present an in-depth analysis of five adversarial attacks and three defence methods on a crack detection ML model. Our analysis shows that it can be dangerous to adopt machine learning techniques in security-critical areas such as the nuclear industry without rigorous testing since they may be vulnerable to adversarial attacks. While common defence methods can effectively defend against different attacks, none of the three considered can provide protection against all five adversarial attacks analysed.
Keywords: adversarial machine learning, attacks, defences, nuclear industry, crack detection
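As a self-contained illustration of the kind of attack analysed, here is the fast gradient sign method applied to a simple linear logistic classifier (a toy stand-in, not the paper's crack-detection model): a small, sign-guided perturbation suffices to flip a correct prediction.

```python
def fgsm_perturb(x, w, y, eps):
    """Fast Gradient Sign Method on a linear logistic classifier sign(w.x)
    with true label y in {-1, +1}. For this model, the loss gradient w.r.t.
    x_i has sign -y*sign(w_i), so no numeric differentiation is needed."""
    sgn = lambda v: (v > 0) - (v < 0)
    return [xi + eps * (-y) * sgn(wi) for xi, wi in zip(x, w)]
```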
Procedia PDF Downloads 161
1988 3D Liver Segmentation from CT Images Using a Level Set Method Based on a Shape and Intensity Distribution Prior
Authors: Nuseiba M. Altarawneh, Suhuai Luo, Brian Regan, Guijin Tang
Abstract:
Liver segmentation from medical images poses more challenges than analogous segmentations of other organs. This contribution introduces a liver segmentation method from a series of computed tomography images. Overall, we present a novel method for segmenting the liver by coupling density matching with shape priors. Density matching signifies a tracking method which operates via maximizing the Bhattacharyya similarity measure between the photometric distribution from an estimated image region and a model photometric distribution. Density matching controls the direction of the evolution process and slows down the evolving contour in regions with weak edges. The shape prior improves the robustness of density matching and discourages the evolving contour from exceeding the liver’s boundaries at regions with weak boundaries. The model is implemented using a modified distance regularized level set (DRLS) model. The experimental results show that the method achieves a satisfactory result. By comparing with the original DRLS model, it is evident that the proposed model is more effective in addressing the over-segmentation problem. Finally, we gauge the performance of our model against metrics comprising accuracy, sensitivity, and specificity.
Keywords: Bhattacharyya distance, distance regularized level set (DRLS) model, liver segmentation, level set method
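The Bhattacharyya similarity measure driving the density matching above is, for discrete photometric histograms, a one-line computation; a minimal sketch:

```python
import math

def bhattacharyya_coefficient(p, q):
    """Bhattacharyya similarity between two histograms (1.0 for identical
    distributions, 0.0 for non-overlapping ones); the Bhattacharyya
    distance is -ln of this coefficient."""
    sp, sq = sum(p), sum(q)   # normalize so each histogram sums to 1
    return sum(math.sqrt((a / sp) * (b / sq)) for a, b in zip(p, q))
```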
Procedia PDF Downloads 316
1987 A Prevalence of Phonological Disorder in Children with Specific Language Impairment
Authors: Etim, Victoria Enefiok, Dada, Oluseyi Akintunde, Bassey Okon
Abstract:
Phonological disorder is a serious and disturbing issue to many parents and teachers. Efforts towards resolving the problem have been undermined by other specific disabilities which were hidden from many regular and special education teachers. It is against this background that this study was motivated to provide data on the prevalence of phonological disorders in children with specific language impairment (CWSLI) as the first step towards critical intervention. The study was a survey of 15 CWSLI from St. Louise Inclusive Schools, Ikot Ekpene, in Akwa Ibom State of Nigeria. The Phonological Processes Diagnostic Scale (PPDS), with 17 short sentences cutting across the five phonological processes that were examined, was validated by experts in test measurement, phonology, and special education. The respondents were made to read the sentences with emphasis on the targeted sounds. Their utterances were recorded and analyzed in the language laboratory using Praat software. Data were also collected through friendly interactions at different times from the clients. The theory of generative phonology was adopted for the descriptive analysis of the phonological processes. Data collected were analyzed using simple percentages and a composite bar chart for better understanding of the results. The study found that CWSLI exhibited the five phonological processes under investigation. It was revealed that 66.7%, 80%, 73.3%, 80%, and 86.7% of the respondents have severe deficits in fricative stopping, velar fronting, liquid gliding, final consonant deletion, and cluster reduction, respectively. It was therefore recommended that a nationwide survey should be carried out to obtain national statistics on CWSLI with phonological deficits and to develop intervention strategies for effective therapy to remediate the disorder.
Keywords: language disorders, phonology, phonological processes, specific language impairment
Procedia PDF Downloads 195
1986 Seismic Performance of Various Grades of Steel Columns through Finite Element Analysis
Authors: Asal Pournaghshband, Roham Maher
Abstract:
This study presents a numerical analysis of the cyclic behavior of H-shaped steel columns, focusing on different steel grades, including austenitic, ferritic, and duplex stainless steel, and carbon steel. Finite Element (FE) models were developed and validated against experimental data, with predictions deviating from the experiments by no more than 6.5%. The study examined key parameters such as energy dissipation and failure modes. Results indicate that duplex stainless steel offers the highest strength, with superior energy dissipation but a tendency towards brittle failure at a maximum strain of 0.149. Austenitic stainless steel demonstrated balanced performance with excellent ductility and energy dissipation, showing a maximum strain of 0.122, making it highly suitable for seismic applications. Ferritic stainless steel, while stronger than carbon steel, exhibited reduced ductility and energy absorption. Carbon steel displayed the lowest performance in terms of energy dissipation and ductility, with significant strain concentrations leading to earlier failure. These findings provide critical insights into optimizing material selection for earthquake-resistant structures, balancing strength, ductility, and energy dissipation under seismic conditions.
Keywords: energy dissipation, finite element analysis, H-shaped columns, seismic performance, stainless steel grades
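Energy dissipation under cyclic loading is conventionally taken as the area enclosed by the force-displacement hysteresis loop; a hedged sketch using the shoelace formula (illustrative, not the study's post-processing):

```python
def dissipated_energy(disp, force):
    """Area enclosed by a closed force-displacement hysteresis loop,
    computed with the shoelace formula over the loop's vertices."""
    n = len(disp)
    area = 0.0
    for i in range(n):
        j = (i + 1) % n   # wrap around to close the loop
        area += disp[i] * force[j] - disp[j] * force[i]
    return abs(area) / 2.0
```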
Procedia PDF Downloads 30
1985 A Soft Computing Approach Monitoring of Heavy Metals in Soil and Vegetables in the Republic of Macedonia
Authors: Vesna Karapetkovska Hristova, M. Ayaz Ahmad, Julijana Tomovska, Biljana Bogdanova Popov, Blagojce Najdovski
Abstract:
The average total concentrations of heavy metals (cadmium [Cd], copper [Cu], nickel [Ni], lead [Pb], and zinc [Zn]) were analyzed in soil and vegetable samples collected from different regions of Macedonia during the years 2010-2012. Basic soil properties such as pH, organic matter, and clay content were also included in the study. The average concentrations of Cd, Cu, Ni, Pb, and Zn in the A horizon (0-30 cm) of agricultural soils were as follows, respectively: 0.25, 5.3, 6.9, 15.2, and 26.3 mg kg-1 of soil. We have found that a neural network model can be considered a tool for prediction and spatial analysis of the processes controlling metal transfer within the soil and vegetables. The predictive ability of such models is well over 80%, compared to 20% for typical regression models. A radial basis function network reflects the relationship between soil properties and metal content in vegetables with much better predictive accuracy and correlation coefficients than the back-propagation method. Neural networks / soft computing can support decision-making processes at different levels, including agro-ecology, to improve crop management based on monitoring data and risk assessment of metal transfer from soils to vegetables.
Keywords: soft computing approach, total concentrations, heavy metals, agricultural soils
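A radial basis function network of the kind compared above can be sketched in a few lines: place a Gaussian basis function on each training sample and solve the resulting linear system for the output weights. This is a toy one-dimensional illustration under those assumptions, not the study's model or data:

```python
import math

def _solve(a_mat, b):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(a_mat)
    m = [row[:] + [bv] for row, bv in zip(a_mat, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def train_rbf(xs, ys, gamma):
    """Exact-interpolation RBF weights: solve Phi w = y, Gaussian basis."""
    phi = [[math.exp(-gamma * (x - c) ** 2) for c in xs] for x in xs]
    return _solve(phi, ys)

def rbf_predict(x, centers, weights, gamma):
    """RBF network forward pass: weighted sum of Gaussian bumps."""
    return sum(w * math.exp(-gamma * (x - c) ** 2)
               for w, c in zip(weights, centers))
```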
Procedia PDF Downloads 370
1984 Intelligent Transport System: Classification of Traffic Signs Using Deep Neural Networks in Real Time
Authors: Anukriti Kumar, Tanmay Singh, Dinesh Kumar Vishwakarma
Abstract:
Traffic control has been one of the most common and irritating problems since automobiles first hit the roads. Problems like traffic congestion have imposed a significant time burden around the world, and one significant solution to these problems can be the proper implementation of an Intelligent Transport System (ITS). It involves the integration of various tools like smart sensors, artificial intelligence, positioning technologies, and mobile data services to manage traffic flow, reduce congestion, and enhance drivers' ability to avoid accidents during adverse weather. Road and traffic sign recognition is an emerging field of research in ITS. The classification problem of traffic signs needs to be solved, as it is a major step in our journey towards building semi-autonomous/autonomous driving systems. This work focuses on implementing an approach to solve the problem of traffic sign classification by developing a Convolutional Neural Network (CNN) classifier using the GTSRB (German Traffic Sign Recognition Benchmark) dataset. Rather than relying on hand-crafted features, our model addresses the concern of an exploding number of parameters and uses data augmentation methods. Our model achieved an accuracy of around 97.6%, which is comparable to various state-of-the-art architectures.
Keywords: multiclass classification, convolution neural network, OpenCV
Procedia PDF Downloads 178
1983 Translation and Validation of the Pediatric Quality of Life Inventory for Children in Pakistani Context
Authors: Nazia Mustafa, Aneela Maqsood
Abstract:
The Pediatric Quality of Life Inventory is the most widely used instrument for assessing children's and adolescents' health-related quality of life and has shown excellent markers of reliability and validity. The current study was carried out with the objectives of translation and cross-language validation, along with determination of the factor structure and psychometric properties of the Urdu version. It was administered to 154 primary school children aged 10 to 12 years (M = 10.86, S.D. = 0.62), including boys (n = 92) and girls (n = 62). The sample was recruited from two randomly selected schools in the Rawalpindi district of Pakistan. Results of the pilot phase revealed that the instrument had good reliability (Urdu version α = 0.798; English version α = 0.795) as well as test-retest correlation over a period of 15 days (r = 0.85). Exploratory factor analysis (EFA) resulted in a three-factor structure; Social/School Functioning (k = 8), Psychological Functioning (k = 7), and Physical Functioning (k = 6) were considered suitable for our sample instead of four factors. Bartlett's test of sphericity showed inter-correlation between variables. However, factor loadings for items 22 and 23 of the School Functioning subscale were problematic. The model fit the data after their removal, with a Cronbach's alpha reliability coefficient of 0.87 for the scale (k = 21) and of 0.75, 0.77, and 0.73 for the Social/School, Psychological, and Physical subscales, respectively. These results support the feasibility of the Urdu version of the Pediatric Quality of Life Inventory as a reliable and effective tool for measuring quality of life in the pediatric Pakistani population. Keywords: primary school children, paediatric quality of life, exploratory factor analysis, Pakistan
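The internal-consistency coefficient reported above (Cronbach's alpha) can be computed directly from item scores. A minimal sketch follows; the three items and five respondents below are invented, not the study's data.

```python
from statistics import variance

def cronbach_alpha(items):
    # items: one list of scores per item, each with one entry per respondent
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

# three hypothetical Likert items answered by five respondents
items = [[3, 4, 4, 2, 5],
         [2, 4, 5, 2, 4],
         [3, 5, 4, 1, 5]]
alpha = cronbach_alpha(items)
```

When items co-vary strongly, as in this toy set, alpha approaches 1; perfectly duplicated items give exactly 1.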
Procedia PDF Downloads 136
1982 Automatic Detection of Defects in Ornamental Limestone Using Wavelets
Authors: Maria C. Proença, Marco Aniceto, Pedro N. Santos, José C. Freitas
Abstract:
A methodology based on wavelets is proposed for the automatic location and delimitation of defects in limestone plates. Natural defects include dark colored spots, crystal zones trapped in the stone, areas of abnormal contrast colors, cracks or fracture lines, and fossil patterns. Although some of these may or may not be considered defects according to the intended use of the plate, the goal is to pair each stone with a map of defects that can be overlaid on a computer display. These layers of defects constitute a database that allows the preliminary selection of matching tiles of a particular variety, with specific dimensions, for a requirement of N square meters, to be done on a desktop computer rather than by a two-hour search in the storage park, with human operators manipulating stone plates as large as 3 m x 2 m and weighing about one ton. Accident risks and work times are reduced, with a consequent increase in productivity. The basis of the algorithm is wavelet decomposition executed on two instances of the original image, to detect both hypotheses: dark and clear defects. The existence and/or size of these defects is the gauge used to classify the quality grade of the stone products. The tuning of parameters possible in the wavelet framework corresponds to different levels of accuracy in the drawing of the contours and in the selection of defect size, which allows the map of defects to be used to cut a selected stone into tiles with minimum waste, according to the dimensions of the defects allowed. Keywords: automatic detection, defects, fracture lines, wavelets
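The core operation, a wavelet decomposition whose detail coefficients light up on local contrast changes, can be sketched with one level of the 2D Haar transform. This is a simplified stand-in for the authors' tuned pipeline; the 4x4 image and the detail-energy threshold are invented for illustration.

```python
def haar2d(img):
    # one-level 2D Haar transform on 2x2 blocks:
    # approximation + horizontal/vertical/diagonal detail sub-bands
    A, Hd, Vd, Dd = [], [], [], []
    for i in range(0, len(img), 2):
        ra, rh, rv, rd = [], [], [], []
        for j in range(0, len(img[0]), 2):
            a, b, c, d = img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1]
            ra.append((a + b + c + d) / 4)   # local average
            rh.append((a + b - c - d) / 4)   # horizontal detail
            rv.append((a - b + c - d) / 4)   # vertical detail
            rd.append((a - b - c + d) / 4)   # diagonal detail
        A.append(ra); Hd.append(rh); Vd.append(rv); Dd.append(rd)
    return A, Hd, Vd, Dd

def defect_mask(img, thresh=1.0):
    # flag 2x2 blocks whose total detail energy exceeds the threshold
    _, Hd, Vd, Dd = haar2d(img)
    return [[abs(h) + abs(v) + abs(d) > thresh
             for h, v, d in zip(hr, vr, dr)]
            for hr, vr, dr in zip(Hd, Vd, Dd)]

uniform = [[10.0] * 4 for _ in range(4)]   # flawless stone: no detail energy
spotted = [row[:] for row in uniform]
spotted[0][0] = 0.0                        # one dark spot
mask = defect_mask(spotted)
```

A uniform plate yields an all-false mask; the dark spot produces large detail coefficients only in its own block, which is the basis for drawing the defect contour.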
Procedia PDF Downloads 249
1981 Performance of the Photovoltaic Module under Different Shading Patterns
Authors: E. T. El Shenawy, O. N. A. Esmail, Adel A. Elbaset, Hesham F. A. Hamed
Abstract:
Generation of electrical energy based on photovoltaic (PV) technology has increased worldwide, due both to the continuous depletion of traditional energy sources and the pollution problems related to their use, and to the clean nature and safe usage of PV technology. Also, PV systems can generate clean electricity at the site of use without any transmission, which makes them more cost-effective than other generation systems. The performance of a PV system is highly affected by the amount of solar radiation incident on it, and complete or partial shading can reduce its output. A PV system can be shaded by trees, buildings, dust, incorrect system configuration, or other obstacles. The present paper studies the effect of partial shading on the performance of a thin film PV module under the climatic conditions of Cairo, Egypt. This effect was measured and evaluated through practical measurement of the characteristic curves, current-voltage (I-V) and power-voltage (P-V), for two identical PV modules (with and without shading) placed at the same time on one mechanical structure for comparison. The measurements were carried out for the following shading patterns: half a cell (bottom, middle, and top of the PV module); a complete cell; and two adjacent cells. The results showed that partially shading the PV module changes the shapes of the I-V and P-V curves and produces more than one maximum power point, which can disturb traditional maximum power point trackers. Also, the output power from the module decreased according to the reduced solar radiation reaching the PV module under the shadow patterns. The power loss due to shading was 7%, 22%, and 41% for shading of half a cell, one cell, and two adjacent cells of the PV module, respectively. Keywords: I-V measurements, PV module characteristics, PV module power loss, PV module shading
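The single maximum power point of an unshaded module, and the loss figure quoted as a percentage of unshaded power, can be sketched with a textbook single-diode approximation. The paper's results are measured, not modelled; every parameter value below (short-circuit current, open-circuit voltage, diode factor) is illustrative and does not describe the tested module.

```python
import math

ISC, VOC, N_VT = 2.0, 21.0, 1.5  # illustrative Isc (A), Voc (V), diode factor x thermal voltage (V)

def module_current(v):
    # simple single-diode approximation of an unshaded module's I-V curve
    i0 = ISC / (math.exp(VOC / N_VT) - 1)
    return ISC - i0 * (math.exp(v / N_VT) - 1)

def max_power_point(steps=2100):
    # brute-force scan of the P-V curve for its single maximum
    best_p, best_v = 0.0, 0.0
    for k in range(steps + 1):
        v = VOC * k / steps
        p = v * module_current(v)
        if p > best_p:
            best_p, best_v = p, v
    return best_p, best_v

def shading_loss_pct(p_shaded, p_unshaded):
    # power loss relative to the unshaded module, as reported in the paper
    return 100.0 * (1 - p_shaded / p_unshaded)
```

Under partial shading the measured P-V curve develops several local maxima, so a simple scan like this one would have to examine the whole voltage range rather than stop at the first peak.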
Procedia PDF Downloads 140
1980 Machine Learning Analysis of Student Success in Introductory Calculus Based Physics I Course
Authors: Chandra Prayaga, Aaron Wade, Lakshmi Prayaga, Gopi Shankar Mallu
Abstract:
This paper presents the use of machine learning algorithms to predict the success of students in an introductory physics course. A dataset of 140 rows pertaining to the performance of two batches of students was used. The lack of sufficient data to train robust machine learning models was compensated for by generating synthetic data similar to the real data. CTGAN and CTGAN with Gaussian Copula were used to generate synthetic data, with the real data as input. To check the similarity between the real data and each synthetic dataset, pair plots were made. The synthetic data were used to train machine learning models using the PyCaret package. For the CTGAN data, the Ada Boost Classifier (ADA) was found to be the best-fitting ML model, whereas the CTGAN with Gaussian Copula yielded Logistic Regression (LR) as the best model. Both models were then tested for accuracy with the real data. ROC-AUC analysis was performed for all ten classes of the target variable (grades A, A-, B+, B, B-, C+, C, C-, D, F). The ADA model with CTGAN data showed a mean AUC score of 0.4377, while the LR model with the Gaussian Copula data showed a mean AUC score of 0.6149. ROC-AUC plots were obtained for each grade value separately. The LR model with Gaussian Copula data showed consistently better AUC scores than the ADA model with CTGAN data, except for two grade values, C- and A-. Keywords: machine learning, student success, physics course, grades, synthetic data, CTGAN, gaussian copula CTGAN
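The AUC figures above come from PyCaret, but the statistic itself reduces to a rank comparison: for each grade treated one-vs-rest, AUC is the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal sketch with invented scores:

```python
def roc_auc(pos_scores, neg_scores):
    # probability that a random positive outranks a random negative (ties count half);
    # equivalent to the Mann-Whitney U statistic normalised by the number of pairs
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# toy one-vs-rest scores for a single grade class (not the study's data)
auc = roc_auc([0.9, 0.7, 0.6], [0.2, 0.4, 0.65])
```

An AUC of 0.5 corresponds to random ranking, which puts the reported 0.4377 slightly below chance and 0.6149 modestly above it.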
Procedia PDF Downloads 45
1979 Quality Control of 99mTc-Labeled Radiopharmaceuticals Using the Chromatography Strips
Authors: Yasuyuki Takahashi, Akemi Yoshida, Hirotaka Shimada
Abstract:
99mTc-2-methoxy-isobutyl-isonitrile (MIBI) and 99mTc-mercaptoacetyl-glycyl-glycyl-glycine (MAG3) are heated to 368-372 K and labeled with 99mTc-pertechnetate. Quality control (QC) of 99mTc-labeled radiopharmaceuticals is performed at hospitals using liquid chromatography, which is difficult to carry out in general hospitals. We used chromatography strips to simplify QC and investigated the effects of the test procedures on quality control. This study focuses on 99mTc-MAG3. The solvent was a mixture of chloroform, acetone, and tetrahydrofuran, and the gamma counter was an ARC-380CL. The conditions varied were heating temperature (293, 313, 333, 353, and 372 K), resting time after labeling (15 min at 293 K and 372 K, and 1 hour at 293 K), and expiration year for use (2011, 2012, 2013, 2014, and 2015). The measurement time on the gamma counter was one minute. A nuclear medicine clinician judged the quality of the preparation when deciding the usability of the retested agent. Two people conducted the test procedure twice in order to compare reproducibility. The percentage of radiochemical purity (%RCP) was approximately 50% under insufficient heat treatment and improved as the temperature and heating time increased. Moreover, the %RCP improved with resting time even at low temperatures. Furthermore, there was no deterioration with time after the expiration date. The objective of these tests was to determine soluble 99mTc impurities, including free 99mTc-pertechnetate and hydrolyzed-reduced 99mTc. We therefore attribute low purity to insufficient heating and to operational errors during labeling. It is concluded that quality control is a necessary procedure in nuclear medicine to ensure safe scanning, and that labeling must follow the stated specifications. Keywords: quality control, tc-99m labeled radio-pharmaceutical, chromatography strip, nuclear medicine
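For a strip counted in two segments (labeled species at one migration position, soluble impurities at the other), the %RCP figure reduces to a ratio of background-corrected gamma counts. A minimal sketch; the count values are invented, not the study's measurements.

```python
def percent_rcp(labeled_counts, impurity_counts, background=0):
    # radiochemical purity from gamma counts of the two strip segments,
    # each corrected for the counter's background
    labeled = max(labeled_counts - background, 0)
    impurity = max(impurity_counts - background, 0)
    return 100.0 * labeled / (labeled + impurity)

# hypothetical 1-minute counts: well-heated preparation vs insufficient heating
rcp_good = percent_rcp(1950, 150, background=50)   # ~95% RCP
rcp_poor = percent_rcp(1050, 1050, background=50)  # ~50% RCP
```

The roughly 50% purity reported under insufficient heating corresponds to the two segments counting about equally, as in the second call.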
Procedia PDF Downloads 324
1978 A Lightweight Pretrained Encrypted Traffic Classification Method with Squeeze-and-Excitation Block and Sharpness-Aware Optimization
Authors: Zhiyan Meng, Dan Liu, Jintao Meng
Abstract:
Dependable encrypted traffic classification is crucial for improving cybersecurity and handling the growing amount of data. Large language models have shown that learning from large datasets can be effective, making pre-trained methods for encrypted traffic classification popular. However, attention-based pre-trained methods face two main issues: their large number of parameters is not suitable for low-computation environments like mobile devices and real-time applications, and they often overfit by getting stuck in local minima. To address these issues, we developed a lightweight transformer model, which reduces the computational parameters through lightweight vocabulary construction and a Squeeze-and-Excitation block. We use sharpness-aware optimization to avoid local minima during pre-training and capture temporal features with relative positional embeddings. Our approach keeps the model's classification accuracy high for downstream tasks. We conducted experiments on four datasets: USTC-TFC2016, VPN 2016, Tor 2016, and CICIOT 2022. Even with fewer than 18 million parameters, our method achieves classification results similar to methods with ten times as many parameters. Keywords: sharpness-aware optimization, encrypted traffic classification, squeeze-and-excitation block, pretrained model
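The Squeeze-and-Excitation block mentioned above has a simple shape: globally pool each channel to one scalar, pass the pooled vector through a small bottleneck, and use the sigmoid outputs to reweight the channels. The pure-Python sketch below uses toy dimensions and zero-initialized weights (so every gate is 0.5); it illustrates the mechanism only, not the authors' model.

```python
import math

def se_block(channels, w1, w2):
    # channels: C feature maps (2D lists); w1: (C/r x C) and w2: (C x C/r) weights
    C = len(channels)
    # squeeze: global average pooling collapses each map to one scalar
    z = [sum(map(sum, fm)) / (len(fm) * len(fm[0])) for fm in channels]
    # excitation: bottleneck FC + ReLU, then FC + sigmoid per channel
    h = [max(0.0, sum(w * zc for w, zc in zip(row, z))) for row in w1]
    s = [1.0 / (1.0 + math.exp(-sum(w * hc for w, hc in zip(row, h)))) for row in w2]
    # scale: channel-wise reweighting of the original maps
    return [[[s[c] * v for v in row] for row in channels[c]] for c in range(C)]

# two 2x2 toy channels, reduction ratio r = 2, zero-initialized weights
channels = [[[2.0, 2.0], [2.0, 2.0]],
            [[4.0, 4.0], [4.0, 4.0]]]
out = se_block(channels, w1=[[0.0, 0.0]], w2=[[0.0], [0.0]])
```

Because the excitation path adds only two small matrices per block, it is a cheap way to add channel attention, which is why it suits the low-parameter budget the abstract targets.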
Procedia PDF Downloads 34
1977 Internal Evaluation of Architecture University Department in Architecture Engineering Bachelor's Level: A Case from Iran
Authors: Faranak Omidian
Abstract:
This study examines the status of the architecture department at the bachelor's level of engineering architecture at the Islamic Azad University of Dezful in the 2012-13 academic year. The present research is a descriptive cross-sectional study, descriptive and analytical in terms of measurement, and was conducted in 7 steps across 7 areas with 32 criteria and 169 indicators. The sample includes 201 students, 14 faculty members, 72 graduates, and 39 employers. Simple random sampling, complete enumeration, and network (snowball) sampling were used for students, faculty members, and graduates, respectively. All sampled participants responded to the questions. After data collection, the findings were ranked on a Likert scale from desirable to undesirable, with scores ranging from 1 to 3. The results showed that the department was in a relatively desirable status with regard to objectives, organizational status, management, and organization (score 1.88), students (score 2), and faculty members (score 1.8). Regarding training courses and curriculum, it gained a score of 2.33, which indicates a desirable status in this regard. It gained scores of 1.75, 2, and 1.8 with respect to educational and research facilities and equipment, teaching and learning strategies, and graduates, respectively, all of which show a relatively desirable status. Overall, the department of architecture, with an average score of 2.14 across all evaluated areas, was in a desirable situation. Therefore, although the department generally has a desirable status, it needs to put in more effort to tackle its weaknesses and shortages and correct its defects in order to raise educational quality to the desirable level. Keywords: internal evaluation, architecture department in Islamic, Azad University, Dezful
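The mapping from a mean Likert score on the 1-3 range to a quality level can be sketched as a simple threshold function. The cut-off values below are hypothetical, inferred only loosely from the scores and labels reported above; the study does not state them explicitly.

```python
def quality_level(score):
    # hypothetical cut-offs on the 1-3 Likert range used in the study
    if score >= 2.1:
        return "desirable"
    if score >= 1.5:
        return "relatively desirable"
    return "undesirable"

levels = {area: quality_level(s)
          for area, s in {"curriculum": 2.33, "students": 2.0,
                          "faculty": 1.8, "facilities": 1.75}.items()}
```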
Procedia PDF Downloads 446
1976 Assessing the Legacy Effects of Wildfire on Eucalypt Canopy Structure of South Eastern Australia
Authors: Yogendra K. Karna, Lauren T. Bennett
Abstract:
Fire-tolerant eucalypt forests are one of the major forest ecosystems of south-eastern Australia and are thought to be highly resistant to frequent high-severity wildfires. However, the impact of wildfires of different severities on the canopy structure of this fire-tolerant forest type is under-studied, and there are significant knowledge gaps in the assessment of tree- and stand-level canopy structural dynamics and recovery after fire. Assessment of canopy structure is a complex task involving accurate measurements of the horizontal and vertical arrangement of the canopy in space and time. This study examined the utility of multi-temporal, small-footprint lidar data to describe the changes in the horizontal and vertical canopy structure of fire-tolerant eucalypt forests seven years after wildfires of different severities, from the tree to the stand level. Extensive ground measurements were carried out in four severity classes to describe and validate canopy cover and height metrics as they change after wildfire. Several metrics, such as crown height and width, crown base height, and crown clumpiness, were assessed at the tree and stand level using several individual tree top detection and measurement algorithms. Persistent effects of high-severity fire on both tree crowns and the stand canopy were observed seven years after the fire. High-severity fire increased crown depth but decreased crown projective cover, leading to a more open canopy. Keywords: canopy gaps, canopy structure, crown architecture, crown projective cover, multi-temporal lidar, wildfire severity
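One common way to derive a cover metric like crown projective cover from a lidar point cloud is to grid the returns and count the fraction of cells with canopy above a height threshold. The sketch below is a generic illustration of that idea, not the study's processing chain; the cell size, the 2 m height break, and the sample points are arbitrary choices.

```python
def canopy_cover(points, cell=1.0, canopy_height=2.0):
    # points: (x, y, z) lidar returns with z = height above ground;
    # cover = fraction of occupied grid cells holding at least one
    # return above the canopy height threshold
    occupied, canopy = set(), set()
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        occupied.add(key)
        if z >= canopy_height:
            canopy.add(key)
    return len(canopy) / len(occupied) if occupied else 0.0

# toy returns: three 1 m cells, two of them with canopy hits
pts = [(0.5, 0.5, 12.0), (1.5, 0.5, 0.4), (2.5, 0.5, 0.3), (2.6, 0.7, 9.0)]
cover = canopy_cover(pts)
```

A post-fire drop in this fraction is exactly the "more open canopy" signal the abstract describes.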
Procedia PDF Downloads 177
1975 Phishing Detection: Comparison between Uniform Resource Locator and Content-Based Detection
Authors: Nuur Ezaini Akmar Ismail, Norbazilah Rahim, Norul Huda Md Rasdi, Maslina Daud
Abstract:
Web applications are the most targeted by attackers because they are accessible to end users. This has become even more advantageous to attackers since not all end users are aware of how much sensitive data they have already leaked through the Internet, especially via social networks in the rush to 'share'. An attacker can use this information, such as personal details, favourite artists, actors or actresses, music, politics, and medical records, to customize a phishing attack and thus trick the user into clicking on malware-laced attachments. Phishing is one of the most popular social engineering attacks against web applications. There are several methods to detect phishing websites, such as blacklist/whitelist-based detection, heuristic-based detection, and visual similarity-based detection. Based on papers reviewed from the past few years, this paper presents a comparison between the heuristic-based technique, which uses features of the uniform resource locator (URL), and the visual similarity-based detection technique, which compares the content of a suspected phishing page with the legitimate one in order to detect new phishing sites. The comparison focuses on three indicators: false positives and negatives, the accuracy of the method, and the time consumed to detect a phishing website. Keywords: heuristic-based technique, phishing detection, social engineering and visual similarity-based technique
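The heuristic-based technique typically starts from lexical URL features fed to a classifier. A minimal feature extractor is sketched below; the particular feature set is a common illustrative choice, not the specific one used in the papers reviewed.

```python
import re
from urllib.parse import urlparse

def url_features(url):
    # lexical features commonly used in heuristic phishing detection
    host = urlparse(url).hostname or ""
    return {
        "length": len(url),                 # phishing URLs tend to be long
        "has_at": "@" in url,               # '@' can hide the real host
        "ip_host": bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host)),
        "hyphens": host.count("-"),         # brand-imitating hosts often add hyphens
        "subdomains": max(host.count(".") - 1, 0),
        "https": url.lower().startswith("https://"),
    }

suspect = url_features("http://192.168.0.1/secure-login")
lookalike = url_features("https://pay-pal.example.com/@verify")
```

A visual similarity-based detector, by contrast, never inspects the URL at all, which is why the two approaches trade off differently on accuracy and detection time.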
Procedia PDF Downloads 179
1974 Multi-Layer Multi-Feature Background Subtraction Using Codebook Model Framework
Authors: Yun-Tao Zhang, Jong-Yeop Bae, Whoi-Yul Kim
Abstract:
Background modeling and subtraction in video analysis has been widely shown to be an effective method for moving object detection in many computer vision applications. Over the past years, a large number of approaches have been developed to tackle different types of challenges in this field. However, dynamic backgrounds and illumination variations are two of the most frequently occurring issues in practical situations. This paper presents a new two-layer model based on the codebook algorithm incorporating a local binary pattern (LBP) texture measure, targeted at handling dynamic background and illumination variation problems. More specifically, the first layer is designed as a block-based codebook combining an LBP histogram with the mean values of the RGB color channels. Because of the invariance of LBP features with respect to monotonic gray-scale changes, this layer produces block-wise detection results with considerable tolerance of illumination variations. A pixel-based codebook is employed to refine the outputs of the first layer, eliminating further false positives. As a result, the proposed approach can greatly improve accuracy under dynamic background and illumination changes. Experimental results on several popular background subtraction datasets demonstrate very competitive performance compared to previous models. Keywords: background subtraction, codebook model, local binary pattern, dynamic background, illumination change
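The illumination invariance claimed for LBP follows directly from its definition: each pixel's code only records whether neighbours are brighter or darker than the centre, so any monotonic brightness change leaves the code untouched. A minimal sketch on an invented 3x3 patch:

```python
def lbp_code(img, i, j):
    # 8-neighbour local binary pattern, clockwise from the top-left neighbour:
    # bit k is set when neighbour k is at least as bright as the centre
    c = img[i][j]
    nbrs = [img[i-1][j-1], img[i-1][j], img[i-1][j+1], img[i][j+1],
            img[i+1][j+1], img[i+1][j], img[i+1][j-1], img[i][j-1]]
    return sum(1 << k for k, v in enumerate(nbrs) if v >= c)

# a toy gray-level patch and a uniformly brightened copy
patch = [[52, 60, 55],
         [57, 58, 54],
         [61, 59, 50]]
brighter = [[v + 40 for v in row] for row in patch]
```

An LBP histogram over a block of such codes gives the texture feature the first layer combines with RGB means.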
Procedia PDF Downloads 221
1973 Optimization of Wire EDM Parameters for Fabrication of Micro Channels
Authors: Gurinder Singh Brar, Sarbjeet Singh, Harry Garg
Abstract:
Wire Electric Discharge Machining (WEDM) is a thermal machining process capable of machining very hard electrically conductive materials irrespective of their hardness. WEDM is widely used to machine micro-scale parts with high dimensional accuracy and surface finish. The objective of this paper is to optimize the process parameters of wire EDM to fabricate microchannels and to calculate the surface finish and material removal rate of microchannels fabricated using wire EDM. The material used is aluminum 6061 alloy. The experiments were performed on a CNC wire-cut electric discharge machine. The effects of various WEDM parameters, pulse on time (TON) at levels of 100, 150, and 200, pulse off time (TOFF) at levels of 25, 35, and 45, and peak current (IP) at levels of 105, 110, and 115, were investigated to study their effect on the output parameters, i.e., surface roughness and material removal rate (MRR). Each experiment was conducted under different conditions of pulse on time, pulse off time, and peak current. For material removal rate, TON and IP were the most significant process parameters: MRR increases with an increase in TON and IP and decreases with an increase in TOFF. For surface roughness, TON and IP have the maximum effect, and TOFF was found to be less effective. Keywords: microchannels, Wire Electric Discharge Machining (WEDM), Metal Removal Rate (MRR), surface finish
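A common way to express MRR for a wire cut is volumetrically: cutting speed times kerf width times workpiece thickness. The abstract does not state which formulation it used, so the sketch below is a hedged illustration with invented values, not the paper's measured data.

```python
def mrr_mm3_per_min(cutting_speed_mm_min, kerf_mm, thickness_mm):
    # volumetric material removal rate for a wire cut:
    # cutting speed x kerf width x workpiece thickness
    return cutting_speed_mm_min * kerf_mm * thickness_mm

# hypothetical cut through a 10 mm aluminum 6061 plate
mrr = mrr_mm3_per_min(cutting_speed_mm_min=2.5, kerf_mm=0.3, thickness_mm=10.0)
```

Under this formulation, the TON/TOFF/IP trends reported above act through the achievable cutting speed: longer discharges and higher peak current let the wire advance faster, raising MRR.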
Procedia PDF Downloads 501
1972 A 1.57ghz Mixer Design for GPS Receiver
Authors: Hamd Ahmed
Abstract:
During the Persian Gulf War in 1991, coalition forces were surprised when they were shot at by friendly forces in the Iraqi desert. It was obvious that they had been misled due to the lack of proper guidance and technology, resulting in unnecessary loss of life and bloodshed. This unforeseen incident, along with many others, led the US Department of Defense to open the doors of GPS. In the very beginning, this technology was for military use, but it is now widely used and increasingly popular among the public due to its high accuracy and immeasurable significance. The GPS system consists of three segments: the space segment (the satellites), the control segment (ground control), and the user segment (the receiver). This project work is about designing a 1.57 GHz mixer for a triple-conversion GPS receiver. The GPS front-end receiver is based on the superheterodyne architecture, which improves selectivity and image-frequency rejection. The main principle of the superheterodyne receiver, however, depends on the mixer. Many different types of mixers (single-balanced, single-ended, double-balanced) can be used in a GPS receiver, depending on the required specifications. This research project provides an overview of the GPS system and details of the basic architecture of the GPS receiver. The basic emphasis of this report is on investigating the general concept of the mixer circuit and some terms related to the mixer along with their definitions; it presents the types of mixers, then gives some advantages of using the single-balanced mixer and its applications. The focus of this report is on how to design a mixer for a GPS receiver and discussing the simulation results. Keywords: GPS, RF filter, heterodyne, mixer
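The frequency translation a mixer performs follows from the product-to-sum identity: multiplying two sinusoids yields components at the sum and difference of their frequencies, and the difference is the intermediate frequency the rest of the superheterodyne chain filters and amplifies. The sketch below takes the GPS L1 carrier at 1.57542 GHz and a hypothetical first local oscillator; the LO value is illustrative, not this design's.

```python
def mixer_products(f_rf_hz, f_lo_hz):
    # ideal multiplying mixer: cos(2*pi*f_rf*t) * cos(2*pi*f_lo*t)
    #   = 0.5*cos(2*pi*(f_rf - f_lo)*t) + 0.5*cos(2*pi*(f_rf + f_lo)*t)
    # so the output contains the difference (IF) and sum frequencies
    return abs(f_rf_hz - f_lo_hz), f_rf_hz + f_lo_hz

# GPS L1 carrier with a hypothetical 1.4 GHz first local oscillator
f_if, f_sum = mixer_products(1_575_420_000.0, 1_400_000_000.0)
```

In a triple-conversion receiver this step is repeated with successively lower LOs, each IF filter improving selectivity before the final downconversion.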
Procedia PDF Downloads 325