Search results for: computational accuracy
778 Unsupervised Part-of-Speech Tagging for Amharic Using K-Means Clustering
Authors: Zelalem Fantahun
Abstract:
Part-of-speech tagging is the process of assigning a part-of-speech or other lexical class marker to each word in naturally occurring text. It is one of the most fundamental tasks in almost all of natural language processing. In natural language processing, the problem of providing large amounts of manually annotated data is a knowledge acquisition bottleneck. Since Amharic is an under-resourced language, the availability of a tagged corpus is a bottleneck for natural language processing, especially for POS tagging. A promising direction to tackle this problem is to provide a system that does not require manually tagged data. In unsupervised learning, the learner is not provided with classifications. Unsupervised algorithms seek out similarity between pieces of data in order to determine whether they can be characterized as forming a group. This paper explicates the development of an unsupervised part-of-speech tagger using K-means clustering for the Amharic language, since a large amount of raw data is produced in day-to-day activities. In the development of the tagger, the following procedures are followed. First, the unlabeled data (raw text) is divided into 10 folds and the tokenization phase takes place; at this level, the raw text is chunked at the sentence level and then into words. The second phase is feature extraction, which includes word frequency and the syntactic and morphological features of a word. The third phase is clustering. Among different clustering algorithms, K-means is selected and implemented in this study to bring groups of similar words together. The fourth phase is mapping, which deals with looking at each cluster carefully and assigning the most common tag to each group. This study identifies two features capable of distinguishing one part-of-speech from another, namely morphological features and positional information, and shows that it is possible to use unsupervised learning for Amharic POS tagging.
In order to increase the performance of the unsupervised part-of-speech tagger, there is a need to incorporate other features that are not included in this study, such as semantics-related information. Finally, based on the experimental results, the performance of the system achieves a maximum of 81% accuracy.
Keywords: POS tagging, Amharic, unsupervised learning, k-means
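The clustering and mapping phases described above can be sketched in a few lines. This is an illustrative sketch, not the study's implementation: the feature vectors, tag names, and data below are hypothetical, and a real tagger would use the frequency, morphological, and positional features the abstract describes.

```python
# Minimal k-means plus cluster-to-tag mapping sketch (hypothetical data).
from collections import Counter
import random

def kmeans(points, k, iters=50, seed=0):
    # Standard Lloyd's algorithm on plain tuples of floats.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute centroids; keep the old center if a cluster empties.
        centers = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers

def assign(points, centers):
    # Label each point with the index of its nearest centroid.
    return [min(range(len(centers)),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            for p in points]

def map_clusters(labels, tags):
    # The "mapping" phase: give each cluster its most common tag.
    votes = {}
    for lab, tag in zip(labels, tags):
        votes.setdefault(lab, Counter())[tag] += 1
    return {lab: c.most_common(1)[0][0] for lab, c in votes.items()}
```

With word feature vectors in place of the toy points, `map_clusters` turns the unlabeled clusters into a usable tag assignment.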
Procedia PDF Downloads 451
777 Open Fields' Dosimetric Verification for a Commercially-Used 3D Treatment Planning System
Authors: Nashaat A. Deiab, Aida Radwan, Mohamed Elnagdy, Mohamed S. Yahiya, Rasha Moustafa
Abstract:
This study evaluates and investigates the dosimetric performance of our institution's 3D treatment planning system, Elekta PrecisePLAN, for open 6 MV fields including square, rectangular, variation-in-SSD, centrally blocked, missing-tissue, square MLC and MLC-shaped fields, guided by the recommended QA tests prescribed in the AAPM TG-53, NCS Report 15, IAEA TRS-430 and ESTRO Booklet No. 7 test packages. The study was performed on an Elekta Precise linear accelerator designed for a clinical range of 4, 6 and 15 MV photon beams, with asymmetric jaws and a fully integrated multileaf collimator that enables high conformance to the target with sharp field edges. Seven different tests were applied to a solid water-equivalent phantom along with a 2D array dose detection system; the doses calculated using the 3D treatment planning system PrecisePLAN were compared with measured doses to make sure that the dose calculations are accurate for open fields including square, rectangular, variation-in-SSD, centrally blocked, missing-tissue, square MLC and MLC-shaped fields. The QA results showed dosimetric accuracy of the TPS for open fields within the specified tolerance limits. However, for the large square (25 cm x 25 cm) and rectangular (20 cm x 5 cm) fields, some points were out of tolerance in the penumbra region (11.38% and 10.9%, respectively). For the SSD-variation test, the large field resulting from an SSD of 125 cm for a 10 cm x 10 cm field recorded an error of 0.2% at the central axis and 1.01% in the penumbra. The results yielded differences within the accepted tolerance level as recommended. Large fields showed variations in the penumbra. These differences between dose values predicted by the TPS and the measured values at the same point may result from limitations of the dose calculation, uncertainties in the measurement procedure, or fluctuations in the output of the accelerator.
Keywords: quality assurance, dose calculation, 3D treatment planning system, photon beam
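A point-by-point TPS-versus-measurement comparison of the kind described above can be sketched as a simple tolerance check. This is an illustrative sketch, not the study's procedure: the region names and tolerance values below are assumptions, not the values from TG-53 or the report.

```python
# Hypothetical per-region tolerance check on calculated vs. measured dose.
def percent_diff(calc, meas):
    # Signed percent difference of calculated dose relative to measurement.
    return 100.0 * (calc - meas) / meas

def check_points(points, tolerances):
    """points: iterable of (region, calculated_dose, measured_dose);
    tolerances: dict mapping region -> allowed |% difference|.
    Returns the list of (region, % diff) entries that exceed tolerance."""
    failures = []
    for region, calc, meas in points:
        d = percent_diff(calc, meas)
        if abs(d) > tolerances[region]:
            failures.append((region, round(d, 2)))
    return failures
```

Running such a check over a measured profile flags exactly the out-of-tolerance penumbra points the abstract reports.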
Procedia PDF Downloads 517
776 Source Identification Model Based on Label Propagation and Graph Ordinary Differential Equations
Authors: Fuyuan Ma, Yuhan Wang, Junhe Zhang, Ying Wang
Abstract:
Identifying the sources of information dissemination is a pivotal task in the study of collective behaviors in networks, enabling us to discern and intercept the critical pathways through which information propagates from its origins. This allows for the control of the information’s dissemination impact in its early stages. Numerous methods for source detection rely on pre-existing, underlying propagation models as prior knowledge. Current models that eschew prior knowledge attempt to harness label propagation algorithms to model the statistical characteristics of propagation states or employ Graph Neural Networks (GNNs) for deep reverse modeling of the diffusion process. These approaches are either deficient in modeling the propagation patterns of information or are constrained by the over-smoothing problem inherent in GNNs, which limits the stacking of sufficient model depth to excavate global propagation patterns. Consequently, we introduce the ODESI model. Initially, the model employs a label propagation algorithm to delineate the distribution density of infected states within a graph structure and extends the representation of infected states from integers to state vectors, which serve as the initial states of nodes. Subsequently, the model constructs a deep architecture based on GNNs-coupled Ordinary Differential Equations (ODEs) to model the global propagation patterns of continuous propagation processes. Addressing the challenges associated with solving ODEs on graphs, we approximate the analytical solutions to reduce computational costs. Finally, we conduct simulation experiments on two real-world social network datasets, and the results affirm the efficacy of our proposed ODESI model in source identification tasks.
Keywords: source identification, ordinary differential equations, label propagation, complex networks
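The label-propagation step that seeds the model can be sketched as follows. This is a minimal illustrative sketch, not the ODESI implementation: the damping factor `alpha`, the clamping of observed nodes, and the toy graph are assumptions made for the example.

```python
# Spread observed infection labels over a graph to estimate a density
# of infected states around each node (hypothetical parameters).
def propagate(adj, infected, alpha=0.5, iters=20):
    """adj: node -> list of neighbours; infected: set of observed infected nodes.
    Returns a score per node that decays with distance from the observations."""
    score = {n: (1.0 if n in infected else 0.0) for n in adj}
    for _ in range(iters):
        new = {}
        for n, nbrs in adj.items():
            nbr_avg = sum(score[m] for m in nbrs) / len(nbrs) if nbrs else 0.0
            # Keep observed labels clamped; blend neighbour scores elsewhere.
            new[n] = 1.0 if n in infected else alpha * nbr_avg
        score = new
    return score
```

On a path graph with one observed infection, the scores decay monotonically with distance from the source, which is the density structure the ODE stage then refines.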
Procedia PDF Downloads 20
775 Methodologies, Findings, Discussion, and Limitations in Global, Multi-Lingual Research: We Are All Alone - Chinese Internet Drama
Authors: Patricia Portugal Marques de Carvalho Lourenco
Abstract:
A three-phase multi-lingual methodological path was designed, constructed and carried out using the 2020 Chinese internet drama series We Are All Alone as a case study. Phase one, the backbone of the research, comprised secondary data analysis, providing the structure on which the next two phases would be built. Phase one incorporated a Google Scholar and a Baidu Index analysis, the Star Network Influence Index and Mydramalist.com's top two drama reviews, along with an article written about the drama and scrutiny of China-related blogs and websites. Phase two was field research carried out across Latin Europe, and phase three was social media focused, taking into account that perceptions are going to be memory-conditioned, based on the recall of past ideas. Overall, the research has shown the poor cultural expression of Chinese entertainment in Latin Europe and demonstrated the absence of Chinese content in French, Italian, Portuguese and Spanish business-to-consumer retailers; a reflection of its low significance in Latin European markets and of the short life cycle of entertainment products in general: bubble-gum, disposable goods without a mid- to long-term effect on consumers' lives. The process of conducting comprehensive international research was complex and time-consuming, with data not always available in Mandarin, and was constrained by the researcher's linguistic deficiency, limited knowledge of Chinese culture and issues of cultural equivalence. Despite steps being taken to minimize them, theoretical limitations concerning Latin Europe and China still occurred. Data accuracy was disputable; sampling and data collection/analysis methods were heterogeneous; and ascertaining the data requirements and the method of analysis needed to achieve construct equivalence was challenging and laborious to operationalize.
Secondary data was also often not readily available in Mandarin; yet, in spite of the array of limitations, the research was done, and results were produced.
Keywords: research methodologies, international research, primary data, secondary data, research limitations, online dramas, China, Latin Europe
Procedia PDF Downloads 68
774 FMCW Doppler Radar Measurements with Microstrip Tx-Rx Antennas
Authors: Yusuf Ulaş Kabukçu, Si̇nan Çeli̇k, Onur Salan, Mai̇de Altuntaş, Mert Can Dalkiran, Gökseni̇n Bozdağ, Metehan Bulut, Fati̇h Yaman
Abstract:
This study presents a more compact implementation of the 2.4 GHz MIT Coffee Can Doppler Radar for a 2.6 GHz operating frequency. The main difference of our prototype lies in the use of microstrip antennas, which makes it possible to transport it on a small robotic vehicle. We have designed our radar system with two different channels: Tx and Rx. The system mainly consists of a Voltage Controlled Oscillator (VCO) source, low noise amplifiers, microstrip antennas, a splitter, a mixer, a low pass filter, and the necessary RF connectors and cables. The two microstrip antennas, a single element for the transmitter and an array for the receiver channel, were designed, fabricated and verified by experiments. The system has two operation modes: speed detection and range detection. If the switch of the operation mode is ‘Off’, only a CW signal is transmitted for speed measurement. When the switch is ‘On’, the CW signal is frequency-modulated and range detection is possible. In speed detection mode, a high-frequency (2.6 GHz) signal is generated by a VCO and then amplified to reach a reasonable level of transmit power. Before transmitting the amplified signal through a microstrip patch antenna, a splitter is used in order to compare the frequencies of the transmitted and received signals. Half of the amplified signal (LO) is forwarded to a mixer, which helps us to compare the frequencies of the transmitted and received (RF) signals and produces the IF output, or in other words the Doppler frequency information. Then, the IF output is filtered and amplified so the signal can be processed digitally. The filtered and amplified signal carrying the Doppler frequency is fed to the audio input of a computer. From this data, the Doppler frequency is shown as a speed change on a figure via a Matlab script. According to experimental field measurements, the accuracy of the speed measurement is approximately 90%. In range detection mode, a chirp signal is used to form an FM chirp.
This FM chirp helps to determine the range of the target, since the Doppler frequency measured with CW alone is not enough for range detection. Such an FMCW Doppler radar may be used in border security, since it is capable of both speed and range detection.
Keywords: doppler radar, FMCW, range detection, speed detection
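The CW speed mode rests on the standard two-way Doppler relation: a target moving at speed v shifts the carrier f0 by f_d = 2 v f0 / c. The helper below is a standalone sketch of that conversion (not the authors' Matlab script); the 2.6 GHz default matches the prototype's operating frequency.

```python
# Two-way Doppler shift conversions for a CW radar at f0 = 2.6 GHz.
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(v_mps, f0_hz=2.6e9):
    # Frequency shift (Hz) produced by a target moving at v_mps (m/s).
    return 2.0 * v_mps * f0_hz / C

def speed_from_shift(fd_hz, f0_hz=2.6e9):
    # Invert the relation: recover target speed from a measured shift.
    return fd_hz * C / (2.0 * f0_hz)
```

At 2.6 GHz a 10 m/s target produces a shift of roughly 173 Hz, comfortably inside a sound card's audio band, which is why the IF output can be sampled through a computer's audio input.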
Procedia PDF Downloads 398
773 Computational Insight into a Mechanistic Overview of Water Exchange Kinetics and Thermodynamic Stabilities of Bis and Tris-Aquated Complexes of Lanthanides
Authors: Niharika Keot, Manabendra Sarma
Abstract:
A thorough investigation of Ln3+ complexes with more than one inner-sphere water molecule is crucial for designing the high-relaxivity contrast agents (CAs) used in magnetic resonance imaging (MRI). This study accomplished a comparative stability analysis of two hexadentate (H3cbda and H3dpaa) and two heptadentate (H4peada and H3tpaa) ligands with Ln3+ ions. The higher stability of the hexadentate H3cbda and heptadentate H4peada ligands has been confirmed by binding affinity and Gibbs free energy analyses in aqueous solution. In addition, energy decomposition analysis (EDA) reveals the higher binding affinity of the peada4− ligand compared to the cbda3− ligand towards Ln3+ ions, due to the higher charge density of the peada4− ligand. Moreover, a mechanistic overview of the water exchange kinetics has been carried out based on the strength of the metal–water bond. The strength of the metal–water bond follows the trend Gd–O47 (w) > Gd–O39 (w) > Gd–O36 (w) in the case of the tris-aquated [Gd(cbda)(H2O)3] complex and Gd–O43 (w) > Gd–O40 (w) for the bis-aquated [Gd(peada)(H2O)2]− complex, which was confirmed by the bond lengths, electron density (ρ), and electron localization function (ELF) at the corresponding bond critical points. Our analysis also predicts that the activation energy barrier decreases with decreasing bond strength; hence kex increases. The 17O and 1H hyperfine coupling constant values of all the coordinated water molecules were found to differ, as calculated using the second-order Douglas–Kroll–Hess (DKH2) approach. Furthermore, the ionic nature of the bonding in the metal–ligand (M–L) bond was confirmed by the Quantum Theory of Atoms-In-Molecules (QTAIM) and the ELF, along with energy decomposition analysis (EDA). We hope that these results can be used as a basis for the design of highly efficient Gd(III)-based high-relaxivity MRI contrast agents for medical applications.
Keywords: MRI contrast agents, lanthanide chemistry, thermodynamic stability, water exchange kinetics
Procedia PDF Downloads 83
772 The Spatial Pattern of Economic Rents of an Airport Development Area: Lessons Learned from the Suvarnabhumi International Airport, Thailand
Authors: C. Bejrananda, Y. Lee, T. Khamkaew
Abstract:
With the rise of the importance of air transportation in the 21st century, the role of economics in airport planning and decision-making has become more important to the urban structure and land values around an airport. Therefore, this research aims to examine the relationship between an airport and its impacts on the distribution of urban land uses and land values by applying Alonso's bid-rent model. The New Bangkok International Airport (Suvarnabhumi International Airport) was taken as a case study. The analysis was made over three different time periods of airport development (after the airport site was proposed, during airport construction, and after the opening of the airport). The statistical results confirm that Alonso's model can be used to explain the impacts of the new airport only for the northeast quadrant of the airport, while proximity to the airport showed an inverse relationship with the land value of all six types of land use activities through the three periods of time. This indicates that the land value for commercial land use is the most sensitive to the location of the airport, or has the strongest requirement for accessibility to the airport, compared to residential and manufacturing land uses. Also, the bid-rent gradients of the six types of land use activities declined dramatically through the three time periods because of the Asian Financial Crisis in 1997. Therefore, the lessons learned from this research concern the reliability of the data used. The major concern involves the use of different areal units for assessing land value in different time periods, between zone blocks (1995) and grid blocks (2002, 2009). As a result, this affects the investigation of the overall trends of land value assessment, which are not readily apparent. The next concern is the availability of historical data.
Given the government's lack of collected historical data for land value assessment, some of the land value data and aerial photos are not available to cover the entire study area. Finally, the different formats of the aerial photos, hard-copy (1995) versus digital (2002, 2009), made it difficult to measure distances. These problems therefore also affect the accuracy of the results of the statistical analyses.
Keywords: airport development area, economic rents, spatial pattern, Suvarnabhumi International Airport
Procedia PDF Downloads 274
771 Mathematical Modeling of the AMCs Cross-Contamination Removal in the FOUPs: Finite Element Formulation and Application in FOUP’s Decontamination
Authors: N. Santatriniaina, J. Deseure, T. Q. Nguyen, H. Fontaine, C. Beitia, L. Rakotomanana
Abstract:
Nowadays, with increasing wafer sizes and decreasing critical dimensions in modern high-tech integrated circuit manufacturing, the microelectronics industry needs to pay maximum attention to the challenge of contamination control. The move to 300 mm is accompanied by the use of Front Opening Unified Pods (FOUPs) for wafer transport and storage. In these pods, airborne cross contamination may occur between the wafers and the pods. A predictive approach using modeling and computational methods is a very powerful way to understand and qualify AMC cross-contamination processes. This work investigates the numerical tools required to study the AMC cross-contamination transfer phenomena between wafers and FOUPs. Numerical optimization and a finite element formulation in transient analysis were established. An analytical solution of the one-dimensional problem was developed, and the calibration of the physical constants was performed by minimizing the least-square distance between the model (the analytical 1D solution) and the experimental data. The behavior of the AMCs in transient analysis was determined. The model framework preserves the classical forms of the diffusion and convection-diffusion equations and yields a consistent form of Fick's law. The adsorption process and the surface roughness effect were also translated into boundary conditions, using a Dirichlet-to-Neumann switch condition and the interface condition. The methodology is applied, first, using optimization methods with the analytical solution to define the physical constants, and second, using the finite element method including the adsorption kinetics and the Dirichlet-to-Neumann switch condition.
Keywords: AMCs, FOUP, cross-contamination, adsorption, diffusion, numerical analysis, wafers, Dirichlet to Neumann, finite element methods, Fick’s law, optimization
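The transient Fickian diffusion at the core of the model can be sketched in one dimension with explicit finite differences. This is an illustrative sketch, not the authors' finite element code: the grid spacing, time step, diffusivity, and Dirichlet end values below are assumptions, and the real formulation additionally switches to Neumann-type adsorption conditions at the surface.

```python
# Explicit finite-difference march of dc/dt = D * d2c/dx2 on a 1D grid
# with fixed (Dirichlet) concentrations at both ends (hypothetical values).
def diffuse_1d(c0, D, dx, dt, steps, left, right):
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme stability limit violated"
    c = list(c0)
    for _ in range(steps):
        # Interior update from the standard 3-point Laplacian stencil;
        # the two boundary values are re-imposed every step.
        c = [left] + [c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
                      for i in range(1, len(c) - 1)] + [right]
    return c
```

Run long enough, the profile relaxes to the linear steady state between the two boundary concentrations, which is a convenient sanity check before fitting physical constants against experimental curves.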
Procedia PDF Downloads 506
770 Caregiver Training Results in Accurate Reporting of Stool Frequency
Authors: Matthew Heidman, Susan Dallabrida, Analice Costa
Abstract:
Background: Accuracy of caregiver-reported outcomes is essential for the success of infant growth and tolerability studies. Crying/fussiness, stool consistency, and other gastrointestinal characteristics are important parameters regarding tolerability, yet inter-caregiver reporting can involve a significant amount of subjectivity and vary greatly within a study, compromising data. This study sought to elucidate how caregiver-reported questions related to stool frequency are answered before and after a short amount of training, and how training impacts caregivers’ understanding and how they would answer the question. Methods: A digital survey was issued for 90 days in the US (n=121) and 30 days in Mexico (n=88), targeting respondents with children ≤4 years of age. Respondents were asked a question in two formats, first without a line of training text and second with a line of training text. The question set was as follows: “If your baby had stool in his/her diaper and you changed the diaper and 10 min later there was more stool in the diaper, how many stools would you report this as?”, followed by the same question beginning with “If you were given the instruction that IF there are at least 5 minutes in between stools, then it counts as two (2) stools…”. Four response items were provided for both questions: 1) 2 stools, 2) 1 stool, 3) it depends on how much stool was in the first versus the second diaper, 4) there is not enough information to be able to answer the question. Response frequencies between questions were compared. Results: Responses to the question without training saw some variability in the US, with 69% selecting “2 stools”, 11% selecting “1 stool”, 14% selecting “it depends on how much stool was in the first versus the second diaper”, and 7% selecting “there is not enough information to be able to answer the question”; in Mexico, respondents selected 9%, 78%, 13%, and 0%, respectively.
However, responses to the question after training saw more consolidation in the US, with 85% of respondents selecting “2 stools”, representing an increase in those selecting the correct answer. Additionally, in Mexico, 84% of respondents selected “1 episode”, representing an increase in those selecting the correct response. Conclusions: Caregiver-reported outcomes are critical for infant growth and tolerability studies; however, they can be highly subjective and show high variability of responses without guidance. Training is critical to standardize all caregivers’ perspectives on how to answer questions accurately in order to provide an accurate dataset.
Keywords: infant nutrition, clinical trial optimization, stool reporting, decentralized clinical trials
Procedia PDF Downloads 96
769 Simulating the Surface Runoff for the Urbanized Watershed of Mula-Mutha River from Western Maharashtra, India
Authors: Anargha A. Dhorde, Deshpande Gauri, Amit G. Dhorde
Abstract:
The Mula-Mutha basin is one of the most speedily urbanizing watersheds, wherein two major urban centers, Pune and Pimpri-Chinchwad, have developed at a shocking rate in the last two decades. Such changing land use/land cover (LULC) is prone to hydrological problems, and flash floods are a frequent eventuality in the lower reaches of the basin. The present research brings out the impact of varying LULC and impervious surfaces on urban surface hydrology and generates storm-runoff scenarios for the hydrological units. Two multi-temporal satellite images were processed and supervised classification was performed with >75% accuracy. The built-up area increased from 14.4% to 34.37% over the 28-year span, concentrated in and around the Pune-PCMC region. Impervious surfaces were obtained from population-calibrated multiple regression models. Almost 50% of the watershed area is impervious, which contributes to increased surface runoff and flash floods. The SCS-CN method was employed to calculate the surface runoff of the watershed. The comparison between calculated and measured values of runoff was performed in a statistically precise way, which showed no significant difference. Increasing built-up areas, as well as impervious surface areas due to rapid urbanization and industrialization, may lead to high runoff volumes in the basin, especially in the urbanized areas of the watershed and along the major transportation arteries. Simulations generated with 50 mm and 100 mm rainstorm depths conspicuously showed that most of the changes in terms of increased runoff are constricted to the highly urbanized areas. Considering the whole watershed area, a runoff of 39 m³ was generated with 1'' of rainfall, whereas the urbanized areas of the basin (Pune and Pimpri-Chinchwad) alone generated 11,154 m³ of runoff.
Such analysis is crucial in providing information regarding the intensity and location of these events, which proves instrumental in formulating proper mitigation measures and rehabilitation strategies.
Keywords: land use/land cover, LULC, impervious surfaces, surface hydrology, storm-runoff scenarios
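The SCS-CN relation used above can be written as a small standalone helper. The formula itself is the standard curve-number method; the curve number and rainfall values in the example are illustrative, not the basin's calibrated parameters.

```python
# SCS Curve Number runoff depth (all depths in mm):
#   S  = 25400/CN - 254        (potential maximum retention)
#   Ia = 0.2 * S               (initial abstraction)
#   Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else Q = 0
def scs_runoff_mm(p_mm, cn):
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

Multiplying the runoff depth by the contributing area (Q in mm times area in m², divided by 1000) gives the runoff volume in m³, which is how per-unit depths become the basin-scale volumes quoted above.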
Procedia PDF Downloads 218
768 Experimental and Numerical Study of Ultra-High-Performance Fiber-Reinforced Concrete Column Subjected to Axial and Eccentric Loads
Authors: Chengfeng Fang, Mohamed Ali Sadakkathulla, Abdul Sheikh
Abstract:
Ultra-high-performance fiber-reinforced concrete (UHPFRC) is a specially formulated cement-based composite characterized by an ultra-high compressive strength (fc’ = 240 MPa) and a low water-cement ratio (W/B = 0.2). With such material characteristics, UHPFRC is favored for the design and construction of structures requiring high structural performance and slender geometries. Unlike conventional concrete, the structural performance of members manufactured with UHPFRC has not yet been fully studied, particularly for UHPFRC columns with high slenderness. In this study, the behavior of slender UHPFRC columns under concentric or eccentric load is investigated both experimentally and numerically. Four slender UHPFRC columns were tested under eccentric loads with eccentricities of 0 mm, 35 mm, 50 mm, and 85 mm, respectively, and one UHPFRC beam was tested under four-point bending. Finite element (FE) analysis was conducted with a concrete damage plasticity (CDP) model to simulate the load versus mid-height or mid-span deflection relationships and damage patterns of all UHPFRC members. Simulated results were compared against the experimental results and observations to gain confidence in the FE model, and this model was further extended to conduct parametric studies, which aim to investigate the effects of slenderness on failure modes and load-moment interaction relationships. Experimental results showed that the load-bearing capacities of the slender columns reduced with an increase in eccentricity. Comparisons between the load versus mid-height and mid-span deflection relationships, as well as the damage patterns of all UHPFRC members obtained both experimentally and numerically, demonstrated the high accuracy of the FE simulations.
Based on the available FE model, the subsequent parametric study indicated that a further increase in the slenderness of the column resulted in significant decreases in the load-bearing capacities, ductility index, and flexural bending capacities.
Keywords: eccentric loads, ductility index, RC column, slenderness, UHPFRC
Procedia PDF Downloads 130
767 Determining Components of Deflection of the Vertical in Owerri West Local Government, Imo State Nigeria Using Least Square Method
Authors: Chukwu Fidelis Ndubuisi, Madufor Michael Ozims, Asogwa Vivian Ndidiamaka, Egenamba Juliet Ngozi, Okonkwo Stephen C., Kamah Chukwudi David
Abstract:
The deflection of the vertical is a quantity used in reducing geodetic measurements related to geoidal networks to the ellipsoidal plane, and it is essential in geoid modeling processes. Computing the deflection-of-the-vertical components of a point in a given area is necessary in evaluating the standard errors along the north-south and east-west directions. A combined approach to determining the deflection-of-the-vertical components provides improved results but is labor-intensive without an appropriate method. The least square method makes use of redundant observations in modeling a given set of problems that obeys certain geometric conditions. This research work aims to compute the deflection-of-the-vertical components for the Owerri West local government area of Imo State using the geometric method as the field technique. In this method, a combination of Global Positioning System observations in static mode and precise leveling was utilized: the geodetic coordinates of points established within the study area were determined by GPS observation, and their orthometric heights through precise leveling. By least squares, using a Matlab programme, the estimated deflection-of-the-vertical component parameters for the common station were -0.0286 and -0.0001 arc seconds for the north-south and east-west components, respectively. The associated standard errors of the processed vectors of the network were computed. The computed standard errors of the north-south and east-west components were 5.5911e-005 and 1.4965e-004 arc seconds, respectively. Therefore, including the derived deflection-of-the-vertical components in the ellipsoidal model will yield high observational accuracy, since an ellipsoidal model alone is not tenable for high-quality work due to its large observational error.
It is therefore important to include the determined deflection-of-the-vertical components for Owerri West Local Government in Imo State, Nigeria.
Keywords: deflection of vertical, ellipsoidal height, least square, orthometric height
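The least-squares adjustment behind the estimates above can be sketched for the two-unknown case (the north-south and east-west components). This is an illustrative sketch, not the authors' Matlab programme: it solves the normal equations A^T A x = A^T b in closed form for a design matrix with two columns, with toy observations.

```python
# Two-parameter least squares via the normal equations (hypothetical data).
def lstsq_2(A, b):
    """A: list of (a1, a2) design-matrix rows; b: list of observations.
    Returns the (x1, x2) minimizing the sum of squared residuals."""
    n11 = sum(r[0] * r[0] for r in A)
    n12 = sum(r[0] * r[1] for r in A)
    n22 = sum(r[1] * r[1] for r in A)
    t1 = sum(r[0] * y for r, y in zip(A, b))
    t2 = sum(r[1] * y for r, y in zip(A, b))
    det = n11 * n22 - n12 * n12   # assumed nonzero (independent columns)
    return ((n22 * t1 - n12 * t2) / det, (n11 * t2 - n12 * t1) / det)
```

The redundancy of the observations (more rows than unknowns) is what allows the standard errors of the estimated components to be computed from the residuals afterwards.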
Procedia PDF Downloads 209
766 Developing Manufacturing Process for the Graphene Sensors
Authors: Abdullah Faqihi, John Hedley
Abstract:
Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. They can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, in addition to environmental monitoring. The development of biosensors aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample; it carries out biological detection via a linked transducer and transmits the biological response as an electrical signal. Stability, selectivity, and sensitivity are the dynamic and static characteristics that affect and dictate the quality and performance of biosensors. In this research, an experimental study of the laser scribing technique for processing graphene oxide inside a vacuum chamber is presented. The processing of graphene oxide (GO) was achieved using the laser scribing technique. The effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. A GO solution was coated onto a LightScribe DVD, and the laser scribing technique was applied to reduce the GO layers and generate rGO. The micro-details of the morphological structures of rGO and GO were visualised and examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode model, made under normal atmospheric conditions, whereas the second model was a developed graphene electrode fabricated under vacuum using a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process.
The parameters to be assessed include the layer thickness and the continuous environment. The results presented show high accuracy and repeatability while achieving low-cost production.
Keywords: laser scribing, LightScribe DVD, graphene oxide, scanning electron microscopy
Procedia PDF Downloads 120
765 Study Variation of Blade Angle on the Performance of the Undershot Waterwheel on the Pico Scale
Authors: Warjito, Kevin Geraldo, Budiarso, Muhammad Mizan, Rafi Adhi Pranata, Farhan Rizqi Syahnakri
Abstract:
According to data from 2021, the share of households in Indonesia with access to on-grid electricity is claimed to have reached 99.28%, which means that around 0.7% of Indonesia's population (1.95 million people) still have no proper access to electricity, and 38.1% of them live in remote areas in Nusa Tenggara Timur. Remote areas are classified as areas with a small population of 30 to 60 families; they have limited infrastructure, scarce access to electricity and clean water, a relatively weak economy, and lag in access to technological innovation, and their inhabitants earn a living mostly as farmers or fishermen. These people still need electricity but cannot afford the high cost of electricity from national on-grid sources. To overcome this, it is proposed that a hydroelectric power plant driven by a pico-hydro turbine with an undershot water wheel is a suitable pico-hydro technology, because the design, materials and installation of this turbine are believed to be easier (i.e., in operation and maintenance) and cheaper (i.e., in investment and operating costs) than any other type. A comparative study of the blade angle of the undershot water wheel is discussed comprehensively. This study looks into the variation of curved blades on an undershot water wheel that produces the maximum hydraulic efficiency. In this study, the blade angles were varied as 180°, 160°, and 140°. Two methods of analysis are used: analytical and numerical. The analytical method is based on calculations of the torque and rotational speed of the turbine, which are used to obtain the input and output power of the turbine. The numerical method uses the ANSYS application to simulate the flow during the collision with the designed turbine blades.
It can be concluded, based on the analytical and numerical methods, that the best blade angle is 140°, with an efficiency of 43.52% from the analytical method and 37.15% from the numerical method.
Keywords: pico hydro, undershot waterwheel, blade angle, computational fluid dynamics
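The analytical method described above reduces to a power balance: shaft output (torque times angular speed) over hydraulic input. A minimal sketch of that calculation, with illustrative values rather than the study's actual data:

```python
def waterwheel_efficiency(torque, omega, rho, flow_rate, head):
    """Hydraulic efficiency = shaft output power / hydraulic input power.

    torque    : shaft torque [N*m]
    omega     : rotational speed [rad/s]
    rho       : water density [kg/m^3]
    flow_rate : volumetric flow rate [m^3/s]
    head      : available head [m]
    """
    g = 9.81
    p_out = torque * omega               # mechanical output power [W]
    p_in = rho * g * flow_rate * head    # hydraulic input power [W]
    return p_out / p_in

# Illustrative (made-up) numbers for a pico-scale undershot wheel
eta = waterwheel_efficiency(torque=12.0, omega=3.0, rho=1000.0,
                            flow_rate=0.02, head=0.42)
```

With these placeholder inputs the efficiency lands near the ~40% range reported above, but the numbers are not taken from the paper.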
Procedia PDF Downloads 77764 Boundary Layer Control Using a Magnetic Field: A Case Study in the Framework of Ferrohydrodynamics
Authors: C. F. Alegretti, F. R. Cunha, R. G. Gontijo
Abstract:
This work investigates the effects of an applied magnetic field on the geometry-driven boundary layer detachment flow of a ferrofluid over a sudden expansion. Both the constitutive equation and the global magnetization equation for a ferrofluid are considered; therefore, the proposed formulation consists of a coupled magnetic-hydrodynamic problem. Computational simulations are carried out in order to explore not only the viability of controlling flow instabilities but also the consistency of theoretical aspects. The unidirectional sudden expansion in a ferrofluid flow is investigated numerically from the perspective of ferrohydrodynamics in a two-dimensional domain using a finite differences method. The boundary layer detachment induced by the sudden expansion results in a recirculating zone, which has been extensively studied in non-magnetic hydrodynamic problems for a wide range of Reynolds numbers. Similar investigations can be found in the literature regarding the sudden expansion under the magnetohydrodynamics framework, but none considering a colloidal suspension of magnetic particles outside the superparamagnetic regime. The vorticity-stream function formulation is implemented and results in a clear coupling between the flow vorticity and its magnetization field. Our simulations indicate a systematic decay in the length of the recirculation zone as physical parameters of the flow, such as the intensity of the applied field and the volume fraction of particles, increase. The results are all discussed from a physical point of view in terms of the dynamical non-dimensional parameters. We argue that the reduction of the recirculation region of the flow is a direct consequence of the magnetic torque balancing the action of the torque produced by the viscous and inertial forces of the flow. For the limit of small Reynolds and magnetic Reynolds parameters, the diffusion of vorticity balances the diffusion of the magnetic torque on the flow.
These mechanisms control the growth of the recirculation region.
Keywords: boundary layer detachment, ferrofluid, ferrohydrodynamics, magnetization, sudden expansion
Procedia PDF Downloads 203763 A Coupled Model for Two-Phase Simulation of a Heavy Water Pressure Vessel Reactor
Authors: D. Ramajo, S. Corzo, M. Nigro
Abstract:
A multi-dimensional computational fluid dynamics (CFD) two-phase model was developed with the aim of simulating the in-core coolant circuit of a pressurized heavy water reactor (PHWR) of a commercial nuclear power plant (NPP). Because this PHWR is of the reactor pressure vessel (RPV) type, detailed three-dimensional (3D) models of the large reservoirs of the RPV (the upper and lower plenums and the downcomer) were coupled with an in-house finite volume one-dimensional (1D) code in order to model the 451 coolant channels housing the nuclear fuel. In the 1D code, suitable empirical correlations were used to take into account the in-channel distributed (friction) and concentrated (spacer grids, inlet and outlet throttles) pressure losses. A local power distribution at each coolant channel was also taken into account. The heat transfer between the coolant and the surrounding moderator was accurately calculated using a two-dimensional theoretical model. The implementation of subcooled boiling and condensation models in the 1D code, along with the use of functions representing the thermal and dynamic properties of the coolant and moderator (heavy water), allows estimation of the in-core steam generation under nominal flow conditions for a generic fission power distribution. The in-core mass flow distribution results for steady-state nominal conditions are in agreement with design expectations, providing a first assessment of the coupled 1D/3D model. Results for the nominal condition were compared with those obtained with a previous 1D/3D single-phase model, yielding more realistic temperature patterns and also allowing visualization of low void fractions inside the upper plenum. It must be mentioned that the current results were obtained by imposing prescribed fission power functions from the literature.
Therefore, the results are presented with the aim of pointing out the potential of the developed model.
Keywords: PHWR, CFD, thermo-hydraulic, two-phase flow
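The 1D code's distributed (friction) and concentrated (spacer grid, throttle) pressure losses both scale with the dynamic pressure; a minimal sketch of that bookkeeping, with illustrative values rather than plant data:

```python
def channel_pressure_drop(f, length, d_h, k_sum, rho, v):
    """Total channel pressure loss [Pa]: Darcy friction over the channel
    length plus a sum of concentrated loss coefficients (spacer grids,
    inlet/outlet throttles), both multiplying the dynamic pressure
    rho * v^2 / 2."""
    dynamic_pressure = 0.5 * rho * v * v
    return (f * length / d_h + k_sum) * dynamic_pressure

# Illustrative coolant-channel values (placeholders, not design data):
# friction factor 0.02, 5.3 m channel, 8 mm hydraulic diameter,
# summed concentrated coefficient 6, hot coolant at 750 kg/m^3, 7 m/s
dp = channel_pressure_drop(f=0.02, length=5.3, d_h=0.008,
                           k_sum=6.0, rho=750.0, v=7.0)
```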
Procedia PDF Downloads 468762 A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling
Authors: Addin Osman, Anwar Ali Yahya, Mohammed Basit Kamal
Abstract:
Datasets or collections are becoming important assets in their own right, and they can now be accepted as a primary intellectual output of research. The quality and usage of a dataset depend mainly on the context under which it has been collected, processed, analyzed, validated, and interpreted. This paper presents a collection of program educational objectives mapped to student outcomes, collected from self-study reports prepared by 32 engineering programs accredited by ABET. The manual mapping (classification) of this data is a notoriously tedious, time-consuming process; in addition, it requires experts in the area, who are mostly not available. The operational settings under which the collection was produced are described. The collection has been cleansed and preprocessed, some features have been selected, and a preliminary exploratory data analysis has been performed so as to illustrate the properties and usefulness of the collection. Finally, the collection has been benchmarked using nine of the most widely used supervised multi-label classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-label sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors and Back-Propagation Multi-Label Learning). The techniques have been compared to each other using several well-known measures, including Accuracy, Hamming Loss, Micro-F, and Macro-F. The Ensemble of Classifier Chains and Ensemble of Pruned Sets achieved encouraging performance compared to the other multi-label classification methods tested; the Classifier Chains method showed the worst performance.
To recap, the benchmark achieved promising results; the preliminary exploratory data analysis performed on the collection suggests new directions for research and provides a baseline for future studies.
Keywords: ABET, accreditation, benchmark collection, machine learning, program educational objectives, student outcomes, supervised multi-class classification, text mining
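Two of the evaluation measures named above, Hamming Loss and Micro-F, can be computed directly from binary label matrices; a small self-contained sketch on toy data:

```python
def hamming_loss(y_true, y_pred):
    """Fraction of (sample, label) slots predicted incorrectly."""
    total = sum(len(row) for row in y_true)
    wrong = sum(t != p for ts, ps in zip(y_true, y_pred)
                for t, p in zip(ts, ps))
    return wrong / total

def micro_f1(y_true, y_pred):
    """F-measure with true/false positives pooled over every label."""
    pairs = [(t, p) for ts, ps in zip(y_true, y_pred)
             for t, p in zip(ts, ps)]
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    return 2 * tp / (2 * tp + fp + fn)

# Toy multi-label data: 2 samples x 3 labels (not from the collection)
y_true = [[1, 0, 1], [0, 1, 0]]
y_pred = [[1, 0, 0], [0, 1, 1]]
```

Here one slot per sample is wrong, so the Hamming loss is 2/6 and the micro-averaged F1 is 2/3.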
Procedia PDF Downloads 172761 Flow Reproduction Using Vortex Particle Methods for Wake Buffeting Analysis of Bluff Structures
Authors: Samir Chawdhury, Guido Morgenthal
Abstract:
The paper presents a novel extension of Vortex Particle Methods (VPM) in which the study aims to reproduce a template simulation of the complex flow field generated by impulsively started flow past an upstream bluff body at a certain Reynolds number Re. The vibration of a structural system under upstream wake flow is often considered its governing design criterion; therefore, particular attention is given in this study to the reproduction of the wake flow simulation. The basic methodology for the implementation of the flow reproduction requires downstream velocity sampling from the template flow simulation: at particular distances from the upstream section, the instantaneous velocity components are sampled using a series of square sampling cells arranged vertically, where each cell contains four velocity sampling points at its corners. Since the grid-free Lagrangian VPM algorithm discretises vorticity on particle elements, the method requires transformation of the velocity components into vortex circulation; finally, the reproduction of the template flow field is simulated by seeding these vortex circulations, or particles, into a free-stream flow. It is noteworthy that the vortex particles have to be released into the free stream at exactly the same rate as the velocity sampling. Studies have been carried out, specifically, on different sampling rates and velocity sampling positions to find their effects on flow reproduction quality. The quality assessments are done mainly by comparing the characteristic wind flow profiles at a downstream flow-monitoring profile using several statistical turbulence measures. Additionally, comparisons are performed using velocity time histories, snapshots of the flow fields, and the vibration of a downstream bluff section, by performing wake buffeting analyses of the section under the original and reproduced wake flows. A convergence study is performed for the validation of the method.
The study also describes how flow reproduction can be achieved with less computational effort.
Keywords: vortex particle method, wake flow, flow reproduction, wake buffeting analysis
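The transformation from sampled corner velocities to vortex circulation amounts to a discrete line integral of velocity around each square sampling cell. A minimal sketch of that step, with the cell layout assumed from the description above:

```python
def cell_circulation(corner_velocities, h):
    """Circulation around a square sampling cell of side h from the (u, v)
    velocities at its four corners, listed counter-clockwise starting at
    the lower-left corner; trapezoidal line integral of v . dl per edge."""
    (u0, v0), (u1, v1), (u2, v2), (u3, v3) = corner_velocities
    # edges traversed: bottom (+x), right (+y), top (-x), left (-y)
    return 0.5 * h * ((u0 + u1) + (v1 + v2) - (u2 + u3) - (v3 + v0))
```

For a rigid-body rotation with angular rate w, this recovers the exact circulation 2*w*h^2 (vorticity times cell area), which is a convenient sanity check for the sampling-to-circulation step.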
Procedia PDF Downloads 311760 Biochemical Efficacy, Molecular Docking and Inhibitory Effect of 2,3-Dimethylmaleic Anhydride on Acetylcholinesterases
Authors: Kabrambam D. Singh, Dinabandhu Sahoo, Yallappa Rajashekar
Abstract:
Evolution has caused many insects to develop resistance to several synthetic insecticides. This problem, along with persisting concern about the health and environmental safety of existing synthetic insecticides, has urged the scientific fraternity to look for new plant-based natural insecticides with an inherently eco-friendly nature. Colocasia esculenta var. esculenta (L.) Schott (family Araceae) is widely grown throughout South-East Asian countries for its edible corms and leaves. Various physico-chemical and spectroscopic techniques (IR, 1H NMR, 13C NMR and mass spectrometry) were used for the isolation and characterization of the bioactive molecule 2,3-dimethylmaleic anhydride (3,4-dimethyl-2,5-furandione). This compound was found to be highly toxic, even at low concentrations, against several stored grain pests when used as a biofumigant. Experimental studies on the mode of action of 2,3-dimethylmaleic anhydride revealed that the biofumigant acts as an inhibitor of the acetylcholinesterase enzyme in cockroaches and stored grain insects. The knockdown activity of the bioactive compound is concurrent with in vivo inhibition of AChE; at the KD99 dosage, the bioactive molecule showed more than 90% inhibition of AChE activity in the test insects. The molecule also proved to affect the antioxidant enzyme system, superoxide dismutase (SOD) and catalase (CAT), and was found to decrease the reduced glutathione (GSH) level in treated insects. The above results indicate involvement of AChE inhibition and oxidative imbalance as the potential mode of action of 2,3-dimethylmaleic anhydride. In addition, the study uses computational docking to elaborate the possible interaction of 2,3-dimethylmaleic anhydride with the acetylcholinesterase (AChE) enzyme of Periplaneta americana.
Finally, the results indicate that the toxicity of 2,3-dimethylmaleic anhydride might be associated with inhibition of AChE activity and oxidative imbalance.
Keywords: 2,3-dimethylmaleic anhydride, Colocasia esculenta var. esculenta (L.) Schott, biofumigant, acetylcholinesterase, antioxidant enzyme, molecular docking
Procedia PDF Downloads 160759 A Single-Channel BSS-Based Method for Structural Health Monitoring of Civil Infrastructure under Environmental Variations
Authors: Yanjie Zhu, André Jesus, Irwanda Laory
Abstract:
Structural Health Monitoring (SHM), involving data acquisition, data interpretation, and a decision-making system, aims to continuously monitor the structural performance of civil infrastructures under various in-service circumstances. The main value and purpose of SHM is identifying damage through the data interpretation system. Research on SHM has expanded in the last decades, and a large volume of data is recorded every day owing to the dramatic development of sensor techniques and steady progress in signal processing techniques. However, efficient and reliable data interpretation for damage detection under environmental variations is still a big challenge: structural damage might be masked because variations in the measured data can be the result of environmental variations. This research reports a novel method based on single-channel Blind Signal Separation (BSS), which extracts environmental effects from measured data directly, without any prior knowledge of the structure's loading and environmental conditions. Despite successful applications in audio processing and biomedical research, BSS has never been used to detect damage under varying environmental conditions. The proposed method optimizes and combines Ensemble Empirical Mode Decomposition (EEMD), Principal Component Analysis (PCA), and Independent Component Analysis (ICA) to separate the structural responses due to different loading conditions from a single-channel input signal, with ICA applied to the dimension-reduced output of EEMD. A numerical simulation of a truss bridge, inspired by the New Joban Line Arakawa Railway Bridge, is used to validate this method. All results demonstrate that the single-channel BSS-based method can recover temperature effects from the mixed structural response recorded by a single sensor with convincing accuracy.
This will be the foundation of further research on direct damage detection under varying environments.
Keywords: damage detection, ensemble empirical mode decomposition (EEMD), environmental variations, independent component analysis (ICA), principal component analysis (PCA), structural health monitoring (SHM)
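Of the three techniques combined in the pipeline, the PCA dimension-reduction step is the easiest to illustrate in isolation. A minimal closed-form sketch for two-dimensional data (a stand-in for the higher-dimensional stack of EEMD outputs):

```python
import math

def first_principal_component(xs, ys):
    """PCA on 2-D data via the closed-form eigendecomposition of the 2x2
    covariance matrix; returns (unit leading eigenvector, leading eigenvalue)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    # leading eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)
    if abs(sxy) < 1e-15:                     # already axis-aligned
        vec = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    else:                                    # eigenvector for lam
        norm = math.hypot(sxy, lam - sxx)
        vec = (sxy / norm, (lam - sxx) / norm)
    return vec, lam
```

On points lying exactly along y = 2x, the leading component recovers the direction (1, 2) and carries all of the variance; in the actual method the same projection is what feeds the ICA stage.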
Procedia PDF Downloads 304758 The Effects of Damping Devices on Displacements, Velocities and Accelerations of Structures
Authors: Radhwane Boudjelthia
Abstract:
The most recent earthquakes that occurred in the world, and particularly in Algeria, have killed thousands of people and caused severe damage. The example etched in our memory is the earthquake in the regions of Boumerdes and Algiers (the Boumerdes earthquake of May 21, 2003). For all the actors involved in the building process, an earthquake is the litmus test for construction. The goal we set ourselves is to contribute to the implementation of a thoughtful approach to the seismic protection of structures. For many engineers, the most conventional approach to protecting works (buildings and bridges) from the effects of earthquakes is to increase rigidity. This approach is not always effective, especially in contexts that favor resonance and the amplification of seismic forces. The field of earthquake engineering has therefore made significant inroads, catalyzed among other things by the development of computational techniques and the use of powerful test facilities. This has led to the emergence of several innovative technologies, such as the introduction of special isolation devices between the infrastructure and the superstructure. This approach, commonly known as "seismic isolation", absorbs significant forces without damage to the structure, thus ensuring the protection of lives and property. In addition, the restraints imposed on the construction by ground shaking are located mainly at the supports. With these movements, the natural period of the construction increases and the seismic loads are reduced, so the seismic motion is attenuated. Likewise, the base isolation mechanism may be used in combination with earthquake dampers in order to control the deformation of the isolation system and the absolute displacement of the superstructure located above the isolation interface. Alternatively, these earthquake dampers can be used alone to reduce the oscillation amplitudes and thus reduce the seismic loads.
The use of damping devices represents an effective solution for the rehabilitation of existing structures. Since all of these acceleration-reducing means are considered passive, much research has been conducted over several years to develop active control systems for the response of buildings to earthquakes.
Keywords: earthquake, building, seismic forces, displacement, resonance, response
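The period-lengthening effect of base isolation described above follows from the single-degree-of-freedom relation T = 2π√(m/k): lowering the effective lateral stiffness lengthens the natural period and shifts the structure away from the high-acceleration region of a typical design spectrum. A sketch with illustrative mass and stiffness values:

```python
import math

def natural_period(mass, stiffness):
    """T = 2*pi*sqrt(m/k) for a single-degree-of-freedom idealisation
    (mass in kg, stiffness in N/m, period in s)."""
    return 2.0 * math.pi * math.sqrt(mass / stiffness)

# Illustrative numbers (not from the paper): the isolation layer cuts the
# effective lateral stiffness by a factor of 40, lengthening the period.
t_fixed = natural_period(mass=5.0e5, stiffness=2.0e8)     # fixed base
t_isolated = natural_period(mass=5.0e5, stiffness=5.0e6)  # isolated base
```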
Procedia PDF Downloads 127757 Application of a Synthetic DNA Reference Material for Optimisation of DNA Extraction and Purification for Molecular Identification of Medicinal Plants
Authors: Mina Kalantarzadeh, Claire Lockie-Williams, Caroline Howard
Abstract:
DNA barcoding is increasingly used for the identification of medicinal plants worldwide. In the last decade, a large number of DNA barcodes have been generated and their application to species identification explored. The success of the DNA barcoding process relies on the accuracy of the results of the polymerase chain reaction (PCR) amplification step, which can be negatively affected by the presence of inhibitors or degraded DNA in herbal samples. An established DNA reference material can be used to support molecular characterisation protocols and prove system suitability, for fast and accurate identification of plant species. The present study describes the use of a novel reference material, the trnH-psbA British Pharmacopoeia Nucleic Acid Reference Material (trnH-psbA BPNARM), which was produced to aid in the identification of Ocimum tenuiflorum L., a widely used herb. During DNA barcoding of O. tenuiflorum, PCR amplification of the isolated DNA produced inconsistent results, suggesting an issue with either the method or the DNA quality of the tested samples. The trnH-psbA BPNARM was produced and tested to investigate the issues arising during PCR amplification. It was added to the plant material as control DNA before extraction and was co-extracted and amplified by PCR. PCR analyses revealed that the amplification was not as successful as expected, suggesting that it was affected by inhibitors co-extracted from the plant material. Various potential issues were assessed during DNA extraction, and optimisations were made accordingly. A DNA barcoding protocol for O. tenuiflorum, including the reference sequence, was published in the British Pharmacopoeia 2016. An accelerated degradation test, which investigates the stability of the reference material over time, demonstrated that the trnH-psbA BPNARM remained stable when stored at 56 °C for a year.
Using this protocol and the trnH-psbA reference material provides a fast and accurate method for the identification of O. tenuiflorum. The optimisation of the DNA extraction using the trnH-psbA BPNARM provided a signposting method that can assist in overcoming common problems encountered when using molecular methods with medicinal plants.
Keywords: degradation, DNA extraction, nucleic acid reference material, trnH-psbA
Procedia PDF Downloads 199756 Study on Optimization of Air Infiltration at Entrance of a Commercial Complex in Zhejiang Province
Authors: Yujie Zhao, Jiantao Weng
Abstract:
In the past decade, with the rapid development of China's economy, the purchasing power and material demands of residents have risen, resulting in the rapid emergence of public buildings such as large shopping malls. However, architects usually focus on the internal functions and circulation of these buildings, ignoring the impact of the environment on the subjective experience of building users. In Zhejiang province alone, the infiltration of cold air in winter frequently occurs at the entrances of sizeable commercial complexes in operation, affecting the environmental comfort of the building lobby and internal public spaces. At present, these adverse effects are usually reduced by adding active equipment, such as air curtains to block air exchange or additional heating air conditioners. From the perspective of energy consumption, the infiltration of cold air at the entrance increases the heat consumption of the indoor heating equipment, which indirectly causes considerable economic losses over the whole winter heating season. It is therefore of considerable significance to explore entrance forms that improve the environmental comfort of commercial buildings and save energy. In this paper, a commercial complex in Hangzhou with an apparent cold air infiltration problem is selected as the research object for modelling. The environmental parameters of the building entrance, including temperature, wind speed, and infiltration air volume, are obtained by Computational Fluid Dynamics (CFD) simulation, from which the heat consumption caused by natural air infiltration in winter, and its potential economic loss, is estimated as the objective metric. This study finally obtains the optimization direction for the entrance form of the commercial complex by comparing the simulation results with those of other local commercial complex projects with different entrance forms.
The conclusions will guide the entrance design of the same type of commercial complex in this area.
Keywords: air infiltration, commercial complex, heat consumption, CFD simulation
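The heat consumption attributed to infiltration is, at its core, a sensible-heat balance on the incoming air. A hedged sketch with illustrative values (not the project's measured data):

```python
def infiltration_heat_loss(v_dot, t_in, t_out, rho=1.2, cp=1005.0):
    """Sensible heating load [W] needed to warm infiltrating outdoor air:
    rho [kg/m^3] * cp [J/(kg*K)] * volume flow [m^3/s] * (t_in - t_out) [K]."""
    return rho * cp * v_dot * (t_in - t_out)

# Illustrative entrance: 1.5 m^3/s of 2 degC outdoor air into a 20 degC lobby
q_watts = infiltration_heat_loss(v_dot=1.5, t_in=20.0, t_out=2.0)

# Rough seasonal energy, assuming 90 heating days x 12 operating hours/day
season_kwh = q_watts * 90 * 12 / 1000.0
```

Multiplying the seasonal kWh by a local electricity or heat tariff gives the "potential economic loss" metric the study estimates from its CFD-derived infiltration volumes.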
Procedia PDF Downloads 132755 Contrast Media Effects and Radiation Dose Assessment in Contrast Enhanced Computed Tomography
Authors: Buhari Samaila, Sabiu Abdullahi, Buhari Maidamma
Abstract:
Background: Contrast-enhanced computed tomography (CE-CT) is a technique that uses contrast media to improve image quality and diagnostic accuracy. It is a widely used imaging modality in medical diagnostics, offering high-resolution images for accurate diagnosis. However, concerns regarding the potential adverse effects of contrast media and radiation dose exposure have prompted ongoing investigation, making it important to assess the effects of contrast media and radiation dose in CE-CT procedures. Objective: This study aims to assess the effects of contrast media and radiation dose in contrast-enhanced computed tomography (CECT) procedures. Methods: A comprehensive review of the literature was conducted to identify studies related to contrast media effects and radiation dose assessment in CECT. Relevant data, including location, type of research, objective, method, findings, conclusion, authors, and year of publication, were extracted, analyzed, and reported. Results: The findings revealed that several studies have investigated the impact of contrast media and radiation dose in CECT procedures, with iodinated contrast agents being the most commonly employed. Adverse effects associated with contrast media administration were reported, including allergic reactions, nephrotoxicity, and thyroid dysfunction, albeit at relatively low incidence rates. Additionally, radiation dose levels varied depending on the imaging protocol and the anatomical region scanned. Efforts to minimize radiation exposure through optimization techniques were evident across studies. Conclusion: Contrast-enhanced computed tomography (CECT) remains an invaluable tool in medical imaging; however, careful consideration of contrast media effects and radiation dose exposure is imperative.
Healthcare practitioners should weigh the diagnostic benefits against potential risks, employing strategies to mitigate adverse effects and optimize radiation dose levels for patient safety and effective diagnosis. Further research is warranted to enhance the understanding and management of contrast media effects and radiation dose optimization in CECT procedures.
Keywords: CT, contrast media, radiation dose, effect of radiation
Procedia PDF Downloads 21754 Feasibility of Voluntary Deep Inspiration Breath-Hold Radiotherapy Technique Implementation without Deep Inspiration Breath-Hold-Assisting Device
Authors: Auwal Abubakar, Shazril Imran Shaukat, Noor Khairiah A. Karim, Mohammed Zakir Kassim, Gokula Kumar Appalanaido, Hafiz Mohd Zin
Abstract:
Background: Voluntary deep inspiration breath-hold radiotherapy (vDIBH-RT) is an effective cardiac dose reduction technique during left breast radiotherapy. This study aimed to assess the accuracy of the implementation of the vDIBH technique among left breast cancer patients without the use of a special device such as a surface-guided imaging system. Methods: The vDIBH-RT technique was implemented among thirteen (13) left breast cancer patients at the Advanced Medical and Dental Institute (AMDI), Universiti Sains Malaysia. Breath-hold monitoring was performed based on breath-hold skin marks and laser light congruence observed on zoomed CCTV images from the control console during each delivery. The initial setup was verified using cone beam computed tomography (CBCT) during breath-hold. Each field was delivered using multiple beam segments to allow a delivery time of 20 seconds, which patients can tolerate in breath-hold. The data were analysed using an in-house MATLAB algorithm. The PTV margin was computed based on van Herk's margin recipe. Results: The setup errors analysed from CBCT show that the population systematic errors in the lateral (x), longitudinal (y), and vertical (z) axes were 2.28 mm, 3.35 mm, and 3.10 mm, respectively. Based on the CBCT image guidance, the planning target volume (PTV) margin required for vDIBH-RT using the CCTV/laser monitoring technique is 7.77 mm, 10.85 mm, and 10.93 mm in the x, y, and z axes, respectively. Conclusion: It is feasible to safely implement vDIBH-RT among left breast cancer patients without special equipment. The breath-hold monitoring technique is cost-effective, radiation-free, easy to implement, and allows real-time breath-hold monitoring.
Keywords: vDIBH, cone beam computed tomography, radiotherapy, left breast cancer
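The van Herk margin recipe used here combines the population systematic error Σ and random error σ as M = 2.5Σ + 0.7σ. A sketch using the systematic errors quoted in the abstract; the random-error values below are illustrative placeholders, not the study's:

```python
def van_herk_margin(sigma_sys, sigma_rand):
    """PTV margin (mm) per the van Herk recipe: M = 2.5 * Sigma + 0.7 * sigma,
    where Sigma is the population systematic error and sigma the random error."""
    return 2.5 * sigma_sys + 0.7 * sigma_rand

# Systematic errors (mm) from the CBCT analysis above; random errors are
# hypothetical, inserted only to exercise the formula.
margins = {axis: van_herk_margin(sys_err, rand_err)
           for axis, sys_err, rand_err in
           [("x", 2.28, 3.0), ("y", 3.35, 3.5), ("z", 3.10, 4.5)]}
```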
Procedia PDF Downloads 57753 Prediction of Formation Pressure Using Artificial Intelligence Techniques
Authors: Abdulmalek Ahmed
Abstract:
Formation pressure is the main factor that affects the economics and efficiency of drilling operations. Knowing the pore pressure and the parameters that affect it helps to reduce the cost of the drilling process. Many empirical models reported in the literature have been used to calculate formation pressure from different parameters: some use only drilling parameters to estimate pore pressure, while others predict formation pressure from log data. All of these models require an assumed trend, normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict formation pressure, and then using only one or at most two AI methods. The objective of this research is to predict pore pressure based on both drilling parameters and log data, namely weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity, and delta sonic time. Real field data are used to predict formation pressure using five different AI methods: artificial neural networks (ANN), radial basis function (RBF), fuzzy logic (FL), support vector machine (SVM), and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity, reflected in its estimation of pore pressure without the need for an assumed trend, unlike other models, which require one of two different trends (normal or abnormal pressure). Moreover, comparing the AI tools with each other indicates that SVM has the advantage of fast processing speed and high performance in pore pressure prediction (a correlation coefficient of 0.997 and an average absolute percentage error of 0.14%).
In the end, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (a correlation coefficient of 0.998 and an average absolute percentage error of 0.17%).
Keywords: artificial intelligence (AI), formation pressure, artificial neural networks (ANN), fuzzy logic (FL), support vector machine (SVM), functional networks (FN), radial basis function (RBF)
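As a deliberately simple stand-in for the ANN/RBF/SVM regressors compared above, the core idea of learning a pore-pressure relationship from data can be sketched with a gradient-descent fit of a linear model; all of the data below are hypothetical and normalised, not field data:

```python
def fit_linear(xs, ys, lr=0.5, epochs=2000):
    """Fit y ~ w*x + b by batch gradient descent on squared error --
    a minimal stand-in for the AI regression models in the study."""
    w = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = 2.0 / n * sum((w * x + b - y) * x for x, y in zip(xs, ys))
        grad_b = 2.0 / n * sum((w * x + b - y) for x, y in zip(xs, ys))
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Hypothetical normalised data: a single scaled drilling parameter vs. a
# normalised pore-pressure gradient (invented for illustration)
xs = [0.1, 0.3, 0.5, 0.7, 0.9]
ys = [0.52, 0.56, 0.60, 0.64, 0.68]   # lies exactly on y = 0.5 + 0.2 x
w, b = fit_linear(xs, ys)
```

The real models learn a nonlinear mapping over seven inputs, but the workflow is the same: fit on field data, then predict pore pressure without assuming a normal or abnormal trend.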
Procedia PDF Downloads 149752 Climate Change Winners and Losers: Contrasting Responses of Two Aphaniops Species in Oman
Authors: Aziza S. Al Adhoobi, Amna Al Ruheili, Saud M. Al Jufaili
Abstract:
This study investigates the potential effects of climate change on the habitat suitability of two Aphaniops species (Teleostei: Aphaniidae) found in the Oman Mountains and the Southwestern Arabian Coast. Aphaniops kruppi, an endemic species, is found in various water bodies such as wadis, springs, aflaj, spring-fed streams, and some coastal backwaters. Aphaniops stoliczkanus, on the other hand, inhabits brackish and freshwater habitats, particularly in the lower parts of wadis and aflaj, and exhibits euryhaline characteristics. Using Maximum Entropy Modeling (MaxEnt) in conjunction with ArcGIS (10.8.2) and CHELSA bioclimatic variables, topographic indices, and other pertinent environmental factors, the study modeled the potential impacts of climate change based on three Representative Concentration Pathways (RCPs 2.6, 7.0, 8.5) for the periods 2011-2040, 2041-2070, and 2071-2100. The model demonstrated exceptional predictive accuracy, achieving AUC values of 0.992 for A. kruppi and 0.983 for A. stoliczkanus. For A. kruppi, the most influential variables were the mean monthly climate moisture index (Cmi_m), the mean diurnal range (Bio2), and the sediment transport index (STI), accounting for 39.9%, 18.3%, and 8.4%, respectively. As for A. stoliczkanus, the key variables were the sediment transport index (STI), stream power index (SPI), and precipitation of the coldest quarter (Bio19), contributing 31%, 20.2%, and 13.3%, respectively. A. kruppi showed an increase in habitat suitability, especially in low and medium suitability areas. By 2071-2100, high suitability areas increased slightly by 0.05% under RCP 2.6, but declined by -0.02% and -0.04% under RCP 7.0 and 8.5, respectively. A. stoliczkanus exhibited a broader range of responses: under RCP 2.6, all suitability categories increased by 2071-2100, with high suitability areas increasing by 0.01%.
However, low and medium suitability areas showed mixed trends under RCP 7.0 and 8.5, with declines of -0.17% and -0.16%, respectively. The study highlights that climatic and topographical factors significantly influence the habitat suitability of Aphaniops species in Oman; species-specific conservation strategies are therefore crucial to address the impacts of climate change.
Keywords: Aphaniops kruppi, Aphaniops stoliczkanus, climate change, habitat suitability, MaxEnt
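The AUC values reported for the MaxEnt models have a simple rank-based interpretation: the probability that a randomly chosen presence site is scored above a randomly chosen background site. A minimal sketch of that computation:

```python
def auc(scores, labels):
    """Rank-based AUC: probability that a random positive (label 1) is
    scored above a random negative (label 0), with ties counting half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.99, as for both species here, means presence sites outrank background sites in roughly 99 of every 100 pairings; 0.5 would be no better than chance.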
Procedia PDF Downloads 17751 Joint Training Offer Selection and Course Timetabling Problems: Models and Algorithms
Authors: Gianpaolo Ghiani, Emanuela Guerriero, Emanuele Manni, Alessandro Romano
Abstract:
In this article, we deal with a variant of the classical course timetabling problem that has practical application in many areas of education. In particular, in this paper we are interested in high school remedial courses. The purpose of such courses is to provide under-prepared students with the skills necessary to succeed in their studies; a student might be under-prepared in an entire course or only in part of it. The limited availability of funds, as well as the limited amount of time and teachers at their disposal, often requires schools to choose which courses and/or which teaching units to activate. Thus, schools need to model the training offer and the related timetabling, with the goal of ensuring the highest possible teaching quality while meeting the above-mentioned financial, time, and resource constraints. Moreover, there are prerequisites between the teaching units that must be satisfied. We first present a Mixed-Integer Programming (MIP) model to solve this problem to optimality. However, the presence of many peculiar constraints inevitably increases the complexity of the mathematical model; thus, a general-purpose solver can handle only small instances, while solving real-life-sized instances of the model requires specific techniques or heuristic approaches. For this purpose, we also propose a heuristic approach, in which we make use of a fast constructive procedure to obtain a feasible solution. To assess our exact and heuristic approaches, we perform extensive computational experiments on both real-life instances (obtained from a high school in Lecce, Italy) and randomly generated instances. Our tests show that the MIP model is never solved to optimality, with an average optimality gap of 57%.
On the other hand, the heuristic algorithm is much faster (in about 50% of the considered instances it converges in approximately half of the time limit) and in many cases improves on the objective function value obtained by the MIP model; the improvement ranges between 18% and 66%. Keywords: heuristic, MIP model, remedial course, school, timetabling
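The constructive heuristic itself is not detailed in the abstract; the following is a minimal illustrative sketch of one plausible shape, assuming a value/cost greedy selection of teaching units under a budget, prerequisite constraints between units, and assignment of activated units to free time slots. All names and data are hypothetical, not the authors' actual algorithm or instances.

```python
# Illustrative greedy sketch (not the paper's algorithm): repeatedly activate
# the most cost-effective remedial teaching unit whose prerequisites are
# already activated, until the budget or the time slots run out.

def plan_courses(units, budget, n_slots):
    """units: list of dicts with 'name', 'value', 'cost', 'prereqs'.
    Returns (set of activated unit names, {unit name: assigned slot})."""
    # Consider the most valuable units per unit of cost first.
    order = sorted(units, key=lambda u: u["value"] / u["cost"], reverse=True)
    activated, timetable, spent = set(), {}, 0
    free_slots = list(range(n_slots))
    changed = True
    while changed:  # re-scan: activating a unit may unlock its dependents
        changed = False
        for u in order:
            if u["name"] in activated:
                continue
            if spent + u["cost"] > budget or not free_slots:
                continue
            # A unit may be activated only if all its prerequisites are.
            if not all(p in activated for p in u["prereqs"]):
                continue
            activated.add(u["name"])
            spent += u["cost"]
            timetable[u["name"]] = free_slots.pop(0)
            changed = True
    return activated, timetable

units = [
    {"name": "algebra-1", "value": 10, "cost": 4, "prereqs": []},
    {"name": "algebra-2", "value": 8,  "cost": 3, "prereqs": ["algebra-1"]},
    {"name": "grammar",   "value": 6,  "cost": 5, "prereqs": []},
]
activated, timetable = plan_courses(units, budget=8, n_slots=2)
```

With a budget of 8, the sketch activates algebra-1 and then, on the second pass, its dependent algebra-2, while grammar no longer fits; a real implementation would add the time, teacher, and quality constraints described in the abstract.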
Procedia PDF Downloads 605750 Revolutionizing Legal Drafting: Leveraging Artificial Intelligence for Efficient Legal Work
Authors: Shreya Poddar
Abstract:
Legal drafting and revising are recognized as highly demanding tasks for legal professionals. This paper introduces an approach to automate and refine these processes through the use of advanced Artificial Intelligence (AI). The method employs Large Language Models (LLMs), with a specific focus on 'Chain of Thoughts' (CoT) and knowledge injection via prompt engineering. This approach differs from conventional methods that depend on comprehensive training or fine-tuning of models with extensive legal knowledge bases, which are often expensive and time-consuming. The proposed method incorporates knowledge injection directly into prompts, thereby enabling the AI to generate more accurate and contextually appropriate legal texts. This approach substantially decreases the necessity for thorough model training while preserving high accuracy and relevance in drafting. Additionally, the concept of guardrails is introduced. These are predefined parameters or rules established within the AI system to ensure that the generated content adheres to legal standards and ethical guidelines. The practical implications of this method for legal work are considerable. It has the potential to markedly lessen the time lawyers allocate to document drafting and revision, freeing them to concentrate on more intricate and strategic facets of legal work. Furthermore, this method makes high-quality legal drafting more accessible, possibly reducing costs and expanding the availability of legal services. This paper will elucidate the methodology, providing specific examples and case studies to demonstrate the effectiveness of 'Chain of Thoughts' and knowledge injection in legal drafting. The potential challenges and limitations of this approach will also be discussed, along with future prospects and enhancements that could further advance legal work. The impact of this research on the legal industry is substantial. 
The adoption of AI-driven methods by legal professionals can lead to enhanced efficiency, precision, and consistency in legal drafting, thereby altering the landscape of legal work. This research adds to the expanding field of AI in law, introducing a method that could significantly alter the nature of legal drafting and practice. Keywords: AI-driven legal drafting, legal automation, future of legal work, large language models
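The abstract's pipeline of knowledge injection via prompt engineering, a chain-of-thought cue, and guardrails on the output can be sketched without any model call. The template, the required-clause rule, and all strings below are illustrative assumptions, not the paper's actual system.

```python
# Minimal sketch of the described prompt-side pipeline: inject domain
# knowledge and a chain-of-thought cue into the prompt, then apply a
# simple guardrail check to a (hypothetical) model draft.

COT_CUE = ("Reason step by step: identify the parties, the governing law, "
           "and each required clause before drafting.")

def build_prompt(task, knowledge_snippets):
    """Assemble a knowledge-injected, chain-of-thought prompt."""
    knowledge = "\n".join(f"- {k}" for k in knowledge_snippets)
    return (
        "You are drafting a legal document.\n"
        f"Relevant legal knowledge:\n{knowledge}\n"
        f"{COT_CUE}\n"
        f"Task: {task}"
    )

REQUIRED_CLAUSES = ["governing law", "termination"]  # example guardrail rule

def passes_guardrails(draft):
    """Reject drafts missing any clause the guardrail requires."""
    text = draft.lower()
    return all(clause in text for clause in REQUIRED_CLAUSES)

prompt = build_prompt(
    "Draft a short NDA between two hypothetical parties.",
    ["An NDA must define 'Confidential Information'.",
     "Include a governing-law clause."],
)
draft = "This Agreement's governing law is [state]; termination on notice."
```

In a full system, `passes_guardrails` would be replaced by richer checks against legal standards and ethical guidelines, with failing drafts sent back to the model for revision.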
Procedia PDF Downloads 64749 The Investigate Relationship between Moral Hazard and Corporate Governance with Earning Forecast Quality in the Tehran Stock Exchange
Authors: Fatemeh Rouhi, Hadi Nassiri
Abstract:
Earnings forecasts are a key input to economic decisions, but conflicts of interest in financial reporting, complexity, and the lack of direct access to information give rise to information asymmetry between individuals within the organization and external investors and creditors. This asymmetry creates adverse selection and moral hazard in investors' decisions and makes it difficult for users to assess the data directly. In this regard, corporate governance plays the role of a trustee in disclosure: it comprises controls and procedures intended to ensure that management does not act in its own interest but moves in the direction of maximizing shareholder and company value. Given the importance of earnings forecasts in the capital market and the need to identify the factors influencing them, this study attempts to establish the relationship between moral hazard and corporate governance and the earnings forecast quality of companies operating in the capital market. Drawing on the theoretical basis of the research, two main hypotheses and several sub-hypotheses are presented and examined with the available models using the panel-data method, and conclusions are drawn at the 95% confidence level according to the significance of the model and of each independent variable. In examining the models, the Chow test was first used to decide between the panel-data and pooled methods; the Hausman test was then applied to choose between random effects and fixed effects. The findings show that, because most of the variables associated with moral hazard are positively related to earnings forecast quality, as moral hazard increases, the earnings forecast quality of companies listed on the Tehran Stock Exchange increases. 
Among the corporate governance variables, board independence has a significant relationship with earnings forecast accuracy and earnings forecast bias, but the relationship between board size and earnings forecast quality is not statistically significant. Keywords: corporate governance, earning forecast quality, moral hazard, financial sciences
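The Hausman step described above can be sketched for the simplest case of a single coefficient: compare the fixed-effects and random-effects estimates and their variances, and choose fixed effects when the statistic exceeds the chi-squared critical value. The numbers below are illustrative, not the paper's estimates.

```python
# Hedged sketch of the model-selection step: a scalar Hausman test
# comparing fixed-effects (FE) and random-effects (RE) estimates.

CHI2_1DF_95 = 3.841  # 5% critical value of chi-squared with 1 df

def hausman_scalar(b_fe, var_fe, b_re, var_re):
    """Hausman statistic for a single coefficient.

    Under H0 (RE is consistent and efficient),
    H = (b_fe - b_re)^2 / (var_fe - var_re) ~ chi-squared(1).
    """
    diff_var = var_fe - var_re
    if diff_var <= 0:
        raise ValueError("var_fe must exceed var_re under H0")
    return (b_fe - b_re) ** 2 / diff_var

def choose_effects(b_fe, var_fe, b_re, var_re, crit=CHI2_1DF_95):
    """Pick 'fixed' when H exceeds the critical value, else 'random'."""
    h = hausman_scalar(b_fe, var_fe, b_re, var_re)
    return ("fixed" if h > crit else "random"), h

# Illustrative values: a large FE/RE gap rejects H0 in favor of FE.
model, h = choose_effects(b_fe=0.42, var_fe=0.010, b_re=0.18, var_re=0.004)
```

The multivariate version replaces the scalar ratio with the quadratic form over the difference of the coefficient vectors and covariance matrices; in practice a panel-data package would compute both the Chow and Hausman tests directly.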
Procedia PDF Downloads 322