Search results for: predictive accuracy

692 The Importance of Artificial Intelligence in Various Healthcare Applications

Authors: Joshna Rani S., Ahmadi Banu

Abstract:

Artificial intelligence (AI) has a significant role to play in the healthcare offerings of the future. In the form of machine learning, it is the primary capability behind the development of precision medicine, widely agreed to be a sorely needed advance in care. Although early efforts at providing diagnosis and treatment recommendations have proven challenging, we expect that AI will ultimately master that domain as well. Given the rapid advances in AI for imaging analysis, it seems likely that most radiology and pathology images will eventually be examined by a machine. Speech and text recognition are already employed for tasks such as patient communication and the capture of clinical notes, and their use will increase. The greatest challenge to AI in these healthcare domains is not whether the technologies will be capable enough to be useful, but rather ensuring their adoption in daily clinical practice. For widespread adoption to take place, AI systems must be approved by regulators, integrated with EHR systems, standardized to a sufficient degree that similar products work similarly, taught to clinicians, paid for by public or private payer organizations, and updated over time in the field. These challenges will ultimately be overcome, but it will take much longer for that to happen than it will take for the technologies themselves to mature. We therefore expect to see limited use of AI in clinical practice within 5 years and more extensive use within 10 years. It also seems increasingly clear that AI systems will not replace human clinicians on a large scale, but rather will augment their efforts to care for patients. Over time, human clinicians may move toward tasks and job designs that draw on uniquely human skills such as empathy, persuasion, and big-picture integration. Perhaps the only healthcare providers who will risk their careers over time are those who refuse to work alongside AI.

Keywords: artificial intelligence, health care, breast cancer, AI applications

Procedia PDF Downloads 181
691 A Comparative Study of Black Carbon Emission Characteristics from Marine Diesel Engines Using Light Absorption Method

Authors: Dongguk Im, Gunfeel Moon, Younwoo Nam, Kangwoo Chun

Abstract:

Recognition of the need to protect the environment is widespread worldwide. In the shipping industry, the International Maritime Organization (IMO) has been regulating pollutants emitted from ships through MARPOL 73/78. Recently, the Marine Environment Protection Committee (MEPC) of IMO, at its 68th session, approved a definition of Black Carbon (BC) specified by the following physical properties: light absorption, refractory behavior, insolubility, and morphology. The committee also agreed on the need for a protocol for any voluntary measurement studies to identify the most appropriate measurement methods. The Filter Smoke Number (FSN), based on light absorption, is categorized as one of the IMO-relevant BC measurement methods. EUROMOT provided FSN measurement data (measured by smoke meter) for 31 different engines (low, medium, and high speed marine engines) of its member companies at the 3rd International Council on Clean Transportation (ICCT) workshop on marine BC. The comparison of FSN values indicated that BC emission from low speed marine diesel engines ranged from 0.009 to 0.179 FSN, while that from medium and high speed marine diesel engines ranged from 0.012 to 3.2 FSN. In view of the low FSN measured on low speed engines, an experimental study was conducted using both a low speed marine diesel engine (2-stroke, 7,400 kW at 129 rpm) and a high speed marine diesel engine (4-stroke, 403 kW at 1,800 rpm) under the E3 test cycle. The results revealed that FSN ranged from 0.01 to 0.16 and from 1.09 to 1.35 for the low and high speed engines, respectively. The measurement equipment (smoke meter) covers a range of 0 to 10 FSN; considering this range, the FSN values from low speed engines are near the detection limit (0.002 FSN, or ~0.02 mg/m³). These results suggest that the measurement range of the smoke meter should be adapted to enhance the measurement accuracy of marine BC and the evaluation of the performance of BC abatement technologies.
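For orientation on the units involved, FSN readings are commonly converted to soot mass concentration with an empirical smoke-meter correlation. The sketch below uses the widely cited AVL form of that correlation (an assumption of this note, not a formula given in the abstract) to check the quoted detection limit:

```python
import math

def fsn_to_soot_mg_per_m3(fsn: float) -> float:
    """Convert Filter Smoke Number to soot mass concentration (mg/m^3).

    Uses the widely cited AVL empirical correlation; the coefficients are
    an assumption of this illustration, not taken from the paper.
    """
    return (4.95 / 0.405) * fsn * math.exp(0.38 * fsn)

# Detection limit quoted in the abstract: 0.002 FSN
print(fsn_to_soot_mg_per_m3(0.002))  # ~0.024 mg/m^3, consistent with the ~0.02 mg/m^3 quoted
```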

Keywords: black carbon, filter smoke number, international maritime organization, marine diesel engine (two and four stroke), particulate matter

Procedia PDF Downloads 276
690 Finite Element Modeling of a Lower Limb Based on the East Asian Body Characteristics for Pedestrian Protection

Authors: Xianping Du, Runlu Miao, Guanjun Zhang, Libo Cao, Feng Zhu

Abstract:

Current vehicle safety standards and human body injury criteria were established based on the biomechanical response of the Euro-American human body, without considering the differences in body anthropometry and injury characteristics among races, particularly for East Asian people with smaller body sizes. The absence of such race-specific design considerations negatively influences the protective performance of safety products for these populations and weakens the accuracy of the derived injury thresholds. To resolve these issues, in this study we develop a race-specific finite element model to simulate the impact response of the lower extremity of a 50th percentile East Asian (Chinese) male. The model was built from medical images of the leg of an average-size Chinese male and slightly adjusted based on statistical data. It includes detailed anatomic features and is able to simulate muscle active force. Thirteen biomechanical tests available in the literature were used to validate its biofidelity. Using the validated model, a pedestrian-car impact accident that took place in China was reconstructed computationally. The results show that the newly developed lower leg model performs well in predicting the dynamic response and tibia fracture pattern. An additional comparison of the fracture tolerance of the East Asian and Euro-American lower limbs suggests that the current injury criterion underestimates the degree of injury of the East Asian human body.

Keywords: lower limb, East Asian body characteristics, traffic accident reconstruction, finite element analysis, injury tolerance

Procedia PDF Downloads 289
689 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight conditions, multipath, and weather, GNSS does not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. The scheme also employs a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the site-survey workload required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of the proposed GAILoc scheme is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, GAILoc significantly improves positioning performance and reduces radio map construction costs compared to traditional methods.
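Underneath the GAN-based radio map construction, fingerprint localization reduces to matching an online signal vector against a surveyed database. The sketch below shows only that classical baseline (weighted k-nearest neighbours on a toy, invented radio map), not the paper's S-DCGAN pipeline:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical radio map: RSSI fingerprints (dBm) at surveyed (x, y) positions.
rng = np.random.default_rng(0)
positions = rng.uniform(0, 50, size=(200, 2))            # reference points in metres
aps = np.array([[0, 0], [50, 0], [0, 50], [50, 50]])     # four assumed access points
fingerprints = -40 - 2.5 * np.linalg.norm(               # toy path-loss model
    positions[:, None, :] - aps[None, :, :], axis=2)

# Weighted k-nearest-neighbours regression from fingerprint space to coordinates.
model = KNeighborsRegressor(n_neighbors=5, weights="distance")
model.fit(fingerprints, positions)

query = fingerprints[0] + rng.normal(0, 1, size=4)       # noisy online measurement
print("estimated position:", model.predict([query])[0])
```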

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 60
688 Simulation and Characterization of Compact Magnetic Proton Recoil Spectrometer for Fast Neutron Spectra Measurements

Authors: Xingyu Peng, Qingyuan Hu, Xuebin Zhu, Xi Yuan

Abstract:

Neutron spectrometry has contributed much to the development of nuclear physics since 1932 and has also become an important tool in several other fields, notably nuclear technology, fusion plasma diagnostics, and radiation protection. Compared with neutron fluxes, neutron spectra can provide more detailed information on the internal physical processes of neutron sources, such as fast neutron reactors, fusion plasmas, and fission-fusion hybrid reactors. However, high-performance neutron spectrometers are not commonly available, as they require large and complex instrumentation. This work describes the development and characterization of a compact magnetic proton recoil (MPR) spectrometer for high-resolution measurements of fast neutron spectra. The compact MPR spectrometer features a large recoil angle, a small permanent analysis magnet, a short beam transport line, and a dual-purpose detector array for both steady-state and pulsed neutron spectra measurements. A three-dimensional electromagnetic particle transport code was developed to simulate the response function of the spectrometer. Simulation results illustrate that the performance of the spectrometer is mainly determined by the n-p recoil foil and the proton apertures, and an overall energy resolution of 3% is achieved for 14 MeV neutrons. Dedicated experiments using an alpha source and a mono-energetic neutron beam were employed to verify the simulated response function of the compact MPR spectrometer. The experimental results show good agreement with the simulated ones, which indicates that the simulation code possesses good accuracy and reliability. The compact MPR spectrometer described in this work is a valuable tool for fast neutron spectra measurements in fission or fusion devices.
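The measurement principle rests on elastic n-p scattering in the recoil foil. The standard non-relativistic kinematic relation below (general scattering physics, not a formula quoted from this abstract) shows why selecting a recoil angle and magnetically analysing the proton energy reconstructs the neutron energy:

```latex
% Elastic n-p scattering (equal masses, non-relativistic approximation):
% a proton recoiling at angle \theta to the incident neutron direction carries
E_p = E_n \cos^2\theta ,
% so a fixed (large) recoil angle selects a fixed fraction of E_n, and the
% magnetic analysis of E_p maps directly back to the neutron spectrum.
```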

Keywords: neutron spectrometry, magnetic proton recoil spectrometer, neutron spectra, fast neutron

Procedia PDF Downloads 202
687 Fully Eulerian Finite Element Methodology for the Numerical Modeling of the Dynamics of Heart Valves

Authors: Aymen Laadhari

Abstract:

During the last decade, an increasing number of contributions have been made in the fields of scientific computing and numerical methodologies applied to the study of hemodynamics in the heart. In contrast, the numerical aspects concerning the interaction of pulsatile blood flow with highly deformable thin leaflets have been much less explored. This coupled problem remains extremely challenging, and the numerical difficulties include, e.g., the resolution of the full fluid-structure interaction problem with large deformations of extremely thin leaflets, substantial mesh deformations, high transvalvular pressure discontinuities, and contact between leaflets. Although the Lagrangian description of the structural motion and strain measures is naturally used, many numerical complexities can arise when studying large deformations of thin structures. Eulerian approaches represent a promising alternative for readily modeling large deformations and handling contact issues. We present a fully Eulerian finite element methodology tailored for the simulation of pulsatile blood flow in the aorta and sinus of Valsalva interacting with highly deformable thin leaflets. Our method enables the use of a fluid solver on a fixed mesh, whilst easily modeling the mechanical properties of the valve. We introduce a semi-implicit time integration scheme based on a consistent Newton-Raphson linearization. A variant of the classical Newton method is introduced and guarantees third-order convergence. High-fidelity computational geometries are built and simulations are performed under physiological conditions. We address in detail the main features of the proposed method, and we report several experiments with the aim of illustrating its accuracy and efficiency.
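The abstract does not specify which third-order Newton variant is used. As a point of reference only, the sketch below implements one classical scalar example of such a variant, the two-step Potra-Pták iteration, which reuses a single Jacobian evaluation per step and converges cubically:

```python
def potra_ptak(f, df, x0, tol=1e-12, max_iter=50):
    """Two-step Newton variant (Potra-Ptak) with cubic convergence.

    Illustrative only: the paper does not state which third-order
    Newton variant it employs.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        y = x - fx / df(x)              # ordinary Newton predictor
        x = x - (fx + f(y)) / df(x)     # corrector reuses the same derivative
    return x

# Example: root of x^3 - 2 (expect 2**(1/3) ~ 1.259921)
print(potra_ptak(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0))
```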

Keywords: Eulerian, level set, Newton, valve

Procedia PDF Downloads 278
686 Application of a Model-Free Artificial Neural Networks Approach for Structural Health Monitoring of the Old Lidingö Bridge

Authors: Ana Neves, John Leander, Ignacio Gonzalez, Raid Karoumi

Abstract:

Systematic monitoring and inspection are needed to assess the present state of a structure and predict its future condition. If an irregularity is noticed, repair actions may take place, and an adequate intervention will most probably reduce future maintenance costs, minimize downtime, and increase safety by avoiding the failure of the structure as a whole or of one of its structural parts. For this to be possible, decisions must be made at the right time, which implies using systems that can detect abnormalities at an early stage. In this sense, Structural Health Monitoring (SHM) is seen as an effective tool for improving the safety and reliability of infrastructure. This paper explores the decision-making problem in SHM regarding the maintenance of civil engineering structures. The aim is to assess the present condition of a bridge exclusively from measurements, using the method suggested in this paper, such that action is taken coherently with the information made available by the monitoring system. Artificial neural networks are trained, and their ability to predict structural behavior is evaluated in the light of a case study where acceleration measurements are acquired from a bridge located in Stockholm, Sweden. This relatively old bridge is still in operation despite experiencing obvious problems already reported in previous inspections. The prediction errors provide a measure of the accuracy of the algorithm and are subjected to further investigation, which comprises concepts like clustering analysis and statistical hypothesis testing. These make it possible to interpret the obtained prediction errors, draw conclusions about the state of the structure, and thus support decision making regarding its maintenance.
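In model-free damage detection of this kind, a network trained on the healthy response predicts each new measurement, and the prediction error becomes the damage-sensitive feature. The sketch below illustrates that idea on synthetic acceleration data (the signal, window length, and architecture are invented stand-ins, not the study's configuration):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical acceleration record from a bridge sensor (synthetic stand-in).
rng = np.random.default_rng(1)
t = np.linspace(0, 60, 6000)
acc = np.sin(2 * np.pi * 2.2 * t) + 0.1 * rng.normal(size=t.size)

# Model-free setup: predict the next sample from a sliding window of past samples.
window = 20
X = np.lib.stride_tricks.sliding_window_view(acc[:-1], window)
y = acc[window:]

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=1)
model.fit(X[:4000], y[:4000])                      # train on the "healthy" period

errors = y[4000:] - model.predict(X[4000:])        # monitoring period
print("RMS prediction error:", np.sqrt(np.mean(errors**2)))
# A persistent shift in this error distribution would flag a structural change,
# which is what the clustering and hypothesis-testing steps then examine.
```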

Keywords: artificial neural networks, clustering analysis, model-free damage detection, statistical hypothesis testing, structural health monitoring

Procedia PDF Downloads 209
685 An Evaluation on the Effectiveness of a 3D Printed Composite Compression Mold

Authors: Peng Hao Wang, Garam Kim, Ronald Sterkenburg

Abstract:

The applications of composite materials within the aviation industry have been increasing at a rapid pace. However, the growing applications of composite materials have also led to a growing demand for tooling to support the associated manufacturing processes. Tooling and tooling maintenance represent a large portion of the composite manufacturing process and its cost. Therefore, the industry's adaptability to new techniques for fabricating high-quality tools quickly and inexpensively will play a crucial role in composite materials' growing popularity in the aviation industry. One popular tool fabrication technique currently being developed involves additive manufacturing such as 3D printing. Although additive manufacturing and 3D printing are not entirely new concepts, the technique has been gaining popularity due to its ability to quickly fabricate components while keeping material waste and cost low. In this study, a team of Purdue University School of Aviation and Transportation Technology (SATT) faculty and students investigated the effectiveness of a 3D printed composite compression mold. The mold was fabricated by 3D scanning a steel valve cover of an aircraft reciprocating engine and was then used to fabricate carbon fiber versions of the valve cover. The 3D printed composite compression mold was evaluated for its performance, durability, and dimensional stability, while the fabricated carbon fiber valve covers were evaluated for their accuracy and quality. The results and data gathered from this study will determine the effectiveness of the 3D printed composite compression mold in a mass-production environment and provide valuable information for the future understanding, improvement, and design of 3D printed composite molds.

Keywords: additive manufacturing, carbon fiber, composite tooling, molds

Procedia PDF Downloads 199
684 Computational Fluid Dynamics Modeling of Physical Mass Transfer of CO₂ by N₂O Analogy Using One Fluid Formulation in OpenFOAM

Authors: Phanindra Prasad Thummala, Umran Tezcan Un, Ahmet Ozan Celik

Abstract:

Removal of CO₂ by MEA (monoethanolamine) in structured packing columns depends highly on the gas-liquid interfacial area and film thickness (liquid load). CFD (computational fluid dynamics) is used to find the interfacial area and film thickness, and to determine their impact on mass transfer in gas-liquid flow, in any column geometry. In general, the modeling approaches used in CFD derive mass transfer parameters from standard correlations based on penetration or surface renewal theories. In order to avoid the effect of the assumptions involved in deriving those correlations and to model the mass transfer based solely on fluid properties, state-of-the-art approaches like the one fluid formulation are useful. In this work, the one fluid formulation was implemented and evaluated for modeling the physical mass transfer of CO₂ by N₂O analogy in the OpenFOAM CFD software. The N₂O analogy avoids the effect of chemical reactions on absorption and allows studying the amount of CO₂ physical mass transfer possible in a given geometry. The computational domain in the current study was a flat plate with gas and liquid flowing in countercurrent directions. The effect of operating parameters such as flow rate, MEA concentration, and angle of inclination on the physical mass transfer is studied in detail. Liquid-side mass transfer coefficients obtained by the simulations were compared to correlations available in the literature, and it was found that the one fluid formulation effectively captures the effects of interface surface instabilities on the mass transfer coefficient with higher accuracy. The high mesh refinement required near the interface region was found to limit the use of this approach in large-scale simulations. Overall, the one fluid formulation appears promising for CFD studies involving CO₂ mass transfer.
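For readers unfamiliar with the analogy, its standard literature form is given below (a textbook statement, not an equation reproduced from this abstract): because CO₂ reacts with MEA, its physical solubility H and diffusivity D in the amine solution are inferred from measurements with chemically inert N₂O.

```latex
% N2O analogy (standard literature form):
H_{\mathrm{CO_2,sol}} = H_{\mathrm{N_2O,sol}}
    \left( \frac{H_{\mathrm{CO_2,water}}}{H_{\mathrm{N_2O,water}}} \right),
\qquad
D_{\mathrm{CO_2,sol}} = D_{\mathrm{N_2O,sol}}
    \left( \frac{D_{\mathrm{CO_2,water}}}{D_{\mathrm{N_2O,water}}} \right)
% H: Henry's-law solubility, D: diffusivity; "sol" denotes the MEA solution.
```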

Keywords: one fluid formulation, CO₂ absorption, liquid mass transfer coefficient, OpenFOAM, N₂O analogy

Procedia PDF Downloads 220
683 Minimizing the Drilling-Induced Damage in Fiber Reinforced Polymeric Composites

Authors: S. D. El Wakil, M. Pladsen

Abstract:

Fiber reinforced polymeric (FRP) composites are finding widespread industrial applications because of their exceptionally high specific strength and specific modulus of elasticity. Nevertheless, ready-to-use components or products made of FRP composites are very seldom obtained directly. Secondary processing by machining, particularly drilling, is almost always required to make holes for fastening components together to produce assemblies. That creates problems, since FRP composites are neither homogeneous nor isotropic. The problems encountered include damage in the region around the drilled hole and drilling-induced delamination of the plies, which occurs at both the entrance and exit planes of the workpiece. Evidently, the functionality of the workpiece would be detrimentally affected. The current work was carried out with the aim of eliminating, or at least minimizing, the workpiece damage associated with drilling of FRP composites. Each test specimen was a woven graphite-fiber/epoxy composite with a thickness of 12.5 mm (0.5 inch). A large number of test specimens were subjected to drilling operations with different combinations of feed rates and cutting speeds. The drilling-induced damage was taken as the absolute value of the difference between the drilled hole diameter and the nominal one, expressed as a percentage of the nominal diameter. This value was determined for each combination of feed rate and cutting speed, and a matrix comprising those values was established, in which the columns indicate varying feed rate and the rows indicate varying cutting speed. Next, the analysis of variance (ANOVA) approach was employed using Minitab software, in order to obtain the combination that would minimize the drilling-induced damage. Experimental results show that low feed rates coupled with low cutting speeds yielded the best results.
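The damage metric and the factorial analysis are easy to reproduce in outline. The sketch below computes the abstract's damage measure for a grid of invented hole diameters and runs a two-way ANOVA; statsmodels is substituted for Minitab purely for illustration:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical measurements: drilled-hole diameters (mm) for a 6.35 mm nominal
# hole at each feed-rate / cutting-speed combination (all values invented).
nominal = 6.35
records = [
    {"feed": f, "speed": s,
     "damage": abs(d - nominal) / nominal * 100}   # damage metric from the abstract
    for (f, s, d) in [
        (0.05, 500, 6.37), (0.05, 1000, 6.40), (0.05, 1500, 6.45),
        (0.10, 500, 6.41), (0.10, 1000, 6.46), (0.10, 1500, 6.52),
        (0.15, 500, 6.47), (0.15, 1000, 6.55), (0.15, 1500, 6.61),
    ]
]
df = pd.DataFrame(records)

# Two-way ANOVA on the damage metric, treating feed and speed as factors.
model = ols("damage ~ C(feed) + C(speed)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```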

Keywords: drilling of composites, dimensional accuracy of holes drilled in composites, delamination and charring, graphite-epoxy composites

Procedia PDF Downloads 390
682 Floor Response Spectra of RC Frames: Influence of the Infills on the Seismic Demand on Non-Structural Components

Authors: Gianni Blasi, Daniele Perrone, Maria Antonietta Aiello

Abstract:

The seismic vulnerability of non-structural components is nowadays recognized as a key issue in performance-based earthquake engineering. Recent loss estimation studies, as well as the damage observed during past earthquakes, have shown that non-structural damage represents the highest share of economic loss in a building and can in many cases be crucial from a life-safety perspective during the post-earthquake emergency. The procedures developed to evaluate the seismic demand on non-structural components have been constantly improved, and recent studies have demonstrated that the existing formulations provided by the main standards generally ignore features that have a significant influence on the seismic accelerations/displacements acting on non-structural components. Since the influence of the infills on the dynamic behaviour of RC structures has already been demonstrated by many authors, it is worth noting that the evaluation of the seismic demand on non-structural components should account for the presence of the infills as well as their mechanical properties. This study focuses on the evaluation of time-history floor accelerations in RC buildings, which are a useful means of performing seismic vulnerability analyses of non-structural components through the well-known cascade method. Dynamic analyses are performed on an 8-storey RC frame, taking into account the presence of the infills; the influence of the elastic modulus of the panel on the results is investigated, as is the presence of openings. Floor accelerations obtained from the analyses are used to evaluate the floor response spectra, in order to define the demand on non-structural components depending on the properties of the infills. Finally, the results are compared with the formulations provided by the main international standards, in order to assess their accuracy and, where necessary, define the improvements required according to the results of the present research work.
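In the cascade method, each floor acceleration history is post-processed into a floor response spectrum by sweeping single-degree-of-freedom oscillators over frequency. The sketch below shows that standard post-processing step on a toy signal; the solver, damping ratio, and frequency grid are illustrative assumptions, not the study's:

```python
import numpy as np
from scipy import signal

def floor_response_spectrum(acc, dt, zeta=0.05, freqs=np.linspace(0.5, 20, 40)):
    """Pseudo-acceleration floor response spectrum from a floor acceleration
    time history (a generic cascade-method step; the paper's exact solver
    is not stated)."""
    sa = []
    t = np.arange(len(acc)) * dt
    for f in freqs:
        wn = 2 * np.pi * f
        # SDOF relative displacement x for base acceleration input:
        # x'' + 2*zeta*wn*x' + wn^2*x = -acc(t)
        system = signal.TransferFunction([-1.0], [1.0, 2 * zeta * wn, wn**2])
        _, x, _ = signal.lsim(system, acc, t)
        sa.append(wn**2 * np.max(np.abs(x)))   # pseudo-acceleration ordinate
    return freqs, np.array(sa)

# Toy floor acceleration: a decaying 3 Hz burst standing in for an analysis output.
dt = 0.01
t = np.arange(0, 20, dt)
acc = np.exp(-0.2 * t) * np.sin(2 * np.pi * 3 * t)
freqs, sa = floor_response_spectrum(acc, dt)
print("peak spectral ordinate:", sa.max(), "near", freqs[np.argmax(sa)], "Hz")
```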

Keywords: floor spectra, infilled RC frames, non-structural components, seismic demand

Procedia PDF Downloads 326
681 Locating Potential Site for Biomass Power Plant Development in Central Luzon Philippines Using GIS-Based Suitability Analysis

Authors: Bryan M. Baltazar, Marjorie V. Remolador, Klathea H. Sevilla, Imee Saladaga, Loureal Camille Inocencio, Ma. Rosario Concepcion O. Ang

Abstract:

Biomass energy is a traditional source of sustainable energy that has been widely used in developing countries. The Philippines, specifically Central Luzon, has an abundant source of biomass and could therefore supply abundant agricultural residues (rice husks) as feedstock for a biomass power plant. However, locating a potential site for biomass development is a complex process involving physical, environmental, socio-economic, and risk factors that are usually diverse and conflicting; moreover, biomass distribution is highly dispersed geographically. This study therefore develops an integrated method combining Geographic Information Systems (GIS) with methods for energy planning, namely Multi-Criteria Decision Analysis (MCDA) and the Analytic Hierarchy Process (AHP), for locating a suitable site for biomass power plant development in Central Luzon, Philippines, considering different constraints and factors. Using MCDA, a three-level hierarchy of factors and constraints was produced, with the corresponding weights determined by experts using AHP. Applying the results, a suitability map for biomass power plant development in Central Luzon was generated. It showed that the central part of the region has the highest potential for biomass power plant development, owing to characteristics of the area such as abundant rice fields, generally flat land surfaces, accessible road and grid networks, and low risks of flooding and landslide. This study recommends the use of higher-accuracy resource maps, and further analysis in selecting the optimum site for biomass power plant development that accounts for the cost and transportation of biomass residues.
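The AHP step turns expert pairwise judgments into factor weights via the principal eigenvector of the comparison matrix. The sketch below illustrates that calculation on an invented 3-factor matrix (the study's actual factors and judgments are not reproduced):

```python
import numpy as np

# Hypothetical AHP pairwise comparison matrix for three siting factors
# (e.g. feedstock availability, road access, hazard risk).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Priority weights = normalized principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()
print("factor weights:", w.round(3))

# Consistency check (random index RI = 0.58 for n = 3); CR < 0.1 is
# conventionally acceptable in AHP.
lam = eigvals.real[k]
CI = (lam - 3) / (3 - 1)
print("consistency ratio:", round(CI / 0.58, 3))
```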

Keywords: analytic hierarchy process, biomass energy, GIS, multi-criteria decision analysis, site suitability analysis

Procedia PDF Downloads 427
680 Soluble CD36 and Cardiovascular Risk in Middle-Aged Subjects

Authors: Mohammad Alkhatatbeh, Nehad Ayoub, Nizar Mhaidat, Nesreen Saadeh, Lisa Lincz

Abstract:

CD36 is involved in the development of atherosclerosis by enhancing macrophage endocytosis of oxidized low-density lipoproteins and foam cell formation. Soluble CD36 (sCD36) was found to be elevated in type 2 diabetic patients and has been proposed as a marker of insulin resistance and atherosclerosis. In young subjects, sCD36 was associated with cardiovascular risk factors including obesity and hypertriglyceridemia. This study was conducted to further investigate the relationship between plasma sCD36 and cardiovascular risk factors among middle-aged patients with metabolic syndrome (MetS) and healthy controls. sCD36 concentrations were determined by enzyme-linked immunosorbent assays (ELISA) for 41 patients with MetS and 36 healthy controls. Data for other variables were obtained from patients' medical records. sCD36 concentrations were relatively low compared to most other studies and were not significantly different between the MetS group and controls (P-value = 0.17). sCD36 was also not correlated with age, body mass index, glucose, lipid profile, serum electrolytes, or blood counts, and was not significantly different between subjects with obesity, hyperglycemia, dyslipidemia, hypertension, or cardiovascular disease and those without these abnormalities (P-value > 0.05). The inconsistency between the results reported in this study and other studies may be unique to the study population or may result from the lack of a reliable standardized method for determining absolute sCD36 concentrations. Further investigations are required to assess CD36 tissue expression in the study population and to assess the accuracy of the various commercially available sCD36 ELISA kits. A standardized, simple sCD36 ELISA that could be performed in any basic laboratory would be preferable to the specialized flow cytometry methods that detect CD36+ microparticles if sCD36 is to be used as a biomarker.

Keywords: metabolic syndrome, CD36, cardiovascular risk, obesity, type 2 diabetes mellitus

Procedia PDF Downloads 266
679 Quality Assessment of New Zealand Mānuka Honeys Using Hyperspectral Imaging Combined with Deep 1D-Convolutional Neural Networks

Authors: Hien Thi Dieu Truong, Mahmoud Al-Sarayreh, Pullanagari Reddy, Marlon M. Reis, Richard Archer

Abstract:

New Zealand mānuka honey is a honeybee product derived mainly from Leptospermum scoparium nectar. The potent antibacterial activity of mānuka honey derives principally from methylglyoxal (MGO), in addition to the hydrogen peroxide and other lesser activities present in all honey. MGO is formed from dihydroxyacetone (DHA), which is unique to L. scoparium nectar. Mānuka honey also has an idiosyncratic phenolic profile that is useful as a chemical marker. Authentic mānuka honey is highly valuable, but almost all honey is formed from natural mixtures of nectars harvested by a hive over a period of time, and once diluted by other nectars, mānuka honey irrevocably loses value. We aimed to apply hyperspectral imaging to honey frames before bulk extraction to minimise the dilution of genuine mānuka by other honey and ensure authenticity at the source. This technology is non-destructive and suitable for an industrial setting. Chemometrics using linear Partial Least Squares (PLS) and Support Vector Machine (SVM) models showed limited efficacy in interpreting the chemical footprints, due to large non-linear relationships between predictor and predictand in a large sample set, likely caused by honey quality variability across geographic regions. Therefore, an advanced modelling approach, one-dimensional convolutional neural networks (1D-CNN), was investigated for analysing the hyperspectral data and extracting biochemical information from honey. The 1D-CNN model showed superior prediction of honey quality (R² = 0.73, RMSE = 2.346, RPD = 2.56) compared to PLS (R² = 0.66, RMSE = 2.607, RPD = 1.91) and SVM (R² = 0.67, RMSE = 2.559, RPD = 1.98). Classification of mono-floral mānuka honey versus multi-floral and non-mānuka honey exceeded 90% accuracy for all models tried. Overall, this study reveals the potential of HSI and deep learning modelling for automating the evaluation of honey quality in frames.
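As an outline of what a 1D-CNN regressor for spectral data looks like, the sketch below builds a small Keras model on randomly generated stand-in spectra; the wavelength count, layer sizes, and target are assumptions for illustration, not the study's architecture:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in data: 500 spectra of 288 wavelengths each, with a
# continuous quality target (e.g. an MGO-related index). Shapes only; this
# is not the study's hyperspectral dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 288, 1)).astype("float32")
y = rng.normal(size=(500,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(288, 1)),
    tf.keras.layers.Conv1D(16, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                      # regression head
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
```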

Keywords: mānuka honey, quality, purity, potency, deep learning, 1D-CNN, chemometrics

Procedia PDF Downloads 139
678 An Overview of Domain Models of Urban Quantitative Analysis

Authors: Mohan Li

Abstract:

Nowadays, intelligent research technology is becoming more important than traditional research methods in urban research work, and its share will greatly increase in the next few decades. Frequently, such analysis cannot be carried out without some software engineering knowledge, and domain models of urban research become necessary when applying software engineering knowledge to urban work. In many urban planning practice projects, building rational models, feeding them reliable data, and providing enough computation all provide indispensable assistance in producing good urban planning; throughout the work process, domain models can optimize workflow design. At present, human beings have entered the era of big data. The amount of digital data generated by cities every day is increasing at an exponential rate, and new data forms are constantly emerging. How to select a suitable data set from this massive amount of data, and how to manage and process it, has become an ability that more and more planners and urban researchers need to possess. This paper summarizes, and makes predictions about, the emergence of technologies and technological iterations that may affect urban research in the future, helping researchers discover urban problems and implement targeted sustainable urban strategies. These are summarized into seven domain models: the urban and rural regional domain model, urban ecological domain model, urban industry domain model, development dynamics domain model, urban social and cultural domain model, urban traffic domain model, and urban space domain model. These seven domain models can be used to guide the construction of systematic urban research topics and help researchers organize a series of intelligent analytical tools, such as Python, R, and GIS. They make full use of quantitative spatial analysis, machine learning, and other technologies to achieve higher efficiency and accuracy in urban research, assisting people in making reasonable decisions.

Keywords: big data, domain model, urban planning, urban quantitative analysis, machine learning, workflow design

Procedia PDF Downloads 177
677 ChatGPT 4.0 Demonstrates Strong Performance in Standardised Medical Licensing Examinations: Insights and Implications for Medical Educators

Authors: K. O'Malley

Abstract:

Background: The emergence and rapid evolution of large language models (LLMs) (i.e., models of generative artificial intelligence, or AI) has been unprecedented. ChatGPT is one of the most widely used LLM platforms. Using natural language processing technology, it generates customized responses to user prompts, enabling it to mimic human conversation. Responses are generated using predictive modeling of vast swathes of internet text and data and are further refined and reinforced through user feedback. The popularity of LLMs is increasing, with a growing number of students utilizing these platforms for study and revision purposes. Notwithstanding its many novel applications, LLM technology is inherently susceptible to bias and error. This poses a significant challenge in the educational setting, where academic integrity may be undermined. This study aims to evaluate the performance of the latest iteration of ChatGPT (ChatGPT 4.0) in standardized state medical licensing examinations. Methods: A considered search strategy was used to interrogate the PubMed electronic database. The keywords ‘ChatGPT’ AND ‘medical education’ OR ‘medical school’ OR ‘medical licensing exam’ were used to identify relevant literature. The search included all peer-reviewed literature published in the past five years and was limited to publications in English. Eligibility was ascertained based on the study title and abstract and confirmed by consulting the full-text document. Data were extracted into a Microsoft Excel document for analysis. Results: The search yielded 345 publications that were screened. 225 original articles were identified, of which 11 met the pre-determined criteria for inclusion in a narrative synthesis. These studies included performance assessments in national medical licensing examinations from the United States, United Kingdom, Saudi Arabia, Poland, Taiwan, Japan and Germany. ChatGPT 4.0 achieved scores ranging from 67.1 to 88.6 percent. The mean score across all studies was 82.49 percent (SD = 5.95). In all studies, ChatGPT exceeded the threshold for a passing grade in the corresponding exam. Conclusion: The capabilities of ChatGPT in standardized academic assessment in medicine are robust. While this technology can potentially revolutionize higher education, it also presents several challenges with which educators have not had to contend before. The overall strong performance of ChatGPT, as outlined above, may lend itself to unfair use (such as the plagiarism of deliverable coursework) and pose unforeseen ethical challenges (arising from algorithmic bias). Conversely, it highlights potential pitfalls if users assume LLM-generated content to be entirely accurate. In the aforementioned studies, ChatGPT exhibits a margin of error between 11.4 and 32.9 percent, which resonates strongly with concerns regarding the quality and veracity of LLM-generated content. It is imperative to highlight these limitations, particularly to students in the early stages of their education, who are less likely to possess the requisite insight or knowledge to recognize errors, inaccuracies or false information. Educators must inform themselves of these emerging challenges to effectively address them and mitigate potential disruption in academic fora.

Keywords: artificial intelligence, ChatGPT, generative AI, large language models, licensing exam, medical education, medicine, university

Procedia PDF Downloads 32
676 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry

Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine

Abstract:

The aerial photogrammetry of shallow water bottoms has the potential to be an efficient high-resolution survey technique for shallow water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from systematic overestimation of the bottom elevation due to light refraction at the air-water interface. In this study, we present an empirical method to correct for the effect of refraction after the usual SfM-MVS processing, using common software. The presented method utilizes the empirical relation between the measured true depth and the estimated apparent depth to generate an empirical correction factor, which is then used to convert the apparent water depth into a refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The results show that the presented method is more effective than two of the existing methods: the method that applies no correction factor and the method that uses the refractive index of water (1.34) as the correction factor. In comparison with the remaining existing method, which adds an offset term after calculating the correction factor, the presented method performs better at Site 2 and worse at Site 1. However, we found this linear regression method to be unstable when the training data used for calibration are limited. It also suffers from a large negative bias in the correction factor when the apparent water depth estimate is affected by noise, according to our numerical experiment. Overall, the accuracy of the refraction correction method depends on various factors such as the location, image acquisition, and GPS measurement conditions. The most effective method can be selected by statistical selection (e.g., leave-one-out cross-validation).
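The core of the method is a one-parameter regression between apparent and true depth. The sketch below fits such a correction factor on invented calibration pairs (all numbers are illustrative); note that the fitted slope lands near the 1.34 refractive index the authors use as a comparison case:

```python
import numpy as np

# Hypothetical calibration data: apparent depths (m) from the SfM-MVS model
# vs. true depths surveyed at the same points (values invented).
apparent = np.array([0.20, 0.35, 0.50, 0.70, 0.90, 1.10])
true = np.array([0.27, 0.46, 0.68, 0.95, 1.21, 1.49])

# Empirical correction factor: least-squares slope of true vs. apparent depth
# through the origin (one simple realization of the paper's idea; the paper
# also compares variants, e.g. adding an offset term).
factor = np.sum(apparent * true) / np.sum(apparent**2)
print("correction factor:", round(factor, 3))   # ~1.35 for these toy values

corrected = factor * apparent                   # refraction-corrected depths
print("RMS error (m):", np.sqrt(np.mean((corrected - true) ** 2)).round(3))
```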

Keywords: bottom elevation, MVS, river, SfM

Procedia PDF Downloads 299
675 Classifying Turbomachinery Blade Mode Shapes Using Artificial Neural Networks

Authors: Ismail Abubakar, Hamid Mehrabi, Reg Morton

Abstract:

Currently, extensive signal analysis is performed in order to evaluate the structural health of turbomachinery blades. This approach is constrained by time and by the availability of qualified personnel, so new approaches to blade dynamics identification that provide faster and more accurate results are sought after. Generally, modal analysis is employed to acquire the dynamic properties of a vibrating turbomachinery blade and is widely adopted in condition monitoring of blades. The analysis provides useful information on the different modes of vibration and natural frequencies by exploring the different shapes that can be taken up during vibration, since every mode shape has a corresponding natural frequency. Experimental modal testing and finite element analysis are the traditional methods used to evaluate mode shapes, but they have limited applicability to real-life scenarios and thus to a robust condition monitoring scheme. Real-time mode shape evaluation requires rapid evaluation and low computational cost, for which these traditional techniques are unsuitable. In this study, an artificial neural network is developed to evaluate the mode shape of a lab-scale rotating blade assembly, using results from finite element modal analysis as training data. The network performance evaluation shows that the artificial neural network (ANN) is capable of mapping the correlation between natural frequencies and mode shapes, without the need for extensive signal analysis. The approach offers the advantages that the network can classify mode shapes in real time, is simple to implement, and predicts accurately. The work paves the way for the further development of a robust condition monitoring system that incorporates real-time mode shape evaluation.
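A minimal version of the frequency-to-mode-shape mapping can be phrased as a standard classification task. The sketch below trains a small network on invented frequency vectors labelled by mode-shape class; all frequencies, class definitions, and layer sizes are stand-ins, not the study's FE data:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Hypothetical training data standing in for FE modal analysis results:
# each sample is a vector of natural frequencies (Hz), labelled with a
# mode-shape class (0 = first bending, 1 = second bending, 2 = first torsion).
rng = np.random.default_rng(0)
base = np.array([[82, 510, 940], [95, 560, 990], [110, 620, 1050]])
X = np.vstack([b + rng.normal(0, 5, size=(100, 3)) for b in base])
y = np.repeat([0, 1, 2], 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```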

Keywords: modal analysis, artificial neural network, mode shape, natural frequencies, pattern recognition

Procedia PDF Downloads 156
674 Improvements in Transient Testing in The Transient REActor Test (TREAT) with a Choice of Filter

Authors: Harish Aryal

Abstract:

The safe and reliable operation of nuclear reactors has always been one of the topmost priorities in the nuclear industry. Transient testing allows us to understand the time-dependent behavior of the neutron population in response to either a planned change in the reactor conditions or unplanned circumstances. These unforeseen conditions might occur due to sudden reactivity insertions, feedback, power excursions, instabilities, and accidents. To study such behavior, we need transient testing, which is like car crash testing: it estimates the durability and strength of a design. In nuclear design, such transient testing can simulate a wide range of accidents due to sudden reactivity insertions and helps to study the feasibility and integrity of the fuel to be used in certain reactor types. This testing involves a high neutron flux environment and real-time imaging technology, with advanced instrumentation of appropriate accuracy and resolution to study fuel slumping behavior. With the aid of transient testing and adequate imaging tools, it is possible to test the safety basis for reactor and fuel designs, which serves as a gateway to licensing advanced reactors in the future. To that end, it is crucial to fully understand advanced imaging techniques, both analytically and via simulations. This paper presents an innovative method of supporting real-time imaging of fuel pins and other structures during transient testing. The major fuel-motion detection device studied in this work is the hodoscope, which requires collimators. This paper provides 1) an MCNP model and simulation of a Transient Reactor Test (TREAT) core with the central fuel element replaced by a slotted fuel element that provides an open path between test samples and a hodoscope detector, and 2) a choice of filter to improve image resolution.

Keywords: hodoscope, transient testing, collimators, MCNP, TREAT, hodogram, filters

Procedia PDF Downloads 77
673 Measuring the Unmeasurable: A Project of High Risk Families Prediction and Management

Authors: Peifang Hsieh

Abstract:

The prevention of child abuse has aroused serious concern in Taiwan because of the disparity between the increasing number of reported child abuse cases, which doubled over the past decade, and the scarcity of social workers. New Taipei City, the most populous city in Taiwan, where over 70% of the 4 million citizens belong to migrant families in which children's needs can easily be neglected owing to insufficient support from relatives and communities, sees an urgent need for a social support system that preemptively identifies and reaches out to high-risk families of child abuse, so as to offer timely assistance and preventive measures to safeguard the welfare of the children. Big data analysis is the inspiration. As it was clear that high-risk families of child abuse have certain characteristics in common, New Taipei City decided to consolidate detailed background information from the departments of social affairs, education, labor, and health (for example, the parents' employment and health status, and whether they are imprisoned, fugitives, or subject to substance abuse) and cross-reference it for accurate and prompt identification of the high-risk families in need. 'The Service Center for High-Risk Families' (SCHF) was established to integrate data cross-departmentally. By using the machine learning random forest method to build a risk prediction model that can detect early the families in which child abuse is very likely to occur, the SCHF marks high-risk families red, yellow, or green to indicate the urgency of intervention, so that the families concerned can be provided with timely services. The accuracy and recall rates of the above model were 80% and 65%. This prediction model can not only improve the child abuse prevention process by helping social workers differentiate the risk level of newly reported cases, which may significantly reduce their workload, but can also inform future policy-making.
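To make the modelling step concrete, the sketch below trains a random forest on synthetic stand-in features and reports the two metrics quoted in the abstract; the features, labels, and hyperparameters are invented, since the city's case data are confidential:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

# Hypothetical cross-departmental features (employment, health, criminal
# record, substance-abuse flags, ...) with a binary high-risk label; all
# values here are synthetic stand-ins.
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 8))
y = (X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2]
     + rng.normal(0, 1, 2000) > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=42)
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))   # the abstract reports 80%
print("recall:  ", recall_score(y_te, pred))     # the abstract reports 65%
```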

Keywords: child abuse, high-risk families, big data analysis, risk prediction model

Procedia PDF Downloads 135
672 Upcoming Fight Simulation with Smart Shadow

Authors: Ramiz Kuliev, Fuad Kuliev-Smirnov

Abstract:

The 'shadow sparring' training exercise is widely used in the training of boxers and martial artists. The main disadvantage of usual shadow sparring is that the trainer cannot fully control such training or evaluate its results. During a competition, the athlete preparing for the upcoming fight imagines the Shadow (the upcoming opponent) according to his own imagination. 'Smart-Shadow Sparring' (SSS) is an innovative version of shadow sparring. During SSS, the fighter sees the Shadow (a virtual opponent that moves, defends, and punches) and understands when he misses punches from the Shadow. The task of the real athlete is to spar with the virtual one: move around, punch in the direction of unprotected areas of the Shadow, and dodge its punches. The moves and punches of the Shadow are set up before each training session. The system gives the coach full information about the virtual sparring: (i) how many and what type of punches the fighter has landed, (ii) the accuracy of these punches, (iii) how many and what type of virtual punches (punches of the Smart Shadow) the fighter has missed, etc. SSS is recorded as an animated fight between two fighters and helps the coach analyze past training. SSS can be configured to fit the physical and technical characteristics of the next real opponent (size, techniques, speed, missed and landed punches, etc.). This allows the upcoming fight to be simulated and rehearsed, improving readiness for the next opponent. For amateur fighters, SSS will be reconfigured several times during a tournament, as each real opponent becomes known. SSS can be used in three versions: (1) Digital Shadow: the athlete sees the Shadow on a monitor; (2) VR Shadow: the athlete sees the Shadow in VR glasses; (3) Smart Shadow: the Shadow is controlled by artificial intelligence. These technologies are based on the 'semi-real simulation' method and allow coaches to train athletes remotely. Simulating different opponents helps athletes better prepare for competition, and repeated rehearsals of the upcoming fight help improve results. SSS can improve results in boxing, taekwondo, karate, and fencing; 41 sets of medals will be awarded in these sports at the 2020 Olympic Games.

Keywords: boxing, combat sports, fight simulation, shadow sparring

Procedia PDF Downloads 132
671 The Effect of Visual Access to Greenspace and Urban Space on a False Memory Learning Task

Authors: Bryony Pound

Abstract:

This study investigated how views of green or urban space affect learning performance. It provides evidence of the value of visual access to greenspace in work and learning environments, and builds on the extensive research into the cognitive and learning-related benefits of access to green and natural spaces, particularly in learning environments. It demonstrates that visual access to natural spaces whilst learning can produce statistically significantly faster responses than urban views after only 5 minutes of exposure. The primary hypothesis of this research was that a greenspace view would improve short-term learning. Participants were randomly assigned to either a view of parkland or a view of urban buildings from the same room. They completed a psychological test of two stages. The first stage consisted of a presentation of words from eight different categories (four man-made and four natural). Following this, a 2.5-minute break was given; participants were not prompted to look out of the window, but all were observed doing so. The second stage of the test involved a word recognition/false memory test with three word types. Type 1 words were presented words from each category; Type 2 words were non-presented words from those same categories; and Type 3 words were non-presented words from different categories. Participants were asked to respond with whether they thought they had seen the words before or not. Accuracy of responses and reaction times were recorded. The key finding was that reaction times for Type 2 words (the highest difficulty) were significantly different between the urban and green view conditions. Those with an urban view had slower reaction times for these words, so a view of greenspace resulted in better information retrieval for word and false memory recognition. Importantly, this difference was found after only 5 minutes of exposure to either view, during winter, and with a sample size of only 26. Greenspace views improve performance in a learning task, which provides a case for better visual access to greenspace in work and learning environments.

Keywords: benefits, greenspace, learning, restoration

Procedia PDF Downloads 127
670 A Copula-Based Approach for the Assessment of Severity of Illness and Probability of Mortality: An Exploratory Study Applied to Intensive Care Patients

Authors: Ainura Tursunalieva, Irene Hudson

Abstract:

Continuous improvement of both the quality and safety of health care is an important goal in Australia and internationally. The intensive care unit (ICU) receives patients with a wide variety of illnesses of varying severity. Accurately identifying patients at risk of developing complications or dying is crucial to increasing healthcare efficiency. Thus, it is essential for clinicians and researchers to have a robust framework capable of evaluating the risk profile of a patient, and ICU scoring systems provide such a framework. The Acute Physiology and Chronic Health Evaluation III and the Simplified Acute Physiology Score II are ICU scoring systems frequently used for assessing the severity of acute illness. These scoring systems collect multiple risk factors for each patient, including physiological measurements, then render the assessment outcomes of the individual risk factors into a single numerical value; a higher score indicates a more severe patient condition. Furthermore, the Mortality Probability Model II uses logistic regression based on independent risk factors to predict a patient's probability of mortality. An important overlooked limitation of SAPS II and MPM II is that they do not, to date, include interaction terms between a patient's vital signs. This is a prominent oversight, as it is likely there is an interplay among vital signs: the co-existence of certain conditions may pose a greater health risk than when these conditions exist independently. One barrier to including such interaction terms in predictive models is the dimensionality issue, as variable selection becomes difficult. We propose an innovative scoring system that takes into account the dependence structure among a patient's vital signs, such as systolic and diastolic blood pressures, heart rate, pulse interval, and peripheral oxygen saturation. Copulas will capture the dependence among normally distributed and skewed variables, as some of the vital sign distributions are skewed. The estimated dependence parameter will then be incorporated into the traditional scoring systems to adjust the points allocated to the individual vital sign measurements. The same dependence parameter will also be used to create an alternative copula-based model for predicting a patient's probability of mortality. The new copula-based approach will accommodate not only a patient's trajectories of vital signs but also the joint dependence probabilities among the vital signs. We hypothesise that this approach will produce more stable assessments and lead to more time-efficient and accurate predictions. We will use two data sets: (1) 250 ICU patients admitted once to the Chui Regional Hospital (Kyrgyzstan) and (2) the agitation-sedation profiles of 37 ICU patients collected by the Hunter Medical Research Institute (Australia). Both the traditional scoring approach and our copula-based approach will be evaluated using the Brier score to indicate overall model performance, the concordance (or c) statistic to indicate discriminative ability (the area under the receiver operating characteristic (ROC) curve), and goodness-of-fit statistics for calibration. We will also report discrimination and calibration values, and will visualize the copulas and high-dimensional regions of risk interrelating two or three vital signs in so-called higher-dimensional ROCs.
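The dependence parameter at the heart of the proposal can be estimated very simply for a Gaussian copula: transform each margin to normal scores and correlate. The sketch below does this on invented vital-sign pairs, with one skewed margin included deliberately; the Gaussian family is an assumption here, since the abstract does not name a specific copula:

```python
import numpy as np
from scipy import stats

# Hypothetical paired vital signs (systolic BP and heart rate) standing in
# for real ICU observations.
rng = np.random.default_rng(7)
sbp = rng.normal(120, 15, 300)
hr = 0.4 * (sbp - 120) + rng.gamma(shape=9, scale=10, size=300)  # skewed margin

def normal_scores(x):
    """Map a sample to standard-normal scores via its empirical CDF."""
    ranks = stats.rankdata(x)
    return stats.norm.ppf(ranks / (len(x) + 1))

# Dependence parameter rho of the fitted Gaussian copula: the correlation
# of the normal scores, robust to skewness in the margins.
rho = np.corrcoef(normal_scores(sbp), normal_scores(hr))[0, 1]
print("copula dependence parameter:", round(rho, 3))
```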

Keywords: copula, intensive unit scoring system, ROC curves, vital sign dependence

Procedia PDF Downloads 152
669 Performance Based Seismic Retrofit of Masonry Infilled Reinforced Concrete Frames Using Passive Energy Dissipation Devices

Authors: Alok Madan, Arshad K. Hashmi

Abstract:

The paper presents a plastic analysis procedure, based on the energy balance concept, for the performance-based seismic retrofit of multi-story, multi-bay masonry infilled reinforced concrete (R/C) frames with a 'soft' ground story using passive energy dissipation (PED) devices, with the objective of achieving a target performance level of the retrofitted R/C frame for a given seismic hazard level at the building site. The proposed energy-based plastic analysis procedure was employed to develop performance-based design (PBD) formulations for PED devices for a simulated application in the seismic retrofit of existing frame structures designed in compliance with the prevalent standard codes of practice. The PBD formulations developed for PED devices were implemented in the simulated seismic retrofit of a representative code-compliant masonry infilled R/C frame with a 'soft' ground story, using friction dampers as the PED device. Non-linear dynamic analyses of the retrofitted masonry infilled R/C frames were performed to investigate the efficacy and accuracy of the proposed energy-based plastic analysis procedure in achieving the target performance level under design-level earthquakes. The results of the non-linear dynamic analyses demonstrate that the maximum inter-story drifts in the masonry infilled R/C frames with a 'soft' ground story retrofitted with friction dampers designed using the proposed PBD formulations are controlled within the target drifts under near-field as well as far-field earthquakes.
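For context, energy-based design procedures of this kind generally start from the seismic energy balance written in the standard form below (a generic textbook statement; the paper's exact formulation is not reproduced here):

```latex
% Seismic energy balance of a structure with supplemental dampers:
E_I = E_k + E_s + E_{\xi} + E_H + E_D
% E_I : earthquake input energy           E_k : kinetic energy
% E_s : recoverable elastic strain energy E_{\xi} : inherent viscous damping
% E_H : hysteretic (damage-related) demand on the frame
% E_D : energy dissipated by the PED (friction) devices.
% The design intent is to size E_D so that E_H stays within the target
% performance level.
```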

Keywords: energy methods, masonry infilled frame, near-field earthquakes, seismic protection, supplemental damping devices

Procedia PDF Downloads 298
668 An Early Attempt of Artificial Intelligence-Assisted Language Oral Practice and Assessment

Authors: Paul Lam, Kevin Wong, Chi Him Chan

Abstract:

Constant practice and accurate, immediate feedback are the keys to improving students' speaking skills. However, traditional oral examination often fails to provide such opportunities to students. Traditional, face-to-face oral assessment is time consuming: attending to the oral needs of one student often leads to the neglect of others, so teachers can only provide limited opportunities and feedback to students. Moreover, students' incentive to practice is also reduced by their anxiety and shyness about speaking the new language. A mobile app was developed that uses artificial intelligence (AI) to provide immediate feedback on students' speaking performance, as an attempt to solve the above-mentioned problems. Firstly, it was thought that online exercises would greatly increase the learning opportunities of students, as they can now practice more without the need for a teacher's presence. Secondly, the automatic feedback provided by the AI would enhance students' motivation to practice, as their performance is evaluated instantly. Lastly, students should feel less anxious and shy than when practicing orally directly in front of teachers. Technically, the program makes use of speech-to-text functions to generate feedback to students. To be specific, the software analyzes students' oral input through a speech-to-text AI engine and then cleans up the results to the point where they can be compared with the target text. The mobile app was piloted with English teachers, who were asked for their feedback. Preliminary trials indicated that the approach has limitations: much of the users' pronunciation was automatically corrected by the speech recognition function, as wise guessing is already integrated into many such systems. Nevertheless, the teachers are confident that the app can be further improved for accuracy. It has the potential to significantly improve oral drilling by giving students more chances to practice. Moreover, they believe that the success of this mobile app confirms the potential to extend AI-assisted assessment to other language skills, such as writing, reading, and listening.
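The comparison step, aligning the recognized transcript against the target sentence, can be sketched with a standard sequence alignment. The snippet below is a simple word-level illustration; the app's actual cleanup and scoring pipeline is not described in the abstract:

```python
import difflib

def pronunciation_feedback(target: str, recognized: str) -> float:
    """Score a learner's utterance by aligning the speech-to-text output
    with the target sentence (word-level similarity; illustrative only)."""
    t, r = target.lower().split(), recognized.lower().split()
    matcher = difflib.SequenceMatcher(None, t, r)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            print(f"{op}: expected {t[i1:i2]} got {r[j1:j2]}")
    return matcher.ratio()

score = pronunciation_feedback("she sells sea shells by the sea shore",
                               "she sells see shells by the shore")
print("similarity:", round(score, 2))
```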

Keywords: artificial intelligence, mobile learning, oral assessment, oral practice, speech-to-text function

Procedia PDF Downloads 103
667 Monitoring Deforestation Using Remote Sensing and GIS

Authors: Tejaswi Agarwal, Amritansh Agarwal

Abstract:

Forest ecosystems play a very important role in the global carbon cycle, storing about 80% of all above-ground and 40% of all below-ground terrestrial organic carbon. There is much interest in the extent of tropical forests and their rates of deforestation for two reasons: their contribution to greenhouse gases and their profoundly negative impact on biodiversity. Deforestation has many ecological, social, and economic consequences, one of which is the loss of biological diversity. The rapid deployment of remote sensing (RS) satellites and the development of RS analysis techniques over the past three decades have provided a reliable, effective, and practical way to characterize terrestrial ecosystem properties. Global estimates of tropical deforestation vary widely, ranging from 50,000 to 170,000 km²/yr; recent FAO tropical deforestation estimates for 1990–1995 cite 116,756 km²/yr globally. Remote sensing can prove to be a very useful tool for monitoring forests and the associated deforestation to a sufficient level of accuracy without the need to physically survey the forest areas, many of which are physically inaccessible. The methodology for the assessment of forest cover using digital image processing (ERDAS) has been followed. The satellite data for the study was procured from the Indian Institute of Remote Sensing (IIRS), Dehradun, in digital format. While procuring the satellite data, care was taken to ensure that the data was cloud-free and did not belong to the dry, leafless season. The Normalized Difference Vegetation Index (NDVI), defined as NDVI = (NIR − Red) / (NIR + Red), has been used as a numerical indicator of the reduction in ground biomass. After calculating the NDVI variations and their associated mean, we analysed the change in ground biomass. Through this paper, we have tried to indicate the rate of deforestation over a given period of time by comparing the forest cover at different time intervals. With the help of remote sensing and GIS techniques, it is clearly shown that the total forest cover is continuously degrading and transforming into various land use/land cover categories.
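
The NDVI computation used here reduces to a few lines once the red and near-infrared bands are available as arrays; the sketch below assumes the bands have already been read into NumPy arrays (file handling and band order depend on the actual satellite product).

```python
# NDVI = (NIR - Red) / (NIR + Red); assumes red/nir are already loaded
# as NumPy arrays (band order and I/O depend on the satellite product).
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Per-pixel NDVI, with zero-denominator pixels mapped to 0."""
    nir = nir.astype("float64")
    red = red.astype("float64")
    denom = nir + red
    with np.errstate(divide="ignore", invalid="ignore"):
        out = (nir - red) / denom
    return np.where(denom == 0, 0.0, out)
```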

Keywords: remote sensing, deforestation, supervised classification, NDVI, change detection

Procedia PDF Downloads 1204
666 Numerical Studies on Bypass Thrust Augmentation Using Convective Heat Transfer in Turbofan Engine

Authors: R. Adwaith, J. Gopinath, Vasantha Kohila B., R. Chandru, Arul Prakash R.

Abstract:

The turbofan engine is a type of air-breathing engine, widely used in aircraft propulsion, that produces thrust mainly from the mass flow of air bypassing the engine core. The present research develops an effective numerical method for increasing the thrust generated from the bypass air. This thrust increase is brought about by heating the walls of the bypass valve from the combustion chamber using convective heat transfer; computationally, external heat is used to enhance the velocity of the bypass air. The bypass valves are heated either externally, using a multi-cell tube resistor that converts electricity generated by dynamos into heat, or by heat transferred from the combustion chamber. This raises the temperature of the flow in the valves and thereby increases the velocity of the flow entering the nozzle of the engine. As a result, the mass flow of air passing through the core engine to produce thrust can be significantly reduced, saving a considerable amount of jet fuel. Numerical analysis has been carried out on a scaled-down version of a typical turbofan bypass valve, with the valve wall temperature raised to 700 K. It is observed from the analysis that the exit velocity contributing to thrust increased significantly, by 10%, due to the heating of the bypass valve. The optimum degree of temperature increase, and the corresponding increase in jet velocity, is calculated to determine the operating temperature range for an efficient increase in velocity. The technique used in this research increases thrust using heated bypass air without extracting much work from the fuel, and thus improves the efficiency of existing turbofan engines. Dimensional analysis has been carried out to verify the accuracy of the numerical results.
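
As a rough first-order illustration of why heating raises the exit velocity: for an ideal nozzle at a fixed pressure ratio and constant gas properties, jet velocity scales with the square root of the flow's stagnation temperature. The sketch below applies only that textbook scaling; it is not the authors' CFD model, and the temperatures used are assumed for illustration.

```python
# First-order illustration: V_exit scales with sqrt(T0) at a fixed nozzle
# pressure ratio (ideal-gas, constant-gamma assumption; not the paper's
# CFD model, and the temperatures below are assumed).
from math import sqrt

def velocity_ratio(t_heated: float, t_baseline: float) -> float:
    """Heated-to-baseline exit velocity ratio at the same pressure ratio."""
    return sqrt(t_heated / t_baseline)

# Example: bypass flow warmed from an assumed 330 K to 400 K
print(velocity_ratio(400.0, 330.0))  # ~1.10, i.e. roughly a 10 % gain
```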

Keywords: turbofan engine, bypass valve, multi-cell tube, convective heat transfer, thrust

Procedia PDF Downloads 358
665 NDVI as a Measure of Change in Forest Biomass

Authors: Amritansh Agarwal, Tejaswi Agarwal

Abstract:

Forest ecosystems play a very important role in the global carbon cycle, storing about 80% of all above-ground and 40% of all below-ground terrestrial organic carbon. There is much interest in the extent of tropical forests and their rates of deforestation for two reasons: their contribution to greenhouse gases and their profoundly negative impact on biodiversity. Deforestation has many ecological, social, and economic consequences, one of which is the loss of biological diversity. The rapid deployment of remote sensing (RS) satellites and the development of RS analysis techniques over the past three decades have provided a reliable, effective, and practical way to characterize terrestrial ecosystem properties. Global estimates of tropical deforestation vary widely, ranging from 50,000 to 170,000 km²/yr; recent FAO tropical deforestation estimates for 1990–1995 cite 116,756 km²/yr globally. Remote sensing can prove to be a very useful tool for monitoring forests and the associated deforestation to a sufficient level of accuracy without the need to physically survey the forest areas, many of which are physically inaccessible. The methodology for the assessment of forest cover using digital image processing (ERDAS) has been followed. The satellite data for the study was procured from the USGS website in digital format. While procuring the satellite data, care was taken to ensure that the data was cloud- and aerosol-free by making use of the FLAASH atmospheric correction technique. The Normalized Difference Vegetation Index (NDVI), defined as NDVI = (NIR − Red) / (NIR + Red), has been used as a numerical indicator of the reduction in ground biomass. After calculating the NDVI variations and their associated mean, we analysed the change in ground biomass. Through this paper, we have tried to indicate the rate of deforestation over a given period of time by comparing the forest cover at different time intervals. With the help of remote sensing and GIS techniques, it is clearly shown that the total forest cover is continuously degrading and transforming into various land use/land cover categories.
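
Since the study compares forest cover at different time intervals, a minimal change-detection step on two NDVI rasters might look like the sketch below; the 0.1 loss threshold and the synthetic arrays are illustrative assumptions, not values from the paper.

```python
# Illustrative NDVI change detection between two dates; the 0.1 loss
# threshold and the toy rasters are assumptions, not values from the study.
import numpy as np

def ndvi_change(ndvi_t1: np.ndarray, ndvi_t2: np.ndarray,
                loss_threshold: float = 0.1):
    """Flag pixels whose NDVI dropped by more than the threshold."""
    delta = ndvi_t2 - ndvi_t1
    deforested = delta < -loss_threshold
    return delta, deforested

t1 = np.array([[0.7, 0.6], [0.8, 0.5]])   # earlier date
t2 = np.array([[0.4, 0.6], [0.8, 0.2]])   # later date
delta, mask = ndvi_change(t1, t2)
print(mask.mean())  # fraction of pixels flagged as probable forest loss
```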

Keywords: remote sensing, deforestation, supervised classification, NDVI, change detection

Procedia PDF Downloads 402
664 Development of Soil Test Kits to Determine Organic Matter Available Phosphorus and Exchangeable Potassium in Thailand

Authors: Charirat Kusonwiriyawong, Supha Photichan, Wannarut Chutibutr

Abstract:

Soil test kits for rapid analysis of organic matter, available phosphorus, and exchangeable potassium were developed to provide farmers with a low-cost field testing kit. The objective was to provide a decision tool for improving soil fertility. One aspect of soil test kit development was ease of use, measured as the time required to complete the organic matter, available phosphorus, and exchangeable potassium tests on one soil sample. This testing kit requires only two extractions, involves no filtration, and takes approximately 15 minutes per sample. Organic matter was determined principally by oxidizing carbon with KMnO₄ and reading the result against a standard color chart. In addition, a modified single extractant (Mehlich I) was applied to extract available phosphorus and exchangeable potassium. The molybdenum blue method and a turbidimetric method, each read against a standard color chart, were adapted to analyze available phosphorus and exchangeable potassium, respectively. Results with the modified single extractant used in the soil test kits matched analytical laboratory results highly significantly (r = 0.959** and 0.945** for available phosphorus and exchangeable potassium, respectively). Linear regressions were calculated between the modified single extractant and the standard laboratory analysis (y = 0.9581x − 12.973 for available phosphorus and y = 0.5372x + 15.283 for exchangeable potassium). These equations were calibrated to formulate fertilizer rate recommendations for specific crops. To validate quality, the soil test kits were distributed to farmers and extension workers. We found that the accuracy of the soil test kits was 71.0%, 63.9%, and 65.5% for organic matter, available phosphorus, and exchangeable potassium, respectively. A quantitative survey was also conducted to assess user satisfaction with the soil test kits; more than 85% of respondents said these testing kits were more convenient, economical, and reliable than other commercial soil test kits. Based upon the findings of this study, soil test kits can be an alternative for providing soil analysis and fertility recommendations where a soil testing laboratory is not available.
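
Applying the calibration the abstract reports is a direct substitution into the two regression equations; the sketch below encodes them as given (the example kit readings are assumptions for illustration, and units follow the study).

```python
# Kit reading (x) to lab-equivalent value (y) via the regression equations
# reported in the abstract; the example readings below are assumed.

def lab_equivalent_p(kit_p: float) -> float:
    """Available phosphorus: y = 0.9581x - 12.973 (r = 0.959)."""
    return 0.9581 * kit_p - 12.973

def lab_equivalent_k(kit_k: float) -> float:
    """Exchangeable potassium: y = 0.5372x + 15.283 (r = 0.945)."""
    return 0.5372 * kit_k + 15.283

print(lab_equivalent_p(60.0))   # ~44.5
print(lab_equivalent_k(120.0))  # ~79.7
```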

Keywords: available phosphorus, exchangeable potassium, modified single extractant, organic matter, soil test kits

Procedia PDF Downloads 146
663 An Attentional Bi-Stream Sequence Learner (AttBiSeL) for Credit Card Fraud Detection

Authors: Amir Shahab Shahabi, Mohsen Hasirian

Abstract:

Modern societies, marked by expansive Internet connectivity and the rise of e-commerce, are now integrated with digital platforms at an unprecedented level. The efficiency, speed, and accessibility of e-commerce have garnered a substantial consumer base. Against this backdrop, electronic banking has proliferated rapidly within the realm of online activities. However, this growth has inadvertently created an environment conducive to illicit activities, notably electronic payment fraud, posing a formidable challenge to electronic banking. Electronic fraud detection plays a pivotal role in upholding the integrity of electronic commerce and business transactions, particularly in the context of credit cards, which underscores the imperative of comprehensive research in this field. To this end, our study introduces an Attentional Bi-Stream Sequence Learner (AttBiSeL) framework that leverages attention mechanisms and recurrent networks. By incorporating bidirectional recurrent layers, specifically bidirectional Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) layers, the proposed model extracts past and future transaction sequences while accounting for the temporal flow of information in both directions. Moreover, the integration of an attention mechanism accentuates specific transactions to varying degrees, as manifested in the output of the recurrent networks. The effectiveness of the proposed approach to automatic credit card fraud classification is evaluated on the European Cardholders' Fraud Dataset. Empirical results validate that the hybrid architecture presented in this study yields enhanced accuracy compared to previous studies.
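
A minimal sketch of the kind of architecture the abstract describes — two bidirectional recurrent streams (LSTM and GRU) whose outputs are fused and passed through an attention layer — is given below in Keras. The layer sizes, concatenation-based fusion, pooling, and all hyperparameters are assumptions; the abstract does not give AttBiSeL's exact configuration.

```python
# Sketch of a bi-stream BiLSTM/BiGRU sequence learner with attention for
# fraud classification; layer sizes, fusion, and hyperparameters are
# assumptions, not the paper's exact AttBiSeL configuration.
import tensorflow as tf
from tensorflow.keras import layers, Model

SEQ_LEN, N_FEATURES = 30, 29   # e.g. 30 past transactions, 29 features each

inputs = layers.Input(shape=(SEQ_LEN, N_FEATURES))

# Stream 1: bidirectional LSTM; Stream 2: bidirectional GRU
lstm_seq = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(inputs)
gru_seq = layers.Bidirectional(layers.GRU(64, return_sequences=True))(inputs)

merged = layers.Concatenate()([lstm_seq, gru_seq])

# Self-attention weights each transaction in the sequence before pooling
attended = layers.Attention()([merged, merged])
pooled = layers.GlobalAveragePooling1D()(attended)

outputs = layers.Dense(1, activation="sigmoid")(pooled)  # fraud probability
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```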

Keywords: credit card fraud, deep learning, attention mechanism, recurrent neural networks

Procedia PDF Downloads 14