Search results for: restructuring digital factory model
10568 Discourses in Mother Tongue-Based Classes: The Case of Hiligaynon Language
Authors: Kayla Marie Sarte
Abstract:
This study sought to describe mother tongue-based classes in the light of classroom interactional discourse using the Sinclair and Coulthard model. It specifically identified the exchanges, grouped into Teaching and Boundary types; the moves, coded as Opening, Answering and Feedback; and the occurrence of the acts (Bid, Cue, Nominate, Reply, React, Acknowledge, Clue, Accept, Evaluate, Loop, Comment, Starter, Conclusion, Aside and Silent Stress) in the classroom, and determined what these reveal about the teaching and learning processes in the MTB classroom. As a qualitative study using the Single Collective Case Within-Site (embedded) design, it employed varied data collection procedures such as non-participant observations, audio-recordings and transcription of MTB classes, and semi-structured interviews. The results revealed the presence of all the codes in the model (except for the silent stress), which implied that the Hiligaynon mother tongue-based class was eclectic, cultural and communicative, and had a healthy, analytical and focused environment that aligned with the aims of MTB-MLE and affirmed the purported benefits of mother tongue teaching. The study also identified gaps in mother tongue teaching and learning, including children's difficulty in memorizing Hiligaynon terms that are expressed in English in their homes and communities.
Keywords: discourse analysis, language teaching and learning, mother tongue-based education, multilingualism
Procedia PDF Downloads 260
10567 A Kinetic Study on Recovery of High-Purity Rutile TiO₂ Nanoparticles from Titanium Slag Using Sulfuric Acid under Sonochemical Procedure
Authors: Alireza Bahramian
Abstract:
High-purity TiO₂ nanoparticles (NPs) with sizes ranging between 50 nm and 100 nm were synthesized from titanium slag through the sulphate route under a sonochemical procedure. The effects of dissolution parameters such as the sulfuric acid/slag weight ratio, caustic soda concentration, digestion temperature and time, and initial particle size of the dried slag on the extraction efficiency of TiO₂ and the removal of iron were examined. By optimizing the digestion conditions, a rutile TiO₂ powder with a surface area of 42 m²/g and a mean pore diameter of 22.4 nm was prepared. A thermo-kinetic analysis showed that the digestion temperature has an important effect, while the acid/slag weight ratio and the initial size of the slag have a moderate effect on the dissolution rate. The shrinking-core model, including both chemical surface reaction and surface diffusion, is used to describe the leaching process. The low value of the activation energy, 38.12 kJ/mol, indicates that the surface chemical reaction is the rate-controlling step. The kinetic analysis suggested a first-order reaction mechanism with respect to the acid concentration.
Keywords: TiO₂ nanoparticles, titanium slag, dissolution rate, sonochemical method, thermo-kinetic study
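For readers who want to reproduce this kind of analysis, the sketch below shows how a surface-reaction-controlled shrinking-core model and an Arrhenius regression can be fit in Python; the conversion data, temperatures, and fitting choices are hypothetical illustrations, not the study's measurements.

```python
# A minimal sketch (not the authors' code) of the shrinking-core analysis the
# abstract describes: for surface-reaction control, 1 - (1 - X)**(1/3) = k*t,
# and an Arrhenius plot of ln(k) vs 1/T yields the activation energy
# (the paper reports 38.12 kJ/mol for its real data).
import numpy as np

R = 8.314  # J/(mol*K)

def rate_constant(times, conversions):
    """Fit k in the surface-reaction shrinking-core model g(X) = k*t."""
    g = 1.0 - (1.0 - np.asarray(conversions)) ** (1.0 / 3.0)
    t = np.asarray(times, dtype=float)
    return float(np.sum(g * t) / np.sum(t * t))  # least-squares slope through origin

# Hypothetical leaching data at three digestion temperatures (K)
data = {
    423.0: ([10, 20, 40, 60], [0.15, 0.28, 0.48, 0.62]),
    443.0: ([10, 20, 40, 60], [0.22, 0.40, 0.64, 0.78]),
    463.0: ([10, 20, 40, 60], [0.31, 0.53, 0.79, 0.90]),
}

T = np.array(sorted(data))
k = np.array([rate_constant(*data[temp]) for temp in T])

# Arrhenius regression: ln k = ln A - Ea/(R*T)
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R / 1000.0  # kJ/mol
print(f"Estimated activation energy: {Ea:.1f} kJ/mol")
```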
Procedia PDF Downloads 256
10566 Financial Modeling for Net Present Benefit Analysis of Electric Bus and Diesel Bus and Applications to NYC, LA, and Chicago
Authors: Jollen Dai, Truman You, Xinyun Du, Katrina Liu
Abstract:
Transportation is one of the leading sources of greenhouse gas (GHG) emissions. Thus, to meet the 2015 Paris Agreement, all countries must adopt a different and more sustainable transportation system. From bikes to Maglev, the world is slowly shifting to sustainable transportation. To develop a viable public transit system, a sustainable web of buses must be implemented. As of now, only a handful of cities have adopted a detailed plan to implement a full fleet of e-buses by the 2030s, with Shenzhen in the lead. Every change requires a detailed plan and a focused analysis of the impacts of the change. In this report, the economic and financial implications have been taken into consideration to develop a well-rounded 10-year plan for New York City. We also apply the same financial model to two other cities, LA and Chicago. We picked NYC, Chicago, and LA to conduct the comparative NPB analysis since they are all big metropolitan cities with complex transportation systems. All three cities have started an action plan to achieve a full fleet of e-buses in the coming decades. In addition, their energy carbon footprints and energy prices are very different, and these are the key factors in the benefits of electric buses. Using TCO (Total Cost of Ownership) financial analysis, we developed a model to calculate the NPB (Net Present Benefit) and compare EBS (electric buses) to DBS (diesel buses). We have considered all essential aspects in our model: the initial investment, including the cost of a bus, charger, and installation; government funds (federal, state, local); labor cost; energy (electricity or diesel) cost; maintenance cost; insurance cost; health and environmental benefits; and the V2G (vehicle-to-grid) benefit. We see about $1,400,000 in benefits over the 12-year lifetime of an EBS compared to a DBS, provided government funds offset 50% of the EBS purchase cost. With the government subsidy, an EBS starts to generate positive cash flow in the 5th year and can pay back its investment in 5 years. Note that our model counts environmental and health benefits, with $50,000 per bus per year counted as health benefits. Besides the health benefits, the most significant benefits come from energy cost savings and maintenance savings, which are about $600,000 and $200,000, respectively, over the 12-year life cycle. Using linear regression under given budget limitations, we then designed an optimal three-phase process to electrify the entire NYC bus fleet in 10 years, i.e., by 2033. The linear regression process minimizes the total cost over the years and yields the lowest environmental cost. The overall benefit of replacing all DBS with EBS for NYC is over $2.1 billion by 2033. For LA and Chicago, the benefits of electrifying the current bus fleets are $1.04 billion and $634 million by 2033, respectively. All NPB analyses and the algorithm to optimize the phased electrification are implemented in Python code and can be shared.
Keywords: financial modeling, total cost of ownership, net present benefits, electric bus, diesel bus, NYC, LA, Chicago
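Since the abstract notes the model is implemented in Python, a minimal sketch of the discounted TCO/NPB comparison is given below; only the 12-year lifetime, the 50% purchase subsidy, and the $50,000 annual health benefit come from the abstract, while the remaining cost figures and the 3% discount rate are illustrative assumptions.

```python
# A minimal sketch of the TCO/NPB comparison described above (the full model
# is not public). The annual health benefit ($50,000/bus) is taken from the
# abstract; the other per-bus figures and the 3% discount rate are assumed.
LIFETIME = 12          # years, per the abstract
DISCOUNT = 0.03        # assumed discount rate

def npv(cashflows, rate=DISCOUNT):
    """Discount a list of cash flows, with index 0 occurring today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def bus_tco(purchase, subsidy, annual_energy, annual_maintenance,
            annual_benefit=0.0):
    """Net present cost (less negative = cheaper) of one bus over its life."""
    flows = [-(purchase * (1 - subsidy))]
    flows += [annual_benefit - annual_energy - annual_maintenance] * LIFETIME
    return npv(flows)

# Illustrative per-bus figures (USD); the energy and maintenance gaps are
# chosen to roughly match the abstract's ~$600k and ~$200k lifetime savings.
ebs = bus_tco(purchase=900_000, subsidy=0.50,   # 50% purchase subsidy
              annual_energy=30_000, annual_maintenance=25_000,
              annual_benefit=50_000)            # health benefit per abstract
dbs = bus_tco(purchase=500_000, subsidy=0.0,
              annual_energy=80_000, annual_maintenance=42_000)

print(f"NPB of one EBS relative to one DBS: ${ebs - dbs:,.0f}")
```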
Procedia PDF Downloads 53
10565 Temporal Profile of T2 MRI and 1H-MRS in the MDX Mouse Model of Duchenne Muscular Dystrophy
Authors: P. J. Sweeney, T. Ahtoniemi, J. Puoliväli, T. Laitinen, K. Lehtimäki, A. Nurmi, D. Wells
Abstract:
Duchenne muscular dystrophy (DMD) is an X-linked, lethal muscle-wasting disease for which there is currently no treatment that effectively prevents the muscle necrosis and progressive muscle loss. DMD is among the most common inherited diseases, affecting around 1/3500 live male births. MDX (X-linked muscular dystrophy) mice only partially encapsulate the disease in humans and display muscle weakness, muscle damage and edema during a period deemed the "critical period", when these mice go through cycles of muscular degeneration and regeneration. Although the MDX mutant mouse has been extensively studied as a model for DMD, to date an extensive temporal, non-invasive imaging profile that utilizes magnetic resonance imaging (MRI) and 1H-magnetic resonance spectroscopy (1H-MRS) has not been performed. In addition, longitudinal imaging characterization has not coincided with attempts to exacerbate the progressive muscle damage by exercise. In this study, we employed an 11.7 T small-animal MRI scanner to characterize the MRI and MRS profile of MDX mice longitudinally over a 12-month period during which the mice were subjected to exercise. Male mutant MDX mice (n=15) and male wild-type mice (n=15) were subjected to a chronic exercise regime of treadmill walking (30 min/session) bi-weekly over the whole 12-month follow-up period. Mouse gastrocnemius and tibialis anterior muscles were profiled with baseline T2-MRI and 1H-MRS at 6 weeks of age. Imaging and spectroscopy were repeated at 3, 6, 9 and 12 months of age. Plasma creatine kinase (CK) level measurements coincided with the time-points for T2-MRI and 1H-MRS, and were also taken after the "critical period" at 10 weeks of age. The results obtained from this study indicate that chronic exercise extends the dystrophic phenotype of MDX mice, as evidenced by T2-MRI and 1H-MRS. T2-MRI revealed the extent and location of muscle damage in the gastrocnemius and tibialis anterior muscles as hyperintensities (lesions and edema) in exercised MDX mice over the follow-up period. The magnitude of the muscle damage remained stable over time in exercised mice. No evident fat infiltration or accumulation in the muscle tissues was seen at any time-point in exercised MDX mice. Creatine, choline and taurine levels evaluated by 1H-MRS from the same muscles were found to be significantly decreased at each time-point. Extramyocellular (EMCL) and intramyocellular (IMCL) lipids did not change in exercised mice, supporting the findings on fat content from the anatomical T2-MRI scans. Creatine kinase levels were significantly higher in exercised MDX mice during the follow-up period and, importantly, remained stable over the whole follow-up period. Taken together, we have described here a longitudinal profile of muscle damage and muscle metabolic changes in MDX mice subjected to chronic exercise. The extent of the muscle damage measured by T2-MRI was stable through the follow-up period in the muscles examined. In addition, the metabolic profile, especially the creatine, choline and taurine levels in muscles, was sustained between time-points. The anatomical muscle damage evaluated by T2-MRI was supported by plasma CK levels, which remained stable over the follow-up period. These findings show that non-invasive imaging and spectroscopy can be used effectively to evaluate chronic muscle pathology. These techniques can also be used to evaluate the effect of various manipulations, such as exercise, on the phenotype of the mice.
Many of the findings we present here are translatable to the clinical disease, such as the decreased creatine, choline and taurine levels in muscles. Imaging by T2-MRI and 1H-MRS also revealed that fat content and extramyocellular and intramyocellular lipids, respectively, are not changed in MDX mice, which is in contrast to the clinical manifestation of Duchenne muscular dystrophy. The findings show that non-invasive imaging can be used to characterize the phenotype of the MDX model and its translatability to clinical disease, and to study events that have traditionally not been examined, such as the sustained muscle damage caused by rigorous exercise after the "critical period". The ability of this model to display sustained damage beyond the spontaneous "critical period", and in turn to support the study of drug effects on this extended phenotype, will increase the value of the MDX mouse model as a tool to study therapies and treatments aimed at DMD and associated diseases.
Keywords: 1H-MRS, MRI, muscular dystrophy, mouse model
Procedia PDF Downloads 360
10564 Analysis Model for the Relationship of Users, Products, and Stores on Online Marketplace Based on Distributed Representation
Authors: Ke He, Wumaier Parezhati, Haruka Yamashita
Abstract:
Recently, online marketplaces in the e-commerce industry, such as Rakuten and Alibaba, have become some of the most popular shopping platforms in Asia. On these shopping websites, consumers can select and purchase products from a large number of stores. Additionally, consumers of an e-commerce site have to register their name, age, gender, and other information in advance to access their registered account. Therefore, a method for analyzing consumer preferences from both the store side and the product side is required. This study uses the Doc2Vec method, which has been studied in the field of natural language processing. Doc2Vec has been used in many document classification studies to extract semantic relationships between documents and words; in our setting, documents represent consumers and words represent products. This concept is applicable to representing the relationship between users and items; however, the problem is that one more factor (i.e., shops) needs to be considered in Doc2Vec. More precisely, a method for analyzing the relationship between consumers, stores, and products is required. The purpose of our study is to combine the Doc2Vec analysis for users and shops with that for users and items in the same feature space. This method enables the calculation of similar shops and items for each user. In this study, we analyze real data accumulated in an online marketplace and demonstrate the efficiency of the proposal.
Keywords: Doc2Vec, online marketplace, marketing, recommendation systems
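A minimal sketch of this idea is shown below, assuming hypothetical toy transactions: tagging each purchase record with both its user ID and its shop ID places users, shops, and products in one Doc2Vec feature space, so similarities can be queried across all three.

```python
# A minimal gensim sketch of the combined embedding described above. The
# transactions, IDs, and hyperparameters are toy placeholders.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

transactions = [
    (["rice_cooker", "green_tea"], "user_1", "shop_a"),
    (["green_tea", "teapot"], "user_2", "shop_a"),
    (["rice_cooker", "knife_set"], "user_1", "shop_b"),
    (["knife_set", "cutting_board"], "user_3", "shop_b"),
]

# One document per transaction, tagged with both the user and the shop so
# that user vectors and shop vectors are trained in the same space as items
corpus = [TaggedDocument(words=items, tags=[user, shop])
          for items, user, shop in transactions]

model = Doc2Vec(vector_size=16, min_count=1, epochs=100)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

# Similar shops/users for a given user, and similar products for a product
print(model.dv.most_similar("user_1"))
print(model.wv.most_similar("green_tea"))
```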
Procedia PDF Downloads 114
10563 Predicting Subsurface Abnormalities Growth Using Physics-Informed Neural Networks
Authors: Mehrdad Shafiei Dizaji, Hoda Azari
Abstract:
The research explores the pioneering integration of Physics-Informed Neural Networks (PINNs) into the domain of Ground-Penetrating Radar (GPR) data prediction, akin to advancements in medical imaging for tracking tumor progression in the human body. This research presents a detailed development framework for a specialized PINN model proficient at interpreting and forecasting GPR data, much like how medical imaging models predict tumor behavior. By harnessing the synergy between deep learning algorithms and the physical laws governing subsurface structures (or, in medical terms, human tissues), the model effectively embeds the physics of electromagnetic wave propagation into its architecture. This ensures that predictions not only align with fundamental physical principles but also mirror the precision needed in medical diagnostics for detecting and monitoring tumors. The suggested deep learning structure comprises a CNN, a spatial feature channel attention (SFCA) mechanism, and a ConvLSTM, along with temporal feature frame attention (TFFA) modules. The attention mechanism computes channel and temporal attention weights adaptively, thereby fine-tuning the visual and temporal feature responses to extract the most pertinent and significant features. By integrating physics directly into the neural network, our model has shown enhanced accuracy in forecasting GPR data. This improvement is vital for conducting effective assessments of bridge deck conditions and other evaluations related to civil infrastructure. The use of PINNs has demonstrated the potential to transform the field of Non-Destructive Evaluation (NDE) by enhancing the precision of infrastructure deterioration predictions. Moreover, it offers deeper insight into the fundamental mechanisms of deterioration, viewed through the prism of physics-based models.
Keywords: physics-informed neural networks, deep learning, ground-penetrating radar (GPR), NDE, ConvLSTM, physics, data-driven
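As a rough illustration of the physics-informed principle (and not of the authors' CNN/ConvLSTM architecture), the PyTorch sketch below trains a small network whose loss combines a data term with the residual of a 1D wave equation standing in for electromagnetic propagation; the wave speed, network shape, placeholder data, and loss weighting are all assumptions.

```python
# A minimal physics-informed loss sketch: the total loss penalizes both the
# data misfit and the residual of u_tt = c^2 * u_xx at collocation points.
import math
import torch
import torch.nn as nn

c = 1.0  # assumed wave speed
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

def physics_residual(x, t):
    x.requires_grad_(True); t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    grad = lambda out, inp: torch.autograd.grad(
        out, inp, grad_outputs=torch.ones_like(out), create_graph=True)[0]
    u_t, u_x = grad(u, t), grad(u, x)
    u_tt, u_xx = grad(u_t, t), grad(u_x, x)
    return u_tt - c ** 2 * u_xx

# Hypothetical observed samples (x, t, u) and unlabeled collocation points
x_d, t_d = torch.rand(128, 1), torch.rand(128, 1)
u_d = torch.sin(math.pi * x_d) * torch.cos(math.pi * t_d)  # placeholder data
x_c, t_c = torch.rand(256, 1), torch.rand(256, 1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(500):
    opt.zero_grad()
    data_loss = ((net(torch.cat([x_d, t_d], 1)) - u_d) ** 2).mean()
    phys_loss = (physics_residual(x_c, t_c) ** 2).mean()
    loss = data_loss + 0.1 * phys_loss  # weighting factor is an assumption
    loss.backward()
    opt.step()
print("final loss:", loss.item())
```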
Procedia PDF Downloads 46
10562 'How to Change Things When Change is Hard' Motivating Libyan College Students to Play an Active Role in Their Learning Process
Authors: Hameda Suwaed
Abstract:
Group work, time management and accepting others' opinions are practices rooted in the socio-political culture of democratic nations. In Libya, a country transitioning towards democracy, what is the impact of encouraging college students to use such practices in the English language classroom? And how can teachers be encouraged to use such practices in an educational system characterized by traditional methods of teaching? Using data gathered through action research and classroom research, this study investigates how teachers can use education to change their students' understanding of their roles in society by enhancing their sense of belonging to it. This study adapts a model of change that includes giving students clear directions, sufficient motivation and a supportive environment. These steps were applied by encouraging students to participate actively in the classroom through group work and a variety of activities. The findings of the study showed that following the suggested model can broaden students' perception of their belonging to their environment, starting with their classroom and ending with their country. In conclusion, although this was a small-scale study, the students' participation in the classroom shows that they gained self-confidence in practices such as group work, presenting their ideas and accepting different opinions. What was remarkable is that most students were aware that this is what Libya needs nowadays.
Keywords: educational change, students' motivation, group work, foreign language teaching
Procedia PDF Downloads 424
10561 Hedonic Pricing Model of Parboiled Rice
Authors: Roengchai Tansuchat, Wassanai Wattanutchariya, Aree Wiboonpongse
Abstract:
Parboiled rice is one of the most important food grains and is classified as a cereal product. In 2015, parboiled rice accounted for more than 14.34% of total rice trade. The major parboiled rice exporting countries are Thailand and India, while many countries in Africa and the Middle East, such as Nigeria, South Africa, the United Arab Emirates, and Saudi Arabia, are parboiled rice importers. In the global rice market, parboiled rice pricing differs from white rice pricing because parboiled rice is a semi-processed product (soaked, steamed and dried), which affects its color and texture. Therefore, parboiled rice export pricing does not depend only on trade volume, grain length, and the percentage of broken rice or purity, but also on grain attributes such as color, whiteness, consistency of color and whiteness, and texture. In addition, the parboiled rice price may depend on the country of origin and other attributes, such as certification marks, labels, packaging, and sales locations. The objectives of this paper are to study the attributes of parboiled rice sold in different countries and to evaluate the relationship between parboiled rice prices in different countries and these attributes using a hedonic pricing model. The results are useful for product development and the development of marketing strategies. A total of 141 samples of parboiled rice were collected from five major parboiled rice consumption countries, namely Nigeria, South Africa, Saudi Arabia, the United Arab Emirates and Spain. The physicochemical and optical properties, namely size and shape of seed, colour (L*, a*, and b*), parboiled rice texture (hardness, adhesiveness, cohesiveness, springiness, gumminess, and chewiness), nutrition (moisture, protein, carbohydrate, fat, and ash), amylose, packaging, country of origin, and label are considered as explanatory variables. The analysis revealed that most samples are classified as long grain and slender. The highest average whiteness value was found in the parboiled rice sold in South Africa. The amylose analysis shows that most of the parboiled rice is non-glutinous rice, classified in the intermediate amylose content range, with the maximum value found in the United Arab Emirates. The hedonic pricing model showed that size and shape are statistically significant key factors in determining the parboiled rice price. Among the colour attributes, the brightness value (L*) and the red-green value (a*) are statistically significant, but the yellow-blue value (b*) is not. In addition, the texture attributes that significantly affect the parboiled rice price are hardness, adhesiveness, cohesiveness, and gumminess. The findings could help parboiled rice millers, exporters and retailers formulate better production and marketing strategies by focusing on these attributes.
Keywords: hedonic pricing model, optical properties, parboiled rice, physicochemical properties
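A hedonic model of this kind is typically estimated as a linear regression of price on attribute levels, where each coefficient is read as the implicit price of the attribute. The sketch below, with entirely hypothetical attribute data in place of the study's 141 samples, shows one way to estimate it in Python with statsmodels.

```python
# A minimal hedonic price regression sketch; the attribute values and the
# price-generating relationship are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 141
df = pd.DataFrame({
    "length_mm":  rng.normal(7.0, 0.4, n),    # grain size/shape
    "L_star":     rng.normal(70.0, 5.0, n),   # brightness
    "a_star":     rng.normal(2.0, 0.8, n),    # red-green value
    "hardness_N": rng.normal(55.0, 8.0, n),   # texture
})
# Hypothetical price generated from the attributes plus noise
df["price"] = (0.3 * df.length_mm + 0.02 * df.L_star - 0.05 * df.a_star
               + 0.01 * df.hardness_N + rng.normal(0, 0.2, n))

X = sm.add_constant(df[["length_mm", "L_star", "a_star", "hardness_N"]])
model = sm.OLS(df["price"], X).fit()
print(model.summary())  # coefficients = implicit (hedonic) attribute prices
```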
Procedia PDF Downloads 333
10560 Regression of Hand Kinematics from Surface Electromyography Data Using a Long Short-Term Memory-Transformer Model
Authors: Anita Sadat Sadati Rostami, Reza Almasi Ghaleh
Abstract:
Surface electromyography (sEMG) offers important insights into muscle activation and has applications in fields including rehabilitation and human-computer interaction. The purpose of this work is to predict the degree of activation of two joints of the index finger using an LSTM-Transformer architecture trained on sEMG data from the Ninapro DB8 dataset. We apply advanced preprocessing techniques, such as multi-band filtering and customizable rectification methods, to enhance the encoding of sEMG data into features that are beneficial for regression tasks. The processed data are converted into spike patterns and simulated using Leaky Integrate-and-Fire (LIF) neuron models, allowing for neuromorphic-inspired processing. Our findings demonstrate that adjusting the filtering parameters and neuron dynamics and employing the LSTM-Transformer model improves joint angle prediction performance. This study contributes to the ongoing development of deep learning frameworks for sEMG analysis, which could lead to improvements in motor control systems.
Keywords: surface electromyography, LSTM-transformer, spiking neural networks, hand kinematics, leaky integrate-and-fire neuron, band-pass filtering, muscle activity decoding
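The band-pass filtering and rectification stage mentioned above can be sketched as follows; the 2 kHz sampling rate, the 20-450 Hz band, and the envelope cutoff are common sEMG choices assumed for illustration, not the settings used on Ninapro DB8.

```python
# A minimal sEMG preprocessing sketch: band-pass filtering, full-wave
# rectification, and a low-pass envelope. Parameters are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000.0  # assumed sampling rate, Hz

def bandpass(x, low, high, fs=FS, order=4):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def envelope(x, cutoff=5.0, fs=FS, order=2):
    """Full-wave rectification followed by a low-pass filter."""
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, np.abs(x))

# Toy sEMG trace: noise modulated by a slow activation pattern
t = np.arange(0, 2.0, 1.0 / FS)
raw = np.random.randn(t.size) * (0.5 + 0.5 * np.sin(2 * np.pi * 0.5 * t))

filtered = bandpass(raw, 20.0, 450.0)   # typical sEMG band
activation = envelope(filtered)          # feature fed to the regressor
print(activation[:5])
```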
Procedia PDF Downloads 19
10559 Troubleshooting Petroleum Equipment Based on Wireless Sensors Based on Bayesian Algorithm
Authors: Vahid Bayrami Rad
Abstract:
In this research, common methods and techniques have been investigated with a focus on intelligent fault-finding and monitoring systems in the oil industry. Remote and intelligent control methods are considered a necessity for implementing various operations in the oil industry, and benefiting from the knowledge extracted, with the help of data mining algorithms, from the countless data generated is a practical way to speed up monitoring and troubleshooting operations in today's big oil companies. Therefore, after comparing data mining algorithms and examining their efficiency, structure, and behavior under different conditions, the proposed Bayesian algorithm, which uses data clustering and analysis together with data evaluation via a colored Petri net, provides an applicable and dynamic model from the point of view of reliability and response time. By using this method, it is possible to achieve a dynamic and consistent model of the remote control system, prevent the occurrence of leakage in oil pipelines and refineries, and reduce costs as well as human and financial errors. The statistical data obtained from the evaluation process show an increase in reliability, availability and speed compared to previous methods.
Keywords: wireless sensors, petroleum equipment troubleshooting, Bayesian algorithm, colored Petri net, RapidMiner, data mining, reliability
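As a toy illustration of the Bayesian classification step (the clustering and colored Petri net evaluation are beyond a short sketch), the code below trains a Gaussian naive Bayes model on simulated sensor features; the feature set and fault signatures are hypothetical.

```python
# A minimal Bayesian fault-classification sketch on simulated wireless-sensor
# features. Feature names, fault signatures, and thresholds are hypothetical.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Columns: pressure drop (bar), flow deviation (%), vibration RMS
normal = rng.normal([0.1, 1.0, 0.2], [0.05, 0.5, 0.05], size=(500, 3))
leak = rng.normal([1.5, 8.0, 0.6], [0.40, 2.0, 0.15], size=(500, 3))

X = np.vstack([normal, leak])
y = np.array([0] * 500 + [1] * 500)  # 0 = normal, 1 = leak

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print(f"Hold-out accuracy: {clf.score(X_te, y_te):.3f}")

# Posterior fault probability for a fresh sensor reading
print(clf.predict_proba([[1.2, 6.5, 0.5]]))
```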
Procedia PDF Downloads 69
10558 Longitudinal Vibration of a Micro-Beam in a Micro-Scale Fluid Media
Authors: M. Ghanbari, S. Hossainpour, G. Rezazadeh
Abstract:
In this paper, the longitudinal vibration of a micro-beam in a micro-scale fluid medium has been investigated. The proposed mathematical model for this study consists of a micro-beam and a micro-plate at its free end. An AC voltage is applied to the pair of piezoelectric layers on the upper and lower surfaces of the micro-beam in order to actuate it longitudinally. The whole structure is bounded between two fixed plates on its upper and lower surfaces. The micro-gap between the structure and the fixed plates is filled with fluid. Fluids behave differently at the micro-scale than at the macro-scale, so the fluid field in the gap has been modeled based on micro-polar theory. The coupled governing equations of motion of the micro-beam and the micro-scale fluid field have been derived. Because of the non-homogeneous boundary conditions, the derived equations have been transformed into an enhanced form with homogeneous boundary conditions. Using a Galerkin-based reduced-order model, the enhanced equations have been discretized over the beam and fluid domains and solved simultaneously in order to obtain the forced response of the micro-beam. The effects of the micro-polar parameters of the fluid, such as the characteristic length scale, coupling parameter and surface parameter, on the response of the micro-beam have been studied.
Keywords: micro-polar theory, Galerkin method, MEMS, micro-fluid
Procedia PDF Downloads 186
10557 Simultaneous Targeting of MYD88 and Nur77 as an Effective Approach for the Treatment of Inflammatory Diseases
Authors: Uzma Saqib, Mirza S. Baig
Abstract:
Myeloid differentiation primary response protein 88 (MYD88) has long been considered a central player in the inflammatory pathway. Recent studies clearly suggest that it is an important therapeutic target in inflammation. On the other hand, a recent study on the interaction between the orphan nuclear receptor Nur77 and p38α, which leads to an increased lipopolysaccharide-induced hyperinflammatory response, suggests this binary complex as a therapeutic target. In this study, we have designed inhibitors that can inhibit both MYD88 and Nur77 at the same time. Since both MYD88 and Nur77 are integral parts of the pathways involving lipopolysaccharide-induced activation of NF-κB-mediated inflammation, we tried to target both proteins with the same library in order to retrieve compounds having dual inhibitory properties. To perform this, we developed a homodimeric model of MYD88 and, along with the crystal structure of Nur77, screened a virtual library of ~61,000 compounds from the traditional Chinese medicine database. We analyzed the resulting hits for dual-binding efficacy and probed them to develop a common pharmacophore model that could be used as a prototype to screen compound libraries as well as to guide combinatorial library design in the search for ideal dual-target inhibitors. Thus, our study explores the identification of novel leads with dual inhibitory effects arising from binding to both the MYD88 and Nur77 targets.
Keywords: drug design, Nur77, MYD88, inflammation
Procedia PDF Downloads 306
10556 Does Citizens' Involvement Always Improve Outcomes: Procedures, Incentives and Comparative Advantages of Public and Private Law Enforcement
Authors: Avdasheva Svetlana, Kryuchkova Polina
Abstract:
The comparative social efficiency of private and public enforcement of law is debated. This question is not only of academic interest; it is also important for the development of the legal system and regulations. Generally, the involvement of 'common citizens' in public law enforcement is considered beneficial, while the involvement of interest-group representatives is not. Institutional economics as well as law and economics consider the difference between public and private enforcement to be rather mechanical. Actions of bureaucrats in government agencies are assumed to be driven by incentives linked to social welfare (or another indicator of public interest) and their own benefits. In contrast, actions of participants in private enforcement are driven by their private benefits. However, administrative law enforcement may be designed in such a way that it becomes driven mainly by the individual incentives of alleged victims. We refer to this system as reactive public enforcement. Citizens may prefer using reactive public enforcement even if private enforcement is available. However, replacement of public enforcement by the reactive version of public enforcement negatively affects deterrence and reduces social welfare. We illustrate the problem of private vs. pure public and private vs. reactive public enforcement models with examples from three legislative subsystems in Russia: labor law, consumer protection law and competition law. While the development of private enforcement instead of public enforcement (especially in the reactive public model) is desirable, the replacement of both public and private enforcement by the reactive model is definitely not.
Keywords: public enforcement, private complaints, legal errors, competition protection, labor law, competition law, Russia
Procedia PDF Downloads 496
10555 Aggregation Scheduling Algorithms in Wireless Sensor Networks
Authors: Min Kyung An
Abstract:
In Wireless Sensor Networks, which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation, which collects nearby environmental conditions and aggregates the data to a designated destination called a sink node. Important issues concerning data aggregation are time efficiency and energy consumption due to the nodes' limited energy, and therefore the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has been the focus of many researchers. Its objective is to compute the minimum-latency schedule, that is, a schedule with the minimum number of timeslots such that the sink node can receive the aggregated data from all the other nodes without any collision or interference. For this problem, two interference models, the graph model and the more realistic physical interference model known as the Signal-to-Interference-plus-Noise Ratio (SINR) model, have been adopted with different power models (uniform power and non-uniform power, with or without power control) and different antenna models (omni-directional and directional antennas). In this survey article, as the problem has been proven NP-hard, we present and compare several state-of-the-art approximation algorithms in various models, using latency as the performance measure.
Keywords: data aggregation, convergecast, gathering, approximation, interference, omni-directional, directional
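To make the scheduling notion concrete, here is a small first-fit heuristic under the graph interference model, assuming a toy topology: children forward to parents along a BFS tree, and each transmission takes the earliest slot in which it neither shares a receiver nor interferes with a concurrent transmission. It illustrates the problem, not any of the surveyed approximation algorithms.

```python
# A minimal aggregation-scheduling sketch under the graph interference model.
from collections import deque

def bfs_tree(adj, sink):
    """BFS tree rooted at the sink; returns parent map and visit order."""
    parent, order, q = {sink: None}, [], deque([sink])
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent, order

def schedule(adj, sink):
    parent, order = bfs_tree(adj, sink)
    slot, slots = {}, []  # slots[t] = list of (sender, receiver) in slot t
    for u in reversed(order):  # deepest nodes first, so children go early
        if u == sink:
            continue
        # u may transmit only after all of its children have transmitted
        t = max((slot[c] + 1 for c in adj[u]
                 if parent.get(c) == u and c in slot), default=0)
        # First-fit: advance t past any slot with a collision or interference
        while any(r == parent[u] or s in adj[parent[u]] or u in adj[r]
                  for s, r in (slots[t] if t < len(slots) else [])):
            t += 1
        while len(slots) <= t:
            slots.append([])
        slots[t].append((u, parent[u]))
        slot[u] = t
    return slots

adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4}, 3: {1}, 4: {2}}
for t, tx in enumerate(schedule(adj, sink=0)):
    print(f"slot {t}: {tx}")
```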
Procedia PDF Downloads 233
10554 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology
Authors: Sanjeev Kumar Appicharla
Abstract:
This paper presents the results of the modelling and analysis of a European Railway Traffic Management System (ERTMS) safety-critical incident on the Cambrian Railway in the UK, using report RAIB 17/2019 as a primary input, in order to raise awareness of biases in the systems engineering process. The RAIB, the UK's independent accident investigator, published report RAIB 17/2019 giving the details of its investigation of the focal event in the form of the immediate cause, causal factors, underlying factors and recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. Systems for Investigation of Railway Interfaces (SIRI) is the methodology used to model and analyze the safety-critical incident. The SIRI methodology uses the Swiss Cheese Model to model the incident and identifies latent failure conditions (potentially less-than-adequate conditions) by means of the management oversight and risk tree technique. The benefits of the SIRI methodology are threefold. First, it incorporates the "heuristics and biases" approach, advanced by the 2002 Nobel laureate in Economic Sciences, Prof. Daniel Kahneman, into the management oversight and risk tree technique to identify systematic errors. Civil engineering and programme management railway professionals are aware of the role "optimism bias" plays in programme cost overruns and are familiar with bow-tie (fault and event tree) model-based safety risk modelling techniques. However, the role of systematic errors due to heuristics and biases is not yet appreciated. This overcomes the problem of omitting human and organizational factors from accident analysis. Second, the scope of the investigation includes all levels of the socio-technical system, including government, regulatory and railway safety bodies, duty holders, signalling firms, transport planners and front-line staff, so that lessons are learned at the decision-making and implementation levels as well. Third, the author's past accident case studies are supplemented with evidence drawn from practitioner and academic publications. This serves to discuss the role of systems thinking in improving decision-making and risk management processes and practices in the IEC 15288 systems engineering standard and in industrial contexts such as the GB railways and artificial intelligence (AI).
Keywords: accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach
Procedia PDF Downloads 191
10553 Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour
Authors: Mohammad Izadkhah, Mojtaba Hoseini, Alireza Khalili Tehrani
Abstract:
In this paper, we present a new method for tracking flying targets in color video sequences based on contour and kernel information. The aim of this work is to overcome the problems of losing the target under changing light, large displacement, changing speed, and occlusion. The proposed method consists of three steps: estimating the target location with a particle filter, segmenting the target region using a neural network, and finding the exact contours with the greedy snake algorithm. In the proposed method, we use both region and contour information to create the target candidate model, and this model is dynamically updated during tracking. To avoid the accumulation of errors when updating, the target region is given to a perceptron neural network to separate the target from the background. Its output is then used for the exact calculation of the size and center of the target. It is also used as the initial contour for the greedy snake algorithm to find the target's exact edge. The proposed algorithm has been tested on a database which contains many challenges, such as the high speed and agility of aircraft, background clutter, occlusions, and camera movement. The experimental results show that the use of the neural network increases the accuracy of tracking and segmentation.
Keywords: video tracking, particle filter, greedy snake, neural network
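The particle filter step can be illustrated with a minimal bootstrap filter for 2D target-location estimation, shown below; the random-walk motion model, the noise levels, and the mock detections are assumptions for illustration, and the segmentation and snake-refinement stages are omitted.

```python
# A minimal bootstrap particle filter sketch for 2D target-location
# estimation: predict, weight by measurement likelihood, resample.
import numpy as np

rng = np.random.default_rng(0)

def step(particles, measurement, motion_std=3.0, meas_std=5.0):
    # Predict: random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight particles by the likelihood of the detection
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / meas_std ** 2)
    w /= w.sum()
    # Systematic resampling avoids weight degeneracy
    u = (np.arange(len(particles)) + rng.random()) / len(particles)
    particles = particles[np.searchsorted(np.cumsum(w), u)]
    return particles, particles.mean(axis=0)  # new particle set and estimate

particles = rng.uniform(0, 100, (500, 2))     # (x, y) hypotheses in the frame
for z in ([50.0, 50.0], [53.0, 51.0], [57.0, 53.0]):  # mock detections
    particles, estimate = step(particles, np.asarray(z))
    print("estimate:", estimate)
```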
Procedia PDF Downloads 343
10552 Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation
Authors: Arian Hosseini, Mahmudul Hasan
Abstract:
To address the increasing need for efficient and accurate content moderation, we propose an efficient and lightweight deep classification ensemble structure. Our approach is based on a combination of simple visual features designed for high-accuracy classification of violent content with low false positives. Our ensemble architecture utilizes a set of lightweight models with narrowed-down color features, and we apply it to both images and videos. We evaluated our approach using a large dataset of explosion and blast content and compared its performance to popular deep learning models such as ResNet-50. Our evaluation results demonstrate significant improvements in prediction accuracy, while benefiting from 7.64x faster inference and lower computation cost. While our approach is tailored to explosion detection, it can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we propose a "think small, think many" philosophy in classification scenarios. We argue that transforming a single, large, monolithic deep model into a verification-based step ensemble of multiple small, simple, and lightweight models with narrowed-down visual features can possibly lead to predictions with higher accuracy.
Keywords: deep classification, content moderation, ensemble learning, explosion detection, video processing
Procedia PDF Downloads 58
10551 Autonomic Sonar Sensor Fault Manager for Mobile Robots
Authors: Martin Doran, Roy Sterritt, George Wilkie
Abstract:
NASA, ESA, and NSSC space agencies have plans to put planetary rovers on Mars in 2020. For these future planetary rovers to succeed, they will depend heavily on sensors to detect obstacles. This will become even more important in the future if rovers become less dependent on commands received from earth-based control and more dependent on self-configuration and self-decision making. These planetary rovers will face harsh environments, and the possibility of hardware failure is high, as seen in past missions. In this paper, we focus on using autonomic principles, where self-healing, self-optimization, and self-adaptation are explored using the MAPE-K model, and on expanding this model to encapsulate the attributes Awareness, Analysis, and Adjustment (AAA-3). In the experimentation, a Pioneer P3-DX research robot is used to simulate a planetary rover. The sonar sensors on the P3-DX robot are used to simulate the sensors on a planetary rover (even though, in reality, sonar sensors cannot operate in a vacuum). Experiments using the P3-DX robot focus on how our software system can adapt to the loss of sonar sensor functionality. The autonomic manager system is responsible for deciding how to make use of the remaining 'enabled' sonar sensors to compensate for those that are 'disabled'. The key result of this research is that the robot can still detect objects even with reduced sonar sensor capability.
Keywords: autonomic, self-adaption, self-healing, self-optimization
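A MAPE-K control loop of the kind described can be sketched in a few lines; the sketch below, with an invented failure signature and a simple neighbour-reweighting policy, illustrates the monitor-analyze-plan-execute cycle rather than the actual P3-DX implementation.

```python
# A minimal MAPE-K style autonomic manager sketch for a ring of sonar
# sensors. The failure signature (a None reading) and the compensation
# policy (boost neighbouring sensors) are illustrative assumptions.
class SonarAutonomicManager:
    def __init__(self, n_sensors):
        self.knowledge = {"disabled": set(), "n": n_sensors}  # the K in MAPE-K

    def monitor(self, readings):
        # Collect raw symptoms; None models a sensor returning no echo
        return {i: r for i, r in enumerate(readings)}

    def analyze(self, symptoms):
        # Diagnose which sensors appear disabled
        return {i for i, r in symptoms.items() if r is None}

    def plan(self, failed):
        newly = failed - self.knowledge["disabled"]
        self.knowledge["disabled"] = failed
        n = self.knowledge["n"]
        # Compensate by widening trust in each failed sensor's neighbours
        return {(i - 1) % n: 1.5 for i in newly} | \
               {(i + 1) % n: 1.5 for i in newly}

    def execute(self, adjustments, gains):
        for i, g in adjustments.items():
            gains[i] *= g
        return gains

manager = SonarAutonomicManager(n_sensors=8)
gains = [1.0] * 8
readings = [2.1, 1.8, None, 2.5, 2.4, 2.2, 2.0, 1.9]  # sensor 2 has failed

failed = manager.analyze(manager.monitor(readings))
gains = manager.execute(manager.plan(failed), gains)
print("disabled:", failed, "| new gains:", gains)
```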
Procedia PDF Downloads 353
10550 Hygro-Thermal Modelling of Timber Decks
Authors: Stefania Fortino, Petr Hradil, Timo Avikainen
Abstract:
Timber bridges have an excellent environmental performance, are economical and relatively easy to build, and can have a long service life. However, the durability of these bridges is the main problem because of their exposure to outdoor climate conditions. The moisture content accumulated in wood over long periods, in combination with certain temperatures, may create conditions suitable for timber decay. In addition, moisture content variations affect the structural integrity, serviceability and loading capacity of timber bridges. Therefore, monitoring the moisture content in wood is important for the durability of the material and also for the whole superstructure. The measurements obtained by the usual sensor-based techniques provide hygro-thermal data only at specific locations of the wood components. In this context, monitoring can be assisted by numerical modelling to obtain more information on the hygro-thermal response of the bridges. This work presents a hygro-thermal model based on a multi-phase moisture transport theory to predict the distribution of moisture content, relative humidity and temperature in wood. Below the fibre saturation point, the multi-phase theory simulates three phenomena in cellular wood during moisture transfer, i.e., the diffusion of water vapour in the pores, the sorption of bound water and the diffusion of bound water in the cell walls. In the multi-phase model, the two water phases are separated, and the coupling between them is defined through a sorption rate. Furthermore, an average of the temperature-dependent adsorption and desorption isotherms is used. In previous works by some of the authors, this approach was found to be very suitable for studying the moisture transport in uncoated and coated stress-laminated timber decks. Compared to previous works, the hygro-thermal fluxes on the external surfaces include the influence of the absorbed solar radiation over time; consequently, the temperatures on the surfaces exposed to the sun are higher. This affects the whole hygro-thermal response of the timber component. The multi-phase model, implemented in a user subroutine of the Abaqus FEM code, provides the distribution of the moisture content, temperature and relative humidity in a volume of the timber deck. As a case study, hygro-thermal data in wood are collected from the ongoing monitoring of the stress-laminated timber deck of Tapiola Bridge in Finland, based on integrated humidity-temperature sensors, and the numerical results are found to be in good agreement with the measurements. The proposed model, used to assist the monitoring, can contribute to reducing the maintenance costs of bridges, as well as the cost of instrumentation, and increase safety.
Keywords: moisture content, multi-phase models, solar radiation, timber decks, FEM
Procedia PDF Downloads 177
10549 An Evaluation of Solubility of Wax and Asphaltene in Crude Oil for Improved Flow Properties Using a Copolymer Solubilized in Organic Solvent with an Aromatic Hydrocarbon
Authors: S. M. Anisuzzaman, Sariah Abang, Awang Bono, D. Krishnaiah, N. M. Ismail, G. B. Sandrison
Abstract:
Wax and asphaltene are high-molecular-weight compounds that contribute to the stability of crude oil in a dispersed state. Transportation of crude oil along pipelines from the oil rig to the refineries causes temperature fluctuations, which lead to the coagulation of wax and the flocculation of asphaltenes. This paper focuses on preventing the deposition of wax and asphaltene precipitates on the inner surface of pipelines by using a wax inhibitor and an asphaltene dispersant. The novelty of this prevention method is the combination of three substances: a wax inhibitor dissolved in a wax inhibitor solvent and an asphaltene solvent, namely, ethylene-vinyl acetate (EVA) copolymer dissolved in methylcyclohexane (MCH) and toluene (TOL), to inhibit the precipitation and deposition of wax and asphaltene. The objective of this paper was to optimize the percentage composition of each component of this inhibitor to maximize the viscosity reduction of crude oil. The optimization was divided into two stages: a laboratory experimental stage, in which the viscosity of crude oil samples containing inhibitors of different component compositions was tested at decreasing temperatures, and a data optimization stage using response surface methodology (RSM) to design an optimizing model. The experimental results showed that the combination of 50% EVA + 25% MCH + 25% TOL gave a maximum viscosity reduction of 67%, while the RSM model indicated that the combination of 57% EVA + 20.5% MCH + 22.5% TOL gives a maximum viscosity reduction of up to 61%.
Keywords: asphaltene, ethylene-vinyl acetate, methylcyclohexane, toluene, wax
Procedia PDF Downloads 419
10548 Inventory Management System of Seasonal Raw Materials of Feeds at San Jose Batangas through Integer Linear Programming and VBA
Authors: Glenda Marie D. Balitaan
Abstract:
The branch of business management that deals with inventory planning and control is known as inventory management. It comprises keeping track of supply levels and forecasting demand, as well as scheduling when and how much to order. Keeping excess inventory results in a loss of money, takes up physical space, and raises the risk of damage, spoilage, and loss. On the other hand, too little inventory frequently causes operations to be disrupted and raises the possibility of low customer satisfaction, both of which can be detrimental to a company's reputation. The United Victorious Feed Mill Corporation's present inventory management practices were assessed in terms of inventory level, warehouse allocation, ordering frequency, shelf life, and production requirements. To help the company achieve its optimal level of inventory, a mathematical model was created using Integer Linear Programming. Because the raw materials are seasonal, the objective function was to minimize the cost of purchasing US Soya and Yellow Corn. Warehouse space, annual production requirements, and shelf life were all considered as constraints. To ensure that the user only needs one application to record all relevant information, such as production output and deliveries, the researcher built a Visual Basic system. Additionally, the system allows management to change the model's parameters.
Keywords: inventory management, integer linear programming, inventory management system, feed mill
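The sketch below shows how such a purchasing model can be written with the PuLP library, assuming placeholder prices, demands, capacity, and a crude shelf-life constraint; the actual model's coefficients and constraint details are not public.

```python
# A minimal PuLP sketch of a seasonal raw-material purchasing ILP. All
# numbers are placeholders; the shelf-life bound is a crude proxy.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

months = range(12)
price_soya = [620, 640, 700, 730, 710, 680, 650, 630, 615, 610, 625, 645]
price_corn = [230, 240, 265, 280, 275, 260, 245, 235, 228, 225, 232, 242]
need_soya, need_corn = 80, 150   # tons consumed per month
capacity = 600                    # warehouse capacity, tons
shelf_life = 3                    # months a purchase stays usable

prob = LpProblem("seasonal_raw_materials", LpMinimize)
buy_s = [LpVariable(f"soya_{m}", lowBound=0, cat="Integer") for m in months]
buy_c = [LpVariable(f"corn_{m}", lowBound=0, cat="Integer") for m in months]
inv_s = [LpVariable(f"inv_soya_{m}", lowBound=0) for m in months]
inv_c = [LpVariable(f"inv_corn_{m}", lowBound=0) for m in months]

# Objective: total purchasing cost over the year
prob += lpSum(price_soya[m] * buy_s[m] + price_corn[m] * buy_c[m]
              for m in months)

for m in months:
    prev_s = inv_s[m - 1] if m > 0 else 0
    prev_c = inv_c[m - 1] if m > 0 else 0
    # Inventory balance: carry-over + purchases - consumption
    prob += inv_s[m] == prev_s + buy_s[m] - need_soya
    prob += inv_c[m] == prev_c + buy_c[m] - need_corn
    prob += inv_s[m] + inv_c[m] <= capacity      # shared warehouse space
    # Crude shelf-life proxy: never hold more than `shelf_life` months of use
    prob += inv_s[m] <= shelf_life * need_soya
    prob += inv_c[m] <= shelf_life * need_corn

prob.solve()
print("total cost:", value(prob.objective))
```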
Procedia PDF Downloads 85
10547 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea
Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim
Abstract:
Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict the algae concentration of the ocean with bio-optical algorithms applied to satellite color images. However, the accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the marine environment around Korea. The method employed GOCI images containing the water-leaving radiances centered at 443 nm, 490 nm and 660 nm, respectively, as well as observed weather data (i.e., humidity, temperature and atmospheric pressure) as the database to capture the optical characteristics of algae and train the deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the concentration of algae from the extracted features. For training of the deep learning model, a backpropagation learning strategy was developed. The established methods were tested and compared with the performance of the GOCI Data Processing System (GDPS), which is based on standard image processing and optical algorithms. The model performed better at estimating algae concentration than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration despite the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing. Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
Keywords: deep learning, algae concentration, remote sensing, satellite
Procedia PDF Downloads 186
10546 Modeling Continuous Flow in a Curved Channel Using Smoothed Particle Hydrodynamics
Authors: Indri Mahadiraka Rumamby, R. R. Dwinanti Rika Marthanty, Jessica Sjah
Abstract:
Smoothed particle hydrodynamics (SPH) was originally created to simulate nonaxisymmetric phenomena in astrophysics. However, this method still has several shortcomings, namely the high computational cost required to model values at high resolution and problems with boundary conditions. The difficulty of modeling boundary conditions arises because the SPH method suffers from particle deficiency where the integral of the kernel function is truncated by the boundary. This research aims to determine whether SPH modeling, with a focus on boundary-layer interactions and continuous flow, can produce quantifiably accurate values at low computational cost. This research combines, in the main program, the meandering-river geometry, a continuous-flow algorithm, and a solid-fluid interaction algorithm, with the aim of obtaining quantitatively accurate results for solid-fluid interactions with continuous flow in a meandering channel using the SPH method. This study uses the Fortran programming language to implement the SPH (Smoothed Particle Hydrodynamics) numerical method; the model takes the form of a U-shaped meandering open channel in 3D, where the channel walls are soil particles, and uses a continuous flow with a limited number of particles.
Keywords: smoothed particle hydrodynamics, computational fluid dynamics, numerical simulation, fluid mechanics
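For readers unfamiliar with SPH, the sketch below shows its two most basic ingredients, a cubic-spline smoothing kernel and the summation-density estimate, written in Python for brevity (the study's own solver is written in Fortran); the particle lattice, mass, and smoothing length are arbitrary illustrative values.

```python
# A minimal SPH sketch: 2D cubic-spline kernel and summation density.
import numpy as np

def cubic_spline_w(r, h):
    """Standard 2D cubic-spline smoothing kernel W(r, h)."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h ** 2)  # 2D normalization constant
    w = np.where(q <= 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
                 np.where(q <= 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

def density(positions, masses, h):
    """rho_i = sum_j m_j * W(|r_i - r_j|, h) over all particles."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_w(r, h)).sum(axis=1)

# Toy particle block on a regular lattice with 0.1 m spacing
xy = np.stack(np.meshgrid(np.arange(10), np.arange(10)), -1).reshape(-1, 2) * 0.1
m = np.full(len(xy), 0.01)  # kg per particle (placeholder)
rho = density(xy, m, h=0.15)
print("mean density:", rho.mean())
```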
Procedia PDF Downloads 134
10545 The Effect of β-Cryptoxanthin on Testicular Ischemia-Reperfusion Injury in a Rat Model: Evidence from Testicular Histology
Authors: Kianoush Mohammadnejad, Rahim Mohammadi, Ali Soleimanzadeh, Ali Shalizar Jalai, Farshid Sareafzadeh Rezaei
Abstract:
Testicular torsion and detorsion are significant clinical issues for infertile men. Torsion of the spermatic cord is an emergency condition resulting from the rotation of the testis and epididymis around the axis of the spermatic cord. A rat testis model was used to assess the effects of β-cryptoxanthin on ischemia-reperfusion injury. Twenty healthy male Wistar rats were included and randomized into four investigational groups (n = 5). Group SHAM: a midline incision of the scrotum was performed, and the testicles were exteriorized for 2 hours without torsion. Group ISCHEMIA: a midline incision of the scrotum was performed, and the testicles were exteriorized and underwent ischemia for 2 hours through a 720-degree rotation. Group IS/REP/Oil: a midline scrotal incision was performed, the testicles were exteriorized, ischemia was induced for 2 hours through a 720-degree rotation, and at the end of ischemia, 100 µL of corn oil (the β-cryptoxanthin solvent) was injected intraperitoneally. Group IS/REP/CRPTXNTN 2.5: the same as group IS/REP/Oil, with intraperitoneal administration of 100 µL of β-cryptoxanthin (2.5 µg/kg) at the end of ischemia. In all groups, the testes were returned to the scrotum, and after 60 days they were dissected out and removed for histopathological analyses. β-cryptoxanthin at a dose of 2.5 µg/kg significantly improved histologic indices compared to the other treatment groups (p<0.05). β-cryptoxanthin could be helpful in minimizing ischemia-reperfusion injury in testicular tissue exposed to ischemia.
Keywords: beta-cryptoxanthin, testis, ischemia-reperfusion, intraperitoneal
Procedia PDF Downloads 23
10544 Numerical Analysis of CO₂ Storage as Clathrates in Depleted Natural Gas Hydrate Formation
Authors: Sheraz Ahmad, Li Yiming, Li XiangFang, Xia Wei, Zeen Chen
Abstract:
Storing CO₂ at massive scale in the enclathrated solid form called hydrate can be regarded as one of the most reliable methods of CO₂ sequestration for greenhouse gas emission control and global warming prevention. In this study, a dynamically coupled mass and heat transfer mathematical model is developed that elaborates the unsteady behavior of CO₂ flowing into a porous medium and converting itself into hydrate. The numerical solution of the combined model by an implicit finite difference method is explained; by coupling the mass, momentum and heat conservation relations, an integrated model can be established to analyze CO₂ hydrate growth within P-T equilibrium conditions. The CO₂ phase transition, the effect of hydrate nucleation through exothermic heat release, and variations of thermo-physical properties have been studied during hydrate nucleation. The results illustrate that the formation pressure distribution becomes stable at the early stage of the hydrate nucleation process and remains stable afterward, but the formation temperature does not stay stable and varies during CO₂ injection and the hydrate nucleation process. Initially, the temperature drops due to the injection of cold, high-pressure CO₂; once massive hydrate growth is triggered, the temperature increases under the influence of exothermic heat evolution, intermittently surpassing the initial formation temperature measured before CO₂ injection. The hydrate growth rate increases with increasing injection pressure in the long formation and also expands the overall hydrate-covered length within the same induction period. The results also show that the injection pressure conditions and the hydrate growth rate affect other parameters such as CO₂ velocity, CO₂ permeability, CO₂ density, and CO₂ and H₂O saturation inside the porous medium. To enhance the hydrate growth rate and expand the hydrate-covered length, the injection temperature was reduced, but this did not give satisfactory outcomes. Hence, CO₂ injection into vacated natural gas hydrate porous sediment may form hydrate under low-temperature and high-pressure conditions, but it appears very challenging at a huge scale in lengthy formations.
Keywords: CO₂ hydrates, CO₂ injection, CO₂ phase transition, CO₂ sequestration
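The implicit finite-difference machinery referred to above can be illustrated, in heavily reduced form, by a backward-Euler solve of 1D transient heat conduction; the grid, diffusivity, and boundary temperatures below are placeholders, and the full model additionally couples mass and momentum transfer with the phase change.

```python
# A minimal implicit (backward-Euler) finite-difference sketch for 1D
# transient heat conduction. All physical values are placeholders.
import numpy as np

nx, dx, dt, alpha = 50, 0.02, 1.0, 1e-5   # grid size, steps, diffusivity
r = alpha * dt / dx ** 2

T = np.full(nx, 280.0)   # initial formation temperature, K (assumed)
T[0] = 260.0             # cold CO2 injection boundary (assumed value)

# Backward-Euler system matrix: (I - r * Laplacian) T^{n+1} = T^n
A = np.zeros((nx, nx))
for i in range(1, nx - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r
A[0, 0] = A[-1, -1] = 1.0  # Dirichlet rows: fixed inlet and far-field

for step in range(1000):
    rhs = T.copy()
    rhs[0], rhs[-1] = 260.0, 280.0   # boundary conditions each step
    T = np.linalg.solve(A, rhs)      # unconditionally stable implicit solve

print("temperature profile near inlet:", T[:5].round(2))
```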
Procedia PDF Downloads 141
10543 Automatic Detection of Sugarcane Diseases: A Computer Vision-Based Approach
Authors: Himanshu Sharma, Karthik Kumar, Harish Kumar
Abstract:
A major problem in crop cultivation is the occurrence of multiple crop diseases. During the growth stage, timely identification of crop diseases is paramount to ensure high crop yields, lower production costs, and minimal pesticide usage. In most cases, crop diseases produce observable characteristics and symptoms. Surveyors usually diagnose crop diseases when they walk through the fields. However, surveyor inspections tend to be biased and error-prone due to the monotonous nature of the task and the subjectivity of individuals. In addition, visual inspection of each leaf or plant is costly, time-consuming, and labour-intensive. Furthermore, the plant pathologists and experts who can often identify a disease from its early-stage symptoms are not readily available in remote regions. Therefore, this study specifically addresses the early detection of the leaf scald, red rot, and eyespot diseases in sugarcane plants. The study proposes a computer vision-based approach using a convolutional neural network (CNN) for the automatic identification of crop diseases. To facilitate this, images of sugarcane diseases were first taken from Google, without modifying the scene or background or controlling the illumination, to build the training dataset. The testing dataset was then developed from images collected in real time from sugarcane fields in India. The image dataset was pre-processed for feature extraction and selection. Finally, the CNN-based Visual Geometry Group (VGG) model was deployed on the training and testing datasets to classify the images into diseased and healthy sugarcane plants, and the model's performance was measured using various parameters, i.e., accuracy, sensitivity, specificity, and F1-score. The promising results of the proposed model lay the groundwork for the automatic early detection of sugarcane disease. The proposed research directly supports an increase in crop yield.
Keywords: automatic classification, computer vision, convolutional neural network, image processing, sugarcane disease, visual geometry group
Procedia PDF Downloads 118
10542 Hydrological-Economic Modeling of Two Hydrographic Basins of the Coast of Peru
Authors: Julio Jesus Salazar, Manuel Andres Jesus De Lama
Abstract:
There are very few models that serve to analyze the use of water in the socio-economic process. On the supply side, the joint use of groundwater has been considered in addition to the simple limits on the availability of surface water. In addition, we have worked on waterlogging and the effects on water quality (mainly salinity). In this paper, a 'complex' water economy is examined: one in which demands grow differentially not only within but also between sectors, and one in which there are limited opportunities to increase consumptive use. In particular, high-value growth, i.e., the growth of the production of high-value irrigated crops within the basins of the case study, together with the rapidly growing urban areas, provides a rich context in which to examine the general problem of water management at the basin level. At the same time, long-term aridity has made the eco-environment in the basins located on the coast of Peru very vulnerable, and the exploitation and immediate use of water resources have further deteriorated the situation. The methodology presented is optimization with embedded simulation. The basin-wide simulation of flows, water balances and crop growth is embedded within the optimization of water allocation, reservoir operation, and irrigation scheduling. The modeling framework is developed from a network of river basins that includes multiple source nodes (reservoirs, aquifers, watercourses, etc.) and multiple demand sites along the river, including places of consumptive use for agricultural, municipal and industrial purposes, as well as instream water uses, on the coast of Peru. The economic benefits associated with water use are evaluated for different demand management instruments, including water rights, based on the production and benefit functions of water use in the urban, agricultural and industrial sectors. This work represents a new effort to analyze the use of water at the regional level and to evaluate the modernization of the integrated management of water resources and socio-economic territorial development in Peru. It will also allow the establishment of policies to improve the implementation of integrated water resources management and development. Input-output analysis is essential to present a theory of the production process, which is based on a particular type of production function. This work also presents the Computable General Equilibrium (CGE) version of the economic model for water resource policy analysis, which was specifically designed for analyzing large-scale water management. As the platform for CGE simulation, GEMPACK, a flexible system for solving CGE models, is used to formulate and solve the CGE model through the percentage-change approach.
Keywords: water economy, simulation, modeling, integration
Procedia PDF Downloads 156
10541 The Role of Macroeconomic Condition and Volatility in Credit Risk: An Empirical Analysis of Credit Default Swap Index Spread on Structural Models in U.S. Market during Post-Crisis Period
Authors: Xu Wang
Abstract:
This research builds linear regressions of the investment-grade and high-yield Credit Default Swap index (CDX) spreads on U.S. macroeconomic condition and volatility measures, using monthly data from March 2009 to July 2016, to study the relationship between different dimensions of the macroeconomy and overall credit quality. The most significant contribution of this research is systematically examining the individual and joint effects of macroeconomic condition and volatility on CDX spreads by including macroeconomic time series that capture different dimensions of the U.S. economy. Industrial production index growth, non-farm payroll growth, consumer price index growth, the 3-month Treasury rate, and consumer sentiment are introduced to capture the condition of real economic activity, employment, inflation, monetary policy, and risk aversion, respectively. The conditional variance of each macroeconomic series is constructed using an ARMA-GARCH model and is used to measure macroeconomic volatility. A linear regression model is estimated to capture the relationship between monthly average CDX spreads and the macroeconomic variables, with the Newey–West estimator used to control for autocorrelation and heteroskedasticity in the error terms. Furthermore, sensitivity factor analysis and standardized-coefficient analysis are conducted to compare the sensitivity of CDX spreads to the different macroeconomic variables and the relative effects of macroeconomic condition versus macroeconomic uncertainty, respectively. This research shows that macroeconomic condition has a negative effect on CDX spreads, while macroeconomic volatility has a positive effect. Together, the condition and volatility variables explain more than 70% of the variation in the CDX spread. In addition, the sensitivity factor analysis shows that the CDX spread is most sensitive to the Consumer Sentiment index. Finally, the standardized-coefficient analysis shows that both macroeconomic condition and volatility variables are important in determining the CDX spread, but the condition variables carry more relative importance than the volatility variables. These findings suggest that individual investors and the government should interpret the CDX spread carefully as a measure of overall credit risk, because it is influenced by the macroeconomy. Moreover, the significance of condition and volatility variables such as the non-farm payroll growth rate and industrial production index growth volatility suggests that the government should pay closer attention to overall credit quality in the market when the macroeconomy is weak or volatile.
Keywords: autoregressive moving average model, credit spread puzzle, credit default swap spread, generalized autoregressive conditional heteroskedasticity model, macroeconomic conditions, macroeconomic uncertainty
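The two-step estimation strategy can be sketched as follows, assuming the Python `arch` and `statsmodels` packages. The file and column names are hypothetical placeholders, and the lag choices are illustrative, not the paper's exact specification.

```python
# Minimal sketch: (1) fit AR-GARCH to a macro series to extract conditional
# variance as the volatility measure; (2) regress CDX spreads on condition
# and volatility variables with Newey-West (HAC) standard errors.
# Data file and variable names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
from arch import arch_model

data = pd.read_csv("cdx_macro_monthly.csv", index_col=0, parse_dates=True)
cdx_spread = data["cdx_ig_spread"]  # monthly average CDX spread
ip_growth = data["ip_growth"]       # industrial production index growth

# Step 1: AR(1)-GARCH(1,1) on industrial production growth; the fitted
# conditional variance serves as the macroeconomic volatility measure.
garch = arch_model(ip_growth, mean="AR", lags=1, vol="GARCH", p=1, q=1)
garch_fit = garch.fit(disp="off")
ip_vol = garch_fit.conditional_volatility ** 2

# Step 2: OLS of CDX spreads on condition and volatility variables, with
# HAC (Newey-West) errors to handle autocorrelation and heteroskedasticity.
X = sm.add_constant(pd.DataFrame({"ip_growth": ip_growth, "ip_vol": ip_vol}))
ols = sm.OLS(cdx_spread, X, missing="drop").fit(
    cov_type="HAC", cov_kwds={"maxlags": 12})
print(ols.summary())
```

In the full study, the same construction would be repeated for each macroeconomic series so that the regression includes both the condition and volatility dimensions of every variable.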
Procedia PDF Downloads 168
10540 Teaching Techno-Criticism to Digital Natives: Participatory Journalism as Pedagogical Practice
Authors: Stephen D. Caldes
Abstract:
Teaching media and digital literacy to “digital natives” presents a unique set of pedagogical obstacles, especially when critique is involved, as these early adopters tend to deify most technological and digital advancements and inventions. Knowing no other way of being, these natives are often reluctant to hear criticism of the way they receive information, educate themselves, communicate with others, and even become enculturated, because critique often connotes generational gaps or clandestine efforts to produce neo-Luddites. To digital natives, techno-criticism is more the product of an antiquated, out-of-touch agenda than a constructive, progressive praxis. Yet the need to cultivate a techno-critical perspective among technology's premier users has perhaps never been more pressing. In an effort to sidestep this reluctance and encourage critical thought about where we are in terms of digital technology, and where exactly it may be taking us, this essay outlines a new model for teaching techno-criticism to digital natives. Specifically, it recasts the techniques of participatory journalism (helping writers and readers understand subjects outside their specific historical context) as progressive, interdisciplinary pedagogy. The model arises from a review of relevant literature and from data gathered via literary analysis and participant observation. Given the tenuous relationships between novel digital advancements, individual identity, collective engagement, and, indeed, Truth/fact, shepherding digital natives toward the routine practice of “techno-realism” seems of the utmost importance.
Keywords: digital natives, journalism education, media literacy, techno-criticism
Procedia PDF Downloads 321
10539 Data-Driven Insights Into Juvenile Recidivism: Leveraging Machine Learning for Rehabilitation Strategies
Authors: Saiakhil Chilaka
Abstract:
Juvenile recidivism presents a significant challenge to the criminal justice system, impacting both the individuals involved and broader societal safety. This study aims to identify the key factors influencing recidivism and successful rehabilitation outcomes by utilizing a dataset of over 25,000 individuals from the NIJ Recidivism Challenge. We employed machine learning techniques, particularly Random Forest Classification, combined with SHAP (SHapley Additive exPlanations) for model interpretability. Our findings indicate that supervision risk score, percent days employed, and education level are critical factors affecting recidivism, with higher levels of supervision, successful employment, and education contributing to lower recidivism rates. Conversely, Gang Affiliation emerged as a significant risk factor for reoffending. The model achieved an accuracy of 68.8%, highlighting its utility in identifying high-risk individuals and informing targeted interventions. These results suggest that a comprehensive approach involving personalized supervision, vocational training, educational support, and anti-gang initiatives can significantly reduce recidivism and enhance rehabilitation outcomes for juveniles, providing critical insights for policymakers and juvenile justice practitioners.
Keywords: juvenile, justice system, data analysis, SHAP
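A minimal sketch of this modeling approach is shown below, assuming scikit-learn and the `shap` package. The data file, column names, and hyperparameters are hypothetical placeholders that mirror the features discussed in the abstract, not the study's actual code.

```python
# Minimal sketch: random forest recidivism classifier with SHAP attribution.
# File name, columns, and hyperparameters are hypothetical placeholders.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("nij_recidivism.csv")  # hypothetical data load
features = ["supervision_risk_score", "percent_days_employed",
            "education_level", "gang_affiliation"]
X, y = df[features], df["recidivated"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

model = RandomForestClassifier(n_estimators=500, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# SHAP attributes each prediction to the input features, showing which
# factors push an individual toward or away from predicted reoffending.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
```

The SHAP summary plot ranks features by their average contribution across the test set, which is how factors such as supervision risk score and employment can be surfaced as the dominant drivers of the model's predictions.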
Procedia PDF Downloads 26