Search results for: S/R machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2862

72 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments

Authors: David X. Dong, Qingming Zhang, Meng Lu

Abstract:

Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity to nitrites. Therefore, it is desirable to develop an accurate and economical method to monitor nitrites in environments. We report a low-cost optical sensor, in conjunction with a machine learning (ML) approach, to enable high-accuracy detection of nitrites in water sources. The sensor works under the principle of measuring molecular absorptions of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites; low-cost light-emitting diodes (LEDs) and photodetectors are also available at these wavelengths. A regression model is built, trained, and utilized to minimize cross-sensitivities of these wavelengths to the same analyte, thus achieving precise and reliable measurements in the presence of various interfering ions. The measured absorbance data is input to the trained model, which provides a nitrite concentration prediction for the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains a liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, with each LED providing a narrowband light, and iii) a photodetector with a built-in amplifier and an analog-to-digital converter placed on the other side of the test cell to measure the power of transmitted light. This simple optical design allows measuring the absorbance data of the sample at the three wavelengths. To train the regression model, absorbances of nitrite ions and their combinations with various interfering ions are first obtained at the three UV wavelengths using a conventional spectrophotometer. Then, the spectrophotometric data are used to train and evaluate different regression algorithm models for high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables instantaneous nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, which is much cheaper than a commercial spectrophotometer. The ML algorithm helps to reduce the average relative errors to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrites. The sensor has been validated by measuring nitrites at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to the rapid, reagent-free determination of nitrites with high accuracy. The integration of the low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management.
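As an illustration of the regression step described above, the following is a minimal sketch assuming a simple linear calibration between absorbance at the three wavelengths and nitrite concentration; the sensitivities and synthetic data are placeholders, not the authors' calibration set.

```python
# Minimal sketch: map absorbance at 295, 310 and 357 nm to nitrite concentration.
# The calibration data below is synthetic; the authors trained on spectrophotometer
# measurements that also included interfering ions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
conc = rng.uniform(0.1, 100, 200)                        # ppm, training concentrations
sens = np.array([0.012, 0.020, 0.015])                   # assumed sensitivities per wavelength
A = np.outer(conc, sens) + 0.002 * rng.standard_normal((200, 3))  # absorbance + noise

model = LinearRegression().fit(A, conc)

sample = np.array([[0.25, 0.41, 0.31]])                  # absorbances of an unknown sample
print(f"predicted nitrite: {model.predict(sample)[0]:.1f} ppm")
```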

Keywords: optical sensor, regression model, nitrites, water quality

Procedia PDF Downloads 72
71 Slope Stabilisation of Highly Fractured Geological Strata Consisting of Mica Schist Layers during Construction of a Tunnel Shaft

Authors: Saurabh Sharma

Abstract:

Introduction: The case study deals with the ground stabilisation of Nabi Karim Metro Station in Delhi, India, wherein an extremely complex geology was encountered while excavating the tunnelling shaft for launching the Tunnel Boring Machine. The borelog investigation and the Seismic Refraction Technique (SRT) indicated the presence of an extremely hard rock mass from a depth of only 3-4 m, and accordingly, the Geotechnical Interpretation Report (GIR) concluded the presence of Grade-IV rock from 3 m onwards and of Grade-III and better rock from 5-6 m onwards. Accordingly, it was planned to retain the ground by providing secant piles all around the launching shaft and then excavating the shaft vertically after leaving a berm of 1.5 m to prevent the secant piles from being exposed. To retain the side slopes, rock bolting with shotcreting and wire meshing was proposed, which is normal practice in such strata. However, with the increase in depth of excavation, the rock quality kept decreasing at an unexpected and surprising pace, with the Grade-III rock mass at 5-6 m converting to a conglomerate formation at a depth of 15 m. This worsening of the geology from high-grade rock to a slushy conglomerate formation could not have been predicted and came as a surprise even to the best geotechnical engineers. Since the excavation had already been cut down vertically to manage the shaft size, the execution was continued with enhanced caution to stabilise the side slopes. However, when the shaft work was about to finish, a collapse was encountered on one side of the excavation shaft. This collapse was unexpected and surprising since all measures to stabilise the side slopes had been taken after face mapping, and the grid size, diameter, and depth of the rockbolts had already been readjusted to accommodate rock fractures. The above scenario was baffling even to the best geologists and geotechnical engineers, and it was decided that any further slope stabilisation scheme would have to be designed in such a way as to ensure safe completion of the works. Accordingly, the following revisions to the excavation scheme were made: the excavation would be carried out while maintaining a slope based on the type of soil/rock; the rock bolt type was changed from SN rockbolts to self-drilling anchors; the grid size of the bolts was adjusted based on real-time assessment; the excavation was carried out by implementing a 'Bench Release Approach'; and an aggressive real-time instrumentation scheme was adopted. Discussion: The above case study again asserts the vital importance of correct interpretation of the geological strata and the need for real-time revisions of construction schemes based on actual site data. The excavation is being carried out successfully with the above revised scheme, and further details of the revised slope stabilisation scheme, instrumentation schemes, and monitoring results, along with actual site photographs, shall form part of the final paper.

Keywords: unconfined compressive strength (UCS), rock mass rating (RMR), rock bolts, self-drilling anchors, face mapping of rock, secant pile, shotcrete

Procedia PDF Downloads 67
70 High-Resolution Facial Electromyography in Freely Behaving Humans

Authors: Lilah Inzelberg, David Rand, Stanislav Steinberg, Moshe David Pur, Yael Hanein

Abstract:

Human facial expressions carry important psychological and neurological information. Facial expressions involve the co-activation of diverse muscles. They depend strongly on personal affective interpretation and on social context and vary between spontaneous and voluntary activations. Smiling, as a special case, is among the most complex facial emotional expressions, involving no fewer than 7 different unilateral muscles. Despite their ubiquitous nature, smiles remain an elusive and debated topic. Smiles are associated with happiness and greeting on one hand and anger or disgust-masking on the other. Accordingly, while high-resolution recording of muscle activation patterns, in a non-interfering setting, offers exciting opportunities, it remains an unmet challenge, as contemporary surface facial electromyography (EMG) methodologies are cumbersome, restricted to laboratory settings, and limited in time and resolution. Here we present a wearable and non-invasive method for objective mapping of facial muscle activation and demonstrate its application in a natural setting. The technology is based on a recently developed dry and soft electrode array, specially designed for the surface facial EMG technique. Eighteen healthy volunteers (31.58 ± 3.41 years, 13 females) participated in the study. Surface EMG arrays were adhered to participants' left and right cheeks. Participants were instructed to imitate three facial expressions: closing the eyes, wrinkling the nose, and smiling voluntarily, and to watch a funny video while their EMG signals were recorded. We focused on muscles associated with 'enjoyment', 'social' and 'masked' smiles; three categories with distinct social meanings. We developed a customized independent component analysis algorithm to construct the desired facial musculature mapping. First, identification of the Orbicularis oculi and the Levator labii superioris muscles was demonstrated from voluntary expressions. Second, recordings of voluntary and spontaneous smiles were used to locate the Zygomaticus major muscle activated in Duchenne and non-Duchenne smiles. Finally, recording with a wireless device in an unmodified natural work setting revealed expressions of neutral, positive and negative emotions in face-to-face interaction. The algorithm outlined here identifies the activation sources in a subject-specific manner, insensitive to electrode placement and anatomical diversity. Our high-resolution and cross-talk-free mapping performance, along with excellent user convenience, opens new opportunities for affective processing and objective evaluation of facial expressivity, objective psychological and neurological assessment, as well as gaming, virtual reality, bio-feedback and brain-machine interface applications.
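To illustrate the source-separation idea, the sketch below applies scikit-learn's FastICA to synthetic multichannel EMG-like data; the authors used a customized ICA algorithm, so this is only an assumed stand-in, with the electrode count and toy sources invented for the example.

```python
# Hedged sketch of the ICA step: unmix multichannel surface-EMG recordings into
# independent "muscle" activations. Data below is synthetic, not the study's.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 5000)
# two toy sources: a burst-like activation in the second half and a tonic one
sources = np.c_[np.abs(np.sin(3 * t)) * (t > 5), 0.5 * rng.standard_normal(t.size)]
mixing = rng.random((16, 2))                     # 16 electrodes, 2 sources (assumed)
emg = sources @ mixing.T + 0.05 * rng.standard_normal((t.size, 16))

ica = FastICA(n_components=2, random_state=0)
unmixed = ica.fit_transform(emg)                 # (samples, components): recovered activations
print(unmixed.shape, ica.mixing_.shape)          # mixing_ gives each component's electrode map
```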

Keywords: affective expressions, affective processing, facial EMG, high-resolution electromyography, independent component analysis, wireless electrodes

Procedia PDF Downloads 247
69 Productivity of Grain Sorghum-Cowpea Intercropping System: Climate-Smart Approach

Authors: Mogale T. E., Ayisi K. K., Munjonji L., Kifle Y. G.

Abstract:

Grain sorghum and cowpea are important staple crops in many areas of South Africa, particularly the Limpopo Province. The two crops are produced under a wide range of unsustainable conventional methods, which reduces productivity in the long run. Climate-smart traditional methods such as intercropping can be adopted to ensure sustainable production of these two important crops in the province. A no-tillage field experiment was laid out in a randomised complete block design (RCBD) with four replications over two seasons in two distinct agro-ecological zones, Syferkuil and Ofcolaco, in the province to assess the productivity of sorghum-cowpea intercropped under two cowpea densities. An LCi Ultra compact photosynthesis machine was used to collect photosynthetic rate data biweekly between 11h00 and 13h00 until physiological maturity. Biomass and grain yield of the component crops in binary and sole cultures were determined at harvest maturity from the middle rows of a 2.7 m² area. The biomass was oven-dried in the laboratory at 65 °C till constant weight. To obtain grain yield, harvested sorghum heads and cowpea pods were threshed, cleaned, and weighed. Harvest index (HI) and land equivalent ratio (LER) of the two crops were calculated to assess intercrop productivity relative to sole cultures (see the sketch below). Data were analysed using the statistical analysis software system (SAS) version 9.4, followed by mean separation using the least significant difference method. The photosynthetic rate of the sorghum-cowpea intercrop was influenced by cowpea density and sorghum cultivar. The photosynthetic rate under low density was higher compared to high density, but this was dependent on the growing conditions. Dry biomass accumulation, grain yield, and harvest index differed among the sorghum cultivars and cowpea in both binary and sole cultures at the two test locations during the 2018/19 and 2020/21 growing seasons. Cowpea grain and dry biomass yields were in excess of 60% higher under high density compared to low density in both binary and sole cultures. The results revealed that the grain yield accumulation of sorghum cultivars was influenced by the density of the companion cowpea crop as well as by the production season. For instance, at Syferkuil, Enforcer and Ns5511 accumulated higher yields under low density, whereas, at Ofcolaco, the higher yield was recorded under high density. Generally, under low cowpea density, cultivar Enforcer produced a relatively higher grain yield, whereas, under higher density, Titan's yield was superior. The partial and total LERs varied with the growing season and the treatments studied. The total LERs exceeded 1.0 at the two locations across seasons, ranging from 1.3 to 1.8. From the results, it can be concluded that resources were used more efficiently in the sorghum-cowpea intercrop at both Syferkuil and Ofcolaco. Furthermore, the intercropping system improved the photosynthetic rate, grain yield, and dry matter accumulation of sorghum and cowpea depending on the growing conditions and cowpea density. Hence, the sorghum-cowpea intercropping system can be adopted as a climate-smart practice for sustainable production in the Limpopo Province.
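For reference, the two productivity indices named above are computed here with their standard textbook definitions; the symbols (Y for yield, superscripts ic for intercrop and sc for sole crop) are ours and not the authors' notation.

```latex
% Standard definitions assumed: Y = yield, ic = intercrop, sc = sole crop.
\mathrm{HI} = \frac{\text{grain yield}}{\text{total above-ground biomass}},
\qquad
\mathrm{LER} = \frac{Y^{\mathrm{ic}}_{\mathrm{sorghum}}}{Y^{\mathrm{sc}}_{\mathrm{sorghum}}}
             + \frac{Y^{\mathrm{ic}}_{\mathrm{cowpea}}}{Y^{\mathrm{sc}}_{\mathrm{cowpea}}}
```

A total LER above 1.0, as reported here (1.3 to 1.8), indicates that the intercrop uses land more efficiently than the corresponding sole crops.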

Keywords: cowpea, climate-smart, grain sorghum, intercropping

Procedia PDF Downloads 225
68 Statistical Models and Time Series Forecasting on Crime Data in Nepal

Authors: Dila Ram Bhandari

Abstract:

Throughout the 20th century, new governments were created in which identities such as ethnic, religious, linguistic, caste, communal, tribal, and others played a part in the development of constitutions and the legal system of victim and criminal justice. South Asian nations have recently been plagued by acute issues of extremism, poverty, environmental degradation, cybercrime, human rights violations, and crimes against, and victimization of, both individuals and groups. Every day, a massive number of crimes take place, and these frequent crimes have made the lives of ordinary citizens restless. Crime is one of the major threats to society and to civilization, and a bone of contention that can create societal disturbance. Old-style crime-solving practices are unable to live up to the requirements of the current crime situation. Crime analysis is one of the most important activities of the majority of intelligence and law enforcement organizations all over the world. The South Asia region lacks such a regional coordination mechanism, unlike the Central Asia or Asia-Pacific regions, to facilitate criminal intelligence sharing and operational coordination related to organized crime, including illicit drug trafficking and money laundering. There have been numerous conversations in recent years about using data mining technology to combat crime and terrorism. The Data Detective program from the software company Sentient uses data mining techniques to support the police (Sentient, 2017). The goals of this internship are to test out several predictive model solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient offered a 7-year archive of crime statistics that were aggregated daily to produce a univariate dataset. Moreover, a daily incidence-type aggregation was performed to produce a multivariate dataset. Each solution's forecast period lasted seven days. The experiments were split into two main groups: statistical models and neural network models. For the crime data, neural networks fared better than statistical models. This study gives a general review of the applied statistical and neural network models. A comparative analysis of all the models on a comparable dataset provides a detailed picture of each model's performance on the available data and its generalizability. The studies demonstrated that, in comparison to the other models, Gated Recurrent Units (GRU) produced better predictions. In addition, crime records from 2005-2019 were collected from the Nepal Police headquarters and analysed using R programming. In conclusion, a gated recurrent unit implementation could benefit the police in predicting crime. Hence, time series analysis using GRU could be a prospective additional feature in Data Detective.
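A minimal sketch of the kind of GRU forecaster described above, assuming a univariate daily count series and a 28-day input window (both assumptions); the synthetic Poisson series below only stands in for the actual crime records.

```python
# Hedged sketch: GRU forecaster for a univariate daily crime-count series,
# rolled forward to produce a 7-day forecast as in the study's setup.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import GRU, Dense

WINDOW = 28   # days of history used to predict the next day (assumed)

def make_windows(series, window=WINDOW):
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., None], np.array(y)

counts = np.random.poisson(lam=20, size=2000).astype("float32")   # synthetic daily counts
X, y = make_windows(counts)

model = Sequential([GRU(32, input_shape=(WINDOW, 1)), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

# forecast the next 7 days by feeding each prediction back into the window
history = counts[-WINDOW:].tolist()
for _ in range(7):
    nxt = float(model.predict(np.array(history[-WINDOW:])[None, :, None], verbose=0)[0, 0])
    history.append(nxt)
print(history[-7:])
```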

Keywords: time series analysis, forecasting, ARIMA, machine learning

Procedia PDF Downloads 166
67 Mechanical Properties of Poly(Propylene)-Based Graphene Nanocomposites

Authors: Luiza Melo De Lima, Tito Trindade, Jose M. Oliveira

Abstract:

The development of thermoplastic-based graphene nanocomposites has been of great interest not only to the scientific community but also to different industrial sectors. Due to the possible improvement of performance and weight reduction, thermoplastic nanocomposites hold great promise as a new class of materials. These nanocomposites are of relevance for the automotive industry, namely because the CO2 emission limits imposed by European Commission (EC) regulations can be met without compromising the car’s performance but by reducing its weight. Thermoplastic polymers have some advantages over thermosetting polymers, such as higher productivity, lower density, and recyclability. In the automotive industry, for example, poly(propylene) (PP) is a common thermoplastic polymer, which represents more than half of the polymeric raw material used in automotive parts. Graphene-based materials (GBM) are potential nanofillers that can improve the properties of polymer matrices at very low loading. In comparison to other composites, such as fiber-based composites, the weight reduction can positively affect their processing and future applications. However, the properties and performance of GBM/polymer nanocomposites depend on the type of GBM and polymer matrix, the degree of dispersion, and especially the type of interactions between the fillers and the polymer matrix. In order to take advantage of the superior mechanical strength of GBM, strong interfacial strength between GBM and the polymer matrix is required for efficient stress transfer from GBM to the polymer. Thus, chemical compatibilizers and physicochemical modifications have been reported as important tools during the processing of these nanocomposites. In this study, PP-based nanocomposites were obtained by a simple melt blending technique, using a Brabender-type mixer. Graphene nanoplatelets (GnPs) were applied as structural reinforcement. Two compatibilizers were used to improve the interaction between the PP matrix and GnPs: PP-grafted maleic anhydride (PPgMA) and PPgMA modified with a tertiary amine alcohol (PPgDM). The samples for tensile and Charpy impact tests were obtained by injection molding. The results suggested that the presence of GnPs can increase the mechanical strength of the polymer. However, it was verified that the presence of GnPs can also decrease the impact resistance, making the nanocomposites more brittle than neat PP. The incorporation of the compatibilizers increases the impact resistance, suggesting that the compatibilizers can enhance the adhesion between PP and GnPs. Compared to neat PP, the increase in Young’s modulus of the non-compatibilized nanocomposite demonstrated that GnPs incorporation can improve the stiffness of the polymer. This trend can be related to the several physical crosslinking points between the PP matrix and the GnPs. Furthermore, the decrease in strain at yield of PP/GnPs, together with the enhancement of Young’s modulus, confirms that the GnPs incorporation led to an increase in stiffness but a decrease in toughness. Moreover, the results demonstrated that the incorporation of compatibilizers did not affect the Young’s modulus and strain at yield results compared to the non-compatibilized nanocomposite. The incorporation of these compatibilizers improved the nanocomposites’ mechanical properties compared both to the non-compatibilized nanocomposite and to a PP sample used as a reference.

Keywords: graphene nanoplatelets, mechanical properties, melt blending processing, poly(propylene)-based nanocomposites

Procedia PDF Downloads 187
66 Cicadas: A Clinician-Assisted, Closed-Loop Technology Mobile App for Adolescents with Autism Spectrum Disorders

Authors: Bruno Biagianti, Angela Tseng, Kathy Wannaviroj, Allison Corlett, Megan DuBois, Kyu Lee, Suma Jacob

Abstract:

Background: ASD is characterized by pervasive Sensory Processing Abnormalities (SPA) and social cognitive deficits that persist throughout the course of the illness and have been linked to functional abnormalities in specific neural systems that underlie the perception, processing, and representation of sensory information. SPA and social cognitive deficits are associated with difficulties in interpersonal relationships, poor development of social skills, reduced social interactions and lower academic performance. Importantly, they can hamper the effects of established evidence-based psychological treatments, including PEERS (Program for the Education and Enrichment of Relationship Skills), a parent/caregiver-assisted, 16-week social skills intervention, which nonetheless requires a functional brain capable of assimilating and retaining information and skills. As a matter of fact, some adolescents benefit from PEERS more than others, calling for strategies to increase treatment response rates. Objective: We will present interim data on CICADAS (Care Improving Cognition for ADolescents on the Autism Spectrum), a clinician-assisted, closed-loop technology mobile application for adolescents with ASD. Via ten mobile assessments, CICADAS captures data on sensory processing abnormalities and associated cognitive deficits. These data populate a machine learning algorithm that tailors the delivery of ten neuroplasticity-based social cognitive training (NB-SCT) exercises targeting sensory processing abnormalities. Methods: In collaboration with the Autism Spectrum and Neurodevelopmental Disorders Clinic at the University of Minnesota, we conducted a fully remote, three-arm, randomized crossover trial with adolescents with ASD to document the acceptability of CICADAS and evaluate its potential as a stand-alone treatment or as a treatment enhancer of PEERS. Twenty-four adolescents with ASD (ages 11-18) were initially randomized to 16 weeks of PEERS + CICADAS (Arm A) vs. 16 weeks of PEERS + computer games (Arm B) vs. 16 weeks of CICADAS alone (Arm C). After 16 weeks, the full battery of assessments was remotely administered. Results: We have evaluated the acceptability of CICADAS by examining adherence rates, engagement patterns, and exit survey data. We found that: 1) CICADAS is able to serve as a treatment enhancer for PEERS, inducing greater improvements in sensory processing, cognition, symptom reduction, social skills and behaviors, as well as quality of life, compared to computer games; 2) the concurrent delivery of PEERS and CICADAS induces greater improvements in study outcomes compared to CICADAS only. Conclusion: While preliminary, our results indicate that the individualized assessment and treatment approach designed in CICADAS seems effective in inducing adaptive long-term learning about social-emotional events. CICADAS-induced enhancement of processing and cognition facilitates the application of PEERS skills in the environment of adolescents with ASD, thus improving their real-world functioning.

Keywords: ASD, social skills, cognitive training, mobile app

Procedia PDF Downloads 215
65 Evaluation of Microstructure, Mechanical and Abrasive Wear Response of in situ TiC Particles Reinforced Zinc Aluminum Matrix Alloy Composites

Authors: Mohammad M. Khan, Pankaj Agarwal

Abstract:

The present investigation deals with the microstructures, mechanical and detailed wear characteristics of in situ TiC particle-reinforced zinc aluminum-based metal matrix composites. The composites have been synthesized by the liquid metallurgy route using the vortex technique. The composite was found to be harder than the matrix alloy due to the high hardness of the dispersoid particles therein. The former was also lower in ultimate tensile strength and ductility compared to the matrix alloy. This could be attributed to the use of coarser dispersoid particles and larger interparticle spacing. Reasonably uniform distribution of the dispersoid phase in the alloy matrix and good interfacial bonding between the dispersoid and matrix were observed. The composite exhibited a predominantly brittle mode of fracture with microcracking in the dispersoid phase, indicating effective, easy transfer of load from the matrix to the dispersoid particles. To study the wear behavior of the samples, three different types of tests were performed, namely: (i) sliding wear tests using a pin-on-disc machine under dry conditions, (ii) high-stress (two-body) abrasive wear tests using different combinations of abrasive media and specimen surfaces under conditions of varying abrasive size, traversal distance and load, and (iii) low-stress (three-body) abrasion tests using a rubber wheel abrasion tester at various loads and traversal distances using different abrasive media. In the sliding wear test, significantly lower wear rates were observed for the base alloy than for the composites. This has been attributed to the poor room-temperature strength resulting from the increased microcracking tendency of the composite compared to the matrix alloy. Wear surfaces of the composite revealed the presence of fragmented dispersoid particles and microcracking, whereas the wear surface of the matrix alloy was observed to be smooth with shallow grooves. During high-stress abrasion, the presence of the reinforcement offered increased resistance to the destructive action of the abrasive particles. The microcracking tendency was also enhanced because of the reinforcement in the matrix. The negative effect of the microcracking tendency was outweighed by the abrasion resistance offered by the dispersoid. As a result, the composite attained better wear resistance than the matrix alloy. The wear rate increased with load and abrasive size due to the larger depth of cut made by the abrasive medium. The wear surfaces revealed fine grooves and damaged reinforcement particles, while subsurface regions revealed limited plastic deformation and microcracking and fracturing of the dispersoid phase. During low-stress abrasion, the composite experienced a significantly lower wear rate than the matrix alloy irrespective of the test conditions. This could be attributed to the wear resistance offered by the hard dispersoid phase, thereby protecting the softer matrix against the destructive action of the abrasive medium. Abraded surfaces of the composite showed protrusion of the dispersoid phase. The subsurface regions of the composites exhibited decohesion of the dispersoid phase along with its microcracking and limited plastic deformation in the vicinity of the abraded surfaces.

Keywords: abrasive wear, liquid metallurgy, metal matrix composite, SEM

Procedia PDF Downloads 152
64 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. The classical visual information processing that ranges from low-level tasks to high-level ones has been widely developed in the deep learning framework. It is generally considered a challenging problem to derive visual interpretation from high-dimensional imagery data. A CNN is a class of feed-forward artificial neural networks that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to the training set, which generally needs to be large enough for the model under consideration to generalize effectively. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of consistently small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown per filter. The computational cost in the back-propagation procedure does not increase with the larger size of the filters, even though additional computational cost is required in the computation of the convolution in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which a quantitative comparison is performed between the well-known CNN architectures and our models that simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with a smaller number of unknown weights. The proposed algorithm has high potential in a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP, and by NRF-2014R1A2A1A11051941 and NRF-2017R1A2B4006023.
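The sketch below illustrates one plausible reading of the proposed layer: convolution banks with frozen random kernels of several sizes, each filter paired with a single learnable scalar initialized from the kernel's standard deviation. The kernel sizes, channel counts, and the ReLU are assumptions made for illustration, not the authors' exact architecture.

```python
# Hedged sketch of a "random kernel" convolution layer (assumed interpretation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomKernelConv(nn.Module):
    def __init__(self, in_ch, filters_per_size=16, sizes=(3, 5, 7)):
        super().__init__()
        self.banks, self.scales = nn.ModuleList(), nn.ParameterList()
        for k in sizes:
            conv = nn.Conv2d(in_ch, filters_per_size, k, padding=k // 2, bias=False)
            nn.init.normal_(conv.weight)
            conv.weight.requires_grad_(False)        # random projection: kernels stay fixed
            self.banks.append(conv)
            # one learnable scalar per filter, initialised from that filter's std
            init = conv.weight.view(filters_per_size, -1).std(dim=1)
            self.scales.append(nn.Parameter(init.clone()))

    def forward(self, x):
        outs = [conv(x) * s.view(1, -1, 1, 1) for conv, s in zip(self.banks, self.scales)]
        return F.relu(torch.cat(outs, dim=1))

layer = RandomKernelConv(in_ch=3)
print(layer(torch.randn(2, 3, 32, 32)).shape)        # torch.Size([2, 48, 32, 32])
```

Only the per-filter scalars carry gradients, so the number of trainable parameters stays small even when many large random kernels are used.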

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 291
63 Deep Learning in Chest Computed Tomography to Differentiate COVID-19 from Influenza

Authors: Hongmei Wang, Ziyun Xiang, Ying Liu, Li Yu, Dongsheng Yue

Abstract:

Introduction: COVID-19 (Coronavirus Disease 2019) has greatly changed the global economic, political and financial ecology. The mutation of the coronavirus in the UK in December 2020 has brought new panic to the world. Deep learning was applied to chest computed tomography (CT) images of COVID-19 and influenza to describe their characteristics. The predominant feature of COVID-19 pneumonia was ground-glass opacification, followed by consolidation. Lesion density: most lesions appear as ground-glass shadows, and some lesions coexist with solid lesions. Lesion distribution: the focus is mainly on the dorsal side of the periphery of the lung, with the lower lobes of the lungs as the focus, and it is often close to the pleura. Other features are grid-like shadows in ground-glass lesions, thickening signs of diseased vessels, air bronchus signs and halo signs. Severe disease involves both lungs entirely, showing white lung signs; air bronchograms can be seen, and there can be a small amount of pleural effusion in the bilateral chest cavity. At the same time, this year's flu season could be near its peak after surging throughout the United States for months. Chest CT of influenza infection is characterized by focal ground-glass shadows in the lungs, with or without patchy consolidation, and bronchiolar air bronchograms are visible in the consolidation. There are patchy ground-glass shadows, consolidation, air bronchus signs, mosaic lung perfusion, etc. The lesions are mostly fused and are prominent near the hilar regions and in both lungs. Grid-like shadows and small patchy ground-glass shadows are visible. Deep neural networks have great potential in image analysis and diagnosis that traditional machine learning algorithms do not. Method: Aiming at the two major infectious diseases currently circulating in the world, COVID-19 and influenza, the chest CT scans of patients with the two infectious diseases are classified and diagnosed using deep learning algorithms. The residual network is proposed to solve the problem of network degradation when there are too many hidden layers in a deep neural network (DNN). The deep residual network (ResNet) is a milestone in the history of convolutional neural networks (CNNs) for images, as it solves the problem of training very deep CNN models. Many visual tasks can achieve excellent results through fine-tuning ResNet. The pre-trained convolutional neural network ResNet is introduced as a feature extractor, eliminating the need to design complex models and perform time-consuming training. Fastai, which is based on PyTorch, packages best practices for deep learning and helps find the best way to handle diagnosis issues. Based on the one-cycle approach of the Fastai library, the classification and diagnosis of lung CT for the two infectious diseases is realized, and a higher recognition rate is obtained. Results: A deep learning model was developed to efficiently identify the differences between COVID-19 and influenza using chest CT.
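A hedged sketch of the transfer-learning setup described above, fine-tuning a pretrained ResNet with fastai's one-cycle policy; the folder layout, image size, batch size, and ResNet-34 variant are assumptions for illustration, not the authors' configuration.

```python
# Hedged sketch: pretrained ResNet fine-tuned with fastai on CT slices sorted
# into two class folders (paths below are illustrative assumptions).
from fastai.vision.all import (ImageDataLoaders, vision_learner, resnet34,
                               accuracy, Resize)

dls = ImageDataLoaders.from_folder(
    "ct_slices",                 # assumed layout: ct_slices/covid19, ct_slices/influenza
    valid_pct=0.2, seed=42, item_tfms=Resize(224), bs=16)

learn = vision_learner(dls, resnet34, metrics=accuracy)  # pretrained ResNet as feature extractor
learn.fine_tune(5)               # fine_tune applies the one-cycle schedule internally
```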

Keywords: COVID-19, Fastai, influenza, transfer network

Procedia PDF Downloads 144
62 An Efficient Process Analysis and Control Method for Tire Mixing Operation

Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park

Abstract:

Since the tire production process is very complicated, company-wide management of it is very difficult, necessitating considerable amounts of capital and labor. Thus, productivity should be enhanced and kept competitive by developing and applying effective production plans. Among the major processes for tire manufacturing, consisting of mixing, component preparation, building and curing, the mixing process is an essential and important step because the main component of the tire, called the compound, is formed at this step. The compound, as a rubber synthesis with various characteristics, plays its own role required for the tire as a finished product. Meanwhile, scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP) because various kinds of compounds have their own unique orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required for different operations may differ due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one, and this kind of feature, called sequence-dependent setup time (SDST), is a very important issue in traditional scheduling problems such as flexible job shop scheduling problems. However, despite its importance, there exist few research works dealing with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle. At each iteration, the coordinates and velocities of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using some small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in a previous research work. As a performance measure, we define an error rate that can evaluate the difference between the two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes, such as building and curing. We can also extend our current work by considering other performance measures, such as weighted makespan or processing times affected by aging or learning effects.
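As a toy illustration of the PSO idea, the sketch below uses a random-key encoding to sequence jobs with sequence-dependent setup times on a single machine and minimizes the makespan; the full FJSSP encoding with machine allocation used by the authors is not reproduced, and all data and PSO coefficients are assumed.

```python
# Minimal random-key PSO sketch for sequencing with sequence-dependent setups
# on one machine; a stand-in for the FJSSP formulation, not the authors' scheme.
import numpy as np

rng = np.random.default_rng(0)
n_jobs, n_particles, iters = 8, 30, 200
proc = rng.uniform(5, 20, n_jobs)                 # processing times (assumed)
setup = rng.uniform(1, 5, (n_jobs, n_jobs))       # sequence-dependent setup times (assumed)

def makespan(keys):
    order = np.argsort(keys)                      # random-key decoding -> job sequence
    t = proc[order[0]]
    for prev, cur in zip(order, order[1:]):
        t += setup[prev, cur] + proc[cur]
    return t

pos = rng.random((n_particles, n_jobs))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([makespan(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)  # velocity update
    pos = pos + vel                                                        # position update
    vals = np.array([makespan(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best makespan found:", makespan(gbest))
```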

Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process

Procedia PDF Downloads 266
61 Formulation of Lipid-Based Tableted Spray-Congealed Microparticles for Zero Order Release of Vildagliptin

Authors: Hend Ben Tkhayat, Khaled Al Zahabi, Husam Younes

Abstract:

Introduction: Vildagliptin (VG), a dipeptidyl peptidase-4 (DPP-4) inhibitor, has been proven to be an active agent for the treatment of type 2 diabetes. VG works by enhancing and prolonging the activity of incretins, which improves insulin secretion and decreases glucagon release, therefore lowering the blood glucose level. It is usually used with various drug classes, such as insulin sensitizers or metformin. VG is currently only marketed as an immediate-release tablet that is administered twice daily. In this project, we aim to formulate extended-release tableted lipid microparticles of VG with a zero-order release profile that could be administered once daily, ensuring the patient’s convenience. Method: The spray-congealing technique was used to prepare VG microparticles. Compritol® was heated at 10 °C above its melting point, and VG was dispersed in the molten carrier using a homogenizer (IKA T25, USA) set at 13000 rpm. VG dispersed in the molten Compritol® was added dropwise to the molten Gelucire® 50/13 and PEG (400, 6000, and 35000) in different ratios under manual stirring. The molten mixture was homogenized, and the required amount of Carbomer® was added. The melt was pumped through the two-fluid nozzle of the Buchi® Spray-Congealer (Buchi B-290, Switzerland) using a pump drive (Masterflex, USA) connected to silicone tubing wrapped with silicone heating tape heated at the same temperature as the pumped mix. The physicochemical properties of the produced VG-loaded microparticles were characterized using a Mastersizer, Scanning Electron Microscopy (SEM), Differential Scanning Calorimetry (DSC) and X‐Ray Diffraction (XRD). The VG microparticles were then pressed into tablets using a single-punch tablet machine (YDP-12, Minhua Pharmaceutical Co., China), and an in vitro dissolution study was conducted using an Agilent Dissolution Tester (Agilent, USA). The dissolution test was carried out at 37 ± 0.5 °C for 24 hours in three different dissolution media and time phases. The quantitative analysis of VG in the samples was performed using a validated High-Pressure Liquid Chromatography (HPLC-UV) method. Results: The microparticles were spherical in shape with a narrow size distribution and smooth surface. DSC and XRD analyses confirmed the crystallinity of VG, which was lost after being incorporated into the amorphous polymers. The total yields of the different formulas were between 70% and 80%. The VG content in the microparticles was found to be between 99% and 106%. The in vitro dissolution study showed that VG was released from the tableted particles in a controlled fashion. The adjustment of the hydrophilic/hydrophobic ratio of the excipients, their concentration, and the molecular weight of the carriers used resulted in tablets with zero-order kinetics. Gelucire® 50/13, a hydrophilic polymer, was characterized by a time-dependent profile with an important burst effect that was decreased by adding Compritol® as a lipophilic carrier to retard the release of VG, which is highly soluble in water. PEG (400, 6000 and 35000) was used for its gelling effect, which led to constant-rate delivery and the achievement of a zero-order profile. Conclusion: Tableted spray-congealed lipid microparticles for the extended release of VG were successfully prepared, and a zero-order profile was achieved.
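For context, the "zero-order profile" targeted above corresponds to the standard zero-order release model, in which the cumulative amount of drug released grows linearly with time; this is a textbook relation, not a result reported by the authors.

```latex
% Q_t: cumulative VG released at time t; Q_0: initial burst; k_0: zero-order rate constant.
Q_t = Q_0 + k_0\, t
```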

Keywords: vildagliptin, spray congealing, microparticles, controlled release

Procedia PDF Downloads 122
60 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel

Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler

Abstract:

The fuel cell vehicle has become the most competitive solution for the transportation sector in the hydrogen economy. The Type IV pressure vessel is currently the most popular and widely developed technology for on-board storage, based on its high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a great amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential in reducing the overall material usage, yet it requires comprehensive understanding of the underlying mechanisms as well as of the influence of different design parameters on mechanical performance. Given the type of materials and manufacturing processes by which Type IV pressure vessels are manufactured, their design and optimization are a nuanced subject. The manifold of possible stacking sequences and fiber orientation variations has an outstanding effect on vessel strength due to the anisotropic properties of carbon fiber composites, which makes the design space high dimensional. Each variation of design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup and simulation process can be very time consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation process of different tank designs with respect to various parameters is conducted and automated in the commercial finite element analysis framework Abaqus. Notably, the model of the composite overwrap is generated automatically using the Abaqus-Python scripting interface. The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling, which is calculated and implemented using analytical methods. Subsequently, these different composite layups are simulated as axisymmetric models to reduce the computational complexity and calculation time. Finally, the results are evaluated and compared with respect to the ultimate tank strength. By automatically modeling, evaluating and comparing various composite layups, this system is applicable to the optimization of tank structures. As mentioned above, the mechanical properties of the pressure vessel are highly dependent on the composite layup, which requires a large number of simulations. Consequently, automating the simulation process provides a rapid way to compare the various designs and an indication of the optimum one. Moreover, this automation process can also be used to create a data bank of layups and corresponding mechanical properties with only a few preliminary configuration steps for further case analysis. Subsequently, machine learning could, for example, be used to identify the optimum directly from the data pool without running the simulation process.
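The analytical step mentioned above can be sketched as follows, assuming geodesic winding (Clairaut's relation) and a simple fibre-continuity thickness build-up on the dome; the radii, polar-opening size, and layer thickness are illustrative values, not the authors' design.

```python
# Hedged sketch: geodesic winding angle and an approximate dome thickness build-up.
# r0 is the polar-opening radius; all numbers are illustrative assumptions.
import numpy as np

def geodesic_angle(r, r0):
    """Winding angle (rad) at local radius r for a geodesic path: sin(a) = r0 / r."""
    return np.arcsin(np.clip(r0 / r, -1.0, 1.0))

def dome_thickness(r, r_cyl, r0, t_cyl):
    """Approximate layer thickness on the dome from fibre-band continuity."""
    a, a_cyl = geodesic_angle(r, r0), geodesic_angle(r_cyl, r0)
    return t_cyl * (r_cyl * np.cos(a_cyl)) / (r * np.cos(a))

for r in np.linspace(0.03, 0.2, 5):          # m, from near the polar opening to the cylinder
    print(f"r={r:.3f} m  angle={np.degrees(geodesic_angle(r, 0.025)):5.1f} deg  "
          f"t={dome_thickness(r, 0.2, 0.025, 0.002) * 1e3:.2f} mm")
```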

Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process

Procedia PDF Downloads 136
59 Unmasking Virtual Empathy: A Philosophical Examination of AI-Mediated Emotional Practices in Healthcare

Authors: Eliana Bergamin

Abstract:

This philosophical inquiry, influenced by the seminal works of Annemarie Mol and Jeannette Pols, critically examines the transformative impact of artificial intelligence (AI) on emotional caregiving practices within virtual healthcare. Rooted in the traditions of philosophy of care, philosophy of emotions, and applied philosophy, this study seeks to unravel nuanced shifts in the moral and emotional fabric of healthcare mediated by AI-powered technologies. Departing from traditional empirical studies, the approach embraces the foundational principles of care ethics and phenomenology, offering a focused exploration of the ethical and existential dimensions of AI-mediated emotional caregiving. At its core, this research addresses the introduction of AI-powered technologies mediating emotional and care practices in the healthcare sector. By drawing on Mol and Pols' insights, the study offers a focused exploration of the ethical and existential dimensions of AI-mediated emotional caregiving. Anchored in ethnographic research within a pioneering private healthcare company in the Netherlands, this critical philosophical inquiry provides a unique lens into the dynamics of AI-mediated emotional practices. The study employs in-depth, semi-structured interviews with virtual caregivers and care receivers alongside ongoing ethnographic observations spanning approximately two and a half months. Delving into the lived experiences of those at the forefront of this technological evolution, the research aims to unravel subtle shifts in the emotional and moral landscape of healthcare, critically examining the implications of AI in reshaping the philosophy of care and human connection in virtual healthcare. Inspired by Mol and Pols' relational approach, the study prioritizes the lived experiences of individuals within the virtual healthcare landscape, offering a deeper understanding of the intertwining of technology, emotions, and the philosophy of care. In the realm of philosophy of care, the research elucidates how virtual tools, particularly those driven by AI, mediate emotions such as empathy, sympathy, and compassion—the bedrock of caregiving. Focusing on emotional nuances, the study contributes to the broader discourse on the ethics of care in the context of technological mediation. In the philosophy of emotions, the investigation examines how the introduction of AI alters the phenomenology of emotional experiences in caregiving. Exploring the interplay between human emotions and machine-mediated interactions, the nuanced analysis discerns implications for both caregivers and caretakers, contributing to the evolving understanding of emotional practices in a technologically mediated healthcare environment. Within applied philosophy, the study transcends empirical observations, positioning itself as a reflective exploration of the moral implications of AI in healthcare. The findings are intended to inform ethical considerations and policy formulations, bridging the gap between technological advancements and the enduring values of caregiving. In conclusion, this focused philosophical inquiry aims to provide a foundational understanding of the evolving landscape of virtual healthcare, drawing on the works of Mol and Pols to illuminate the essence of human connection, care, and empathy amid technological advancements.

Keywords: applied philosophy, artificial intelligence, healthcare, philosophy of care, philosophy of emotions

Procedia PDF Downloads 59
58 Fully Autonomous Vertical Farm to Increase Crop Production

Authors: Simone Cinquemani, Lorenzo Mantovani, Aleksander Dabek

Abstract:

New technologies in agriculture are opening new challenges and new opportunities. Among these, robotics, vision, and artificial intelligence are certainly the ones that will make possible a significant leap compared to traditional agricultural techniques. In particular, the indoor farming sector will be the one that benefits the most from these solutions. Vertical farming is a new field of research where mechanical engineering can bring knowledge and know-how to transform a highly labor-based business into a fully autonomous system. The aim of the research is to develop a multi-purpose, modular, and perfectly integrated platform for crop production in indoor vertical farming. Activities will be based both on hardware development, such as automatic tools to perform different activities on soil and plants, and on research to introduce an extensive use of monitoring techniques based on machine learning algorithms. This paper presents the preliminary results of a research project of a vertical farm living lab designed to (i) develop and test vertical farming cultivation practices, (ii) introduce a very high degree of mechanization and automation that makes all processes replicable, fully measurable, standardized and automated, (iii) develop a coordinated control and management environment for autonomous multiplatform or tele-operated robots with the aim of carrying out complex tasks in the presence of environmental and cultivation constraints, and (iv) integrate AI-based algorithms as a decision support system to improve production quality. The coordinated management of multiplatform systems still presents innumerable challenges that require a strongly multidisciplinary approach right from the design, development, and implementation phases. The methodology is based on (i) the development of models capable of describing the dynamics of the various platforms and their interactions, (ii) the integrated design of mechatronic systems able to respond to the needs of the context and to exploit the strengths highlighted by the models, and (iii) implementation and experimental tests performed to verify the real effectiveness of the systems created and to evaluate any weaknesses so as to proceed with targeted development. To these ends, a fully automated laboratory for growing plants in vertical farming has been developed and tested. The living lab makes extensive use of sensors to determine the overall state of the structure, crops, and systems used. The possibility of having specific measurements for each element involved in the cultivation process makes it possible to evaluate the effects of each variable of interest and allows for the creation of a robust model of the system as a whole. The automation of the laboratory is completed with the use of robots to carry out all the necessary operations, from sowing to handling to harvesting. These systems work synergistically thanks to the knowledge of detailed models developed based on the information collected, which allows for deepening the knowledge of these types of crops and guarantees the possibility of tracing every action performed on each single plant. To this end, artificial intelligence algorithms have been developed to allow the synergistic operation of all systems.

Keywords: automation, vertical farming, robot, artificial intelligence, vision, control

Procedia PDF Downloads 45
57 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience

Authors: Amanda Kavner, Richard Lamb

Abstract:

Faster growth in science and technology in other nations may make staying globally competitive more difficult without a shift in focus on how science is taught in US classes. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is necessary to design resources that enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students’ representational competence, the visualization skills necessary to process these science representations first needed to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students’ visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional Near-Infrared Spectroscopy (fNIR) data. These data were previously collected by Project LENS (Leveraging Expertise in Neurotechnologies), a Science of Learning Collaborative Network (SL-CN) of scholars of STEM Education from three US universities (NSF award 1540888), utilizing mental rotation tasks to assess student visual literacy. Hemodynamic response data from fNIRsoft was exported as an Excel file, with 80 of both 2D Wedge and Dash models (dash) and 3D Stick and Ball models (BL). Complexity data were in an Excel workbook separated by participant (ID), containing information for both types of tasks. After changing strings to numbers for analysis, spreadsheets with measurement data and complexity data were uploaded to RapidMiner’s TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed, and 99.7% of the ANN predictions are accurate. The ANN determined that the biggest predictors of a successful mental rotation are the individual problem number, the response time, and fNIR optode #16, located along the right prefrontal cortex, which is important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements with an ANN for analysis, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, therefore improving science literacy.
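A hedged sketch of a comparable model in scikit-learn: a gradient-boosted tree ensemble with 140 trees and depth 7, as described above, trained on synthetic placeholder features standing in for the fNIR optode responses and task variables.

```python
# Hedged sketch: gradient-boosted trees (140 trees, depth 7) predicting
# mental-rotation success. The data is synthetic, not the Project LENS set.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(800, 18))            # placeholder: optode responses, response time, item number
y = (X[:, 15] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=800) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier(n_estimators=140, max_depth=7).fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))

# feature importances indicate the strongest predictors (cf. optode #16 in the study)
print(np.argsort(model.feature_importances_)[::-1][:3])
```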

Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience

Procedia PDF Downloads 121
56 Imaging Spectrum of Central Nervous System Tuberculosis on Magnetic Resonance Imaging: Correlation with Clinical and Microbiological Results

Authors: Vasundhara Arora, Anupam Jhobta, Suresh Thakur, Sanjiv Sharma

Abstract:

Aims and Objectives: Intracranial tuberculosis (TB) is one of the most devastating manifestations of TB and a challenging public health issue of considerable importance and magnitude the world over. This study elaborates on the imaging spectrum of neurotuberculosis on magnetic resonance imaging (MRI) in 29 clinically suspected cases from a tertiary care hospital. Materials and Methods: The prospective hospital-based evaluation of MR imaging features of neuro-tuberculosis in 29 clinically suspected cases was carried out in the Department of Radio-diagnosis, Indira Gandhi Medical Hospital, from July 2017 to August 2018. MR images were obtained on a 1.5 T Magnetom Avanto machine and were analyzed to identify any abnormal meningeal enhancement or parenchymal lesions. Microbiological and biochemical CSF analysis was performed in radiologically suspected cases, and the results were compared with the imaging data. Clinical follow-up of the patients started on anti-tuberculous treatment was done to evaluate the response to treatment and clinical outcome. Results: The age range of patients in the study was between 1 and 73 years. The mean age at presentation was 11.5 years. No significant difference in the distribution of cerebral tuberculosis was noted between the two genders. Imaging findings of neuro-tuberculosis were varied and non-specific, ranging from leptomeningeal enhancement and cerebritis to space-occupying lesions such as tuberculomas and tubercular abscesses. Complications presenting as hydrocephalus (n=7) and infarcts (n=9) were noted in a few of these patients. All 29 patients showed radiological suspicion of CNS tuberculosis, with meningitis alone observed in 11 cases, tuberculomas alone in 4 cases, and meningitis with parenchymal tuberculomas in 11 cases. Tubercular abscess and cerebritis were observed in one case each. Tuberculous arachnoiditis was noted in one patient. GeneXpert positivity was obtained in 11 out of 29 radiologically suspected patients; none of the patients showed culture positivity. The meningeal form of the disease alone showed the higher positivity rate with GeneXpert (n=5), followed by the combination of meningeal and parenchymal forms of the disease (n=4). The parenchymal manifestation of the disease alone showed the lowest positivity rate (n=3) with GeneXpert testing. All 29 patients were started on anti-tubercular treatment based on radiological suspicion of the disease, with clinical improvement observed in 27 treated patients. Conclusions: In our study, a higher incidence of neuro-tuberculosis was noted in the paediatric population, with a predominance of the meningeal form of the disease. GeneXpert positivity was low due to the paucibacillary nature of cerebrospinal fluid (CSF), with even lower positivity of CSF samples in the parenchymal form of the disease. MRI showed high accuracy in detecting CNS lesions in neuro-tuberculosis. Hence, it can be concluded that MRI plays a crucial role in the diagnosis because of its inherent sensitivity and specificity and is an indispensable imaging modality. It caters to the need for early diagnosis owing to the poor sensitivity of microbiological tests, more so in the parenchymal manifestation of the disease.

Keywords: neurotuberculosis, tubercular abscess, tuberculoma, tuberculous meningitis

Procedia PDF Downloads 173
55 Artificial Intelligence for Traffic Signal Control and Data Collection

Authors: Reggie Chandra

Abstract:

Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized. The reason behind that is insufficient resources to create and implement timing plans. In this work, we will discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and collect 24/7/365 accurate traffic data using a vehicle detection system. We will discuss recent advances in Artificial Intelligence technology, how AI works in vehicle, pedestrian, and bike data collection and in creating timing plans, and what the best workflow for that is. Apart from that, this paper will showcase how Artificial Intelligence makes signal timing affordable. We will introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect, collect data, develop timing plans and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes in the visual cortex. A neural net is modeled after the human brain. It consists of millions of densely connected processing nodes. It is a form of machine learning where the neural net learns to recognize vehicles through training - which is called Deep Learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but also, in cases such as classifying objects into fine-grained categories, outperform humans. Safety is of primary importance to traffic professionals, but they don't have the studies or data to support their decisions. Currently, one-third of transportation agencies do not collect pedestrian and bike data. We will discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, snapshots of limited hand-picked data, and multiple systems that require additional adaptation work. The methodologies used and proposed in the research contain a camera model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired under a variety of daily real-world road conditions, and compared with the performance of commonly used methods that require collecting data by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying it to the field. This work explores themes such as how technologies powered by Artificial Intelligence can benefit communities and how to translate the complex and often overwhelming benefits into a language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed.
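
For readers who want a concrete picture of the detection component, the following Python sketch shows a toy convolutional classifier that scores camera crops as vehicle or background. The architecture, input size, and random inputs are illustrative assumptions for exposition only, not the deployed system described above.

import torch
import torch.nn as nn

class VehicleDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)   # scores: vehicle vs. background

    def forward(self, x):                  # x: (batch, 3, 64, 64) camera crops
        return self.classifier(self.features(x).flatten(1))

frames = torch.rand(4, 3, 64, 64)          # four cropped regions from an intersection camera
print(VehicleDetector()(frames).shape)     # torch.Size([4, 2])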

Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal

Procedia PDF Downloads 172
54 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers

Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala

Abstract:

The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to grow much more. Users spend considerable time browsing different websites, which gives a great deal of insight into their preferences. Instead of presenting plain information, classifying different aspects of browsing such as Bookmarks, History, and the Download Manager into useful categories would improve and enhance the user's experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources; they have security constraints and may miss contextual data during classification. On-device classification solves many of these problems, but the challenge is to achieve classification accuracy under resource constraints. On-device classification can be much more useful for personalization, reducing dependency on cloud connectivity, and offering better privacy/security. This approach provides more relevant results than current standalone solutions because it uses content rendered by the browser, which is customized by the content provider based on the user's profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, this solution extracts DOM tree data from the browser's rendering engine. This DOM data is dynamic, contextual, and secure and cannot be replicated. The proposal extracts different features of the webpage and runs them through an algorithm that classifies the page into multiple categories. A Naive Bayes based engine is chosen for its inherent advantages in using limited resources compared to other classification algorithms such as Support Vector Machines and Neural Networks. Naive Bayes classification requires a small memory footprint and little computation, making it suitable for a smartphone environment. This solution can also partition the model into multiple chunks, which reduces memory usage compared with loading a complete model. Classification of webpages through the integrated engine is faster, more relevant, and more energy efficient than standalone on-device solutions. The classification engine has been tested on Samsung Z3 Tizen hardware. The engine is integrated into the Tizen Browser, which uses the Chromium rendering engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. This cleaned dataset has 227.5K webpages, which are divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution has resulted in 15% less memory usage (due to the partition method) and 24% less power consumption in comparison with a standalone solution. 70% of the dataset was used for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% is achieved across the above-mentioned 8 categories. The engine can be further extended to suggest dynamic tags and to apply the classification to different use cases to enhance the browsing experience.
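
As a rough illustration of the classification step described above, the following Python sketch trains a multinomial Naive Bayes model on toy page text and classifies a new page. The training snippets and categories shown are invented placeholders, and the sketch uses scikit-learn offline rather than the on-device, partitioned engine the authors built.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: text extracted from a page's DOM (title, headings, body).
train_texts = [
    "live scores and match highlights",           # sports
    "flight deals and cheap hotel booking",       # travel
    "algebra tutorials and exam preparation",     # education
    "breaking headlines and world politics",      # news
    "discount prices add to cart free shipping",  # shopping
]
train_labels = ["sports", "travel", "education", "news", "shopping"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

# At browse time, the DOM text of the current page is classified locally.
page_text = "free shipping on sneakers, add to cart"
print(model.predict([page_text])[0])  # expected: "shopping"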

Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification

Procedia PDF Downloads 165
53 Web-Based Decision Support Systems and Intelligent Decision-Making: A Systematic Analysis

Authors: Serhat Tüzün, Tufan Demirel

Abstract:

Decision Support Systems (DSS) have been investigated by researchers and technologists for more than 35 years. This paper analyses the developments in the architecture and software of these systems, provides a systematic analysis of different Web-based DSS approaches and Intelligent Decision-making Technologies (IDT), and offers suggestions for future studies. The Decision Support Systems literature begins with the building of model-oriented DSS in the late 1960s, theory developments in the 1970s, and the implementation of financial planning systems and Group DSS in the early and mid-80s. It then documents the origins of Executive Information Systems, online analytical processing (OLAP) and Business Intelligence. The implementation of Web-based DSS occurred in the mid-1990s. Since the beginning of the new millennium, intelligence has been the main focus of DSS studies. Web-based technologies are having a major impact on the design, development and implementation processes for all types of DSS. Web technologies are being utilized for the development of DSS tools by leading developers of decision support technologies. Major companies are encouraging their customers to port DSS applications, such as data mining, customer relationship management (CRM) and OLAP systems, to a web-based environment. Similarly, real-time data fed from manufacturing plants are now helping floor managers make decisions regarding production adjustment to ensure that high-quality products are produced and delivered. Web-based DSS are being employed by organizations as decision aids for employees as well as customers. A common usage of Web-based DSS has been to help customers configure products and services according to their needs. These systems allow individual customers to design their own products by choosing from a menu of attributes, components, prices and delivery options. The Intelligent Decision-making Technologies (IDT) domain is a fast-growing area of research that integrates various aspects of computer science and information systems. This includes intelligent systems, intelligent technology, intelligent agents, artificial intelligence, fuzzy logic, neural networks, machine learning, knowledge discovery, computational intelligence, data science, big data analytics, inference engines, recommender systems or engines, and a variety of related disciplines. Innovative applications that emerge using IDT often have a significant impact on decision-making processes in government, industry, business, and academia in general. This is particularly pronounced in finance, accounting, healthcare, computer networks, real-time safety monitoring and crisis response systems. Similarly, IDT is commonly used in military decision-making systems, security, marketing, stock market prediction, and robotics. Even though many research studies have been conducted on Decision Support Systems, a systematic analysis of the subject is still missing. To address this need, this paper surveys recent articles on DSS. The literature has been reviewed in depth and, by classifying previous studies according to their focus, a taxonomy for DSS has been prepared. With the aid of this taxonomic review and recent developments in the field, this study aims to analyze future trends in decision support systems.

Keywords: decision support systems, intelligent decision-making, systematic analysis, taxonomic review

Procedia PDF Downloads 280
52 The High Potential and the Little Use of Brazilian Class Actions for Prevention and Penalization Due to Workplace Accidents in Brazil

Authors: Sandra Regina Cavalcante, Rodolfo A. G. Vilela

Abstract:

Introduction: Work accidents and occupational diseases are a major public health problem around the world and the main health problem of workers, with high social and economic costs. Brazil has shown progress over recent years, with the development of a regulatory system to improve safety and quality of life in the workplace. However, the situation is far from acceptable, because the occurrences remain high and there is a great gap between legislation and reality, generated by the low level of voluntary compliance with the law. Brazilian law provides procedural instruments both to compensate the damage caused to workers' health and to prevent future injuries. In the Judiciary, the idea of prevention is embodied in collective actions, effected through Brazilian class actions. Inhibitory injunctions may both impose improvements to the working environment and determine the interruption of an activity or a ban on a machine that puts workers at risk. Both the Labor Prosecution Office and trade unions have standing to bring this type of action, which may also provide for payment of compensation for collective moral damage. Objectives: To verify how class actions (known as 'public civil actions'), regulated in the Brazilian legal system to protect diffuse, collective and homogeneous rights, are being used to protect workers' health and safety. Methods: The author identified and evaluated decisions of the Brazilian Superior Court of Labor involving collective actions and work accidents. The timeframe chosen was December 2015. The online jurisprudence database was consulted through the page available for public consultation on the court website. The data were categorized by result (application rejected or accepted), request type, amount of compensation, and the party filing the action, in addition to examining the reasoning used by the judges. Results: The High Court issued 21,948 decisions in December 2015, with 1448 judgments (6.6%) about work accidents and only 20 (0.09%) on collective action. After analyzing these 20 decisions, it was found that the judgments granted compensation for collective moral damage (85%) and/or obligations to act, that is, changes to improve prevention and safety (71%). The suits were filed mainly by the Labor Prosecutor (83%), with the remainder filed by unions (17%). The compensation for collective moral damage averaged 250,000 reais (about US$65,000), although a wide range of values was found, covering several situations repaired by this compensation. This is the court of last instance for this kind of lawsuit; all decisions were well founded and at least partially granted the requests for working environment protection. Conclusions: When triggered, the labor court system provides the requested collective protection through class actions. The amounts awarded in collective actions are significant and indicate that they create social and economic repercussions, stimulating employers to improve working conditions in their companies. It is necessary to intensify the use of collective actions, because they are more effective for prevention than reparatory individual lawsuits, yet they have been underutilized, mainly by unions.

Keywords: Brazilian Class Action, collective action, work accident penalization, workplace accident prevention, workplace protection law

Procedia PDF Downloads 275
51 Heat Transfer Modeling of 'Carabao' Mango (Mangifera indica L.) during Postharvest Hot Water Treatments

Authors: Hazel James P. Agngarayngay, Arnold R. Elepaño

Abstract:

Mango is the third most important export fruit in the Philippines. Despite the expanding mango trade in the world market, problems with postharvest losses caused by pests and diseases are still prevalent. Many disease control and pest disinfestation methods have been studied and adopted. Heat treatment is necessary to eliminate pests and diseases in order to pass the quarantine requirements of importing countries. During heat treatments, temperature and time are critical because fruits can easily be damaged by over-exposure to heat. Modeling the process enables researchers and engineers to study the behaviour of the temperature distribution within the fruit over time. Understanding physical processes through modeling and simulation also saves time and resources because of reduced experimentation. This research aimed to simulate the heat transfer mechanism and predict the temperature distribution in ‘Carabao' mangoes during hot water treatment (HWT) and extended hot water treatment (EHWT). The simulation was performed in ANSYS CFD software, using the ANSYS CFX solver. The simulation process involved model creation, mesh generation, defining the physics of the model, solving the problem, and visualizing the results. Boundary conditions consisted of the convective heat transfer coefficient and a constant free-stream temperature. The three-dimensional energy equation for transient conditions was numerically solved to obtain heat flux and transient temperature values. The solver utilized the finite volume method of discretization. To validate the simulation, actual data were obtained through experiment. The goodness of fit was evaluated using the mean temperature difference (MTD). Also, a t-test was used to detect significant differences between the data sets. Results showed that the simulations were able to estimate temperatures accurately, with MTD of 0.50 and 0.69 °C for the HWT and EHWT, respectively. This indicates good agreement between the simulated and actual temperature values. The data included in the analysis were taken at different locations of probe punctures within the fruit. Moreover, t-tests showed no significant differences between the two data sets. Maximum heat fluxes obtained at the beginning of the treatments were 394.15 and 262.77 J.s-1 for HWT and EHWT, respectively. These values decreased abruptly within the first 10 seconds, and a gradual decrease was observed thereafter. Data on heat flux are necessary in the design of heaters. If heat flux is underestimated, the heating component of a machine will not be able to provide the heat required by an operation; overestimation, on the other hand, wastes energy and resources. This study demonstrated that the simulation was able to estimate temperatures accurately. Thus, it can be used to evaluate the influence of various treatment conditions on the temperature-time history in mangoes. When combined with information on insect mortality and quality degradation kinetics, it could predict the efficacy of a particular treatment and guide appropriate selection of treatment conditions. The effect of various parameters on heat transfer rates, such as the boundary and initial conditions as well as the thermal properties of the material, can be systematically studied without performing experiments. Furthermore, the use of ANSYS software for modeling and simulation can be explored in modeling various other systems and processes.
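
For reference, the transient energy equation and convective boundary condition referred to above can be written in their standard form as follows; the symbols are generic, since the abstract does not report the property values used:

\[ \rho c_p \frac{\partial T}{\partial t} = \nabla \cdot \left( k \, \nabla T \right) \]
\[ -k \left. \frac{\partial T}{\partial n} \right|_{\text{surface}} = h \left( T_{\text{surface}} - T_{\infty} \right) \]

where \(\rho\) is the density, \(c_p\) the specific heat, \(k\) the thermal conductivity, \(h\) the convective heat transfer coefficient, and \(T_{\infty}\) the constant free-stream (hot water) temperature.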

Keywords: heat transfer, heat treatment, mango, modeling and simulation

Procedia PDF Downloads 247
50 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples

Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges

Abstract:

Soils are at the crossing of many issues such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information on a national and global scale. Unfortunately, many countries do not have detailed soil maps, and, when they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and are often not properly used by end-users. Therefore, there is an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed, but that they depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called "ancillary covariates" that come from other available spatial products. The model is then generalized on grids where soil parameters are unknown in order to predict them, and the prediction performance is validated using various methods. With the growing demand for soil information at a national and global scale and the increase in available spatial covariates, national and continental DSM initiatives are continuously increasing. This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products that were delivered during the last ten years. The scientific production on this topic is continuously increasing and new models and approaches are developed at an incredible speed. Most digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTF), in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues relate to communication with end-users and education, especially on the use of uncertainty. Overall, progress is very important and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues remain, mainly due to differences in classifications or in laboratory standards between countries. However, numerous initiatives are ongoing at the EU and global levels. All this progress is scientifically stimulating and promises to provide tools to improve and monitor soil quality at the national, EU and global levels.
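
A minimal sketch of the generic DSM workflow described above (calibrate a machine-learning model on point observations plus exhaustive covariates, validate it, then predict on an unsampled grid) could look like the following Python example; the covariates and synthetic data are illustrative, not taken from any national product.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Covariates standing in for the soil-forming factors (climate, relief, ...).
temperature = rng.normal(10, 3, n)        # mean annual temperature proxy
rainfall = rng.normal(800, 150, n)        # annual rainfall proxy
slope = rng.uniform(0, 30, n)             # slope from a digital elevation model
X = np.column_stack([temperature, rainfall, slope])

# Synthetic "observed" soil property at the sampled points, e.g. topsoil organic carbon.
y = 2.0 + 0.004 * rainfall - 0.05 * slope - 0.03 * temperature + rng.normal(0, 0.3, n)

model = RandomForestRegressor(n_estimators=300, random_state=0)
print("cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean())  # validation step
model.fit(X, y)

# "Generalize" the calibrated model onto grid cells where the property is unknown.
grid_cells = np.column_stack([rng.normal(10, 3, 4), rng.normal(800, 150, 4), rng.uniform(0, 30, 4)])
print("predicted property on grid:", model.predict(grid_cells))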

Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review

Procedia PDF Downloads 184
49 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform for detecting facial expressions and emotions by automatically extracting features. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information. In fact, we leverage long-range dependencies, whose absence is one of the main drawbacks of CNNs. We further develop this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from reaching the gold labels too soon, which leads the model to over-fitting, because it is not able to determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic rather than static shape of the input tensor in the SoftMax layer and specifying a desired soft margin. In fact, the margin acts as a controller for how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting same-class labels and separating different-class labels in the normalized log domain. We penalize those predictions with high divergence from the ground-truth labels; that is, we shorten correct feature vectors and enlarge false prediction tensors, assigning more weight to those classes that lie close to each other (namely, "hard labels to learn"). In this way, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on addressing the weak convergence of the Adam optimizer on non-convex problems. Our optimizer works through an alternative gradient updating procedure with an exponentially weighted moving average function for faster convergence, and exploits weight decay to drastically reduce the learning rate near optima and reach the dominant local minimum. We demonstrate the superiority of our proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013; a 16% improvement over the first rank from 10 years earlier, reaching 90.73% on RAF-DB; and 100% k-fold average accuracy on the CK+ dataset, providing top performance compared with other networks, which require much larger training datasets.
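
To make the margin idea concrete, the following Python sketch shows a simplified fixed-margin softmax cross-entropy in PyTorch, in which the target-class logit is penalized by a margin before the softmax. This is only an illustration of the general margin mechanism; the authors' Dynamic Soft-Margin SoftMax adapts the margin and tensor shape dynamically, which is not reproduced here.

import torch
import torch.nn.functional as F

def soft_margin_cross_entropy(logits, targets, margin=0.35):
    # Subtract a margin from the target-class logit before the softmax, so the
    # model must separate classes by more than the margin to reduce the loss.
    adjusted = logits.clone()
    adjusted[torch.arange(logits.size(0)), targets] -= margin
    return F.cross_entropy(adjusted, targets)

logits = torch.randn(8, 7, requires_grad=True)   # 8 samples, 7 basic emotion classes
targets = torch.randint(0, 7, (8,))
loss = soft_margin_cross_entropy(logits, targets)
loss.backward()
print("loss:", loss.item())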

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 75
48 Scoring System for the Prognosis of Sepsis Patients in Intensive Care Units

Authors: Javier E. García-Gallo, Nelson J. Fonseca-Ruiz, John F. Duitama-Munoz

Abstract:

Sepsis is a syndrome that occurs with physiological and biochemical abnormalities induced by severe infection and carries high mortality and morbidity; therefore, the severity of the patient's condition must be assessed quickly. After patient admission to an intensive care unit (ICU), it is necessary to synthesize the large volume of information collected from the patient into a value that represents the severity of their condition. Traditional severity-of-illness scores seek to be applicable to all patient populations and usually assess in-hospital mortality. However, the use of machine learning techniques and data from a population that shares a common characteristic could lead to the development of customized mortality prediction scores with better performance. This study presents the development of a score for one-year mortality prediction of patients admitted to an ICU with a sepsis diagnosis. 5650 ICU admissions extracted from the MIMIC-III database were evaluated, divided into two groups: 70% to develop the score and 30% to validate it. Comorbidities, demographics, and clinical information from the first 24 hours after ICU admission were used to develop the mortality prediction score. LASSO (least absolute shrinkage and selection operator) and SGB (Stochastic Gradient Boosting) variable importance methodologies were used to select the set of variables that make up the developed score; each of these variables was dichotomized, and a cut-off point dividing the population into two groups with different mean mortalities was found; if the patient is in the group with higher mortality, a one is assigned to that variable, otherwise a zero. These binary variables were used in a logistic regression (LR) model, and its coefficients were rounded to the nearest integer. The resulting integers are the point values that make up the score when multiplied by each binary variable and summed. The one-year mortality probability was estimated using the score as the only variable in an LR model. The predictive power of the score was evaluated using the 1695 admissions of the validation subset, obtaining an area under the receiver operating characteristic curve of 0.7528, which outperforms the results obtained with the Sequential Organ Failure Assessment (SOFA), Oxford Acute Severity of Illness Score (OASIS) and Simplified Acute Physiology Score II (SAPS II) scores on the same validation subset. Observed and predicted mortality rates within estimated probability deciles were compared graphically and found to be similar, indicating that the risk estimate obtained with the score is close to the observed mortality. It is also observed that the number of events (deaths) increases from the decile with the lowest probabilities to the decile with the highest probabilities. Sepsis is a syndrome that carries high mortality, 43.3% for the patients included in this study; therefore, tools that help clinicians to quickly and accurately predict a worse prognosis are needed. This work demonstrates the importance of customizing mortality prediction scores, since the developed score provides better performance than traditional scoring systems.
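
The score-construction recipe described above (dichotomize each selected variable at a cut-off, fit a logistic regression on the binary indicators, round the coefficients to integer point values, then map the total score to a probability with a second logistic regression) can be sketched in Python as follows; the variables, cut-offs, and synthetic data are illustrative placeholders, not values from the MIMIC-III cohort.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
age = rng.normal(65, 15, n)
lactate = rng.gamma(2.0, 1.5, n)
creatinine = rng.gamma(2.0, 0.8, n)
risk = 0.08 * (age - 65) + 0.9 * lactate + 0.8 * creatinine - 4.0
died = rng.random(n) < 1.0 / (1.0 + np.exp(-risk))   # synthetic one-year outcome

# Step 1: dichotomize each variable at an illustrative cut-off (1 = higher-mortality group).
X_bin = np.column_stack([age > 70, lactate > 2.0, creatinine > 1.5]).astype(int)

# Step 2: logistic regression on the binary indicators; round coefficients to point values.
lr = LogisticRegression().fit(X_bin, died)
points = np.rint(lr.coef_[0]).astype(int)
score = X_bin @ points

# Step 3: a second logistic regression maps the integer score to a mortality probability.
score_model = LogisticRegression().fit(score.reshape(-1, 1), died)
max_score = int(points.sum())
print("points per variable:", points)
print("P(death | max score):", score_model.predict_proba([[max_score]])[0, 1])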

Keywords: intensive care, logistic regression model, mortality prediction, sepsis, severity of illness, stochastic gradient boosting

Procedia PDF Downloads 223
47 Using Statistical Significance and Prediction to Test Long/Short Term Public Services and Patients' Cohorts: A Case Study in Scotland

Authors: Raptis Sotirios

Abstract:

Health and social care (HSc) services planning and scheduling are facing unprecedented challenges due to pandemic pressure and also suffer from unplanned spending negatively impacted by the global financial crisis. Data-driven methods can help to improve policies and to plan and design service provision schedules, using algorithms that assist healthcare managers in facing unexpected demands with fewer resources. The paper discusses packing services together using statistical significance tests and machine learning (ML) to evaluate demand similarity and coupling. This is achieved by predicting the range of the demand (class) using ML methods such as CART, random forests (RF), and logistic regression (LGR). The Chi-squared and Student's t-tests are used on data over a 39-year span for which HSc data exist for services delivered in Scotland. The demands are probabilistically associated through statistical hypotheses that take the target service's demands being statistically dependent on other demands as the null hypothesis. This linkage can be confirmed or not by the data. Complementarily, ML methods are used to linearly predict the above target demands from the statistically found associations and to extend the linear dependence of the target's demand to independent demands, thus forming groups of services. Statistical tests confirm the ML couplings, making the prediction statistically meaningful and proving that a target service can be matched reliably to other services, while ML shows that these relationships can also be linear. Zero padding was used for missing years' records and better illustrated such relationships both for limited years and over the entire span, offering long-term data visualizations, while limited-year groups explained how well patient numbers can be related over short periods or change over time, as opposed to behaviours across more years. The prediction performance of the associations is measured using Receiver Operating Characteristic (ROC) AUC and ACC metrics, as well as the Chi-squared and Student's statistical tests. Co-plots and comparison tables for RF, CART, and LGR, as well as p-values and Information Exchange (IE), are provided, showing the specific behaviour of the ML methods and of the statistical tests, and the behaviour under different learning ratios. The impact of initial groupings by k-NN, cross-correlation, and C-Means is also studied over limited years and the entire span. It was found that CART was generally behind RF and LGR, but in some interesting cases LGR reached an AUC of 0, falling below CART, while the ACC was as high as 0.912, showing that ML methods can be confused by padding or by data irregularities or outliers. On average, three linear predictors were sufficient; LGR was found to compete well with RF, and CART followed with the same performance at higher learning ratios. Services were packed only when the significance level (p-value) of their association coefficient was more than 0.05. Relationships with social factors were observed between home care services and the treatment of older people, birth weights, alcoholism, drug abuse, and emergency admissions. The work found that different HSc services can be packed well into plans of limited years, across various service sectors and learning configurations, as confirmed using statistical hypotheses.
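
The pairing of a statistical dependence test with an ML predictor described above can be illustrated with the short Python sketch below, where two synthetic annual demand series stand in for a pair of HSc services; the data, thresholds, and in-sample evaluation are simplifying assumptions, not the study's 39-year Scottish dataset.

import numpy as np
from scipy.stats import chi2_contingency
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
years = 39
service_a = rng.poisson(200, years) + np.arange(years) * 3          # predictor service demand
service_b = np.rint(service_a * 0.4 + rng.normal(0, 10, years))     # target service demand

# Statistical step: test whether the two demand classes are dependent.
a_high = service_a > np.median(service_a)
b_high = service_b > np.median(service_b)
table = np.array([[np.sum(a_high & b_high), np.sum(a_high & ~b_high)],
                  [np.sum(~a_high & b_high), np.sum(~a_high & ~b_high)]])
chi2, p_value, _, _ = chi2_contingency(table)
print("chi-squared p-value:", p_value)

# ML step: predict the target demand class from the predictor demand.
X = service_a.reshape(-1, 1)
for model in (LogisticRegression(), RandomForestClassifier(random_state=0)):
    model.fit(X, b_high)
    auc = roc_auc_score(b_high, model.predict_proba(X)[:, 1])
    print(type(model).__name__, "in-sample AUC:", round(auc, 3))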

Keywords: class, cohorts, data frames, grouping, prediction, probability, services

Procedia PDF Downloads 236
46 A Hardware-in-the-loop Simulation for the Development of Advanced Control System Design for a Spinal Joint Wear Simulator

Authors: Kaushikk Iyer, Richard M Hall, David Keeling

Abstract:

Hardware-in-the-loop (HIL) simulation is an advanced technique for developing and testing complex real-time control systems. This paper presents the benefits of HIL simulation and how it can be implemented and used effectively to develop, test, and validate advanced control algorithms used in a spinal joint wear simulator for the tribological testing of spinal disc prostheses. The spinal wear simulator is technologically the most advanced machine currently employed for the in-vitro testing of newly developed spinal disc implants. However, the existing control techniques, such as simple position control, do not allow the simulator to test non-sinusoidal waveforms. Thus, there is a need for better and more advanced control methods that can be developed and tested rigorously but safely before being deployed into the real simulator. A benchtop HIL setup was created for experimentation, controller verification, and validation purposes, allowing different control strategies to be tested rapidly in a safe environment. The HIL simulation aspect of this setup attempts to replicate similar spinal motion and loading conditions. The spinal joint wear simulator contains a four-bar linkage powered by electromechanical actuators. LabVIEW software is used to design a kinematic model of the spinal wear simulator to validate how each link contributes towards the final motion of the implant under test. As a result, the implant articulates with an angular motion specified in the international standard ISO 18192-1, which defines fixed, simplified, and sinusoidal motion and load profiles for wear testing of cervical disc implants. Using a PID controller, a velocity-based position control algorithm was developed to interface with the benchtop setup that performs the HIL simulation. In addition to PID, a fuzzy logic controller (FLC) was also developed that acts as a supervisory controller. The FLC provides intelligence to the PID controller by automatically tuning it for profiles that vary in amplitude, shape, and frequency. This combination of fuzzy and PID control is novel for the wear-testing application in spinal simulators and demonstrated superior performance against PID alone when tested across a spectrum of frequencies. Results obtained were successfully validated against the load and motion tolerances specified by the ISO 18192-1 standard and fall within limits, that is, ±0.5° at the maxima and minima of the motion and ±2% of the complete cycle for phasing. The simulation results prove the efficacy of the test setup using HIL simulation to verify and validate the accuracy and robustness of the prospective controller before its deployment into the spinal wear simulator. This method of testing controllers enables a wide range of possibilities to test advanced control algorithms that can potentially handle even profiles of patients performing various daily living activities.
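
A structural sketch of the fuzzy-supervised PID arrangement described above is given below in Python; the first-order plant stand-in, the rule thresholds, and the gains are invented for illustration, whereas the real controller runs against the four-bar linkage hardware through LabVIEW.

import numpy as np

def supervisory_gain_scale(abs_error):
    # Crude rule-based stand-in for fuzzy inference: larger tracking errors
    # make the supervisory layer request more aggressive PID action.
    if abs_error < 0.2:
        return 0.8
    if abs_error < 1.0:
        return 1.0
    return 1.5

def simulate(kp=4.0, ki=2.0, kd=0.1, dt=0.001, t_end=2.0):
    t = np.arange(0.0, t_end, dt)
    setpoint = 5.0 * np.sin(2.0 * np.pi * 1.0 * t)    # sinusoidal angle profile in degrees
    pos, vel, integ, prev_err = 0.0, 0.0, 0.0, 0.0
    response = []
    for sp in setpoint:
        err = sp - pos
        scale = supervisory_gain_scale(abs(err))      # supervisory layer retunes the PID
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = scale * (kp * err + ki * integ + kd * deriv)
        vel += (u - 2.0 * vel) * dt                   # first-order actuator/linkage stand-in
        pos += vel * dt
        prev_err = err
        response.append(pos)
    return t, setpoint, np.array(response)

t, sp, y = simulate()
print("peak tracking error (degrees):", float(np.max(np.abs(sp - y))))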

Keywords: Fuzzy-PID controller, hardware-in-the-loop (HIL), real-time simulation, spinal wear simulator

Procedia PDF Downloads 172
45 Satellite Connectivity for Sustainable Mobility

Authors: Roberta Mugellesi Dow

Abstract:

As the climate crisis becomes unignorable, it is imperative that new services are developed addressing not only the needs of customers but also taking into account their impact on the environment. The Telecommunication and Integrated Application (TIA) Directorate of ESA is supporting the green transition with particular attention to sustainable mobility. "Accelerating the shift to sustainable and smart mobility" is at the core of the European Green Deal strategy, which seeks a 90% reduction in related emissions by 2050. Transforming the way that people and goods move is essential to increasing mobility while decreasing environmental impact, and transport must be considered holistically to produce a shared vision of green intermodal mobility. The use of space technologies, integrated with terrestrial technologies, is an enabler of smarter traffic management and increased transport efficiency for automated and connected multimodal mobility. Satellite connectivity, including future 5G networks, and digital technologies such as Digital Twin, AI, Machine Learning, and cloud-based applications are key enablers of sustainable mobility. SatCom is essential to ensure that connectivity is ubiquitously available, even in remote and rural areas or in case of a failure, through the convergence of terrestrial and SatCom connectivity networks. This is especially crucial when there are risks of network failures or cyber-attacks targeting terrestrial communication; SatCom ensures communication network robustness and resilience. The combination of terrestrial and satellite communication networks is making possible intelligent and ubiquitous V2X systems and PNT services with significantly enhanced reliability and security, hyper-fast wireless access, as well as more seamless communication coverage. SatNav is essential in providing accurate tracking and tracing capabilities for automated vehicles and in guiding them to target locations. SatNav can also enable location-based services like car-sharing applications, parking assistance, and fare payment. In addition to GNSS receivers, wireless connections, radar, lidar, and other installed sensors can enable automated vehicles to monitor their surroundings, to 'talk to each other' and with infrastructure in real time, and to respond to changes instantaneously. SatEO can be used to provide the maps required for traffic management, as well as to evaluate conditions on the ground, assess changes, and provide key data for monitoring and forecasting air pollution and other important parameters. Earth Observation-derived data are used to provide meteorological information such as wind speed and direction, humidity, and other parameters that must be fed into models contributing to traffic management services. The paper will provide examples of services and applications that have been developed with the aim of identifying innovative solutions and new business models enabled by new digital technologies that bring the space and non-space ecosystems together to deliver value and provide innovative, greener solutions in the mobility sector. Examples include connected autonomous vehicles, electric vehicles, green logistics, and others. Relevant technologies include hybrid SatCom and 5G providing ubiquitous coverage, IoT integration with non-space technologies, as well as navigation and PNT technology, and other space data.

Keywords: sustainability, connectivity, mobility, satellites

Procedia PDF Downloads 137
44 Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expression in genetic circuit designs. The main reason behind using transformers is their innate ability (the attention mechanism) to take into account the semantic context present in long DNA chains that are heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, greatly suffer from vanishing gradients and low efficiency when they sequentially process past states and compress contextual information into a bottleneck over long input sequences. In other words, these architectures are not equipped with the attention mechanisms necessary to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations of previous approaches, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with its attention mechanism. In a previous work on genetic circuit design, the traditional approaches to classification and regression, such as Random Forest, Support Vector Machines, and Artificial Neural Networks, were able to achieve reasonably high R2 accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier is not dependent on the presence of large numbers of training examples, which may be difficult to compile in many real-world gene circuit designs.
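
To illustrate the kind of attention-based sequence classifier the abstract builds on, the following Python sketch defines a small transformer encoder that classifies tokenized sequences; the vocabulary size, dimensions, and random batch are placeholders, and the actual model is a pre-trained DNABERT variant fine-tuned on genetic circuit designs rather than this from-scratch network.

import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    def __init__(self, vocab_size=4096, d_model=64, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)           # (batch, seq_len, d_model)
        x = self.encoder(x)                 # self-attention captures long-range context
        return self.head(x.mean(dim=1))     # mean-pool, then classify (e.g. high vs. low expression)

model = SequenceClassifier()
tokens = torch.randint(0, 4096, (8, 128))   # batch of 8 tokenized sequences of length 128
print(model(tokens).shape)                  # torch.Size([8, 2])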

Keywords: transformers, generative AI, gene expression design, classification

Procedia PDF Downloads 60
43 Social and Political Economy of Paid and Unpaid Work: Work of Women Home Based Workers in National Capital Region (NCR), India

Authors: Sudeshna Sengupta

Abstract:

Women’s work lives weave a complex fabric of myriad work relations and complex structures. Their lives, when seen through the lens of work, are a saga of conjugated oppression by intertwined structures that are vertically and horizontally interwoven in a very complex manner. Women interact with multiple institutions through their work. The interactions and interplay of institutions shape their organization of work. They intersperse productive work with reproductive work, unpaid economic activities with unpaid care work, and all kinds of activities with leisure and self-care. The proposed paper intends to understand how women working as home-based workers in the National Capital Region (NCR) of India are organizing their everyday work, and how the organization of work is influenced by the interplay of structures. Situating itself in a multidisciplinary theoretical framework, this paper brings out how the gendering of work is playing out in the political, economic and social domains and shaping work-life within the family and in the paid workspace. The paper uses a primary data source, which is qualitative in nature. It comprises 15 qualitative interviews of women home-based workers from the National Capital Region. The research uses a life history approach. The sampling was purposive, using snowballing as a method. The dataset is part of the primary data (qualitative) collected for the ongoing Ph.D. work in Gender Studies at Ambedkar University Delhi. The home-based workers interviewed were in "non-factory" wage relations based on piece rates with flexible working hours. Their workplaces were their own homes, with no spatial divide between living spaces and workspaces. Home-based workers were recognized as a group in the domain of labor economics in the 1980s. When menial work was cheaper than machine work, capital owners preferred to outsource work to women as home-based work. These production spaces are fragmented, and the identity of gender is created within labor processes to favor material accumulation. Both the employers and employees acknowledged the material gain of the capital owner when work was subcontracted to women at home. Simultaneously, the market reinforced women's reproductive role by conforming to patriarchal ideology. The contractors played an important role in implementing localized control over workers and also in finding workers for fragmented, gendered production processes. Their presence helped the employers bring together multiple forms of oppression, including creating a structure to flout laws through shadow employers. It created an intertwined social and economic structure as well as a workspace where the line between productive and reproductive work gets blurred. The state invisibilized itself either by keeping the sector out of the domain of laws or by not implementing its own laws regulating working conditions or social security. It allowed the local hierarchy to function and define localized working conditions. The productive-reproductive continuum reveals a form of labor control that influenced both the productive and reproductive work of women.

Keywords: informal sector, paid work, women workers, labor processes

Procedia PDF Downloads 163