Search results for: cable splicing machine
228 Private Coded Computation of Matrix Multiplication
Authors: Malihe Aliasgari, Yousef Nejatbakhsh
Abstract:
The era of Big Data and the immensity of real-life datasets compels computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck the entire computation because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large data sets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also address the problem of secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers
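As a hedged illustration of the precoding idea described above, the following Python sketch encodes the row blocks of X with a toy polynomial-style MDS code so that the product can be recovered from any m of the workers; the block counts, worker counts, and evaluation points are assumptions for illustration, and the paper's secure SGPD/PSGPD constructions are not reproduced.

```python
import numpy as np

# Toy straggler-tolerant coded matrix multiplication (only X is encoded).
m, n_workers = 4, 7              # X is split into m row blocks; any m of 7 workers suffice
X = np.random.randn(8, 6)        # 8 rows -> 4 blocks of 2 rows
Y = np.random.randn(6, 5)
blocks = np.split(X, m, axis=0)

alphas = np.arange(1, n_workers + 1, dtype=float)   # distinct evaluation points
# Encode: worker i receives sum_j X_j * alpha_i**j
encoded = [sum(Xj * a**j for j, Xj in enumerate(blocks)) for a in alphas]

# Each worker multiplies its coded block by Y; pretend workers 2 and 5 straggle
results = {i: enc @ Y for i, enc in enumerate(encoded) if i not in (2, 5)}

# Master decodes from any m completed workers by inverting a Vandermonde system
done = sorted(results)[:m]
V = np.vander(alphas[done], m, increasing=True)      # V[i, j] = alpha_i**j
stacked = np.stack([results[i] for i in done])       # shape (m, rows_per_block, 5)
decoded = np.einsum('jk,kab->jab', np.linalg.inv(V), stacked)
W_hat = np.concatenate(list(decoded), axis=0)

assert np.allclose(W_hat, X @ Y)                     # recovered despite stragglers
```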
Procedia PDF Downloads 122
227 Prevalence of Breast Cancer Molecular Subtypes at a Tertiary Cancer Institute
Authors: Nahush Modak, Meena Pangarkar, Anand Pathak, Ankita Tamhane
Abstract:
Background: Breast cancer is a prominent cause of cancer morbidity and mortality among women. This study presents a statistical analysis of a cohort of over 250 patients with breast cancer diagnosed by oncologists using immunohistochemistry (IHC). IHC was performed using ER, PR, HER2, and Ki-67 antibodies. Materials and methods: Formalin-fixed, paraffin-embedded tissue samples were obtained surgically, and the standard protocol was followed for fixation, grossing, tissue processing, embedding, cutting, and IHC. The Ventana Benchmark XT machine was used for automated IHC of the samples. Antibodies used were supplied by F. Hoffmann-La Roche Ltd. Statistical analysis was performed using SPSS for Windows. Statistical tests performed were the chi-squared test and correlation tests with p<.01. The raw data were collected and provided by the National Cancer Institute, Jamtha, India. Result: Luminal B was the most prevalent molecular subtype of breast cancer at our institute. A chi-squared test of homogeneity was performed to test equality of distribution, and Luminal B was the most prevalent molecular subtype. A worse prognosis in breast cancer depends upon the expression of Ki-67 and HER2 protein in cancerous cells. Our study was done at p<.01, and significant dependence was observed. Molecular subtype of breast cancer showed no dependence on age. Similarly, age is an independent variable when considering Ki-67 expression. A chi-squared test was performed on patients' human epidermal growth factor receptor 2 (HER2) statuses, and strong dependence was observed between the percentage of Ki-67 expression and HER2 (+/-) status, which shows that the value of Ki-67 depends upon HER2 expression in cancerous cells (p<.01). Surprisingly, dependence was also observed between Ki-67 and PR at p<.01. This shows that progesterone receptor (PR) proteins are over-expressed when there is an elevation in the expression of Ki-67 protein. Conclusion: We conclude that Luminal B is the most prevalent molecular subtype at the National Cancer Institute, Jamtha, India. No significant correlation was found between age and Ki-67 expression in any molecular subtype, and no dependence or correlation exists between patients' age and molecular subtype. We also found that, when the diagnosis is Luminal A, no patient out of the cohort of 257 shows a Ki-67 value >14%. Statistically, extremely significant values were observed for the dependence of PR+HER2- and PR-HER2+ scores on Ki-67 expression (p<.01). HER2 is an important prognostic factor in breast cancer. The chi-squared test for HER2 and Ki-67 shows that the expression of Ki-67 depends upon HER2 status. Moreover, Ki-67 cannot be used as a standalone prognostic factor for determining breast cancer.
Keywords: breast cancer molecular subtypes, correlation, immunohistochemistry, Ki-67 and HR, statistical analysis
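The chi-squared dependence tests mentioned above can be sketched as follows; the 2x2 table of HER2 status versus Ki-67 category uses invented counts, not the study's data.

```python
# Minimal sketch of a chi-squared test of independence with scipy.stats.
from scipy.stats import chi2_contingency

#                 Ki-67 <= 14%   Ki-67 > 14%   (hypothetical counts)
table = [[40, 25],    # HER2-negative
         [15, 60]]    # HER2-positive

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
if p < 0.01:
    print("Dependence between HER2 status and Ki-67 category at p < .01")
```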
Procedia PDF Downloads 123
226 The Use of Technology in Theatrical Performances as a Tool of Audience's Engagement
Authors: Chrysoula Bousiouta
Abstract:
Throughout the history of theatre, technology has played an important role both in influencing the relationship between performance and audience and in offering different kinds of experiences. The use of technology dates back to ancient times, with the introduction of artifacts such as the “deus ex machina” in ancient Greek theatre. Taking into account the key techniques and experiences used throughout history, this paper investigates how technology, through new media, influences contemporary theatre. In the context of this research, technology is defined as projections, audio environments, video-projections, sensors, and tele-connections, all alongside the performance, challenging the audience's participation. The theoretical framework of the research covers, besides the history of theatre, the theory of the “experience economy” that took over from the service and goods economy. The research is based on the qualitative and comparative analysis of two case studies, Contact Theatre in Manchester (United Kingdom) and Bios in Athens (Greece). The data collection includes desk research and is complemented with semi-structured interviews. Building on the results of the research, one could claim that the intended experience of modern/contemporary theatre is that of engagement. In this context, technology, as defined above, plays a leading role in creating it. This experience passes through and exists in the middle of the realms of entertainment, education, aestheticism, and escapism. Furthermore, it is observed that nowadays theatre is not only about acting but also about performing; it is one where the performances are unfinished without the participation of the audience. Both case studies try to achieve the experience of engagement through practices that promote the attraction of attention, the increase of imagination, interaction, intimacy, and true activity. These practices are achieved through the script, the scenery, the language, and the environment of a performance. Contact and Bios consider technology an intimate tool for accomplishing the above, and they make extended use of it. The research compiles a notable record of technological techniques that modern theatres use. The use of technology, inside or outside the limits of film techniques, helps to rivet the attention of the audience, to make performances enjoyable, to give the sense of the “unfinished”, or to be used for things that take place around the spectators and force them to take action, becoming spect-actors. The advantage of technology is that it can be used as a hook for interaction at all stages of a performance. Further research in the field could involve exploring alternative ways of binding technology and theatre or analyzing how the performance is perceived through the use of technological artifacts.
Keywords: experience of engagement, interactive theatre, modern theatre, performance, technology
Procedia PDF Downloads 250
225 Part Variation Simulations: An Industrial Case Study with an Experimental Validation
Authors: Narendra Akhadkar, Silvestre Cano, Christophe Gourru
Abstract:
Injection-molded parts are widely used in power system protection products. One of the biggest challenges in an injection molding process is shrinkage and warpage of the molded parts. All these geometrical variations may have an adverse effect on the quality of the product, functionality, cost, and time-to-market. The situation becomes more challenging in the case of intricate shapes and in mass production using multi-cavity tools. To control the effects of shrinkage and warpage, it is very important to correctly identify the input parameters that could affect the product performance. With the advances in computer-aided engineering (CAE), different tools are available to simulate the injection molding process. For our case study, we used the Moldflow Insight tool. Our aim is to predict the spread of the functional dimensions and geometrical variations of the part due to variations in input parameters such as material viscosity, packing pressure, mold temperature, melt temperature, and injection speed. The input parameters may vary during batch production or due to variations in the machine process settings. To perform an accurate product assembly variation simulation, the first step is to perform an individual part variation simulation to render realistic tolerance ranges. In this article, we present a method to simulate part variations coming from the variation of input parameters during batch production. The method is based on computer simulations and experimental validation using a full factorial design of experiments (DoE). The robustness of the simulation model is verified through a parameter-wise sensitivity analysis performed using simulations and experiments; all the results show a very good correlation in the material flow direction. There exists a non-linear interaction between the material and the input process variables. It is observed that parameters such as packing pressure, material, and mold temperature play an important role in the spread of functional dimensions and geometrical variations. This method will allow us in the future to develop accurate, realistic virtual prototypes based on trusted simulated process variation and, therefore, increase the product quality and potentially decrease the time to market.
Keywords: correlation, molding process, tolerance, sensitivity analysis, variation simulation
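A minimal sketch of a two-level full factorial DoE over the named input parameters is given below; the factor levels and the response function are placeholders, since in the study the responses come from Moldflow simulations and measured parts.

```python
# Two-level full factorial design and main-effect estimation (illustrative values).
from itertools import product
import numpy as np

factors = {
    "packing_pressure": (60.0, 80.0),     # MPa, assumed levels
    "mold_temp":        (40.0, 60.0),     # degC
    "melt_temp":        (220.0, 260.0),   # degC
    "injection_speed":  (50.0, 90.0),     # mm/s
}
runs = list(product(*factors.values()))   # 2**4 = 16 runs

def response(run):
    # Placeholder for a simulated or measured functional dimension (mm)
    p, tm, tme, v = run
    return 25.0 + 0.004 * p - 0.003 * tm - 0.002 * tme + 0.001 * v

y = np.array([response(r) for r in runs])
X = np.array(runs)

# Main effect of each factor: mean(response at high level) - mean(response at low level)
for i, name in enumerate(factors):
    high = y[X[:, i] == max(factors[name])].mean()
    low = y[X[:, i] == min(factors[name])].mean()
    print(f"{name:18s} main effect = {high - low:+.4f} mm")
```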
Procedia PDF Downloads 178
224 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network
Authors: Gulfam Haider, Sana Danish
Abstract:
Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizer profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated for various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work aims to address the challenges surrounding the effectiveness of optimizers by conducting a comprehensive analysis and evaluation. The primary focus of this investigation lies in examining the performance of different optimizers when employed in conjunction with the popular activation function, the Rectified Linear Unit (ReLU). By incorporating ReLU, known for its favorable properties in prior research, the aim is to bolster the effectiveness of the optimizers under scrutiny. Specifically, we evaluate these optimizers with both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments are conducted using a well-established benchmark dataset for image classification tasks, namely the Canadian Institute for Advanced Research dataset (CIFAR-10). The selected optimizers encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), the Adaptive Learning Rate Method (Adadelta), the Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis encompasses a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks. By conducting this in-depth study, we contribute to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings gleaned from this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state of the art in the field of image classification with convolutional neural networks.
Keywords: deep neural network, optimizers, RMSprop, ReLU, stochastic gradient descent
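The optimizer comparison can be sketched as follows with a small ReLU-based CNN on CIFAR-10; the architecture, epoch count, and hyperparameters are illustrative assumptions rather than the study's exact setup.

```python
# Train the same small CNN under several optimizers and report test accuracy.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

optimizers = {
    "adam": tf.keras.optimizers.Adam(),
    "rmsprop": tf.keras.optimizers.RMSprop(),
    "adadelta": tf.keras.optimizers.Adadelta(),
    "adagrad": tf.keras.optimizers.Adagrad(),
    "sgd": tf.keras.optimizers.SGD(),
}

for name, opt in optimizers.items():
    model = build_model()
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5, batch_size=128, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    print(f"{name}: test accuracy = {acc:.4f}")
```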
Procedia PDF Downloads 125
223 Peptide-Based Platform for Differentiation of Antigenic Variations within Influenza Virus Subtypes (Flutype)
Authors: Henry Memczak, Marc Hovestaedt, Bernhard Ay, Sandra Saenger, Thorsten Wolff, Frank F. Bier
Abstract:
The influenza viruses cause flu epidemics every year and serious pandemics at longer intervals. The only cost-effective protection against influenza is vaccination. Due to rapid mutation, new subtypes continuously appear, which requires annual reimmunization. For a correct vaccination recommendation, the circulating influenza strains have to be detected promptly and exactly and characterized with respect to their antigenic properties. During the 2016/17 flu season, a wrong vaccination recommendation was given because of the long time interval between identification of the relevant influenza vaccine strains and the outbreak of the flu epidemic during the following winter. Due to such recurring incidents of vaccine mismatches, there is a great need to speed up the process chain from identifying the right vaccine strains to their administration. The monitoring of subtypes as part of this process chain is carried out by national reference laboratories within the WHO Global Influenza Surveillance and Response System (GISRS). To this end, thousands of viruses from patient samples (e.g., throat smears) are isolated and analyzed each year. Currently, this analysis involves complex and time-intensive (several weeks) animal experiments to produce specific hyperimmune sera in ferrets, which are necessary for the determination of the antigen profiles of circulating virus strains. These tests also pose difficulties in standardization and reproducibility, which restricts the significance of the results. To replace this test, a peptide-based assay for influenza virus subtyping from corresponding virus samples was developed. The differentiation of the viruses takes place via a set of specifically designed peptidic recognition molecules which interact differently with the different influenza virus subtypes. The differentiation of influenza subtypes is performed by pattern recognition guided by machine learning algorithms, without any animal experiments. Synthetic peptides are immobilized in multiplex format on various platforms (e.g., 96-well microtiter plate, microarray). Afterwards, the viruses are incubated and analyzed, comparing different signaling mechanisms and a variety of assay conditions. Differentiation of a range of influenza subtypes, including H1N1, H3N2, and H5N1, as well as fine differentiation of single strains within these subtypes, is possible using the peptide-based subtyping platform. Thereby, the platform could be capable of replacing the current antigenic characterization of influenza strains using ferret hyperimmune sera.
Keywords: antigenic characterization, influenza-binding peptides, influenza subtyping, influenza surveillance
Procedia PDF Downloads 156
222 New Suspension Mechanism for a Formula Car using Camber Thrust
Authors: Shinji Kajiwara
Abstract:
The basic abilities of a vehicle are to “run”, “turn” and “stop”. Safety and comfort during a drive on various road surfaces and at various speeds depend on the performance of these basic abilities of the vehicle. Stability and maneuverability of a vehicle are vital in automotive engineering. Stability of a vehicle is the ability of the vehicle to revert back to a stable state during a drive when faced with crosswinds and irregular road conditions. Maneuverability of a vehicle is the ability of the vehicle to change direction swiftly during a drive based on the steering of the driver. The stability and maneuverability of a vehicle can also be defined as the driving stability of the vehicle. Since fossil-fueled vehicles are the main type of transportation today, the environmental factor in automotive engineering is also vital. By improving the fuel efficiency of the vehicle, the overall carbon emission will be reduced, thus reducing the effect of global warming and greenhouse gases on the Earth. Another main focus of automotive engineering is the safety performance of the vehicle, especially with the worrying increase of vehicle collisions every day. With better safety performance of a vehicle, every driver will be more confident driving every day. Next, let us focus on the “turn” ability of a vehicle. By improving this particular ability of the vehicle, the cornering limit of the vehicle can be improved, thus increasing the stability and maneuverability factor. In order to improve the cornering limit of the vehicle, a study to find the balance between the steering system, the stability of the vehicle, higher lateral acceleration, and the cornering limit detection must be conducted. The aim of this research is to study and develop a new suspension system that will boost the lateral acceleration of the vehicle and ultimately improve the cornering limit of the vehicle. This research will also study the environmental factor and the stability factor of the new suspension system. The double wishbone suspension system is widely used in four-wheel vehicles, especially in high cornering performance sports cars and racing cars. The double wishbone design allows the engineer to carefully control the motion of the wheel by controlling such parameters as camber angle, caster angle, toe pattern, roll center height, scrub radius, scuff, and more. The development of the new suspension system will focus on its ability to optimize camber control and to improve the camber limit during a cornering motion. The research will be carried out using a CAE analysis tool. Using this analysis tool, we will develop a JSAE Formula machine equipped with the double wishbone system and also with the new suspension system, and conduct simulations and studies on the performance of both suspension systems.
Keywords: automobile, camber thrust, cornering force, suspension
Procedia PDF Downloads 323
221 Biofilm Text Classifiers Developed Using Natural Language Processing and Unsupervised Learning Approach
Authors: Kanika Gupta, Ashok Kumar
Abstract:
Biofilms are dense, highly hydrated cell clusters that are irreversibly attached to a substratum, to an interface, or to each other, and are embedded in a self-produced gelatinous matrix composed of extracellular polymeric substances. Research in the biofilm field has become very significant, as biofilms have shown high mechanical resilience and resistance to antibiotic treatment and constitute a significant problem both in healthcare and in other industries related to microorganisms. The massive amount of information, both stated and hidden, in the biofilm literature is growing exponentially; therefore it is not possible for researchers and practitioners to automatically extract and relate information from different written resources. So, the current work proposes and discusses the use of text mining techniques for the extraction of information from a biofilm literature corpus containing 34,306 documents. It is very difficult and expensive to obtain annotated material for biomedical literature as the literature is unstructured, i.e., free text. Therefore, we considered an unsupervised approach, where no annotated training material is necessary, and using this approach we developed a system that classifies the text on the basis of growth and development, drug effects, radiation effects, classification, and physiology of biofilms. For this, a two-step structure was used, where the first step is to extract keywords from the biofilm literature using a metathesaurus and standard natural language processing tools like Rapid Miner_v5.3, and the second step is to discover relations between the genes extracted from the whole set of biofilm literature using pubmed.mineR_v1.0.11. We used an unsupervised approach, which is the machine learning task of inferring a function to describe hidden structure from 'unlabeled' data, on the above-extracted datasets to develop classifiers using WinPython-64 bit_v3.5.4.0Qt5 and R studio_v0.99.467 packages, which automatically classify the text using the mentioned sets. The developed classifiers were tested on a large data set of biofilm literature, which showed that the proposed unsupervised approach is promising as well as suited for a semi-automatic labeling of the extracted relations. The entire information was stored in a relational database which was hosted locally on the server. The generated biofilm vocabulary and gene relations will be significant for researchers dealing with biofilm research, making their search easy and efficient, as the keywords and genes can be directly mapped to the documents used for database development.
Keywords: biofilms literature, classifiers development, text mining, unsupervised learning approach, unstructured data, relational database
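A minimal sketch of the unsupervised classification step is shown below: abstracts are embedded with TF-IDF and clustered, and each cluster is summarized by its top terms. The toy documents and cluster count are assumptions; the Rapid Miner and pubmed.mineR stages of the pipeline are not reproduced.

```python
# Unsupervised clustering of biofilm-related text with TF-IDF + KMeans.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
import numpy as np

docs = [
    "biofilm growth and development on catheter surfaces",
    "antibiotic drug effects on pseudomonas biofilm matrix",
    "radiation effects on biofilm formation in reactors",
    "classification and physiology of mixed-species biofilms",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
terms = np.array(vec.get_feature_names_out())
for c in range(km.n_clusters):
    top = terms[np.argsort(km.cluster_centers_[c])[::-1][:4]]   # highest-weight terms
    print(f"cluster {c}: {', '.join(top)}")
```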
Procedia PDF Downloads 170
220 Wave Powered Airlift PUMP for Primarily Artificial Upwelling
Authors: Bruno Cossu, Elio Carlo
Abstract:
The invention (patent pending) relates to the field of devices aimed at harnessing wave energy (WEC), especially for artificial upwelling, forced downwelling, and the production of compressed air. In its basic form, the pump consists of a hydro-pneumatic machine, driven by wave energy, characterised by the fact that it has no moving mechanical parts and is made up of only two structural components: a hollow body, which is open at the bottom to the sea and partially immersed in sea water, and a tube, both joined together to form a single body. The shape of the hollow body is like a mushroom whose cap and stem are hollow; the stem is open at both ends and the lower part of its surface is crossed by holes; the tube is external and coaxial to the stem and is joined to it so as to form a single body. This shape of the hollow body and the type of connection to the tube allow the pump to operate simultaneously as an air compressor (OWC) on the cap side and as an airlift on the stem side. The pump can be implemented in four versions, each of which provides different variants and methods of implementation: 1) firstly, for the artificial upwelling of cold, deep ocean water; 2) secondly, for the lifting and transfer of these waters to the place of use (above all, fish farming plants), even if kilometres away; 3) thirdly, for the forced downwelling of surface sea water; 4) fourthly, for the forced downwelling of surface water, its oxygenation, and the simultaneous production of compressed air. The transfer of the deep water or the downwelling of the raised surface water (as for the pump versions indicated in points 2 and 3 above) is obtained by making the water raised by the airlift flow into the upper inlet of another pipe, internal or adjoined to the airlift; the downwelling of raised surface water, oxygenation, and the simultaneous production of compressed air (as for the pump version indicated in point 4) are obtained by installing a venturi tube on the upper end of the pipe, whose restricted section is connected to the external atmosphere, so that it also operates like a hydraulic air compressor (trompe). Furthermore, by combining one or more pumps for the upwelling of cold, deep water with one or more pumps for the downwelling of the warm surface water, the system can be used in an Ocean Thermal Energy Conversion plant to supply the cold and the warm water required for its operation, thus allowing the use, without increased costs and in addition to the mechanical energy of the waves, of the thermal energy of the marine water treated in the process for the purposes indicated in points 1 to 4.
Keywords: air lifted upwelling, fish farming plant, hydraulic air compressor, wave energy converter
Procedia PDF Downloads 148
219 Understanding the Qualitative Nature of Product Reviews by Integrating Text Processing Algorithm and Usability Feature Extraction
Authors: Cherry Yieng Siang Ling, Joong Hee Lee, Myung Hwan Yun
Abstract:
The quality of a product being usable has become a basic requirement from the consumer's perspective, while failing this requirement leads the customer to stop using the product. Identifying usability issues by analyzing the quantitative and qualitative data collected from usability testing and evaluation activities aids the process of product design, yet the lack of studies and research regarding analysis methodologies for qualitative text data in the usability field inhibits the potential of these data for more useful applications. At the same time, the analysis of qualitative text data has become possible with the rapid development of data analysis fields such as natural language processing, for understanding human language with computers, and machine learning, for providing predictive models and clustering tools. Therefore, this research aims to study the applicability of a text processing algorithm in the analysis of qualitative text data collected from usability activities. This research utilized datasets collected from an LG neckband headset usability experiment, in which the datasets consist of headset survey text data, subject data, and product physical data. The analysis procedure, integrated with the text-processing algorithm, includes training of comments onto a vector space, labeling them with the subject and product physical feature data, and clustering to validate the result of the comment vector clustering. The result shows 'volume and music control button' as the usability feature that matches best with the clusters of comment vectors, where the centroid comments of one cluster emphasized button positions, while the centroid comments of the other cluster emphasized button interface issues. When the volume and music control buttons are designed separately, the participants experienced less confusion, and thus the comments mentioned only the buttons' positions. In the situation where the volume and music control buttons are designed as a single button, the participants experienced interface issues regarding the buttons, such as the operating methods of functions and confusion between the functions' buttons. The relevance of the cluster centroid comments to the extracted feature demonstrates the capability of text processing algorithms in analyzing qualitative text data from usability testing and evaluations.
Keywords: usability, qualitative data, text-processing algorithm, natural language processing
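A minimal sketch of matching comment-vector clusters to a usability feature is given below: comments are embedded with TF-IDF, clustered, and each cluster centroid is compared with feature descriptions by cosine similarity. The comments and feature names are invented stand-ins for the headset survey data.

```python
# Cluster usability comments and link each cluster to its closest product feature.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

comments = [
    "volume button position is hard to reach",
    "music control button placed too far back",
    "confusing which button changes volume or skips track",
    "single button mixes functions and is confusing",
]
features = ["volume and music control button", "earbud magnet", "neckband vibration"]

vec = TfidfVectorizer()
M = vec.fit_transform(comments + features)
comment_vecs, feature_vecs = M[:len(comments)], M[len(comments):]

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(comment_vecs)
sims = cosine_similarity(km.cluster_centers_, feature_vecs)
for c, row in enumerate(sims):
    best = features[row.argmax()]
    print(f"cluster {c} best matches feature: '{best}' (cos = {row.max():.2f})")
```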
Procedia PDF Downloads 285
218 Robust Electrical Segmentation for Zone Coherency Delimitation Based on Multiplex Graph Community Detection
Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad
Abstract:
The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, due to the increasing integration of intermittent renewable energy sources, there is a growing level of uncertainty, which requires a faster, more responsive approach. A potential solution involves the use of electrical segmentation, which consists of creating coherence zones where electrical disturbances mainly remain within the zone. Indeed, by means of coherent electrical zones, it becomes possible to focus solely on the sub-zone, reducing the range of possibilities and aiding in managing uncertainty. It allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various applications, such as electrical control, minimizing electrical losses, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to the constant changes in electricity generation and consumption, which are reflected in graph structure variations as well as line flow changes. One approach to creating a resilient segmentation is to design robust zones under various circumstances. This issue can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Consequently, resilient segmentation can be achieved by conducting community detection on this multiplex graph. The multiplex graph is composed of multiple graphs, and all the layers share the same set of vertices. Our proposal involves a model that utilizes a unified representation to compute a flattening of all layers. This unified situation can be penalized to obtain (K) connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to the segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-zone electrical perturbation and low variance of electrical perturbation. The experiments show when, and in which contexts, robust electrical segmentation brings a benefit.
Keywords: community detection, electrical segmentation, multiplex graph, power grid
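The flattening idea can be sketched as follows: several situation layers over the same buses are collapsed into one weighted graph whose communities give candidate robust zones. The toy layers, weights, and the use of greedy modularity are assumptions; the paper's penalized unified representation is not reproduced.

```python
# Flatten a toy multiplex grid graph and detect coherent zones by modularity.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

buses = range(6)
layers = [
    [(0, 1, 1.0), (1, 2, 0.8), (3, 4, 1.0), (4, 5, 0.9), (2, 3, 0.1)],  # situation A
    [(0, 1, 0.9), (1, 2, 1.0), (3, 4, 0.7), (4, 5, 1.0), (0, 2, 0.6)],  # situation B
]

flat = nx.Graph()
flat.add_nodes_from(buses)
for layer in layers:                      # sum edge weights across layers
    for u, v, w in layer:
        if flat.has_edge(u, v):
            flat[u][v]["weight"] += w
        else:
            flat.add_edge(u, v, weight=w)

zones = greedy_modularity_communities(flat, weight="weight")
print([sorted(z) for z in zones])         # e.g. two coherent zones {0,1,2} and {3,4,5}
```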
Procedia PDF Downloads 79
217 Role of Grey Scale Ultrasound Including Elastography in Grading the Severity of Carpal Tunnel Syndrome - A Comparative Cross-sectional Study
Authors: Arjun Prakash, Vinutha H., Karthik N.
Abstract:
BACKGROUND: Carpal tunnel syndrome (CTS) is a common entrapment neuropathy with an estimated prevalence of 0.6-5.8% in the general adult population. It is caused by compression of the median nerve (MN) at the wrist as it passes through a narrow osteofibrous canal. Presently, the diagnosis is established by the clinical symptoms and physical examination, and a nerve conduction study (NCS) is used to assess its severity. However, it is considered to be painful, time consuming, and expensive, with a false-negative rate between 16-34%. Ultrasonography (USG) is now increasingly used as a diagnostic tool in CTS due to its non-invasive nature, increased accessibility, and relatively low cost. Elastography is a newer modality in USG which helps to assess the stiffness of tissues. However, there is limited available literature about its applications in peripheral nerves. OBJECTIVES: Our objectives were to measure the cross-sectional area (CSA) and elasticity of the MN at the carpal tunnel using grey scale ultrasonography (USG), strain elastography (SE), and shear wave elastography (SWE). We also made an attempt to independently evaluate the role of grey scale USG, SE, and SWE in grading the severity of CTS, keeping NCS as the gold standard. MATERIALS AND METHODS: After approval from the Institutional Ethics Review Board, we conducted a comparative cross-sectional study for a period of 18 months. The participants were divided into two groups. Group A consisted of 54 patients with clinically diagnosed CTS who underwent NCS, and Group B consisted of 50 controls without any clinical symptoms of CTS. All ultrasound examinations were performed on a SAMSUNG RS 80 EVO ultrasound machine with a 2-9 MHz linear probe. In both groups, the CSA of the MN was measured on grey scale USG, and its elasticity was measured at the carpal tunnel (in terms of strain ratio and shear modulus). The variables were compared between both groups using the independent t-test, and subgroup analyses were performed using one-way analysis of variance. Receiver operating characteristic curves were used to evaluate the diagnostic performance of each variable. RESULTS: The mean CSA of the MN was 13.60 ± 3.201 mm² in Group A and 9.17 ± 1.665 mm² in Group B (p < 0.001). The mean SWE was 30.65 ± 12.996 kPa in Group A and 17.33 ± 2.919 kPa in Group B (p < 0.001), and the mean strain ratio was 7.545 ± 2.017 in Group A and 5.802 ± 1.153 in Group B (p < 0.001). CONCLUSION: The combined use of grey scale USG, SE, and SWE is extremely useful in grading the severity of CTS and can be used as a painless and cost-effective alternative to NCS. Early diagnosis and grading of CTS and effective treatment are essential to avoid permanent nerve damage and functional disability.
Keywords: carpal tunnel, ultrasound, elastography, nerve conduction study
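The group comparison and diagnostic-performance steps can be sketched as follows, with an independent-samples t-test on median nerve CSA and an ROC analysis; the arrays are small invented samples, not the study's measurements.

```python
# Compare CSA between CTS patients and controls, then compute AUROC for CSA.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.metrics import roc_auc_score

csa_cts = np.array([12.5, 14.1, 13.0, 15.2, 11.8, 16.0])    # mm^2, CTS group
csa_ctrl = np.array([8.9, 9.5, 10.1, 8.2, 9.8, 10.4])        # mm^2, controls

t, p = ttest_ind(csa_cts, csa_ctrl)
print(f"t = {t:.2f}, p = {p:.4f}")

labels = np.r_[np.ones_like(csa_cts), np.zeros_like(csa_ctrl)]
auc = roc_auc_score(labels, np.r_[csa_cts, csa_ctrl])
print(f"AUROC for CSA = {auc:.2f}")
```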
Procedia PDF Downloads 101
216 Linking Soil Spectral Behavior and Moisture Content for Soil Moisture Content Retrieval at Field Scale
Authors: Yonwaba Atyosi, Moses Cho, Abel Ramoelo, Nobuhle Majozi, Cecilia Masemola, Yoliswa Mkhize
Abstract:
Spectroscopy has been widely used to understand the hyperspectral remote sensing of soils. Accurate and efficient measurement of soil moisture is essential for precision agriculture. The aim of this study was to understand the spectral behavior of soil at different soil water content levels and to identify the significant spectral bands for soil moisture content retrieval at field scale. The study consisted of 60 soil samples from a maize farm, divided into four different treatments representing different moisture levels. Spectral signatures were measured for each sample in the laboratory under artificial light using an Analytical Spectral Devices (ASD) spectrometer, covering a wavelength range from 350 nm to 2500 nm with a spectral resolution of 1 nm. The results showed that the absorption features at 1450 nm, 1900 nm, and 2200 nm were particularly sensitive to soil moisture content and exhibited strong correlations with the water content levels. A continuum removal procedure was implemented in the R programming language to enhance the absorption features of soil moisture and to precisely understand its spectral behavior at different water content levels. Statistical analysis using partial least squares regression (PLSR) models was performed to quantify the correlation between the spectral bands and soil moisture content. This study provides insights into the spectral behavior of soil at different water content levels and identifies the significant spectral bands for soil moisture content retrieval. The findings highlight the potential of spectroscopy for non-destructive and rapid soil moisture measurement, which can be applied in various fields such as precision agriculture, hydrology, and environmental monitoring. However, it is important to note that the spectral behavior of soil can be influenced by various factors such as soil type, texture, and organic matter content, and caution should be taken when applying the results to other soil systems. The results of this study showed good agreement between measured and predicted values of soil moisture content, with high R² and low root mean square error (RMSE) values. Model validation using independent data was satisfactory for all the studied soil samples. The results have significant implications for developing high-resolution and precise field-scale soil moisture retrieval models. These models can be used to understand the spatial and temporal variation of soil moisture content in agricultural fields, which is essential for managing irrigation and optimizing crop yield.
Keywords: soil moisture content retrieval, precision agriculture, continuum removal, remote sensing, machine learning, spectroscopy
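A minimal sketch of two of the steps above, a straight-line continuum removal over the 1900 nm feature and a PLSR model relating the continuum-removed spectra to moisture, is shown below; the spectra and moisture values are synthetic, and the study itself worked in R over the full 350-2500 nm range.

```python
# Synthetic spectra, simple continuum removal, and PLSR calibration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
wl = np.arange(1800, 2001)                      # nm, around the 1900 nm water feature
moisture = rng.uniform(0.05, 0.35, size=30)     # assumed volumetric water contents

# Deeper Gaussian absorption at 1900 nm for wetter samples, plus noise
spectra = (0.5
           - moisture[:, None] * np.exp(-((wl - 1900.0) ** 2) / (2 * 25.0 ** 2))
           + rng.normal(0, 0.002, (30, wl.size)))

# Continuum removal: divide by the line joining the shoulders at 1800 and 2000 nm
left, right = spectra[:, 0], spectra[:, -1]
line = left[:, None] + (right - left)[:, None] * (wl - wl[0]) / (wl[-1] - wl[0])
cr = spectra / line

pls = PLSRegression(n_components=3)
pls.fit(cr, moisture)
pred = pls.predict(cr).ravel()
rmse = np.sqrt(np.mean((pred - moisture) ** 2))
print(f"calibration RMSE = {rmse:.3f}")
```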
Procedia PDF Downloads 99
215 Spatial Pattern and Predictors of Malaria in Ethiopia: Application of Autologistic Spatial Regression
Authors: Melkamu A. Zeru, Yamral M. Warkaw, Aweke A. Mitku, Muluwerk Ayele
Abstract:
Introduction: Malaria is a severe health threat worldwide, mainly in Africa. It is a major cause of health problems, and the risk of morbidity and mortality associated with malaria cases is characterized by spatial variation across the country. This study aimed to investigate the spatial patterns and predictors of malaria distribution in Ethiopia. Methods: A weighted sample of 15,239 individuals with rapid diagnostic test results was obtained from the Central Statistical Agency and the Ethiopia Malaria Indicator Survey of 2015. Global Moran's I and Moran scatter plots were used to determine the distribution of malaria cases, whereas the local Moran's I statistic was used to identify exposed areas. In data manipulation, machine learning was used for variable reduction, and the statistical software R, Stata, and Python were used for data management and analysis. An autologistic spatial binary regression model was used to investigate the predictors of malaria. Results: The final autologistic regression model reported that male clients had a positive significant effect on malaria cases as compared to female clients [AOR=2.401, 95% CI: (2.125 - 2.713)]. The distribution of malaria across the regions was different. The highest incidence of malaria was found in Gambela [AOR=52.55, 95% CI: (40.54 - 68.12)], followed by Beneshangul [AOR=34.95, 95% CI: (27.159 - 44.963)]. In contrast, individuals in Amhara [AOR=0.243, 95% CI: (0.195 - 0.303)], Oromiya [AOR=0.197, 95% CI: (0.158 - 0.244)], Dire Dawa [AOR=0.064, 95% CI: (0.049 - 0.082)], Addis Ababa [AOR=0.057, 95% CI: (0.044 - 0.075)], Somali [AOR=0.077, 95% CI: (0.059 - 0.097)], SNNPR [AOR=0.329, 95% CI: (0.261 - 0.413)], and Harari [AOR=0.256, 95% CI: (0.201 - 0.325)] were less likely to have malaria as compared with Tigray. Furthermore, for a one-meter increase in altitude, the odds of a positive rapid diagnostic test (RDT) decrease by 1.6% [AOR = 0.984, 95% CI: (0.984 - 0.984)]. The use of a shared toilet facility was found to be a protective factor for malaria in Ethiopia [AOR=1.671, 95% CI: (1.504 - 1.854)]. The spatial autocorrelation variable changes the constant from AOR = 0.471 for logistic regression to AOR = 0.164 for autologistic regression. Conclusions: This study found that the incidence of malaria in Ethiopia has a spatial pattern associated with socio-economic, demographic, and geographic risk factors. Spatial clustering of malaria cases occurred in all regions, and the risk of clustering was different across the regions. The risk of malaria was found to be higher for those who live in houses with soil floors as compared to those who live in houses with cement or ceramic floors. Similarly, households with thatched, metal and thin, and other roof types have a higher risk of malaria than houses with ceramic tile roofs. Moreover, using a protected anti-mosquito net reduced the risk of malaria incidence.
Keywords: malaria, Ethiopia, autologistic, spatial model, spatial clustering
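The autologistic idea can be sketched as an ordinary logistic regression augmented with an autocovariate, i.e., a distance-weighted average of neighbouring RDT outcomes that absorbs spatial autocorrelation; the coordinates, covariate, and outcomes below are simulated, not the 2015 survey data.

```python
# Logistic regression with a spatial autocovariate (autologistic-style sketch).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))
altitude = rng.uniform(500, 2500, size=n)
y = (rng.random(n) < 1 / (1 + np.exp(0.004 * (altitude - 1500)))).astype(float)

# Autocovariate: inverse-distance-weighted mean of the other points' outcomes
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
w = 1.0 / (d + np.eye(n))          # avoid division by zero on the diagonal
np.fill_diagonal(w, 0.0)
autocov = (w @ y) / w.sum(axis=1)

X = sm.add_constant(np.column_stack([altitude, autocov]))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)                   # constant, altitude effect, spatial term
print(np.exp(fit.params[1]))        # adjusted odds ratio per metre of altitude
```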
Procedia PDF Downloads 34
214 Deep Reinforcement Learning Approach for Trading Automation in The Stock Market
Authors: Taylan Kabbani, Ekrem Duman
Abstract:
The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts made to generate successful deals in trading financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) offers a way to solve these drawbacks of SL approaches by combining the financial asset price "prediction" step and the "allocation" step of the portfolio in one unified process to produce fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions with each time step (dynamically re-allocate investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, or what is referred to as the agent environment, as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment. From the point of view of stock market forecasting and the intelligent decision-making mechanism, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning such as supervised learning, and proves its credibility and its advantages for strategic decision-making.
Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent
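A minimal sketch of the continuous-action trading environment is given below: the state holds the assets' latest returns, the action is a vector of portfolio weights, and the reward is the portfolio return net of a transaction cost. Prices are simulated, and the technical indicators, sentiment features, and the TD3 agent itself (which could be supplied by an off-the-shelf implementation) are omitted.

```python
# Toy continuous-action portfolio environment in the Gymnasium style.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class PortfolioEnv(gym.Env):
    def __init__(self, n_assets=3, horizon=250, cost=0.001, seed=0):
        self.n, self.horizon, self.cost = n_assets, horizon, cost
        self.rng = np.random.default_rng(seed)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(n_assets,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(n_assets,),
                                            dtype=np.float32)

    def reset(self, seed=None, options=None):
        self.t = 0
        self.weights = np.zeros(self.n)
        self.returns = self.rng.normal(0.0005, 0.01, size=(self.horizon, self.n))
        return self.returns[0].astype(np.float32), {}

    def step(self, action):
        new_w = np.clip(action, -1.0, 1.0)          # portfolio re-allocation
        turnover = np.abs(new_w - self.weights).sum()
        self.weights = new_w
        self.t += 1
        reward = float(new_w @ self.returns[self.t] - self.cost * turnover)
        done = self.t >= self.horizon - 1
        return self.returns[self.t].astype(np.float32), reward, done, False, {}

env = PortfolioEnv()
obs, _ = env.reset()
obs, r, done, _, _ = env.step(env.action_space.sample())   # one random-policy step
print(r)
```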
Procedia PDF Downloads 178
213 Case Study of Mechanised Shea Butter Production in South-Western Nigeria Using the LCA Approach from Gate-to-Gate
Authors: Temitayo Abayomi Ewemoje, Oluwamayowa Oluwafemi Oluwaniyi
Abstract:
Agriculture and food processing are among the largest industrial sectors that use large amounts of energy. Thus, a large amount of gases from their fuel combustion technologies is released into the environment. The choice of input energy supply not only directly affects the environment but also poses a threat to human health. The study was therefore designed to assess each unit production process in order to identify hotspots using the life cycle assessment (LCA) approach in South-western Nigeria. Data such as machine power rating, operation duration, and the inputs and outputs of shea butter materials for unit processes, obtained on site, were used to model the life cycle impact analysis in the GaBi6 (Holistic Balancing) software. Four scenarios were drawn up for the impact assessments: material sourcing from Kaiama in Scenarios 1 and 3 and from Minna in Scenarios 2 and 4, with different heat supply sources (liquefied petroleum gas 'LPG' in Scenarios 1 and 2 and a 10.8 kW diesel heater in Scenarios 3 and 4). Modelling of shea butter production in GaBi6 was based on a functional unit of 1 kg of shea butter produced, and the Tool for the Reduction and Assessment of Chemical and other Environmental Impacts (TRACI) midpoint assessment was the tool used to analyse the life cycle inventories of the four scenarios. Eight categories were observed in all four scenarios, out of which three impact categories, Global Warming Potential (GWP) (0.613, 0.751, 0.661, 0.799) kg CO2-Equiv., Acidification Potential (AP) (0.112, 0.132, 0.129, 0.149) kg H+ moles-Equiv., and Smog (0.044, 0.059, 0.049, 0.063) kg O3-Equiv., had the greatest impacts on the environment in Scenarios 1-4, respectively. Impacts from transportation activities were also seen to contribute strongly to these environmental impact categories due to the large volume of petrol combusted, leading to releases of gases such as CO2, CH4, N2O, SO2, and NOx into the environment during the transportation of the raw shea kernels purchased. The ratio of the transportation distance from Minna to that from Kaiama to the production site was approximately 3.5. The shea butter unit processes with the greatest impacts in all categories were packaging, milling, and churning, in ascending order of magnitude, and these were identified as hotspots that may require attention. From the 1 kg shea butter functional unit, it was inferred that locating the production site at the shortest travelling distance from raw material sourcing and combusting LPG for heating would reduce all the environmental impact categories assessed.
Keywords: GaBi6, life cycle assessment, shea butter production, TRACI
Procedia PDF Downloads 323
212 Measurement of in-situ Horizontal Root Tensile Strength of Herbaceous Vegetation for Improved Evaluation of Slope Stability in the Alps
Authors: Michael T. Lobmann, Camilla Wellstein, Stefan Zerbe
Abstract:
Vegetation plays an important role in the stabilization of slopes against erosion processes, such as shallow erosion and landslides. Plant roots reinforce the soil, increase soil cohesion, and often cross possible shear planes. Hence, plant roots reduce the risk of slope failure. Generally, shrub and tree roots penetrate deeper into the soil vertically, while roots of forbs and grasses are concentrated horizontally in the topsoil and organic layer. Therefore, shrubs and trees have a higher potential for stabilization of slopes with deep soil layers than forbs and grasses. Consequently, research has mainly focused on the vertical root effects of shrubs and trees. Nevertheless, a better understanding of the stabilizing effects of grasses and forbs is needed for a better evaluation of the stability of natural and artificial slopes with herbaceous vegetation. Despite the importance of vertical root effects, field observations indicate that horizontal root effects also play an important role in slope stabilization. Not only forbs and grasses but also some shrubs and trees form tight horizontal networks of fine and coarse roots and rhizomes in the topsoil. These root networks increase soil cohesion and horizontal tensile strength. Available methods for physical measurements, such as shear-box tests, pullout tests, and singular root tensile strength measurements, can only provide a detailed picture of the vertical effects of roots on slope stabilization. However, the assessment of horizontal root effects is largely limited to computer modeling. Here, a method for the measurement of in-situ cumulative horizontal root tensile strength is presented. A traction machine was developed that allows fixation of rectangular grass sods (max. 30x60 cm) on the short ends, with a 30x30 cm measurement zone in the middle. On two alpine grass slopes in South Tyrol (northern Italy), 30x60 cm grass sods were cut out (max. depth 20 cm). The grass sods were pulled apart, measuring the horizontal tensile strength over a 30 cm width over time. The horizontal tensile strength of the sods was measured and compared for different soil depths, hydrological conditions, and root physiological properties. The results improve our understanding of horizontal root effects on slope stabilization and can be used for improved evaluation of grass slope stability.
Keywords: grassland, horizontal root effect, landslide, mountain, pasture, shallow erosion
Procedia PDF Downloads 166
211 Gear Fault Diagnosis Based on Optimal Morlet Wavelet Filter and Autocorrelation Enhancement
Authors: Mohamed El Morsy, Gabriela Achtenová
Abstract:
Condition monitoring is used to increase machinery availability and machinery performance, whilst reducing consequential damage, increasing machine life, reducing spare parts inventories, and reducing breakdown maintenance. An efficient condition monitoring system provides early warning of faults by predicting them at an early stage. When a localized fault occurs in gears, the vibration signals always exhibit non-stationary behavior. The periodic impulsive feature of the vibration signal appears in the time domain, and the corresponding gear mesh frequency (GMF) emerges in the frequency domain. However, one limitation of frequency-domain analysis is its inability to handle non-stationary waveform signals, which are very common when machinery faults occur. Particularly at the early stage of gear failure, the GMF contains very little energy and is often overwhelmed by noise and higher-level macro-structural vibrations. An effective signal processing method is therefore necessary to remove such corrupting noise and interference. In this paper, a new hybrid method based on an optimal Morlet wavelet filter and autocorrelation enhancement is presented. First, to eliminate the frequencies associated with interferential vibrations, the vibration signal is filtered with a band-pass filter determined by a Morlet wavelet whose parameters are selected or optimized based on maximum kurtosis. Then, to further reduce the residual in-band noise and highlight the periodic impulsive feature, an autocorrelation enhancement algorithm is applied to the filtered signal. The test stand is equipped with three dynamometers; the input dynamometer serves as the internal combustion engine, and the output dynamometers induce a load on the output joint shaft flanges. The pitting defect is manufactured on the tooth side of a gear of the fifth speed on the secondary shaft. The gearbox used for the experimental measurements is of the type most commonly used in modern small to mid-sized passenger cars with a transversely mounted powertrain and front wheel drive: a five-speed gearbox with final drive gear and front wheel differential. The results obtained from practical experiments prove that the proposed method is very effective for gear fault diagnosis.
Keywords: wavelet analysis, pitted gear, autocorrelation, gear fault diagnosis
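The two-stage idea can be sketched as follows: a vibration signal is band-pass filtered with a Morlet wavelet whose scale is chosen by maximum kurtosis, and the autocorrelation of the filtered signal is then computed to highlight the periodic impulses. The signal is simulated and the parameters are illustrative, not the gearbox test-rig values.

```python
# Kurtosis-guided Morlet wavelet filtering followed by autocorrelation enhancement.
import numpy as np
from scipy.signal import fftconvolve
from scipy.stats import kurtosis

def morlet(M, s, w=6.0):
    """Complex Morlet wavelet of length M at scale s."""
    x = (np.arange(M) - (M - 1) / 2) / s
    return np.exp(1j * w * x) * np.exp(-0.5 * x**2) * np.pi**(-0.25)

fs, T = 10_000, 0.5
t = np.arange(0, T, 1 / fs)
sig = 0.5 * np.random.randn(t.size)
sig[::200] += 5.0                                 # impulses every 20 ms (fault period)

# Pick the Morlet centre frequency (scale) that maximises kurtosis of the output
w0, M = 6.0, 1024
best = None
for fc in np.linspace(500, 4000, 20):
    s = w0 * fs / (2 * np.pi * fc)                # scale for centre frequency fc
    filt = np.real(fftconvolve(sig, morlet(M, s, w=w0), mode="same"))
    k = kurtosis(filt)
    if best is None or k > best[0]:
        best = (k, fc, filt)

k, fc, filtered = best
acf = np.correlate(filtered, filtered, mode="full")[filtered.size - 1:]
acf /= acf[0]                                     # autocorrelation enhancement
peak_lag = np.argmax(acf[50:]) + 50               # skip the zero-lag region
print(f"best centre frequency {fc:.0f} Hz, ACF peak at {peak_lag / fs * 1000:.1f} ms")
```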
Procedia PDF Downloads 388
210 Knowledge Graph Development to Connect Earth Metadata and Standard English Queries
Authors: Gabriel Montague, Max Vilgalys, Catherine H. Crawford, Jorge Ortiz, Dava Newman
Abstract:
There has never been so much publicly accessible atmospheric and environmental data. The possibilities of these data are exciting, but the sheer volume of available datasets represents a new challenge for researchers. The task of identifying and working with a new dataset has become more difficult with the amount and variety of available data. Datasets are often documented in ways that differ substantially from the common English used to describe the same topics. This presents a barrier not only for new scientists but also for researchers looking to find comparisons across multiple datasets or specialists from other disciplines hoping to collaborate. This paper proposes a method for addressing this obstacle: creating a knowledge graph to bridge the gap between everyday English language and the technical language surrounding these datasets. Knowledge graph generation is already a well-established field, although there are some unique challenges posed by working with Earth data. One is the sheer size of the databases: it would be infeasible to replicate or analyze all the data stored by an organization like the National Aeronautics and Space Administration (NASA) or the European Space Agency. Instead, this approach identifies topics from metadata available for datasets in NASA's Earthdata database, which can then be used to directly request and access the raw data from NASA. By starting with a single metadata standard, this paper establishes an approach that can be generalized to different databases but leaves the challenge of metadata harmonization for future work. Topics generated from the metadata are then linked to topics from a collection of English queries through a variety of standard and custom natural language processing (NLP) methods. The results from this method are then compared to a baseline of elastic search applied to the metadata. This comparison shows the benefits of the proposed knowledge graph system over existing methods, particularly in interpreting natural language queries and interpreting topics in metadata. For the research community, this work introduces an application of NLP to the ecological and environmental sciences, expanding the possibilities of how machine learning can be applied in this discipline. But perhaps more importantly, it establishes the foundation for a platform that can enable common English to access knowledge that previously required considerable effort and experience. By making these public data accessible to the general public, this work has the potential to transform environmental understanding, engagement, and action.
Keywords: earth metadata, knowledge graphs, natural language processing, question-answer systems
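A minimal sketch of linking a common-English query to dataset metadata topics is shown below: metadata descriptions and the query are embedded with TF-IDF, and sufficiently similar datasets are linked to the query node in a small graph. The metadata snippets, query, and threshold are invented placeholders, not actual Earthdata records or the paper's NLP pipeline.

```python
# Link an English query to dataset metadata by TF-IDF similarity and store as a graph.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

metadata = {
    "MOD11A1": "land surface temperature and emissivity daily global 1km grid",
    "GPM_IMERG": "global precipitation measurement merged satellite rainfall estimates",
    "OCO2_L2": "column-averaged carbon dioxide dry air mole fraction retrievals",
}
query = "monthly precipitation over the Amazon basin"

vec = TfidfVectorizer(stop_words="english")
M = vec.fit_transform(list(metadata.values()) + [query])
doc_vecs, query_vec = M[:len(metadata)], M[len(metadata):]
sims = cosine_similarity(query_vec, doc_vecs).ravel()

kg = nx.Graph()
kg.add_node("query", text=query)
for (name, desc), s in zip(metadata.items(), sims):
    kg.add_node(name, text=desc)
    if s > 0.05:                          # assumed relevance threshold
        kg.add_edge("query", name, weight=float(s))

print(sorted(kg["query"].items(), key=lambda e: -e[1]["weight"]))
```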
Procedia PDF Downloads 148
209 Utilizing Artificial Intelligence to Predict Post Operative Atrial Fibrillation in Non-Cardiac Transplant
Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula
Abstract:
Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction for silent AF on the development of POAF within 30 days of transplant. A total of 2261 non-cardiac transplant patients without a preexisting diagnosis of AF were found to have a 5.8% (133/2261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=.80), there were differences by race and ethnicity (p<0.001 and 0.035, respectively). The incidence in white transplanted patients was 7.2% (117/1628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least one AI-ECG screen for AF pre-transplant above 0.10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1104 patients without an elevated prediction pre-transplant was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest. When the prediction for POAF was made using the first postoperative ECG in the sample without an elevated screen pre-transplant (n=1084 on account of n=20 missing postoperative ECGs), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08 the AI-ECG algorithm had a 98% (95% CI: 97-99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is rare, and a considerable fraction of the POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility for prioritizing monitoring and evaluation in transplant patients with a positive AI-ECG screen. Further development and refinement of a post-transplant-specific algorithm may be warranted to further enhance the diagnostic yield of the ECG-based screening.
Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning
Procedia PDF Downloads 136208 The Effects of Computer Game-Based Pedagogy on Graduate Students Statistics Performance
Authors: Clement Yeboah, Eva Laryea
Abstract:
A pretest-posttest within-subjects experimental design was employed to examine the effects of a computerized basic statistics learning game on the achievement and statistics-related anxiety of students enrolled in an introductory graduate statistics course. Participants (N = 34) were graduate students in a variety of programs at a state-funded research university in the southeastern United States. We analyzed pretest-posttest differences using paired-samples t-tests for achievement and for statistics anxiety. The results of the t-test for statistical knowledge were statistically significant, indicating significant mean gains in statistical knowledge as a function of the game-based intervention. Likewise, the results of the t-test for statistics-related anxiety were also statistically significant, indicating a decrease in anxiety from pretest to posttest. The implications of the present study are significant for both teachers and students. For teachers, using computer games developed by the researchers can help to create a more dynamic and engaging classroom environment, as well as improve student learning outcomes. For students, playing these educational games can help to develop important skills such as problem solving, critical thinking, and collaboration. Students can develop an interest in the subject matter and spend quality time learning the course material as they play, without realizing that they are studying a course they may have presumed to be difficult. The future directions of the present study are promising as technology continues to advance and become more widely available. Some potential future developments include the integration of virtual and augmented reality into educational games, the use of machine learning and artificial intelligence to create personalized learning experiences, and the development of new and innovative game-based assessment tools. It is also important to consider the ethical implications of computer game-based pedagogy, such as the potential for games to perpetuate harmful stereotypes and biases. As the field continues to evolve, it will be crucial to address these issues and work towards creating inclusive and equitable learning experiences for all students. This study has the potential to revolutionize the way graduate students learn basic statistics and offers exciting opportunities for future development and research. It is an important area of inquiry for educators, researchers, and policymakers and will continue to be a dynamic and rapidly evolving field for years to come.Keywords: pretest-posttest within subjects, computer game-based learning, statistics achievement, statistics anxiety
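A minimal sketch, using synthetic scores rather than the study data, of the paired-samples t-tests described above: one for statistics achievement and one for statistics-related anxiety, each comparing pretest and posttest values for the same participants.

```python
# Minimal sketch (synthetic scores, not the study data) of the two
# paired-samples t-tests: achievement gains and anxiety reduction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 34
achieve_pre = rng.normal(60, 10, n)
achieve_post = achieve_pre + rng.normal(8, 5, n)     # hypothesized gain after the game
anxiety_pre = rng.normal(55, 12, n)
anxiety_post = anxiety_pre - rng.normal(6, 4, n)     # hypothesized drop in anxiety

t_ach, p_ach = stats.ttest_rel(achieve_post, achieve_pre)
t_anx, p_anx = stats.ttest_rel(anxiety_post, anxiety_pre)
print(f"achievement: t={t_ach:.2f}, p={p_ach:.4f}")
print(f"anxiety:     t={t_anx:.2f}, p={p_anx:.4f}")
```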
Procedia PDF Downloads 77207 Analytical Validity Of A Tech Transfer Solution To Internalize Genetic Testing
Authors: Lesley Northrop, Justin DeGrazia, Jessica Greenwood
Abstract:
ASPIRA Labs now offers a self-contained, ready-to-implement technology transfer solution that enables labs and hospitals lacking the resources to build it themselves to offer in-house genetic testing. This unique platform employs a patented Molecular Inversion Probe (MIP) technology that combines the specificity of a hybrid capture protocol with the ease of an amplicon-based protocol and utilizes an advanced bioinformatics analysis pipeline based on machine learning. To demonstrate its efficacy, two independent genetic tests were validated on this technology transfer platform: expanded carrier screening (ECS) and hereditary cancer testing (HC). The analytical performance of ECS and HC was validated separately in a blinded manner for calling three different types of variants: SNVs, short indels (typically <50 bp), and large indels/CNVs defined as multi-exonic del/dup events. The reference set was constructed using samples from the Coriell Institute, an external clinical genetic testing laboratory, Maine Molecular Quality Controls Inc. (MMQCI), SeraCare, and the GIAB Consortium. Overall, the analytical performance showed a sensitivity and specificity of >99.4% for both ECS and HC in detecting SNVs. For indels, both tests reported a specificity of 100%; ECS demonstrated a sensitivity of 100%, whereas HC exhibited a sensitivity of 96.5%. The bioinformatics pipeline also correctly called all reference CNV events, resulting in a sensitivity of 100% for both tests. No additional calls were made in the HC panel, leading to perfect performance (specificity and F-measure of 100%). In the carrier panel, however, three additional positive calls were made outside the reference set. Two of these calls were confirmed using an orthogonal method and were re-classified as true positives, leaving only one false positive. The pipeline also correctly identified all challenging carrier statuses, such as positive cases for spinal muscular atrophy and alpha-thalassemia, resulting in 100% sensitivity. After confirmation of the additional positive calls via long-range PCR and MLPA, specificity for such cases was estimated at 99%. These performance metrics demonstrate that this tech-transfer solution can be confidently internalized by clinical labs and hospitals to offer mainstream ECS and HC as part of their test catalog, substantially increasing access to quality germline genetic testing for labs of all sizes and resource levels.Keywords: clinical genetics, genetic testing, molecular genetics, technology transfer
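An illustrative sketch of how the reported validation metrics follow from true/false positive and negative counts against a reference set. The counts below are placeholders, not the ASPIRA validation data.

```python
# Illustrative sketch: sensitivity, specificity, precision, and F-measure
# from validation counts (placeholder numbers, not the ASPIRA data).
def validation_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    sensitivity = tp / (tp + fn)          # recall against the reference set
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "F-measure": f_measure}

# Example: an indel call set with one unresolved false positive.
print(validation_metrics(tp=198, fp=1, tn=5000, fn=0))
```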
Procedia PDF Downloads 178206 Secondary Prisonization and Mental Health: A Comparative Study with Elderly Parents of Prisoners Incarcerated in Remote Jails
Authors: Luixa Reizabal, Inaki Garcia, Eneko Sansinenea, Ainize Sarrionandia, Karmele Lopez De Ipina, Elsa Fernandez
Abstract:
Although the effects of incarceration in prisons close to prisoners’ and their families’ residences have been studied, little is known about the effects of remote incarceration. The present study examines the impact of secondary prisonization on the mental health of elderly parents of Basque prisoners who are incarcerated in prisons located far away from the prisoners’ and their families’ residences. Secondary prisonization refers to the effects that the imprisonment of a family member has on relatives. In the study, psychological effects are analyzed by means of a comparative methodology. Specifically, levels of psychopathology (depression, anxiety, and stress) and positive mental health (psychological, social, and emotional well-being) are studied in a sample of parents over 65 years of age of prisoners incarcerated in prisons located a long distance away (concretely, some at a distance of less than 400 km and others farther than 400 km) from the Basque Country. The dataset consists of data collected through a questionnaire and from a spontaneous speech recording. The statistical and automatic analyses show that the levels of psychopathology and positive mental health of elderly parents of prisoners incarcerated in remote jails are affected by the incarceration of their sons or daughters. Concretely, these parents show higher levels of depression, anxiety, and stress and lower levels of emotional (but not psychological or social) well-being than parents with no imprisoned daughters or sons. These findings suggest that parents with imprisoned sons or daughters suffer the impact of secondary prisonization on their mental health. When comparing parents with sons or daughters incarcerated within 400 kilometers of home and parents whose sons or daughters are incarcerated farther than 400 kilometers from home, the latter present higher levels of psychopathology, but also higher levels of positive mental health (although the difference between the two groups is not statistically significant). These findings might be explained by resilience: in traumatic situations, people can develop the strength to cope with the situation and may even show posttraumatic growth. Bearing in mind all these findings, it can be concluded that secondary prisonization entails suffering for elderly parents whose sons or daughters are incarcerated in remote jails and, in consequence, that changes in the penitentiary policy applied to Basque prisoners are required in order to end this suffering.Keywords: automatic spontaneous speech analysis, elderly parents, machine learning, positive mental health, psychopathology, remote incarceration, secondary prisonization
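A minimal sketch, with synthetic scores and placeholder group sizes rather than the study data, of the kind of between-group comparison described above: depression scores of parents with versus without an imprisoned son or daughter, compared with an independent-samples (Welch) t-test.

```python
# Minimal sketch (synthetic scores, not the study data) of a between-group
# comparison of depression scores using Welch's independent-samples t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
parents_prisoner = rng.normal(14, 6, 60)     # hypothesized higher depression scores
parents_control = rng.normal(9, 5, 60)       # comparison group, no imprisoned child

t, p = stats.ttest_ind(parents_prisoner, parents_control, equal_var=False)
print(f"Welch t={t:.2f}, p={p:.4f}")
```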
Procedia PDF Downloads 287205 Development of a Reduced Multicomponent Jet Fuel Surrogate for Computational Fluid Dynamics Application
Authors: Muhammad Zaman Shakir, Mingfa Yao, Zohaib Iqbal
Abstract:
This study proposes four jet fuel surrogates (S1, S2, S3, and S4) based on a careful selection of seven large hydrocarbon fuel components, ranging from C₉ to C₁₆, of higher molecular weight and higher boiling point, matching the molecular size distribution of actual jet fuel. Each surrogate was composed of seven components: n-propyl cyclohexane (C₉H₁₈), n-propylbenzene (C₉H₁₂), n-undecane (C₁₁H₂₄), n-dodecane (C₁₂H₂₆), n-tetradecane (C₁₄H₃₀), n-hexadecane (C₁₆H₃₄), and iso-cetane (iC₁₆H₃₄). The skeletal jet fuel surrogate reaction mechanism was developed by two approaches. The first is based on a decoupling methodology, combining a C₄-C₁₆ skeletal mechanism for the oxidation of heavy hydrocarbons with a detailed H₂/CO/C₁ mechanism for predicting the oxidation of small hydrocarbons. The combined skeletal jet fuel surrogate mechanism was compressed into 128 species and 355 reactions and can thereby be used in computational fluid dynamics (CFD) simulation. Extensive validation was performed for the individual single components, including ignition delay time, species concentration profiles, and laminar flame speed, against various fundamental experiments under wide operating conditions, and for their blended mixtures. Among all the surrogates, S1 was extensively validated against experimental data from a shock tube, a rapid compression machine, a jet-stirred reactor, counterflow flames, and premixed laminar flames over wide ranges of temperature (700-1700 K), pressure (8-50 atm), and equivalence ratio (0.5-2.0) to capture the properties of the target fuel Jet-A, while the remaining three surrogates, S2, S3, and S4, were validated against shock tube ignition delay times only, to capture the ignition characteristics of the target fuels S-8 & GTL, IPK, and RP-3, respectively. Based on the newly proposed HyChem model, another four surrogates with similar components and compositions were developed, and parallel validation data were used, as for the previously developed surrogates, but at high-temperature conditions only. After testing the mechanism prediction performance of the surrogates developed by the decoupling methodology, a comparison was made with the results of the surrogates developed by the HyChem model. All four surrogates proposed in this study showed good agreement with the experimental measurements, and the study concludes that, like the decoupling methodology, the HyChem model also has great potential for the development of oxidation mechanisms for heavy alkanes because of its applicability, simplicity, and compactness.Keywords: computational fluid dynamics, decoupling methodology, HyChem, jet fuel, surrogate, skeletal mechanism
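A sketch of a constant-pressure, shock-tube-style ignition delay calculation with Cantera, of the kind used to validate surrogate mechanisms against experiments. The mechanism file name and the fuel species name below are placeholders standing in for the 128-species skeletal mechanism, not the authors' files.

```python
# Sketch of an ignition delay calculation (placeholder mechanism and species
# names). Ignition delay is taken as the time of maximum temperature rise rate.
import cantera as ct
import numpy as np

gas = ct.Solution("skeletal_jet_surrogate.yaml")           # placeholder mechanism file
gas.TP = 1100.0, 20.0 * ct.one_atm                         # within 700-1700 K, 8-50 atm
gas.set_equivalence_ratio(1.0, fuel="NC12H26",             # placeholder fuel species
                          oxidizer="O2:0.21, N2:0.79")

reactor = ct.IdealGasConstPressureReactor(gas)
sim = ct.ReactorNet([reactor])

times, temps = [], []
while sim.time < 0.05:                                     # integrate up to 50 ms
    sim.step()
    times.append(sim.time)
    temps.append(reactor.T)

dTdt = np.gradient(np.array(temps), np.array(times))
print(f"ignition delay ~ {times[int(np.argmax(dTdt))] * 1e6:.1f} microseconds")
```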
Procedia PDF Downloads 137204 Multidisciplinary Approach to Mio-Plio-Quaternary Aquifer Study in the Zarzis Region (Southeastern Tunisia)
Authors: Ghada Ben Brahim, Aicha El Rabia, Mohamed Hedi Inoubli
Abstract:
Climate change has exacerbated disparities in the distribution of water resources in Tunisia, resulting in significant degradation in quantity and quality over the past five decades. The Mio-Plio-Quaternary aquifer, the primary water source in the Zarzis region, is subject to climatic, geographical, and geological challenges, as well as human stress. The region experiences an uneven distribution of water resources and growing threats from groundwater salinity and saltwater intrusion. Addressing this challenge is critical for the arid region’s socioeconomic development, and effective water resource management is required to combat climate change and reduce water deficits. This study uses a multidisciplinary approach to determine the groundwater potential of this aquifer, combining geophysical and hydrogeological data analysis. We used advanced techniques such as 3D Euler deconvolution and power spectrum analysis to generate detailed anomaly maps and estimate the depths of density sources, identifying significant Bouguer anomalies trending E-W, NW-SE, and NE-SW. Various techniques, such as wavelength filtering, upward continuation, and horizontal and vertical derivatives, were used to enhance the gravity data, yielding consistent anomaly shapes and amplitudes. The Euler deconvolution method revealed two prominent surface faults, trending NE-SW and NW-SE, that have a significant impact on the distribution of sedimentary facies and water quality within the Mio-Plio-Quaternary aquifer. Additionally, depth maxima greater than 1400 m to the north indicate the presence of a Cretaceous paleo-fault. Geoelectrical models and resistivity pseudo-sections were used to interpret the distribution of electrical facies in the Mio-Plio-Quaternary aquifer, highlighting lateral variation and depositional environment type. AI optimises the analysis and interpretation of exploration data, which is important for long-term management and water security. Machine learning algorithms and deep learning models analyse large datasets to provide precise interpretations of subsurface conditions, such as aquifer salinisation. However, AI has limitations, such as the requirement for large datasets, the risk of overfitting, and integration issues with traditional geological methods.Keywords: mio-plio-quaternary aquifer, Southeastern Tunisia, geophysical methods, hydrogeological analysis, artificial intelligence
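A minimal sketch, on a synthetic grid rather than the Zarzis survey data, of one of the gravity enhancement steps named above: upward continuation of a gridded Bouguer anomaly by a height h, applied as a low-pass filter in the wavenumber domain.

```python
# Minimal sketch (synthetic grid, not the survey data): upward continuation of
# a Bouguer anomaly grid via the standard exp(-|k| * h) wavenumber filter.
import numpy as np

def upward_continue(grid: np.ndarray, dx: float, dy: float, h: float) -> np.ndarray:
    """Continue a gridded anomaly upward by h (same length units as dx, dy)."""
    ny, nx = grid.shape
    kx = np.fft.fftfreq(nx, d=dx)                  # cycles per unit distance
    ky = np.fft.fftfreq(ny, d=dy)
    kxx, kyy = np.meshgrid(kx, ky)
    k = 2.0 * np.pi * np.sqrt(kxx**2 + kyy**2)     # radial wavenumber (rad/unit)
    spectrum = np.fft.fft2(grid)
    return np.real(np.fft.ifft2(spectrum * np.exp(-k * h)))

# Example on a synthetic 1 km-spaced grid, continued upward by 2 km.
anomaly = np.random.default_rng(1).normal(size=(128, 128))
smoothed = upward_continue(anomaly, dx=1000.0, dy=1000.0, h=2000.0)
```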
Procedia PDF Downloads 14203 Clubhouse: A Minor Rebellion against the Algorithmic Tyranny of the Majority
Authors: Vahid Asadzadeh, Amin Ataee
Abstract:
Since the advent of social media, there has been a wave of optimism among researchers and civic activists about the influence of virtual networks on the democratization process, an optimism that has gradually waned. One of the lesser-known concerns is how to increase the possibility of hearing the voices of different minorities. According to the theory of media logic, the media, using their technological capabilities, act as a structure through which events and ideas are interpreted. Social media, through the use of machine learning and algorithms, has formed a kind of structure in which the voices of minorities and less popular topics are lost amid the commotion of the trends. In fact, the recommender systems and algorithms used in social media are designed to help promote trends and make popular content more popular, while content that belongs to minorities is constantly marginalized. As social networks gradually play a more active role in politics, the possibility of freely participating in the reproduction and reinterpretation of structures in general, and political structures in particular (as Laclau and Mouffe had in mind), can be considered a criterion of democracy in action. The point is that the media logic of virtual networks is shaped by the rule, and even the tyranny, of the majority, and this logic does not make it possible to design a self-foundational and self-revolutionary model of democracy. In other words, today's social networks, though seemingly full of variety, are governed by the logic of homogeneity and do not allow for the multiplicity found in immanent radical democracies (influenced by Gilles Deleuze). However, with the emergence and increasing popularity of Clubhouse as a new social medium, there seems to be a shift in the social media space: the diminishing role of algorithms and recommender systems as content delivery interfaces. This has meant that, in the Clubhouse, the voices of minorities are better heard and the diversity of political tendencies manifests itself more fully. The purpose of this article is to show, first, how social networks serve to marginalize minorities in general, and second, to argue that the media logic of social networks must adapt to new interpretations of democracy that give more space to minorities and human rights. Finally, the article will show how the Clubhouse serves these new interpretations of democracy, at least in a minimal way. To achieve these goals, this article uses a descriptive-analytical method: first, the relation between media logic and postmodern democracy will be examined; then, the political economy of popularity in social media and its conflict with democracy will be discussed; finally, it will be explored how the Clubhouse provides a new horizon for the concepts embodied in radical democracy, a horizon that more effectively serves the rights of minorities and human rights in general.Keywords: algorithmic tyranny, Clubhouse, minority rights, radical democracy, social media
Procedia PDF Downloads 145202 Enabling Self-Care and Shared Decision Making for People Living with Dementia
Authors: Jonathan Turner, Julie Doyle, Laura O’Philbin, Dympna O’Sullivan
Abstract:
People living with dementia should be at the centre of decision-making regarding goals for daily living. These goals include basic activities (dressing, hygiene, and mobility), advanced activities (finances, transportation, and shopping), and meaningful activities that promote well-being (pastimes and intellectual pursuits). However, there is limited involvement of people living with dementia in the design of technology to support their goals. A project is described that is co-designing intelligent computer-based support for, and with, people affected by dementia and their carers. The technology will support self-management, empower participation in shared decision-making with carers and help people living with dementia remain healthy and independent in their homes for longer. It includes information from the patient’s care plan, which documents medications, contacts, and the patient's wishes on end-of-life care. Importantly for this work, the plan can outline activities that should be maintained or worked towards, such as exercise or social contact. The authors discuss how to integrate care goal information from such a care plan with data collected from passive sensors in the patient’s home in order to deliver individualized planning and interventions for persons with dementia. A number of scientific challenges are addressed: First, to co-design with dementia patients and their carers computerized support for shared decision-making about their care while allowing the patient to share the care plan. Second, to develop a new and open monitoring framework with which to configure sensor technologies to collect data about whether goals and actions specified for a person in their care plan are being achieved. This is developed top-down by associating care quality types and metrics elicited from the co-design activities with types of data that can be collected within the home, from passive and active sensors, and from the patient’s feedback collected through a simple co-designed interface. These activities and data will be mapped to appropriate sensors and technological infrastructure with which to collect the data. Third, the application of machine learning models to analyze data collected via the sensing devices in order to investigate whether and to what extent activities outlined via the care plan are being achieved. The models will capture longitudinal data to track disease progression over time; as the disease progresses and captured data show that activities outlined in the care plan are not being achieved, the care plan may recommend alternative activities. Disease progression may also require care changes, and a data-driven approach can capture changes in a condition more quickly and allow care plans to evolve and be updated.Keywords: care goals, decision-making, dementia, self-care, sensors
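An illustrative sketch, with hypothetical goal names and sensor counts rather than the project's data model, of the top-down mapping described above: checking whether activities listed in a care plan are being achieved from sensor-derived weekly event counts.

```python
# Illustrative sketch (hypothetical goals and counts): comparing sensor-derived
# weekly activity counts against the targets listed in a care plan.
from dataclasses import dataclass

@dataclass
class CareGoal:
    name: str
    weekly_target: int          # target number of detected events per week

def review_goals(goals: list[CareGoal], weekly_counts: dict[str, int]) -> dict[str, bool]:
    """Return, for each goal, whether the sensed activity met the target."""
    return {g.name: weekly_counts.get(g.name, 0) >= g.weekly_target for g in goals}

plan = [CareGoal("outdoor walk", 3), CareGoal("social contact", 2)]
sensed = {"outdoor walk": 1, "social contact": 4}   # e.g., from door and call sensors
print(review_goals(plan, sensed))                   # flags the walking goal as not achieved
```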
Procedia PDF Downloads 170201 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence
Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang
Abstract:
Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales have been challenging tasks. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have experienced rapid development, leading to significant improvements in computational speed. However, ensuring stable and accurate long-term predictions remains a challenging task for these methods. In this study, we introduce the implicit U-net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term predictions of the nonlinear dynamics in three-dimensional (3D) turbulence. The IU-FNO model uses implicit recurrent Fourier layers to deepen the network and incorporates the U-Net architecture to accurately capture small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), a temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including vanilla FNO, implicit FNO (IFNO), and U-net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, probability density functions (PDFs) of vorticity and velocity increments, and instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models. In addition to its superior performance, the IU-FNO model offers faster computational speed compared to traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics
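A simplified sketch of the core building block referenced above: a 1D spectral (Fourier) convolution layer, applied recurrently with shared weights in the spirit of the implicit layers described for IU-FNO. This is an illustration of the general FNO idea, not the authors' 3D implementation.

```python
# Sketch of a 1D spectral (Fourier) convolution layer plus weight-shared
# recurrence, a simplified view of the implicit Fourier-layer idea.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (channels * channels)
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)                       # forward FFT along the grid
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(      # keep only the lowest modes
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))   # back to physical space

# Implicit recurrence: apply one shared Fourier layer several times to deepen
# the network without adding parameters.
layer = SpectralConv1d(channels=8, modes=12)
x = torch.randn(4, 8, 64)
for _ in range(4):
    x = torch.relu(x + layer(x))                       # residual update, shared weights
```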
Procedia PDF Downloads 74200 Assessment of Pedestrian Comfort in a Portuguese City Using Computational Fluid Dynamics Modelling and Wind Tunnel
Authors: Bruno Vicente, Sandra Rafael, Vera Rodrigues, Sandra Sorte, Sara Silva, Ana Isabel Miranda, Carlos Borrego
Abstract:
Wind comfort for pedestrians is an important condition in urban areas. In Portugal, a country with 900 km of coastline, the wind direction is predominantly from north-northwest, with an average speed of 2.3 m·s⁻¹ (at 2 m height). As a result, several city authorities have been requesting studies of pedestrian wind comfort for new urban areas and buildings, as well as measures to mitigate wind discomfort issues related to existing structures. This work evaluates the efficiency of a set of measures to reduce the wind speed in an outdoor auditorium (open space) located in a coastal Portuguese urban area. These measures include the construction of barriers, placed upstream and downstream of the auditorium, and the planting of trees upstream of the auditorium. The auditorium is constructed in the form of a porch aligned with the north direction, which drives the wind flow within the auditorium, promoting channelling effects, increasing wind speed, and causing discomfort to the users of this structure. To perform the wind comfort assessment, two approaches were used: i) a set of experiments using a wind tunnel (physical approach), with a representative mock-up of the study area; ii) application of the CFD (Computational Fluid Dynamics) model VADIS (numerical approach). Both approaches were used to simulate the baseline scenario and the scenarios considering the set of measures. The physical approach was conducted through a quantitative method, using a hot-wire anemometer, and through a qualitative analysis (visualizations), using laser technology and a fog machine. Both numerical and physical approaches were performed for three different velocities (2, 4 and 6 m·s⁻¹) and two different directions (north-northwest and south), corresponding to the prevailing wind speeds and directions of the study area. The numerical results show an effective reduction (with a maximum value of 80%) of the wind speed inside the auditorium through the application of the proposed measures. A wind speed reduction in the range of 20% to 40% was obtained around the audience area for a wind direction from north-northwest. For southern winds, in the audience zone, the wind speed was reduced by 60% to 80%. Despite that, for southern winds, the design of the barriers generated additional hot spots (high wind speed), namely at the entrance to the auditorium. Thus, a change in the location of the entrance would minimize these effects. The results obtained in the wind tunnel compared well with the numerical data, also revealing the high efficiency of the proposed measures (for both wind directions).Keywords: urban microclimate, pedestrian comfort, numerical modelling, wind tunnel experiments
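A small sketch, with illustrative velocities rather than the measured or simulated data, of how the percentage wind-speed reductions reported above are computed from baseline and mitigated-scenario velocities at the same probe locations.

```python
# Small sketch (illustrative velocities, not the study data): percentage
# wind-speed reduction between the baseline and mitigated scenarios.
import numpy as np

baseline = np.array([4.8, 5.2, 3.9, 4.4])      # m/s at probe points, no measures
mitigated = np.array([3.1, 3.4, 1.2, 0.9])     # m/s with barriers and trees

reduction_pct = 100.0 * (baseline - mitigated) / baseline
print([f"{r:.0f}%" for r in reduction_pct])
```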
Procedia PDF Downloads 230199 Ergonomic Assessment of Workplace Environment of Flour Mill Workers
Authors: Jayshree P. Zend, Ashatai B. Pawar
Abstract:
The study was carried out in the Parbhani district of Maharashtra state, India, with the objectives of studying the environmental problems faced by flour mill workers, the prevalence of work-related health hazards, and the physiological cost of performing flour mill work with the traditional method as well as with an improved method. The use of a flour presser, a dust-controlling bag, and a noise- and dust-controlling mask developed by the AICRP College of Home Science, VNMKV, Parbhani, was considered the improved method. The investigation consisted of a survey and an experiment conducted at the respective flour mill locations. Thirty healthy, non-smoking flour mill workers aged 20-50 years, comprising 16 females and 14 males, working at flour mills for 4-8 hours/day and 6 days/week and having a minimum of five years of experience in flour mill work, were selected for the study. The pulmonary function test of the flour mill workers was carried out by a trained technician at Dr. Shankarrao Chavan Government Medical College, Nanded, using an electronic spirometer. Data regarding heart rate (resting, working, and recovery), energy expenditure, musculoskeletal problems, and occupational health hazards and accidents were recorded using a pretested questionnaire. The scientific equipment used in the experiment included a Polar Sport Tester heart rate monitor, hygrometer, goniometer, dial thermometer, sound level meter, lux meter, ambient air sampler, and air quality monitor. The collected data were subjected to appropriate statistical analysis, such as the 't' test and the correlation coefficient test. Results indicated that the improved method, i.e., the use of the noise- and dust-controlling mask, flour presser, and dust-controlling bag, was effective in reducing the physiological cost of work of the flour mill workers. The lung function test of the flour mill workers showed decreased values for all parameters; hence, the results of the present study support paying attention to the use of a personal protective noise- and dust-controlling mask by flour mill workers and to the working conditions in flour mills, where ventilation and illumination levels especially need to be enhanced. The study also emphasizes the need to develop a mechanism for lifting grain loads and unloading them into the hopper. It is also suggested that flour mill workers use a flour presser suited to their height to avoid frequent bending and attach a dust-controlling bag to the flour outlet of the machine to reduce the inhalable flour dust level in the flour mill.Keywords: physiological cost, energy expenditure, musculoskeletal problems
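A minimal sketch, using synthetic values rather than the survey data, of the statistical analysis named above: a paired t-test on working heart rate under the traditional versus improved method, and a Pearson correlation between working heart rate and energy expenditure.

```python
# Minimal sketch (synthetic values, not the survey data): paired t-test across
# methods and Pearson correlation between heart rate and energy expenditure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 30
hr_traditional = rng.normal(118, 9, n)                        # beats/min, traditional method
hr_improved = hr_traditional - rng.normal(8, 4, n)            # hypothesized reduction
energy_exp = 0.05 * hr_traditional + rng.normal(0, 0.4, n)    # arbitrary units

t, p = stats.ttest_rel(hr_traditional, hr_improved)
r, p_r = stats.pearsonr(hr_traditional, energy_exp)
print(f"paired t={t:.2f} (p={p:.4f});  Pearson r={r:.2f} (p={p_r:.4f})")
```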
Procedia PDF Downloads 401