Search results for: utilizing quicklime

292 Integration of “FAIR” Data Principles in Longitudinal Mental Health Research in Africa: Lessons from a Landscape Analysis

Authors: Bylhah Mugotitsa, Jim Todd, Agnes Kiragga, Jay Greenfield, Evans Omondi, Lukoye Atwoli, Reinpeter Momanyi

Abstract:

The INSPIRE network aims to build an open, ethical, sustainable, and FAIR (Findable, Accessible, Interoperable, Reusable) data science platform, particularly for longitudinal mental health (MH) data. While studies have been done at the clinical and population levels, limitations remain in the data and research available in LMICs, posing a risk of underrepresentation of mental disorders. It is vital to examine the existing longitudinal MH data, focusing on how FAIR these datasets are. This landscape analysis aimed to provide both an overall level of evidence of the availability of longitudinal datasets and the degree of consistency in the longitudinal studies conducted. Utilizing AI prompts proved instrumental in streamlining the analysis process, facilitating access to, categorization of, and analysis of extensive data repositories related to depression, anxiety, and psychosis in Africa, as well as crafting code snippets. Leveraging artificial intelligence (AI), we filtered through over 18,000 scientific papers spanning from 1970 to 2023. This AI-driven approach enabled the identification of 228 longitudinal research papers meeting the inclusion criteria. Quality assurance revealed that 10% of the articles had been incorrectly identified, along with 2 duplicates; the remaining corpus showed that longitudinal MH research is concentrated in South Africa and focuses predominantly on depression. From the analysis, evaluating data and metadata adherence to FAIR principles remains crucial for enhancing the accessibility and quality of MH research in Africa. While AI has the potential to enhance research processes, challenges such as privacy concerns and data security risks must be addressed. Ethical and equity considerations in data sharing and reuse are also vital. There is a need for collaborative efforts across disciplinary and national boundaries to improve the Findability and Accessibility of data. Current efforts should also focus on creating integrated data resources and tools to improve the Interoperability and Reusability of MH data. Practical steps for researchers include careful study planning, data preservation, machine-actionable metadata, and promoting data reuse to advance science and improve equity. Metrics and recognition should be established to incentivize adherence to FAIR principles in MH research.
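
As a rough illustration of the kind of AI-assisted screening the abstract describes, the sketch below filters candidate abstracts with a simple text classifier. It is our illustration only: the corpus, labels, and 0.5 threshold are hypothetical placeholders, and the study's actual prompt-based tooling is not specified.

```python
# Illustrative sketch (not the authors' pipeline): screening paper abstracts
# for longitudinal mental-health studies with a TF-IDF + logistic regression
# classifier. Corpus, labels, and threshold are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy training set: 1 = meets inclusion criteria, 0 = excluded.
abstracts = [
    "A 10-year longitudinal cohort study of depression in South Africa",
    "Cross-sectional survey of hypertension prevalence in adults",
    "Prospective follow-up of anxiety and psychosis among adolescents",
    "A review of agricultural irrigation techniques",
]
labels = [1, 0, 1, 0]

screener = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
screener.fit(abstracts, labels)

candidates = ["Longitudinal study of depressive symptoms in Kenya, 1990-2020"]
probs = screener.predict_proba(candidates)[:, 1]
# Keep papers above a screening threshold for manual quality assurance,
# mirroring the 10% error check described in the abstract.
included = [c for c, p in zip(candidates, probs) if p > 0.5]
print(included)
```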

Keywords: longitudinal mental health research, data sharing, fair data principles, Africa, landscape analysis

Procedia PDF Downloads 82
291 Personalized Infectious Disease Risk Prediction System: A Knowledge Model

Authors: Retno A. Vinarti, Lucy M. Hederman

Abstract:

This research describes a knowledge model for a system which gives personalized alerts to users about infectious disease risks in the context of weather, location, and time. The knowledge model is based on established epidemiological concepts augmented by information gleaned from infection-related data repositories. Existing disease risk prediction research has focused more on utilizing raw historical data, yielding seasonal patterns of infectious disease risk emergence. This research incorporates both data and epidemiological concepts gathered from the Atlas of Human Infectious Disease (AHID) and the Centers for Disease Control (CDC) as the basis for reasoning in infectious disease risk prediction. Using the CommonKADS methodology, the disease risk prediction task is modelled as an assignment-type synthetic task, proceeding from knowledge identification through specification and refinement to implementation. First, knowledge is gathered from AHID, primarily from the epidemiology and risk group chapters for each infectious disease. The result of this stage is five major elements (Person, Infectious Disease, Weather, Location and Time) and their properties. At the knowledge specification stage, the initial tree model of each element and detailed relationships are produced. This research also includes a validation step as part of knowledge refinement: on the basis that the best model is formed using the most common features, Frequency-based Selection (FBS) is applied. The portion of the Infectious Disease risk model relating to Person comes out strongest, with Location next, and Weather weaker. For the Person attribute, Age is the strongest, Activity and Habits are moderate, and Blood type is weakest. For the Location attribute, the General category (e.g., continents, regions, countries, and islands) comes out much stronger than the Specific category (i.e., terrain features). For the Weather attribute, the Less Precise category (i.e., season) comes out stronger than the Precise category (i.e., exact temperature or humidity intervals). However, given that some infectious diseases are significantly more serious than others, a frequency-based metric may not be appropriate. Future work will incorporate epidemiological measurements of disease seriousness (e.g., odds ratio, hazard ratio, and fatality rate) into the validation metrics. This research is limited to modelling existing knowledge about epidemiology and chain-of-infection concepts. A further step, verification in the knowledge refinement stage, might cause some minor changes to the shape of the tree.
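
The Frequency-based Selection step lends itself to a short sketch: keep the features that occur most often across disease descriptions. A minimal version, with hypothetical feature lists standing in for the AHID/CDC knowledge:

```python
# Minimal sketch of Frequency-based Selection (FBS) as described: the most
# common features across disease descriptions are retained in the model.
# The feature occurrence data below are hypothetical, not from AHID/CDC.
from collections import Counter

disease_features = {
    "influenza": ["age", "season", "region", "activity"],
    "malaria":   ["age", "region", "season", "terrain"],
    "cholera":   ["age", "region", "habits"],
    "dengue":    ["season", "region", "age", "exact_temperature"],
}

counts = Counter(f for feats in disease_features.values() for f in feats)
n = len(disease_features)

# Keep features that occur in at least half of the diseases.
selected = [f for f, c in counts.items() if c / n >= 0.5]
print(sorted(selected, key=lambda f: -counts[f]))  # e.g. age, region, season
```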

Keywords: epidemiology, knowledge modelling, infectious disease, prediction, risk

Procedia PDF Downloads 240
290 Double Wishbone Pushrod Suspension Systems Co-Simulation for Racing Applications

Authors: Suleyman Ogul Ertugrul, Mustafa Turgut, Serkan Inandı, Mustafa Gorkem Coban, Mustafa Kıgılı, Ali Mert, Oguzhan Kesmez, Murat Ozancı, Caglar Uyulan

Abstract:

In high-performance automotive engineering, the realistic simulation of suspension systems is crucial for enhancing vehicle dynamics and handling. This study focuses on the double wishbone suspension system, prevalent in racing vehicles due to its superior control and stability characteristics. Utilizing MATLAB and Adams Car simulation software, we conduct a comprehensive analysis of displacement behaviors and damper sizing under various dynamic conditions. The initial phase involves using MATLAB to simulate the entire suspension system, allowing for the preliminary determination of damper size based on the system's response under simulated conditions. Following this, manual calculations of wheel loads are performed to assess the forces acting on the front and rear suspensions during scenarios such as braking, cornering, maximum vertical loads, and acceleration. Further dynamic force analysis is carried out using MATLAB Simulink, focusing on the interactions between suspension components during key movements such as bumps and rebounds. This simulation helps in formulating precise force equations and in calculating the stiffness of the suspension springs. To enhance the accuracy of our findings, we focus on a detailed kinematic and dynamic analysis. This includes the creation of kinematic loops, derivation of relevant equations, and computation of Jacobian matrices to accurately determine damper travel and compression metrics. The calculated spring stiffness is crucial in selecting appropriate springs to ensure optimal suspension performance. To validate and refine our results, we replicate the analyses using the Adams Car software, renowned for its detailed handling of vehicular dynamics. The goal is to achieve a robust, reliable suspension setup that maximizes performance under the extreme conditions encountered in racing scenarios. This study exemplifies the integration of theoretical mechanics with advanced simulation tools to achieve a high-performance suspension setup that can significantly improve race car performance, providing a methodology that can be adapted for different types of racing vehicles.
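
One step the abstract describes, selecting spring stiffness from wheel loads and suspension geometry, can be illustrated with a standard back-of-envelope calculation: the wheel rate follows from a target ride frequency, and the spring rate from the pushrod motion ratio. All values below are hypothetical placeholders, not the study's numbers:

```python
# Illustrative back-of-envelope calculation (not the authors' MATLAB model):
# sizing a coil spring from a target ride frequency and the pushrod motion
# ratio. All numbers are hypothetical placeholders for an FSAE-class car.
import math

corner_mass = 70.0      # sprung mass at one corner, kg (assumed)
ride_freq = 3.0         # target ride frequency, Hz (typical racing value)
motion_ratio = 0.85     # wheel travel -> spring travel (assumed)

# Wheel-centre rate for the desired natural frequency: k = (2*pi*f)^2 * m
wheel_rate = (2 * math.pi * ride_freq) ** 2 * corner_mass   # N/m

# Spring rate scales with the inverse square of the motion ratio.
spring_rate = wheel_rate / motion_ratio ** 2                # N/m

print(f"wheel rate:  {wheel_rate / 1000:.1f} N/mm")
print(f"spring rate: {spring_rate / 1000:.1f} N/mm")
```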

Keywords: FSAE, suspension system, Adams Car, kinematic

Procedia PDF Downloads 46
289 Improving Low English Oral Skills of 5 Second-Year English Major Students at Debark University

Authors: Belyihun Muchie

Abstract:

This study investigates the low English oral communication skills of 5 second-year English major students at Debark University. It aims to identify the key factors contributing to their weaknesses and propose effective interventions to improve their spoken English proficiency. Mixed-methods research will be employed, utilizing observations, questionnaires, and semi-structured interviews to gather data from the participants. To clearly identify these factors, structured and informal observations will be employed; the former will be used to assess fluency, pronunciation, vocabulary use, and grammar accuracy, while the latter will be suited to observing the natural interactions and communication patterns of learners in the classroom setting. The questionnaires will assess the students' self-perceptions of their skills, perceived barriers to fluency, and preferred learning styles. Interviews will delve deeper into their experiences and explore specific obstacles faced in oral communication. Data analysis will involve both quantitative and qualitative methods: the structured observation and questionnaire data will be analyzed quantitatively, whereas the informal observation notes and interview transcripts will be analyzed thematically. Findings will be used to identify the major causes of low oral communication skills, such as limited vocabulary, grammatical errors, pronunciation difficulties, or lack of confidence, and to develop targeted solutions addressing these causes, such as intensive pronunciation practice, conversation simulations, personalized feedback, or anxiety-reduction techniques. Finally, the findings will guide the design of an intervention plan for implementation during the action research phase. The study's outcomes are expected to provide valuable insights into the challenges faced by English major students in developing oral communication skills, contribute to the development of evidence-based interventions for improving spoken English proficiency in similar contexts, and offer practical recommendations for English language instructors and curriculum developers to enhance student learning outcomes. By addressing the specific needs of these students and implementing tailored interventions, this research aims to bridge the gap between theoretical knowledge and practical speaking ability, equipping them with the confidence and skills to flourish in English communication settings.

Keywords: oral communication skills, mixed-methods, evidence-based interventions, spoken English proficiency

Procedia PDF Downloads 47
288 Best Practice for Post-Operative Surgical Site Infection Prevention

Authors: Scott Cavinder

Abstract:

Surgical site infections (SSI) are a known complication of any surgical procedure and are one of the most common nosocomial infections. Globally, an estimated 300 million surgical procedures take place annually, and an estimated 11 of every 100 surgical patients develop an infection within 30 days after surgery. The specific purpose of the project is to address the PICOT (Problem, Intervention, Comparison, Outcome, Time) question: In patients who have undergone cardiothoracic or vascular surgery (P), does implementation of a post-operative care bundle based on current EBP (I), as compared to current clinical agency practice standards (C), result in a decrease of SSI (O) over a 12-week period (T)? Synthesis of Supporting Evidence: A literature search of five databases, including citation chasing, was performed, which yielded fourteen pieces of evidence ranging from high to good quality. Four common themes were identified for the prevention of SSIs: use and removal of surgical dressings; use of topical antibiotics and antiseptics; implementation of evidence-based care bundles; and implementation of surveillance through auditing and feedback. The Iowa Model was selected as the framework to help guide this project, as it is a multiphase change process which encourages clinicians to recognize opportunities for improvement in healthcare practice. Practice/Implementation: The process for this project will include recruiting post-surgical participants who have undergone cardiovascular or thoracic surgery prior to discharge at a Northwest Indiana hospital. The patients will receive education, verbal instruction, and return demonstration. The patients will be followed for 12 weeks, with wounds assessed utilizing the National Healthcare Safety Network/Centers for Disease Control (NHSN/CDC) assessment tool and compared to the SSI rate of 2021. Key stakeholders will include two cardiovascular surgeons, four physician assistants, two advanced practice nurses, a medical assistant, and patients. Method of Evaluation: Chi-square analysis will be utilized to establish statistical significance and similarities between the two groups. Main Results/Outcomes: The proposed outcome is the prevention of SSIs in post-operative cardiothoracic and vascular patients. Implication/Recommendation(s): Implementation of standardized post-operative care bundles for the prevention of SSI in cardiovascular and thoracic surgical patients.
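
The proposed Chi-square evaluation can be sketched in a few lines. This is a hedged illustration only: the 2x2 contingency table is invented to show the mechanics, and no study data are implied:

```python
# Hedged sketch of the proposed Chi-square comparison: SSI counts in the
# bundle group vs. the 2021 baseline. The 2x2 table below is made up purely
# to show the mechanics of the test.
from scipy.stats import chi2_contingency

#                 infected   not infected
table = [[3, 47],    # post-operative care bundle (hypothetical counts)
         [9, 41]]    # 2021 standard-practice baseline (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
# p < 0.05 would suggest the bundle's SSI rate differs from the baseline.
```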

Keywords: cardiovascular, evidence based practice, infection, post-operative, prevention, thoracic, surgery

Procedia PDF Downloads 77
287 Determinants of Youth Engagement with Health Information on Social Media Platforms in United Arab Emirates

Authors: Niyi Awofeso, Yunes Gaber, Moyosola Bamidele

Abstract:

Since most social media platforms are accessible anytime and anywhere where Internet connections and smartphones are available, the invisibility of the reader raises questions about the accuracy, appropriateness, and comprehensibility of social media communication. Furthermore, the identity and motives of individuals and organizations who post articles on social media sites are not always transparent. In the health sector, socially networked platforms constitute a common source of health-related information, given their purported wealth of information. Nevertheless, fake blogs and sponsored postings marketing 'natural cures' pervade most commonly used social media platforms, thus complicating readers' ability to access and understand trustworthy health-related information. This purposive sampling study of 120 participants aged 18-35 years in the UAE was conducted between September and December 2017 and explored commonly used social media platforms, frequency of use of social media for accessing health-related information, and approaches for assessing the trustworthiness of health information on social media platforms. Results indicate that WhatsApp (95%), Instagram (87%), and YouTube (82%) were the most commonly used social media platforms among respondents. The majority of respondents (81%) indicated that they regularly access social media to get health-associated information. More than half of respondents (55%) with non-chronic health status relied on unsolicited messages to obtain health-related information. Doctors' health blogs (21%) and the social media sites of international healthcare organizations (20%) constituted the most trusted sources of health information among respondents, with UAE government health agencies' social media accounts trusted by 15% of respondents. Cardiovascular diseases, diabetes, and hypertension were the most commonly searched topics on social media (29%), followed by nutrition (20%) and skin care (16%). A plurality of respondents (41%) relied on the ranking of hits on Google search, 22% checked for health information only from 'reliable' social media sites, while 8% utilized 'logic' to ascertain the reliability of health information. As social media has rapidly become an integral part of the health landscape, it is important that health care policy makers, healthcare providers, and social media companies collaborate to promote the positive aspects of social media for young people, whilst mitigating the potential negatives. Utilizing popular social media platforms for posting reader-friendly health information will achieve high coverage. Improving youth digital literacy will facilitate easier access to trustworthy information on the internet.

Keywords: social media, United Arab Emirates, youth engagement, digital literacy

Procedia PDF Downloads 117
286 TARF: Web Toolkit for Annotating RNA-Related Genomic Features

Authors: Jialin Ma, Jia Meng

Abstract:

Genomic features, i.e., genome-based coordinates, are commonly used for the representation of biological features such as genes, RNA transcripts, and transcription factor binding sites. For the analysis of RNA-related genomic features, such as RNA modification sites, a common task is to correlate these features with transcript components (5'UTR, CDS, 3'UTR) to explore their distribution characteristics in terms of transcriptomic coordinates, e.g., to examine whether a specific type of biological feature is enriched near transcription start sites. Existing approaches for performing these tasks involve the manipulation of a gene database, conversion from genome-based to transcript-based coordinates, and visualization methods capable of showing RNA transcript components and the distribution of the features. These steps are complicated and time-consuming, especially for researchers who are not familiar with the relevant tools. To overcome this obstacle, we developed a dedicated web app, TARF, which stands for web toolkit for annotating RNA-related genomic features. The TARF web tool is intended to provide a web-based way to easily annotate and visualize RNA-related genomic features. Once a user has uploaded features in BED format and specified a built-in transcript database or uploaded a customized gene database in GTF format, the tool fulfills its three main functions. First, it annotates features with gene and RNA transcript components. For every feature provided by the user, overlaps with RNA transcript components are identified, and the information is combined into one table available for copy and download. Summary statistics on ambiguous assignments are also produced. Second, the tool provides a convenient visualization of the features at the single gene/transcript level. For a selected gene, the tool shows the features together with the gene model in a genome-based view, and also maps the features to transcript-based coordinates, showing their distribution along a single spliced RNA transcript. Third, a global transcriptomic view of the genomic features is generated utilizing the Guitar R/Bioconductor package. The distribution of features on RNA transcripts is normalized with respect to RNA transcript landmarks, and the enrichment of the features on different RNA transcript components is demonstrated. We tested the newly developed TARF toolkit with 3 different types of genomic features related to chromatin H3K4me3, RNA N6-methyladenosine (m6A), and RNA 5-methylcytosine (m5C), obtained from ChIP-Seq, MeRIP-Seq, and RNA BS-Seq data, respectively. TARF successfully revealed their respective distribution characteristics, i.e., H3K4me3, m6A, and m5C are enriched near transcription start sites, stop codons, and 5'UTRs, respectively. Overall, TARF is a useful web toolkit for the annotation and visualization of RNA-related genomic features and should help simplify the analysis of various RNA-related genomic features, especially those related to RNA modifications.
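
The core annotation step, assigning each uploaded feature to the transcript component(s) it overlaps, can be sketched as a simple interval-overlap test. This is our illustration of the logic, not TARF's source code; coordinates are hypothetical and 0-based half-open as in BED:

```python
# Minimal sketch of the core annotation step (our illustration, not the
# tool's source): assign a BED-style feature to the transcript component(s)
# it overlaps. Coordinates are hypothetical, 0-based half-open as in BED.
transcript_components = {          # genome coordinates of one transcript
    "5'UTR": (1000, 1200),
    "CDS":   (1200, 3400),
    "3'UTR": (3400, 4000),
}

def annotate(feature_start, feature_end):
    """Return every transcript component the feature overlaps."""
    hits = [name for name, (s, e) in transcript_components.items()
            if feature_start < e and feature_end > s]
    # More than one hit marks an ambiguous assignment, which TARF reports
    # in its summary statistics.
    return hits

print(annotate(1150, 1250))   # ["5'UTR", "CDS"] -> ambiguous
print(annotate(3500, 3520))   # ["3'UTR"]
```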

Keywords: RNA-related genomic features, annotation, visualization, web server

Procedia PDF Downloads 204
285 Technology for Good: Deploying Artificial Intelligence to Analyze Participant Response to Anti-Trafficking Education

Authors: Ray Bryant

Abstract:

3Strands Global Foundation (3SGF), a non-profit with a mission to mobilize communities to combat human trafficking through prevention education and reintegration programs, launched a groundbreaking study that demonstrates the usage and benefits of artificial intelligence in the war against human trafficking. Having gathered more than 30,000 stories from counselors and school staff who have gone through its PROTECT Prevention Education program, 3SGF sought to develop a methodology to measure the effectiveness of the training, which helps educators and school staff identify physical signs and behaviors indicating a student is being victimized. The program further illustrates how to recognize and respond to trauma and teaches the steps to take to report human trafficking, as well as how to connect victims with the proper professionals. 3SGF partnered with Levity, a leader in no-code Artificial Intelligence (AI) automation, to create the research study utilizing natural language processing, a branch of artificial intelligence, to measure the effectiveness of their prevention education program. By applying the logic created for the study, the platform analyzed and categorized each story. If the story, directly from the educator, demonstrated one or more of the desired outcomes (Increased Awareness, Increased Knowledge, or Intended Behavior Change), a label was applied. The system then added a confidence level for each identified label. The study results were generated with a 99% confidence level. Preliminary results from the 30,000 stories gathered make it overwhelmingly clear that a significant majority of the participants now have increased awareness of the issue, demonstrated better knowledge of how to help prevent the crime, and expressed an intention to change how they approach what they do daily. In addition, approximately 30% of the stories involved comments by educators expressing that they wish they had had this knowledge sooner, as they can think of many students they would have been able to help. Objectives of Research: To solve the problem of needing to analyze and accurately categorize more than 30,000 data points of participant feedback in order to evaluate the success of a human trafficking prevention program, using AI and Natural Language Processing. Methodologies Used: In conjunction with our strategic partner, Levity, we created our own NLP analysis engine specific to our problem. Contributions to Research: The intersection of AI and human rights, and how to utilize technology to combat human trafficking.
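
The labelling logic described, applying outcome labels with a confidence score, can be approximated with an off-the-shelf multi-label text classifier. The study used Levity's no-code platform, so the sketch below is only an analogy; the training stories and labels are invented:

```python
# Illustrative analogy to the labelling logic described (the actual study
# used Levity's no-code platform): a multi-label classifier tags each story
# with outcome labels plus a confidence score. Training examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

stories = [
    "I never realized trafficking happens in our own school district.",
    "I now know the warning signs and how to file a report.",
    "I will be watching my students much more closely from now on.",
]
outcomes = [{"Increased Awareness"},
            {"Increased Knowledge"},
            {"Intended Behavior Change"}]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(outcomes)
clf = OneVsRestClassifier(make_pipeline(TfidfVectorizer(), LogisticRegression()))
clf.fit(stories, y)

new_story = ["After the training I understand how to report a suspected case."]
probs = clf.predict_proba(new_story)[0]
for label, p in zip(mlb.classes_, probs):
    print(f"{label}: confidence {p:.2f}")  # apply label if p clears a threshold
```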

Keywords: AI, technology, human trafficking, prevention

Procedia PDF Downloads 56
284 Optimizing Production Yield Through Process Parameter Tuning Using Deep Learning Models: A Case Study in Precision Manufacturing

Authors: Tolulope Aremu

Abstract:

This paper is based on the idea of using deep learning methodology to optimize production yield by tuning a few key process parameters in a manufacturing environment. The study focused explicitly on how to maximize production yield and minimize operational costs by utilizing advanced neural network models, specifically Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs), implemented using the Python-based frameworks TensorFlow and Keras. The targets of the research are precision molding processes in which the temperature ranges between 150°C and 220°C, the pressure ranges between 5 and 15 bar, and the material flow rate ranges between 10 and 50 kg/h; these are critical parameters that have a great effect on yield. A dataset of 1 million production cycles spanning five continuous years was considered, with detailed logs showing the exact parameter settings and yield output. The LSTM modeled time-dependent trends in the production data, while the CNN analyzed spatial correlations between parameters. Both models were designed in a supervised learning manner, trained with an MSE loss function optimized through the Adam optimizer. After a total of 100 training epochs, the models achieved 95% accuracy in recommending optimal parameter configurations. Results indicated an increase in production yield of 12% compared with traditional methods such as response surface methodology (RSM) and design of experiments (DOE). In addition, the error margin was reduced by 8%, yielding consistently high-quality products from the deep learning models. The monetary value amounted to around $2.5 million annually in costs saved from material waste, energy consumption, and equipment wear as a result of implementing the optimized process parameters. The system was deployed in an industrial production environment on a hybrid cloud setup: Microsoft Azure for data storage, with model training and deployment performed on Google Cloud AI. Real-time process monitoring and automatic parameter tuning depend on this cloud infrastructure. To put it into perspective, deep learning models, especially those employing LSTM and CNN, optimize production yield by fine-tuning process parameters. Future research will consider reinforcement learning with a view to further enhancing system autonomy and scalability across various manufacturing sectors.
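
A minimal Keras sketch consistent with the setup described (an LSTM over windows of process parameters, MSE loss, Adam optimizer) is given below. Layer sizes, the window length, and the synthetic data are placeholders, not the paper's configuration:

```python
# Minimal Keras sketch consistent with the setup described (LSTM on
# time-ordered process parameters, MSE loss, Adam optimizer). Layer sizes,
# window length, and data are placeholders, not the paper's configuration.
import numpy as np
from tensorflow.keras import layers, models

WINDOW, N_PARAMS = 20, 3   # 20 past cycles; temperature, pressure, flow rate

model = models.Sequential([
    layers.Input(shape=(WINDOW, N_PARAMS)),
    layers.LSTM(64),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                # predicted yield for the next cycle
])
model.compile(optimizer="adam", loss="mse")

# Synthetic stand-in data: parameters scaled to [0, 1], yield as a fraction.
X = np.random.rand(1000, WINDOW, N_PARAMS).astype("float32")
y = np.random.rand(1000, 1).astype("float32")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# A tuner can then search parameter settings that maximize predicted yield.
print(model.predict(X[:1], verbose=0))
```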

Keywords: production yield optimization, deep learning, tuning of process parameters, LSTM, CNN, precision manufacturing, TensorFlow, Keras, cloud infrastructure, cost saving

Procedia PDF Downloads 3
283 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging

Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen

Abstract:

Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900-1700 nm were used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests, where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The first dataset poses a protein regression problem, while the second poses a variety classification problem. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required by classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and the results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested, including centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
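
Two of the chemometric preprocessing steps named, standard normal variate (SNV) and Savitzky-Golay (SG) filtering, are compact enough to sketch directly; array shapes below are illustrative:

```python
# Hedged sketch of two preprocessing steps named in the abstract, standard
# normal variate (SNV) and Savitzky-Golay (SG) filtering, applied to NIR
# spectra. Array shapes and filter settings are illustrative.
import numpy as np
from scipy.signal import savgol_filter

spectra = np.random.rand(100, 224)   # 100 pixels x 224 NIR bands (placeholder)

# SNV: centre and scale each spectrum individually to remove scatter effects.
snv = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

# SG filtering: smooth (or differentiate) each spectrum with a local
# polynomial fit; window length and polynomial order are tuning choices.
smoothed = savgol_filter(snv, window_length=11, polyorder=2, axis=1)
print(smoothed.shape)
```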

Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques

Procedia PDF Downloads 97
282 Computational Characterization of Electronic Charge Transfer in Interfacial Phospholipid-Water Layers

Authors: Samira Baghbanbari, A. B. P. Lever, Payam S. Shabestari, Donald Weaver

Abstract:

Existing signal transmission models, although undoubtedly useful, have proven insufficient to explain the full complexity of information transfer within the central nervous system. The development of transformative models will necessitate a more comprehensive understanding of neuronal lipid membrane electrophysiology. Pursuant to this goal, the role of highly organized interfacial phospholipid-water layers emerges as a promising case study. A series of phospholipids found in neural-glial gap junction interfaces, as well as cholesterol molecules, have been computationally modelled using high-performance density functional theory (DFT) calculations. Subsequent 'charge decomposition analysis' calculations have revealed a net transfer of charge from phospholipid orbitals through the organized interfacial water layer before ultimately finding its way to cholesterol acceptor molecules. The specific pathway of charge transfer from phospholipids via water layers towards cholesterol has been mapped in detail. Cholesterol is an essential membrane component that is overrepresented in neuronal membranes as compared to other mammalian cells; given this relative abundance, its apparent role as an electronic acceptor may prove to be a relevant factor in further signal transmission studies of the central nervous system. The timescales over which this electronic charge transfer occurs have also been evaluated by utilizing a system design that systematically increases the number of water molecules separating lipids and cholesterol. Memory loss through hydrogen-bonded networks in water can occur at femtosecond timescales, whereas existing action potential-based models are limited to micro- or nanosecond scales. As such, the development of future models that attempt to explain faster-timescale signal transmission in the central nervous system may benefit from our work, which provides additional information regarding fast-timescale energy transfer mechanisms occurring through interfacial water. The study's dataset includes six distinct phospholipids and a collection of cholesterol molecules. Ten optimized geometric characteristics (features) were employed to conduct binary classification through an artificial neural network (ANN), differentiating cholesterol from the various phospholipids. This stems from our understanding that all lipids within the first group function as electronic charge donors, while cholesterol serves as an electronic charge acceptor.
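
The binary classification step can be sketched with a small ANN on the ten geometric features. The sketch below is illustrative only; the random placeholder data stand in for the DFT-derived features:

```python
# Illustrative sketch of the binary classifier described: an ANN separating
# cholesterol (acceptor) from phospholipids (donors) using ten optimized
# geometric features. Data below are random placeholders, not DFT results.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))        # 10 geometric features per molecule
y = rng.integers(0, 2, size=200)      # 1 = cholesterol, 0 = phospholipid

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
print(f"test accuracy: {ann.score(X_te, y_te):.2f}")  # ~0.5 on random data
```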

Keywords: charge transfer, signal transmission, phospholipids, water layers, ANN

Procedia PDF Downloads 67
281 Protective Role of Autophagy Challenging the Stresses of Type 2 Diabetes and Dyslipidemia

Authors: Tanima Chatterjee, Maitree Bhattacharyya

Abstract:

The global challenge of type 2 diabetes mellitus is a major health concern in this millennium, and researchers are continuously exploring new targets to develop novel therapeutic strategies. Type 2 diabetes mellitus (T2DM) is often coupled with dyslipidemia, increasing the risk of cardiovascular (CVD) complications. Enhanced oxidative and nitrosative stresses appear to be the major risk factors underlying insulin resistance, dyslipidemia, β-cell dysfunction, and T2DM pathogenesis. Autophagy emerges as a promising defense mechanism against stress-mediated cell damage, regulating tissue homeostasis, cellular quality control, and energy production, and promoting cell survival. In this study, we attempted to explore the pivotal role of autophagy in T2DM subjects with or without dyslipidemia in peripheral blood mononuclear cells (PBMCs) and insulin-resistant HepG2 cells, utilizing a flow cytometric platform, confocal microscopy, and molecular biology techniques like western blotting, immunofluorescence, and real-time polymerase chain reaction. In the case of T2DM with dyslipidemia, a higher population of autophagy-positive cells was detected compared to patients with T2DM only, which might have resulted from higher stress. Autophagy was observed to be triggered by both oxidative and nitrosative stress, revealing a novel finding of our research. LC3 puncta were observed in peripheral blood mononuclear cells and at the periphery of HepG2 cells under both diabetic and diabetic-dyslipidemic conditions. Increased expression of ATG5, LC3B, and Beclin supports the autophagic pathway in both PBMCs and insulin-resistant HepG2 cells. Upon blocking autophagy with 3-methyladenine (3-MA), the apoptotic cell population increased significantly, as observed by caspase-3 cleavage and reduced expression of Bcl2. Autophagy has also been evidenced to control oxidative stress-mediated up-regulation of inflammatory markers like IL-6 and TNF-α. To conclude, this study elucidates a protective role for autophagy in the case of diabetes mellitus with dyslipidemia. In the present scenario, this study could have a significant impact on developing a new therapeutic strategy for diabetic-dyslipidemic subjects based on enhancing autophagic activity.

Keywords: autophagy, apoptosis, dyslipidemia, reactive oxygen species, reactive nitrogen species, Type 2 diabetes

Procedia PDF Downloads 127
280 E-Procurement Adoption and Effective Service Delivery in the Uganda Coffee Industry

Authors: Taus Muganda

Abstract:

This research explores the intricate relationship between e-procurement adoption and effective service delivery in the Uganda Coffee Industry, focusing on the processes involved, the key actors, and the impact of digital transformation. The study is guided by three prominent theories, Actor-Network Theory, Resource-Based View Theory, and Institutional Theory, to comprehensively explore the dynamics of e-procurement in the context of the coffee sector. The primary aim of this project is to examine the e-procurement adoption process and its role in enhancing service delivery within the Uganda Coffee Industry. The research questions guiding this inquiry are: firstly, whether e-procurement adoption and implementation contribute to achieving quality service delivery; and secondly, how e-procurement adoption can be effectively realized within the Uganda Coffee Industry. To address these questions, the study has laid out specific objectives. Firstly, it seeks to investigate the impact of e-procurement on effective service delivery, analysing how the integration of digital processes influences the overall quality of services provided in the coffee industry. Secondly, it aims to critically analyse the measures required to achieve effective delivery outcomes through the adoption and implementation of e-procurement, assessing the strategies that can maximize the benefits of digital transformation. Furthermore, the research endeavours to identify and examine the key actors instrumental in achieving effective service delivery within the Uganda Coffee Industry. By utilizing Actor-Network Theory, the study will elucidate the network of relationships and collaborations among the actors involved in the e-procurement process. The research contributes to addressing a critical gap in the sector. Despite coffee being the leading export crop in Uganda, constituting 16% of total exports, there is a recognized need for digital transformation, specifically in the realm of e-procurement, to enhance the productivity of producers and contribute to the economic growth of the country. The study aims to provide insights into transforming the Uganda Coffee Industry by focusing on improving the e-procurement services delivered to actors in the coffee sector. The three forms of e-procurement investigated in this research (E-Sourcing, E-Payment, and E-Invoicing) serve as focal points in understanding the multifaceted dimensions of digital integration within the Uganda Coffee Industry. This research endeavours to offer practical recommendations for policymakers, industry stakeholders, and the Uganda Coffee Development Authority (UCDA) to strategically leverage e-procurement for the benefit of the entire coffee value chain.

Keywords: e-procurement, effective service delivery, actors, actor-network theory, resource-based view theory, institutional theory, e-invoicing, e-payment, e-sourcing

Procedia PDF Downloads 67
279 Low Cost Webcam Camera and GNSS Integration for Updating Home Data Using AI Principles

Authors: Mohkammad Nur Cahyadi, Hepi Hapsari Handayani, Agus Budi Raharjo, Ronny Mardianto, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan

Abstract:

PDAM (the local water company) determines customer charges by considering the customer's building or house. Charge determination significantly affects PDAM income and customer costs because PDAM applies a subsidy policy for customers classified as small households. Periodic updates are needed so that pricing stays in line with the target. A thorough customer survey in Surabaya is needed to update customer building data. However, surveys to date have been carried out by deploying officers to visit each PDAM customer one by one, which requires a lot of effort and cost. For this reason, this research offers a technology called mobile mapping, a mapping method that is more efficient in terms of time and cost. The use of this tool is also quite simple: the device is installed on a car so that it can record the surrounding buildings while the car is moving. Mobile mapping technology generally uses lidar sensors equipped with GNSS, but that technology is costly. To overcome this problem, this research develops a low-cost mobile mapping technology using webcam camera sensors in addition to GNSS and IMU sensors. The camera used has a 3 MP sensor with 720p resolution and a 78° diagonal field of view. The principle of this invention is to integrate four webcam camera sensors with GNSS and IMU to acquire photo data tagged with location (latitude, longitude) and orientation (roll, pitch, yaw). The device is also equipped with a tripod and a vacuum suction mount to attach it to the car's roof so it does not fall off while driving. The output data from this technology is analyzed with artificial intelligence to reduce similar data (cosine similarity) and then classify building types. Data reduction is used to eliminate near-duplicate images while retaining the image that displays the complete house, so that it can be processed for the later classification of buildings. The AI method used is transfer learning, utilizing a pretrained model named VGG-16. The similarity analysis showed that data reduction reached 50%. Georeferencing is then done using the Google Maps API to get address information corresponding to the coordinates in the data. After that, a geographic join is performed to link the survey data with customer data already owned by PDAM Surya Sembada Surabaya.
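
The cosine-similarity reduction step can be sketched with a pretrained VGG-16 as the feature extractor, dropping any frame too similar to one already kept. File paths and the 0.9 threshold are assumptions for illustration:

```python
# Sketch of the deduplication step described: embed frames with a pretrained
# VGG-16 and drop frames whose cosine similarity to an already-kept frame
# exceeds a threshold. Paths and the 0.9 threshold are assumptions.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

model = VGG16(weights="imagenet", include_top=False, pooling="avg")

def embed(path):
    """Return a unit-norm VGG-16 embedding for one image file."""
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), 0))
    v = model.predict(x, verbose=0)[0]
    return v / np.linalg.norm(v)

def deduplicate(paths, threshold=0.9):
    kept, kept_vecs = [], []
    for p in paths:
        v = embed(p)
        # Cosine similarity of unit vectors is just the dot product.
        if all(float(v @ k) < threshold for k in kept_vecs):
            kept.append(p)
            kept_vecs.append(v)
    return kept

# unique = deduplicate(["frame_001.jpg", "frame_002.jpg"])  # hypothetical files
```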

Keywords: mobile mapping, GNSS, IMU, similarity, classification

Procedia PDF Downloads 79
278 Kinematic Modelling and Task-Based Synthesis of a Passive Architecture for an Upper Limb Rehabilitation Exoskeleton

Authors: Sakshi Gupta, Anupam Agrawal, Ekta Singla

Abstract:

An exoskeleton design for rehabilitation purposes encounters many challenges, including ergonomically acceptable wearing technology, human-motion compatibility of the architectural design, actuation type, human-robot interaction, etc. In this paper, a passive architecture for an upper limb exoskeleton is proposed for assisting in rehabilitation tasks. Kinematic modelling is detailed for the task-based kinematic synthesis of the wearable exoskeleton for self-feeding tasks. The exoskeleton architecture possesses expansion and torsional springs, which are able to store and redistribute energy over the human arm joints. The elastic characteristics of the springs have been optimized to minimize the mechanical work of the human arm joints. A hybrid combination of a 4-bar parallelogram linkage and a serial linkage was chosen, where the 4-bar parallelogram linkage with an expansion spring acts as a rigid structure providing the rotational degree of freedom (DOF) required for lowering and raising the arm. The single linkage with a torsional spring allows for the rotational DOF required for elbow movement. The focus of the paper is the kinematic modelling, analysis, and task-based synthesis framework for the proposed architecture, keeping in consideration the essential tasks of self-feeding and self-exercising during the rehabilitation of a partially healthy person. Primary functional movements (activities of daily living, ADL) are routine activities that people attend to every day, such as cleaning, dressing, and feeding. We focus on the feeding process to make people independent with respect to feeding tasks. The tasks target post-surgery patients under rehabilitation with less than 40% weakness. The key challenge addressed in this work is emulating the natural movement of the human arm. Human motion data are extracted through motion sensors for the targeted tasks of feeding and specific exercises. A task-based synthesis procedure framework is discussed for the proposed architecture. The results include the simulation of the architectural concept for tracking human-arm movements while displaying the kinematic and static study parameters for a standard human weight. Denavit-Hartenberg (D-H) parameters are used for the kinematic modelling of the hybrid mechanism, and the model is used while performing task-based optimal synthesis utilizing an evolutionary algorithm.
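
The D-H modelling step admits a compact sketch: build each joint's homogeneous transform from its Denavit-Hartenberg row and chain them for forward kinematics. The two parameter rows below are hypothetical, not the exoskeleton's actual D-H table:

```python
# Minimal sketch of D-H based forward kinematics, as used for the hybrid
# mechanism's kinematic model. The two rows of parameters below are
# hypothetical, not the exoskeleton's actual D-H table.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# theta, d, a, alpha for two revolute joints (shoulder lift, elbow) -- assumed.
dh_rows = [(np.deg2rad(30), 0.0, 0.30, 0.0),
           (np.deg2rad(45), 0.0, 0.25, 0.0)]

T = np.eye(4)
for row in dh_rows:
    T = T @ dh_transform(*row)
print(T[:3, 3])   # end-effector (hand) position for the given joint angles
```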

Keywords: passive mechanism, task-based synthesis, emulating human-motion, exoskeleton

Procedia PDF Downloads 132
277 Cultural and Natural Heritage Conservation by GIS Tourism Inventory System Project

Authors: Gamze Safak, Umut Arslanoglu

Abstract:

Cultural and tourism conservation and development zones and tourism centers are boundaries declared for the purpose of protecting, using, and evaluating sectoral and planned development in areas where historical and cultural values are heavily present and/or where tourism potential is high. The most rapidly changing regions in Turkey are tourism areas, especially coastal areas. Planning these regions is not only a matter of economic gain but also of the natural and physical environment, and it is a complex process. If the tourism sector is not well controlled, excessive use of natural resources and wrong location choices may damage natural areas, historical values, and the socio-cultural structure. The strategic decisions taken in the environmental and zoning plans of the Ministry of Culture and Tourism, which has the authority to plan in tourism centers and uses these plans to guide the physical environment, are transformed into plan decisions with spatial expression; this requires a comprehensive evaluation of all kinds of data, following historical development and based on correct and current data. In addition, the authority has a number of competences in tourism promotion as well as its planning authority, which makes it necessary to take part in applications requiring complex analysis, such as the management and integration of the country's economic, political, social, and cultural resources. For this purpose, the Tourism Inventory System (TES) project, which consists of a series of subsystems, has been developed to solve complex planning and methodological problems in the management of site-related information. The scope of the project is based on the integration of numerical and verbal data in the regions within the jurisdiction of the authority and on monitoring the historical development of urban planning studies, making the institution's spatial data easily accessible, shareable, queryable, and traceable to international standards. A dynamic and continuous system design has been put into practice, taking advantage of Geographical Information Systems in the planning process to support the right decisions and to provide tools for social, economic, and cultural development and for the preservation of natural and cultural values. This paper, prepared by the project team members of TES (Tourism Inventory System), presents a study on the applicability of GIS in cultural and natural heritage conservation.

Keywords: cultural conservation, GIS, geographic information system, tourism inventory system, urban planning

Procedia PDF Downloads 115
276 Land Art in Public Spaces Design: Remediation, Prevention of Environmental Risks and Recycling as a Consequence of the Avant-Garde Activity of Landscape Architecture

Authors: Karolina Porada

Abstract:

Over the last 40 years, there has been a trend in landscape architecture whose supporters do not perceive pro-ecological or postmodern solutions as the essential goal in the design of public green spaces, shifting their attention instead to the 'sculptural' shaping of areas with the use of slopes, hills, embankments, and other landforms. This group of designers can be considered avant-garde, referring in its activities to land art. Initial research shows that such applications are particularly frequent on former post-industrial sites and landfills, utilizing materials such as debris and post-mining waste in their construction. Given the high degradation of the environment surrounding modern man, brownfields are a challenge and a field of interest for representatives of the landscape architecture avant-garde, who through their projects try to recover lost lands by means of transformations supported by engineering and ecological knowledge, creating places where nature can develop again. The analysis of a dozen or so sites made it possible to reach an important conclusion: apart from the cultural aspects (including artistic activities), green areas formally referring to land art are important in the remediation of post-industrial sites and in waste recycling (e.g., from construction sites). In these processes, there is also potential for applying the concept of Nature-Based Solutions, i.e., solutions allowing for the natural development of the site in such a way as to cope with environmental problems such as air pollution, soil contamination (through phytoremediation), and climate change. The paper presents examples of modern parks whose compositions are based on shaping the terrain surface in a way that refers to land art, at the same time providing examples of brownfield reuse and the application of waste recycling. For the purposes of the analysis, research methods such as historical-interpretative studies, case studies, qualitative research, and the method of logical argumentation were used. The obtained results provide information about the role that landscape architecture can play in the remediation of degraded areas, at the same time guaranteeing benefits such as visually attractive landscapes, low implementation costs, and an improved natural environment.

Keywords: brownfields, contemporary parks, landscape architecture, remediation

Procedia PDF Downloads 147
275 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets

Authors: Ece Cigdem Mutlu, Burak Alakent

Abstract:

Maintaining the quality of manufactured products at a desired level depends on the stability of process dispersion and location parameters and on the detection of perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and control process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of process dispersion and location parameters, respectively, based on the assumption of independent and normally distributed datasets. On the other hand, there is no guarantee that real-world data will be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced, and the performance of Xbar charts is undesirably low; e.g., occasional outliers in the rational subgroups of a Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For more efficient application of control charts, it is necessary to use estimators that are robust against the contaminations which may exist in Phase I. In the current study, we present a simple approach to constructing robust Xbar control charts using the average distance to the median, the Qn estimator of scale, and the M-estimator of scale with logistic psi-function in the estimation of the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator, and the M-estimator of location with Huber and logistic psi-functions in the estimation of the process location parameter. The Phase I efficiency of the proposed estimators and the Phase II performance of the Xbar charts constructed from these estimators are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. Consequently, it is found that the robust estimators yield parameter estimates with higher efficiency against all types of contaminations, and Xbar charts constructed using robust estimators have higher power in detecting disturbances compared to conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combinations of dispersion and location estimators on subgroups and individual observations are found to improve the performance of Xbar charts.
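
One of the robust location estimators studied, the Hodges-Lehmann estimator (the median of pairwise averages), is easy to sketch next to the ordinary mean; the subgroup values below are invented to show its resistance to a single outlier:

```python
# Sketch of one robust location estimator from the study, Hodges-Lehmann
# (median of pairwise averages), next to the ordinary mean on contaminated
# data. Subgroup values are invented to show the effect of one outlier.
import itertools
import numpy as np

def hodges_lehmann(x):
    pairs = itertools.combinations_with_replacement(x, 2)
    return float(np.median([(a + b) / 2 for a, b in pairs]))

subgroup = np.array([10.1, 9.9, 10.0, 10.2, 14.8])  # last value is an outlier
print(f"mean:           {subgroup.mean():.2f}")           # 11.00, pulled up
print(f"Hodges-Lehmann: {hodges_lehmann(subgroup):.2f}")  # 10.10, stays near 10
```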

Keywords: average run length, M-estimators, quality control, robust estimators

Procedia PDF Downloads 186
274 Integration of Technology into Nursing Education: A Collaboration between College of Nursing and University Research Center

Authors: Lori Lioce, Gary Maddux, Norven Goddard, Ishella Fogle, Bernard Schroer

Abstract:

This paper presents the integration of technologies into nursing education. The collaborative effort includes the College of Nursing (CoN) at the University of Alabama in Huntsville (UAH) and the UAH Systems Management and Production Center (SMAP). The faculty at the CoN conducts needs assessments to identify education and training requirements. A team of CoN faculty and SMAP engineers then prioritizes these requirements and establishes improvement/development teams. The development teams consist of nurses, who evaluate the models and provide feedback, and of undergraduate engineering students and their senior staff mentors from SMAP. The SMAP engineering staff develops and creates the physical models using 3D printing, silicone molds, and specialized molding mixtures and techniques. The collaboration has focused on developing teaching and training, or clinical, simulators. In addition, the onset of the Covid-19 pandemic intensified this relationship, as 3D printing shifted to supplying personal protective equipment (PPE) to local health care providers. A secondary collaboration has been introducing students to clinical benchmarking through the UAH Center for Management and Economic Research. As a result of these successful collaborations, the Model Exchange & Development of Nursing & Engineering Technology (MEDNET) has been established. MEDNET seeks to extend and expand the linkage between engineering and nursing to K-12 schools, technical schools, and medical facilities in the region, offering the resources available from the CoN and SMAP. As an example, stereolithography (STL) files of the 3D printed models, along with the specifications to fabricate the models, are available on the MEDNET website. Ten 3D printed models have been developed and are currently in use by the CoN. The following additional training simulators are currently under development: 1) suture pads, 2) gelatin wound models, and 3) printed wound tattoos. Specification sheets have been written for these simulators that describe the use, fabrication procedures, and parts list; these specifications are available for viewing and download on MEDNET. Included in this paper are 1) descriptions of CoN, SMAP, and MEDNET, 2) the collaborative process used in product improvement/development, 3) 3D printed models of training and teaching simulators, 4) training simulators under development with specification sheets, 5) family care practice benchmarking, 6) integrating the simulators into the nursing curriculum, 7) utilizing MEDNET as a pandemic response, and 8) conclusions and lessons learned.

Keywords: 3D printing, nursing education, simulation, trainers

Procedia PDF Downloads 119
273 Feasibility of Small Autonomous Solar-Powered Water Desalination Units for Arid Regions

Authors: Mohamed Ahmed M. Azab

Abstract:

The shortage of fresh water is a major problem in several areas of the world, such as arid regions and the coastal zones of several Arabian Gulf countries. Fortunately, arid regions are exposed to high levels of solar irradiation for most of the year, which makes the utilization of solar energy a promising solution to this problem with zero harmful emissions (a green system). The main objective of this work is to conduct a feasibility study of small autonomous water desalination units powered by photovoltaic modules, a green renewable energy resource, to be employed in isolated zones as a source of drinking water for scattered communities where the installation of large desalination stations is ruled out owing to the unavailability of an electric grid. Yanbu City is chosen as a case study, where the Renewable Energy Center is located and is equipped with all the sensors needed to assess the availability of solar energy throughout the year. The study covered two types of available water: the first is brackish well water, and the second is seawater from coastal regions. In the case of well water, two versions of the desalination unit are involved in the study: the first version is based on day operation only, while the second also considers night operation, which requires an energy storage system (batteries) to provide the necessary electric power at night. According to the feasibility study results, the utilization of small autonomous desalination units is feasible and economically acceptable in the case of brackish well water. In the case of seawater, however, the capital costs are extremely high, and the cost of desalinated water will not be economically feasible unless governmental subsidies are provided. In addition, the study indicated that, for the same water production, the energy storage (day-night) version adds capital cost for batteries and extra running cost for their replacement, which makes the unit water price uncompetitive not only with the day-only unit but also with conventional units powered by diesel generators (fossil fuel), owing to the low fuel prices in the Kingdom. However, the cost analysis shows that the price of produced water per cubic meter from the day-night unit is similar to that from the day-only unit, provided that the day-night unit operates for a 50% longer period.
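
The cost comparison described can be illustrated with a back-of-envelope levelized-cost-of-water calculation. Every number below is an assumption chosen for illustration, not the study's data; note how a 50% longer operating period can offset the added battery cost, in line with the abstract's conclusion:

```python
# Back-of-envelope sketch of the cost comparison described: levelized cost
# of water for a day-only vs. a day-night PV desalination unit. Every number
# here is an assumption for illustration, not the study's data.
def water_cost(capex, annual_opex, daily_m3, hours_factor, lifetime_yr=20):
    annual_m3 = daily_m3 * hours_factor * 365
    return (capex / lifetime_yr + annual_opex) / annual_m3   # $/m3

day_only  = water_cost(capex=40_000, annual_opex=1_500, daily_m3=5, hours_factor=1.0)
# Day-night: batteries add capex and replacement opex, but the unit runs
# ~50% longer, producing proportionally more water.
day_night = water_cost(capex=55_000, annual_opex=2_500, daily_m3=5, hours_factor=1.5)

print(f"day-only:  {day_only:.2f} $/m3")
print(f"day-night: {day_night:.2f} $/m3")   # comes out similar for these inputs
```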

Keywords: solar energy, water desalination, reverse osmosis, arid regions

Procedia PDF Downloads 445
272 Access and Utilization of Family Planning Services among Women in a Rural Community of Enugu state Nigeria, using a Descriptive Cross-sectional Design

Authors: Chidiebere Joy Nwankwo, Benjamin S. C. Uzochukwu, Florence T. Sibeudu

Abstract:

Background: Family planning is one of the most cost-effective ways to prevent maternal, infant, and child mortality. It can decrease maternal mortality by reducing the number of unintended pregnancies, the number of abortions, and the proportion of births at high risk. It has been seen to improve the health and economic well-being of families and communities, and it enables women to plan childbearing so as to achieve education and career goals, which can raise family income and thereby reduce poverty. The choice and use of particular family planning methods and their sources vary globally. Rural communities often face significant challenges in accessing and utilizing family planning services. Aim: This study set out to assess access to and utilization of family planning services among women of reproductive age in a rural community of Enugu State, Nigeria. Rural communities were chosen for this study because past demographic surveys have shown that women in urban areas are more likely to accept and practice family planning than those in rural areas. Method: A descriptive cross-sectional research design was employed to achieve the aim and objectives of the study. Data collected from 177 consenting participants using interviewer-administered questionnaires were analysed using descriptive statistics to summarize the socio-demographic characteristics of the participants and their access to and utilization of family planning services, including reasons for using different family planning methods and barriers encountered in accessing and utilizing these services. A cross-tabulation between the socio-demographic characteristics of respondents and the use of family planning services was carried out. Results: The findings of this study revealed that the majority of the participants (72.9%) had not utilized any family planning service. Of those who had (27.1%), the majority were still using a form of family planning at the time of the study and accessed services through health facilities, patent medicine vendors, and other sources, based on multiple responses. Male condoms were the most utilized modern family planning method. Based on multiple responses, inaccessibility, personal beliefs, and partners' objections were the most commonly identified barriers to accessing family planning services. Conclusion: Access to and uptake of family planning services in rural communities are lower than the national average. Increasing access to family planning is an urgent priority for rural areas; interventions that scale up access to and utilization of family planning services in rural communities should be intensified.

Keywords: access, family planning, rural community, utilization

Procedia PDF Downloads 39
271 Gender Issues in Law and Society in India

Authors: Sunil Gaikwad

Abstract:

Gender discrimination is a prevalent and much-used term in legal parlance. The more socially, culturally, economically, and educationally backward a community is, the more gender discrimination is seen there. Gender discrimination is a worldwide phenomenon. In India it has been especially prevalent owing to illiteracy and harmful social and religious customs. In the Indian family system, a male child is considered the inheritor of the family clan and a support for parents in their old age, while girls are regarded as the property of others and an unnecessary burden on parents and on property, since a dowry has to be given at their marriage; festivals such as Raksha Bandhan and Bhau Teej during Deepawali (in which having a brother is central) also reinforce the insistence on having a male child in the family. Hence many couples try to give birth only to a male child at the cost of the female child, and female feticide went on at a large scale, due to which the sex ratio decreased considerably, creating difficulty in finding brides for grooms and thereby putting a question mark on the family system. To undo the damage done to society by female feticide, the Government of India has enacted various laws, introduced various welfare schemes for the upliftment of the girl child, and launched a countrywide campaign to create awareness among people about the importance of the girl child and the punitive laws against feticide; this is now bearing fruit, but cases of female feticide still come to the fore. There is an urgent need to go to the roots of the problem and to find practicable and effective legal and social measures to overcome this issue, and that is the purpose of this research paper. The paper discusses in detail the reasons and superstitions responsible for gender discrimination and proposes effective measures, including necessary changes in the existing laws and effective awareness campaigns against religious superstitions, for gender equality. Doctrinal research methodology is used to drive the research to its logical conclusion, for which various primary and secondary sources of literature have been perused and studied. The suggestions, recommendations, and conclusions drawn include an urgent need to rethink the festivals that encourage gender discrimination, to sensitize people and create ample awareness by effectively utilizing radio, television, social media, folk arts, and public shows, and to make the existing laws more effective, with strict implementation and zero tolerance for female feticide.

Keywords: awareness, effective laws, female foeticide, festivals, superstitions

Procedia PDF Downloads 82
270 Unveiling Comorbidities in Irritable Bowel Syndrome: A UK BioBank Study Utilizing Supervised Machine Learning

Authors: Uswah Ahmad Khan, Muhammad Moazam Fraz, Humayoon Shafique Satti, Qasim Aziz

Abstract:

Approximately 10-14% of the global population experiences a functional disorder known as irritable bowel syndrome (IBS). The disorder is defined by persistent abdominal pain and an irregular bowel pattern. IBS significantly impairs work productivity and disrupts patients' daily lives and activities. Although IBS is widespread, the understanding of its underlying pathophysiology remains incomplete. This study aims to help characterize the phenotype of IBS patients by differentiating the comorbidities found in IBS patients from those in non-IBS patients using machine learning algorithms. We extracted samples coded for IBS from the UK BioBank cohort and randomly selected patients without an IBS code to create a total sample size of 18,000. We selected the comorbidity codes for these cases from two years before and after the IBS diagnosis and compared them to the comorbidities in the non-IBS cohort. Machine learning models, including decision trees, gradient boosting, support vector machines (SVM), AdaBoost, logistic regression, and XGBoost, were employed to assess their accuracy in predicting IBS. The most accurate model was then chosen to identify the features associated with IBS; in our case, XGBoost feature importance served as the feature selection method. We applied the different models to the top 10% of features, which numbered 50. Gradient boosting, logistic regression, and XGBoost yielded a diagnosis of IBS with optimal accuracies of 71.08%, 71.427%, and 71.53%, respectively. The comorbidities most closely associated with IBS included gut diseases (haemorrhoids, diverticular diseases), atopic conditions (asthma), and psychiatric comorbidities (depressive episodes or disorder, anxiety). This finding emphasizes the need for a comprehensive approach when evaluating the IBS phenotype, suggesting the possibility of identifying new subsets of IBS rather than relying solely on the conventional classification based on stool type. Our study also demonstrates the potential of machine learning algorithms to predict the development of IBS from comorbidities, which may enhance diagnosis and facilitate better management of modifiable risk factors for IBS. Further research is necessary to confirm our findings and establish cause and effect; alternative feature selection methods and even larger, more diverse datasets may lead to more accurate classification models. Despite these limitations, our findings highlight the effectiveness of logistic regression and XGBoost in predicting an IBS diagnosis.
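A minimal sketch of the described pipeline is shown below: train XGBoost, rank features by importance, keep the top 10%, and compare classifiers on the reduced set. The data here is a synthetic stand-in for the UK BioBank comorbidity flags, and the hyperparameters are assumed, so the printed accuracies will not match the study's results.

```python
# Sketch of the feature-selection-and-comparison pipeline on synthetic data.
import numpy as np
from xgboost import XGBClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(18_000, 500)).astype(float)  # comorbidity flags
y = rng.integers(0, 2, size=18_000)                       # IBS vs. non-IBS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Rank comorbidities by XGBoost feature importance.
ranker = XGBClassifier(n_estimators=200, eval_metric="logloss").fit(X_tr, y_tr)
top = np.argsort(ranker.feature_importances_)[::-1][:50]  # top 10% of 500

# Re-fit candidate models on the 50 selected features only.
for name, model in [("GradientBoosting", GradientBoostingClassifier()),
                    ("LogisticRegression", LogisticRegression(max_iter=1000)),
                    ("XGBoost", XGBClassifier(eval_metric="logloss"))]:
    model.fit(X_tr[:, top], y_tr)
    acc = accuracy_score(y_te, model.predict(X_te[:, top]))
    print(f"{name}: accuracy={acc:.3f}")
```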

Keywords: comorbidities, disease association, irritable bowel syndrome (IBS), predictive analytics

Procedia PDF Downloads 114
269 In Vivo Alterations in Ruminal Parameters by Megasphaera elsdenii Inoculation in Subacute Ruminal Acidosis (SARA)

Authors: M. S. Alatas, H. D. Umucalilar

Abstract:

SARA is a common and serious metabolic disorder of early-lactation dairy cattle and finishing beef cattle, caused by diets with a high inclusion of cereal grain. This experiment was performed to determine the efficacy of Megasphaera elsdenii, a major lactate-utilizing bacterium, in the prevention/treatment of SARA in vivo. Eight ruminally cannulated rams were used, and rapid adaptation was applied with grain mixtures based on wheat (80% wheat, 20% barley) and on barley (80% barley, 20% wheat). During the systematic adaptation, the likelihood of SARA development was monitored by measuring rumen pH at two-hour intervals before and after feeding. Evaluation of the data showed that ruminal pH ranged from 5.2 to 5.6 when feeding 60% of the barley- or wheat-based grain mixture, which ensured a definite state of subacute acidosis. In the four-day SARA period, M. elsdenii (10¹⁰ cfu ml⁻¹) was inoculated during the first two days. During the SARA period, a decrease in feed intake was observed with M. elsdenii inoculation. Inoculation of M. elsdenii altered rumen pH (P < 0.0001): the pH was approximately 5.55 in inoculated animals and 5.63 in the others. Total VFA tended to change with the bacterium inoculation depending on the grain feed (P < 0.07): it increased with inoculation in the barley-based diet but was more stable in the wheat-based diet. Bacterium inoculation increased the proportion of propionic acid (18.33% to 21.38%) but decreased butyric acid and the acetic/propionic acid ratio. During the rapid adaptation, the concentration of lactic acid in the rumen liquid increased with the grain level (P < 0.0001); bacterium inoculation, on the other hand, had no effect on the lactic acid concentration. M. elsdenii inoculation did not significantly affect the ruminal ammonia concentration, although the concentration was higher in the non-inoculated group than in the inoculated animals. M. elsdenii inoculation did not change the protozoa count in the barley-based diet, whereas it decreased the count in the wheat-based diet. In the SARA period, blood glucose, lactate, and haematocrit increased greatly after inoculation (P < 0.0001). Overall, M. elsdenii inoculation did not have a positive impact on rumen parameters. Therefore, to reveal the full impact of inoculation with different strains, feedstuffs, and animal groups, further research is required.

Keywords: in vivo, subacute ruminal acidosis, Megasphaera elsdenii, rumen fermentation

Procedia PDF Downloads 640
268 A Microwave and Millimeter-Wave Transmit/Receive Switch Subsystem for Communication Systems

Authors: Donghyun Lee, Cam Nguyen

Abstract:

Multi-band systems offer a great deal of benefit in modern communication and radar systems. In particular, multi-band antenna-array radar systems, with their extended frequency diversity, provide numerous advantages in detecting, identifying, locating, and tracking a wide range of targets, including enhanced detection coverage, accurate target location, reduced survey time and cost, increased resolution, improved reliability, and richer target information. Accurate calibration is a critical issue in antenna array systems. The amplitude and phase errors in multi-band and multi-polarization antenna array transceivers result in inaccurate target detection, deteriorated resolution, and reduced reliability. Furthermore, a digital beamformer without RF-domain phase shifting is less immune to unfiltered interference signals, which can lead to receiver saturation in array systems. Therefore, an integrated front-end architecture that can support a calibration function with low insertion loss and a filtering function at the farthest end of an array transceiver is of great interest. We report a dual K/Ka-band T/R/Calibration switch module with a quasi-elliptic dual-bandpass filtering function implemented with a Q-enhanced metamaterial transmission line. A unique dual-band frequency response is incorporated in the reception and calibration paths of the proposed switch module, utilizing a composite right/left-handed metamaterial transmission line coupled with a Colpitts-style negative-resistance generation circuit. The fully integrated T/R/Calibration switch module, fabricated in 0.18-μm BiCMOS technology, exhibits an insertion loss of 4.9-12.3 dB and isolation of more than 45 dB in the reception, transmission, and calibration modes of operation. In the reception and calibration modes, the dual-band frequency response centered at 24.5 and 35 GHz exhibits out-of-band rejection of more than 30 dB relative to the pass bands below 10.5 GHz and above 59.5 GHz. The rejection between the pass bands reaches more than 50 dB. In all modes of operation, the IP1-dB is between 4 and 11 dBm. Acknowledgement: This paper was made possible by NPRP grant # 6-241-2-102 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.

Keywords: microwaves, millimeter waves, T/R switch, wireless communications

Procedia PDF Downloads 157
267 Data Mining in Healthcare for Predictive Analytics

Authors: Ruzanna Muradyan

Abstract:

Medical data mining is a crucial field in contemporary healthcare, offering techniques with enormous potential to transform patient care. This abstract examines how sophisticated data mining techniques could transform the healthcare industry, with a special focus on how they might improve patient outcomes. Healthcare data repositories have evolved dynamically, producing a rich tapestry of diverse, multi-dimensional information that includes genetic profiles, lifestyle markers, electronic health records, and more. Applying data mining techniques to this vast library opens a variety of prospects for precision medicine, predictive analytics, and insight generation. Predictive modeling for illness prediction, risk stratification, and therapy efficacy evaluation are important points of focus. Healthcare providers may use this abundance of data to tailor treatment plans, identify high-risk patient populations, and forecast disease trajectories by applying machine learning algorithms and predictive analytics. This proactive strategy makes possible better patient outcomes, more efficient use of resources, and earlier treatment. Furthermore, data mining techniques act as catalysts that reveal complex relationships between apparently unrelated data items, providing enhanced insight into the causes of disease, genetic susceptibilities, and environmental factors. By analyzing these associations, healthcare practitioners can derive practical insights that guide disease prevention, customized patient counseling, and focused therapies. The abstract also explores the problems and ethical issues that come with using data mining techniques in healthcare: to use these approaches properly, it is essential to strike a balance among data privacy, security concerns, and the interpretability of complex models. Finally, this abstract demonstrates the power of modern data mining methodologies to transform the healthcare sector. By employing these methodologies, healthcare practitioners and researchers can uncover novel insights, enhance clinical decision-making, and ultimately elevate patient care to new levels of precision and efficacy.
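As a concrete, hedged illustration of the risk stratification idea mentioned above, the sketch below fits a logistic model to synthetic EHR-style features and flags the top decile of predicted risk. The feature names, outcome model, and decile threshold are all hypothetical choices, not a method from any specific study.

```python
# Hypothetical risk stratification sketch on synthetic EHR-style features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000
age = rng.normal(55, 15, n)
bmi = rng.normal(27, 5, n)
smoker = rng.integers(0, 2, n)
X = np.column_stack([age, bmi, smoker])

# Synthetic outcome loosely driven by the features (for illustration only).
logit = 0.04 * (age - 55) + 0.08 * (bmi - 27) + 0.9 * smoker - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]

# Stratify: flag the top decile of predicted risk for early intervention.
high_risk = risk >= np.quantile(risk, 0.9)
print(f"{high_risk.sum()} patients flagged as high-risk")
```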

Keywords: data mining, healthcare, patient care, predictive analytics, precision medicine, electronic health records, machine learning, predictive modeling, disease prognosis, risk stratification, treatment efficacy, genetic profiles, precision health

Procedia PDF Downloads 57
266 Polyvinyl Alcohol Incorporated with Hibiscus Extract Microcapsules as Combined Active and Intelligent Composite Film for Meat Preservation: Antimicrobial, Antioxidant, and Physicochemical Investigations

Authors: Ahmed F. Ghanem, Marwa I. Wahba, Asmaa N. El-Dein, Mohamed A. EL-Raey, Ghada E. A. Awad

Abstract:

Numerous attempts are being made to formulate suitable packaging materials for meat products. However, to the best of our knowledge, the incorporation of free hibiscus extract or its microcapsules into a pure polyvinyl alcohol (PVA) matrix as a packaging material for meat is seldom reported. Therefore, this study aims at protecting the aqueous crude extract of hibiscus flowers using the spray-drying encapsulation technique. The results of Fourier-transform infrared (FTIR) spectroscopy, scanning electron microscopy (SEM), and particle size analysis confirmed, respectively, the successful formation of the assembled capsules via strong interactions, the spherical, rough microparticles, and a particle size of ~235 nm. The obtained microcapsules also enjoy higher thermal stability than the free extract. The spray-dried particles were then incorporated into the casting solution of the pure PVA film at a concentration of 10 wt.%. The resulting free-standing composite films were investigated, relative to the neat matrix, with several characterization techniques, such as FTIR, SEM, thermal gravimetric analysis (TGA), mechanical testing, contact angle, water vapor permeability, and oxygen transmission. The results demonstrated variations in the physicochemical properties of the PVA film after the inclusion of the free extract and the extract microcapsules. Moreover, biological studies emphasized the biocidal potential of the hybrid films against the microorganisms that contaminate meat. Specifically, the microcapsules imparted not only antimicrobial but also antioxidant activity to the PVA matrix. Application of the prepared films to real meat samples showed low bacterial growth with only a slight increase in pH over a storage time of up to 10 days at 4 °C, further evidence of meat safety. Moreover, the colors of the films did not change significantly until after 21 days, indicating the spoilage of the meat samples. The dual function of the prepared composite films paves the way towards combined active and smart food packaging applications, which would play a vital role in food hygiene, including quality control and assurance.

Keywords: PVA, hibiscus, extraction, encapsulation, active packaging, smart and intelligent packaging, meat spoilage

Procedia PDF Downloads 86
265 Robust Processing of Antenna Array Signals under Local Scattering Environments

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

An adaptive array beamformer is designed to automatically preserve the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, knowledge of the desired steering vector can be imprecise, often because of estimation errors in the direction of arrival (DOA) of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior reduces the array output signal-to-interference-plus-noise ratio (SINR). It is therefore worth developing robust techniques to deal with the problems caused by local scattering environments. As to the implementation of adaptive beamforming, the required computational complexity is enormous when the beamformer is equipped with a massive antenna array. To alleviate this difficulty, a GSC with partial adaptivity, offering fewer adaptive degrees of freedom and a faster adaptive response, has been proposed in the literature. Unfortunately, conventional GSC-based adaptive beamformers have been shown to be very sensitive to the mismatch problems caused by local scattering. In this paper, we present an effective GSC-based beamformer against the mismatch problems mentioned above. The proposed beamformer adaptively estimates the actual direction of the desired signal using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to estimate an appropriate steering vector: a matrix associated with the direction vectors of the signal sources is first created; projection matrices related to this matrix are then generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for adaptive beamforming can be easily found. Utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms existing robust techniques.
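For readers unfamiliar with the GSC structure underlying the proposal, the following NumPy sketch implements a plain GSC with a presumed steering vector: quiescent weights, a blocking matrix, and a Wiener-solution adaptive branch. It does not include the authors' iterative projection-based direction estimator, and the array geometry and signal scenario are assumed for illustration.

```python
# Plain GSC beamformer sketch: uniform linear array, one SOI plus one
# interferer. The iterative steering-vector estimation of the paper is
# intentionally omitted; this shows only the baseline GSC structure.
import numpy as np

def steering(theta_deg, n_elem, d=0.5):
    k = np.arange(n_elem)
    return np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

M, snapshots = 8, 2000
rng = np.random.default_rng(1)
a = steering(0, M)     # presumed steering vector of the SOI
a_i = steering(40, M)  # interferer direction

s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
i = 3 * (rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots))
n = 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
X = np.outer(a, s) + np.outer(a_i, i) + n

# Quiescent weights steer the main beam at the presumed SOI direction.
w_q = a / (a.conj() @ a)
# Blocking matrix spans the orthogonal complement of a, blocking the SOI.
Q, _ = np.linalg.qr(np.eye(M) - np.outer(a, a.conj()) / (a.conj() @ a))
B = Q[:, : M - 1]

# Adaptive branch minimizes residual interference power (Wiener solution).
d_out = w_q.conj() @ X
z = B.conj().T @ X
R_z = z @ z.conj().T / snapshots
p = z @ d_out.conj() / snapshots
w_a = np.linalg.solve(R_z, p)

y = d_out - w_a.conj() @ z  # GSC output: interference largely cancelled
print(f"output power: {np.mean(np.abs(y) ** 2):.3f}")
```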

Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch

Procedia PDF Downloads 109
264 A Bottleneck-Aware Power Management Scheme in Heterogeneous Processors for Web Apps

Authors: Inyoung Park, Youngjoo Woo, Euiseong Seo

Abstract:

With the advent of WebGL, Web apps can now provide high-quality graphics by utilizing the underlying graphics processing units (GPUs). Although Web apps are becoming common and popular, the current power management schemes, which were devised for conventional native applications, are suboptimal for Web apps because of the additional layer, the Web browser, between the OS and the application. The Web browser, running on a CPU, issues GL commands, which render the images displayed by the currently running Web app, to the GPU, and the GPU processes them. The size and number of issued GL commands determine the processing load of the GPU. While the GPU is processing the GL commands, the CPU simultaneously executes the other compute-intensive threads. The actual user experience is determined by either CPU processing or GPU processing, depending on which of the two is the more demanded resource. For example, when the GPU work queue is saturated by outstanding commands, lowering the performance level of the CPU does not affect the user experience, because it is already limited by the delayed execution of GPU commands. Consequently, it is desirable to lower the CPU or GPU performance level to save energy when the other resource is saturated and becomes the bottleneck in the execution flow. Based on this observation, we propose a power management scheme specialized for the Web app runtime environment. This approach poses two technical challenges: identification of the bottleneck resource, and determination of the appropriate performance level for the unsaturated resource. The proposed power management scheme uses the CPU utilization level of the Window Manager to tell which resource, if any, is the bottleneck. The Window Manager draws the final screen using the processed results delivered from the GPU; it is thus on the critical path that determines the quality of user experience and is executed purely by the CPU. The proposed scheme uses a weighted average of the Window Manager utilization to prevent excessive sensitivity and fluctuation. We classified Web apps into three categories using analysis results that measure frames-per-second (FPS) changes under diverse CPU/GPU clock combinations. The results showed that the capability of the CPU decides the user experience when the Window Manager utilization is above 90%, and consequently the proposed scheme decreases the performance level of the CPU by one step. On the contrary, when the utilization is less than 60%, the bottleneck usually lies in the GPU, and it is desirable to decrease the performance of the GPU. Even for the processing unit that is not on the critical path, an excessive performance drop can occur and adversely affect the user experience; therefore, our scheme lowers the frequency gradually until it finds an appropriate level, periodically checking the CPU utilization. The proposed scheme reduced energy consumption by 10.34% on average in comparison to the conventional Linux kernel, while worsening FPS by only 1.07% on average.
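The governor logic described above can be summarized in a few lines. The sketch below follows the thresholds stated in the abstract (90% and 60% Window Manager utilization); the smoothing weight, step granularity, and initial levels are assumed values for illustration, not parameters from the paper.

```python
# Sketch of the described governor policy. The 90% / 60% Window Manager
# utilization thresholds come from the abstract; the smoothing weight,
# step range, and initial levels are assumed.
class BottleneckAwareGovernor:
    def __init__(self, alpha=0.5):
        self.alpha = alpha   # EWMA weight (assumed)
        self.wm_avg = 0.75   # smoothed Window Manager CPU utilization
        self.cpu_level = 5   # performance steps, 0 = slowest (assumed)
        self.gpu_level = 5

    def update(self, wm_util):
        # Weighted average prevents excessive sensitivity and fluctuation.
        self.wm_avg = self.alpha * wm_util + (1 - self.alpha) * self.wm_avg
        if self.wm_avg > 0.90:
            # Per the abstract: in this regime the scheme decreases the
            # CPU performance level by one step.
            self.cpu_level = max(0, self.cpu_level - 1)
        elif self.wm_avg < 0.60:
            # Bottleneck usually lies in the GPU: lower its frequency
            # gradually, re-checking utilization every period.
            self.gpu_level = max(0, self.gpu_level - 1)
        return self.cpu_level, self.gpu_level

gov = BottleneckAwareGovernor()
for util in [0.95] * 6 + [0.40] * 6:
    print(gov.update(util))
```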

Keywords: interactive applications, power management, QoS, Web apps, WebGL

Procedia PDF Downloads 190
263 Effect of Different Parameters of Converging-Diverging Vortex Finders on Cyclone Separator Performance

Authors: V. Kumar, K. Jha

Abstract:

The present study explores design modifications of the vortex finder, as it has a significant effect on cyclone separator performance. It is evident that modifications of the vortex finder can improve the performance of the cyclone separator significantly. The study strives to improve the overall performance of cyclone separators by utilizing a converging-diverging (CD) vortex finder instead of the traditional uniform-diameter vortex finder. The velocity and pressure fields inside a Stairmand cyclone separator with a body diameter of 0.29 m and a vortex finder diameter of 0.1305 m are calculated. The commercial software Ansys Fluent v14.0 is used to simulate the flow field in a uniform-diameter cyclone and in six cyclones modified with CD vortex finders. The Reynolds stress model is used to simulate the effects of turbulence on the fluid and particulate phases, and the discrete phase model is used to calculate the particle trajectories. The performance of the modified vortex finders is compared with that of the traditional vortex finder. The effects of the lengths of the converging and diverging sections, the throat diameter, and the end diameters of the convergent-divergent section are also studied to achieve enhanced performance. The pressure and velocity fields inside the vortex finder are presented by means of contour plots and velocity vectors, and changes in the flow pattern due to variation of the geometrical variables are analysed. Results indicate that a convergent-divergent vortex finder is capable of a lower pressure drop than that achieved with a uniform-diameter vortex finder. It is also observed that the end diameters of the CD vortex finder, the throat diameter, and the length of the diverging part have a significant impact on cyclone separator performance. Increasing the lower diameter of the vortex finder by 66% results in an 11.5% decrease in the dimensionless pressure drop (Euler number) with a 5.8% decrease in separation efficiency, whereas a 50% decrease in the throat diameter gives a 5.9% increase in the Euler number with a 10.2% increase in separation efficiency, and increasing the length of the diverging part gives a 10.28% increase in the Euler number with a 5.74% increase in separation efficiency. Increasing the upper diameter of the CD vortex finder is seen to produce an adverse effect on performance, as it increases the pressure drop significantly and decreases the separation efficiency. Increasing the length of the converging section is not seen to affect the performance significantly. From the present study, it is concluded that convergent-divergent vortex finders can be used in place of uniform-diameter vortex finders to achieve better cyclone separator performance.
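For reference, the dimensionless pressure drop reported above is the Euler number. A minimal helper implementing the standard definition Eu = Δp / (0.5 ρ v²) is sketched below; the sample inlet conditions and pressure drop are illustrative values, not results from the simulations.

```python
# Euler number: dimensionless pressure drop across the cyclone.
def euler_number(pressure_drop_pa, density_kg_m3, inlet_velocity_m_s):
    """Eu = dp / (0.5 * rho * v^2), the standard cyclone pressure-drop metric."""
    return pressure_drop_pa / (0.5 * density_kg_m3 * inlet_velocity_m_s ** 2)

baseline = euler_number(1200.0, 1.2, 15.0)  # hypothetical uniform vortex finder
modified = baseline * (1 - 0.115)           # 11.5% reduction reported for the
                                            # CD finder with lower diameter +66%
print(f"Eu baseline: {baseline:.2f}, Eu modified: {modified:.2f}")
```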

Keywords: convergent-divergent vortex finder, cyclone separator, discrete phase modeling, Reynolds stress model

Procedia PDF Downloads 171