Search results for: conventional neural network
6021 DUSP16 Inhibition Rescues Neurogenic and Cognitive Deficits in Alzheimer's Disease Mice Models
Authors: Huimin Zhao, Xiaoquan Liu, Haochen Liu
Abstract:
The major challenge facing Alzheimer's Disease (AD) drug development is how to effectively improve cognitive function in clinical practice. Growing evidence indicates that stimulating hippocampal neurogenesis is a strategy for restoring cognition in animal models of AD. The mitogen-activated protein kinase (MAPK) pathway is a crucial factor in neurogenesis, which is negatively regulated by dual-specificity phosphatase 16 (DUSP16). Transcriptome analysis of post-mortem brain tissue revealed up-regulation of DUSP16 expression in AD patients. Additionally, DUSP16 was involved in regulating the proliferation and neural differentiation of neural progenitor cells (NPCs). Nevertheless, whether DUSP16 ameliorates cognitive disorders by influencing NPC differentiation in AD mice remains unclear. Our study demonstrates an association between DUSP16 SNPs and clinical progression in individuals with mild cognitive impairment (MCI). In addition, we found that increased DUSP16 expression in both the 3×Tg and SAMP8 mouse models of AD led to impairments in NPC differentiation. Silencing DUSP16 produced cognitive benefits, the induction of adult hippocampal neurogenesis (AHN), and synaptic plasticity in AD mice. Furthermore, we found that DUSP16 is involved in the process of NPC differentiation by regulating c-Jun N-terminal kinase (JNK) phosphorylation. Moreover, the increase in DUSP16 may be regulated by the ETS transcription factor ELK1, which binds to the promoter region of DUSP16. Loss of ELK1 resulted in decreased DUSP16 mRNA and protein levels. Our data uncover a potential regulatory role for DUSP16 in adult hippocampal neurogenesis and suggest a potential target for AD intervention.
Keywords: alzheimer's disease, cognitive function, DUSP16, hippocampal neurogenesis
Procedia PDF Downloads 72
6020 The Robot Physician's (Rp - 7) Management and Care in Unstable ICU Oncology Patients
Authors: Alisher Agzamov, Hanan Al Harbi
Abstract:
BACKGROUND: The timely assessment and treatment of ICU surgical and medical oncology patients is important for oncology surgeons, medical oncologists, and intensivists. We hypothesized that the use of the Robot Physician (RP-7) for ICU management and care can improve ICU physician rapid response to unstable ICU oncology patients. METHODS: This is a prospective study using a before-after, cohort-control design to test the effectiveness of RP. We used RP to make multidisciplinary ICU rounds and to attend emergency cases. Data concerning several aspects of the RP interaction, including the latency of the response, the problem being treated, the intervention that was ordered, and the type of information gathered using the RP, were documented. The effect of RP on ICU length of stay (LOS) and cost was assessed. RESULTS: The use of RP was associated with a reduction in latency of attending physician face-to-face response for routine and urgent pages compared to conventional care (RP: 10.2 +/- 3.3 minutes vs conventional: 220 +/- 80 minutes). The response latencies to oncology emergencies (8.0 +/- 2.8 vs 150 +/- 55 minutes) and to respiratory failure (12 +/- 4 vs 110 +/- 45 minutes) were reduced (P < .001), as was the LOS for patients with AML (5 days) and ARDS (10 days). There was an increase in ICU occupancy of 20% compared with the pre-robot era, and there was an ICU cost savings of KD2.5 million attributable to the use of RP. CONCLUSION: The use of RP enabled rapid face-to-face ICU intensivist-physician response to unstable ICU oncology patients and resulted in decreased ICU cost and LOS.
Keywords: robot physician, oncology patients, rp - 7 in icu management, cost and icu occupancy
Procedia PDF Downloads 82
6019 Omni-Modeler: Dynamic Learning for Pedestrian Redetection
Authors: Michael Karnes, Alper Yilmaz
Abstract:
This paper presents the application of the Omni-Modeler to pedestrian redetection. The pedestrian redetection task creates several challenges when applying deep neural networks (DNNs) due to the variation of pedestrian appearance with camera position, the variety of environmental conditions, and the specificity required to recognize one pedestrian from another. DNNs require significant training sets and are not easily adapted to changes in class appearances or changes in the set of classes held in their knowledge domain. Pedestrian redetection requires an algorithm that can actively manage its knowledge domain as individuals move in and out of the scene, as well as learn individual appearances from a few frames of a video. The Omni-Modeler is a dynamically learning few-shot visual recognition algorithm developed for tasks with limited training data availability. The Omni-Modeler adapts the knowledge domain of pre-trained deep neural networks to novel concepts with a calculated localized language encoder. The Omni-Modeler knowledge domain is generated by creating a dynamic dictionary of concept definitions, which are directly updatable as new information becomes available. Query images are identified through nearest-neighbor comparison to the learned object definitions. The study presented in this paper evaluates the Omni-Modeler's performance in re-identifying individuals as they move through a scene in both single-camera and multi-camera tracking applications. The results demonstrate that the Omni-Modeler shows potential for cross-camera pedestrian redetection and is highly effective for single-camera redetection, with 93% accuracy across 30 individuals using 64 example images for each individual.
Keywords: dynamic learning, few-shot learning, pedestrian redetection, visual recognition
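The dictionary-plus-nearest-neighbor scheme described in the abstract can be illustrated with a minimal sketch. Everything here is hypothetical: the encoder is a fixed random projection standing in for pre-trained DNN features, and the `ConceptDictionary` class and its mean-prototype design are illustrative choices, not the Omni-Modeler's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
PROJ = rng.normal(size=(64, 16))  # stand-in "encoder": a fixed random projection

def encode(image_vec):
    """Map a raw image feature vector to a unit-norm embedding."""
    z = image_vec @ PROJ
    return z / np.linalg.norm(z)

class ConceptDictionary:
    """Dynamic dictionary of identity prototypes, updatable at any time."""

    def __init__(self):
        self.prototypes = {}  # identity -> mean embedding of its examples

    def update(self, identity, image_vecs):
        """Add or refresh an identity from a handful of example frames."""
        emb = np.mean([encode(v) for v in image_vecs], axis=0)
        self.prototypes[identity] = emb / np.linalg.norm(emb)

    def remove(self, identity):
        """Drop an identity once it has left the scene."""
        self.prototypes.pop(identity, None)

    def query(self, image_vec):
        """Nearest-neighbor identification by cosine similarity."""
        q = encode(image_vec)
        return max(self.prototypes, key=lambda k: float(q @ self.prototypes[k]))
```

Identities can be added from a few frames and removed when a pedestrian leaves the scene, which mirrors the actively managed knowledge domain the abstract describes.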
Procedia PDF Downloads 76
6018 A Design of the Infrastructure and Computer Network for Distance Education, Online Learning via New Media, E-Learning and Blended Learning
Authors: Sumitra Nuanmeesri
Abstract:
The research studies, analyzes, and designs a model of the infrastructure and computer networks for distance education, online learning via new media, e-learning, and blended learning. The information collected from the study and analysis process was evaluated with the index of item-objective congruence (IOC) by 9 specialists in order to design the model. Evaluation of the model by the sample of 9 specialists yielded a mean of 3.85. The results showed that the designed infrastructure and computer networks are appropriate to a great extent.
Keywords: blended learning, new media, infrastructure and computer network, tele-education, online learning
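The IOC screening mentioned above is commonly computed as the mean of specialist ratings of -1 (clearly incongruent), 0 (unsure), or +1 (clearly congruent) for each item. A minimal sketch assuming that convention; the 0.5 retention threshold is a common choice, not a detail given in the abstract:

```python
def ioc(ratings):
    """Index of item-objective congruence: the mean of specialist ratings,
    where each rating is -1 (incongruent), 0 (unsure), or +1 (congruent)."""
    return sum(ratings) / len(ratings)

def item_retained(ratings, threshold=0.5):
    """Keep an item when its IOC reaches the cut-off (0.5 is a common choice)."""
    return ioc(ratings) >= threshold
```

With 9 specialists, an item needs a clear majority of +1 ratings to survive the screen.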
Procedia PDF Downloads 402
6017 Molecular Characterization of White Spot Syndrome Virus in Some Cultured Penaeid Shrimps of Coastal Regions in Bangladesh
Authors: Md. Baki Billah, Suraiya Parveen, Shuvra Kanti Dey
Abstract:
Bangladesh earns a lot of foreign currency by exporting shrimp, but this industry is facing a tremendous problem due to infection by white spot syndrome virus (WSSV). This study was undertaken to develop a rapid detection method for WSSV. A total of 240 shrimp samples collected from 12 shrimp farms in different coastal regions (Satkhira, Khulna, and Bagerhat) were analyzed by conventional PCR using VP28 and VP664 gene-specific primers. In Satkhira, Bagerhat, and Khulna, 39, 41, and 29 samples were found WSSV positive, respectively. Real-time PCR using a 71-bp amplicon of the VP664 gene correlated well with the conventional PCR data. The prevalence rates of WSSV among the collected 240 samples were Satkhira 38%, Khulna 47%, and Bagerhat 50%. Molecular analysis of the VP28 gene sequences of WSSV revealed that the Bangladeshi strains are phylogenetically affiliated with strains from India. This work concluded that WSSV infections are widely distributed in cultured shrimp of the coastal regions of Bangladesh. Physico-chemical parameters were within the range suitable for fish culture.
Keywords: coastal regions of Bangladesh, PCR, shrimp, white spot syndrome virus
Procedia PDF Downloads 128
6016 Analysis of Efficacy and Safety of Abatacept for Rheumatoid Arthritis: A Systematic Review and Meta Analysis
Authors: Hamida Memon
Abstract:
Rheumatoid arthritis (RA) is a persistent inflammation of the joints caused by an aggressive immune reaction leading to pain, stiffness, and limited function. Abatacept, a selective co-stimulation modulator, is a promising option for treatment and may have a better safety profile compared to other interventions. This meta-analysis aims at assessing the effectiveness and safety of abatacept in contrast to various RA treatments such as placebos, biological DMARDs, and conventional DMARDs. The analysis assesses how abatacept influences disease activity, pain intensity, and overall patient functionality. It weighs the risk profile of abatacept against other drugs such as tocilizumab, with the risk figures being lower for abatacept.
Keywords: rheumatoid arthritis, abatacept, control group, bone disease
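Meta-analyses like this typically pool per-study effect sizes using inverse-variance weights. A minimal fixed-effect sketch; the example effect sizes below are invented (the abstract reports no numbers), and a real analysis of heterogeneous RA trials would more likely use a random-effects model:

```python
import math

def pooled_effect(effects, variances, z=1.96):
    """Fixed-effect (inverse-variance) pooling of per-study effect sizes.
    Returns the pooled estimate with its 95% confidence interval."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, (est - z * se, est + z * se)
```

Each study contributes in proportion to its precision, so large, tight trials dominate the pooled estimate.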
Procedia PDF Downloads 24
6015 Growing Architecture, Technical Product Harvesting of Near Net Shape Building Components
Authors: Franziska Moser, Martin Trautz, Anna-Lena Beger, Manuel Löwer, Jörg Feldhusen, Jürgen Prell, Alexandra Wormit, Björn Usadel, Christoph Kämpfer, Thomas-Benjamin Seiler, Henner Hollert
Abstract:
The demand for bio-based materials and components in architecture has increased in recent years due to society's heightened environmental awareness. Nowadays, most components are developed via a substitution approach, which aims at replacing conventional components with natural alternatives that are then processed, shaped, and manufactured to fit the desired application. This contribution introduces a novel approach to the development of bio-based products that decreases resource consumption and increases recyclability. In this approach, natural organisms like plants or trees are not used in a processed form, but grow into a near net shape before being harvested and utilized as building components. By minimizing the conventional production steps, the amount of resources used in manufacturing decreases whereas the recyclability increases. This paper presents the approach of technical product harvesting, explains the theoretical basis as well as the matching process of product requirements and biological properties, and shows first results of the growth manipulation studies.
Keywords: design with nature, eco manufacturing, sustainable construction materials, technical product harvesting
Procedia PDF Downloads 500
6014 Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods
Authors: Mohammad Arabi
Abstract:
The development of advanced technologies in the field of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings to prevent sudden breakdowns and operational issues. Additionally, the feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is also considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals. Additionally, t-tests confirmed statistically significant differences between the features extracted from normal and faulty signals. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. 
This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. This approach not only enhances fault detection accuracy but also simplifies and streamlines the detection process. However, challenges such as data standardization and the cost of implementing advanced monitoring systems must also be considered. Utilizing temporal and frequency features in fault detection of electric motor bearings, along with advanced machine learning methods, offers an effective solution for preventing failures and ensuring the operational health of electric motors. Given the promising results of this research, it is recommended that this technology be more widely adopted in industrial maintenance processes.
Keywords: electric motor, fault detection, frequency features, temporal features
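The feature-extraction step the study describes can be sketched briefly. This is an illustrative example, not the study's actual pipeline: the feature set (RMS, peak, crest factor, kurtosis, dominant frequency) is a common choice for bearing vibration diagnostics, and the signal below is synthetic.

```python
import numpy as np

def temporal_features(x):
    """Common time-domain features for bearing vibration signals."""
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    kurtosis = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
    return {"rms": rms, "peak": peak, "crest": peak / rms, "kurtosis": kurtosis}

def dominant_frequency(x, fs):
    """Frequency (Hz) of the largest spectral peak, ignoring the DC bin."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[1 + np.argmax(spectrum[1:])]

# Synthetic example: a clean 50 Hz tone sampled at 1 kHz for one second.
fs = 1000
t = np.arange(0, 1, 1.0 / fs)
signal = np.sin(2 * np.pi * 50 * t)
```

In a real study, feature vectors computed for normal and faulty recordings would feed the ANOVA/t-test screening and then the SVM or neural-network classifier.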
Procedia PDF Downloads 48
6013 Characterization and Correlation of Neurodegeneration and Biological Markers of Model Mice with Traumatic Brain Injury and Alzheimer's Disease
Authors: J. DeBoard, R. Dietrich, J. Hughes, K. Yurko, G. Harms
Abstract:
Alzheimer’s disease (AD) is a predominant type of dementia and is likely a major cause of neural network impairment. The pathogenesis of this neurodegenerative disorder has yet to be fully elucidated. There are currently no known cures for the disease, and the best hope is to be able to detect it early enough to impede its progress. Beyond age and genetics, another prevalent risk factor for AD might be traumatic brain injury (TBI), which has similar neurodegenerative hallmarks. Our research focuses on obtaining information and methods to be able to predict when neurodegenerative effects might occur at a clinical level by observation of events at a cellular and molecular level in model mice. First, we wish to introduce our evidence that brain damage can be observed via brain imaging prior to the noticeable loss of neuromuscular control in model mice of AD. We then show our evidence that some blood biomarkers might be able to be early predictors of AD in the same model mice. Thus, we were interested to see if we might be able to predict which mice might show long-term neurodegenerative effects due to differing degrees of TBI and what level of TBI causes further damage and earlier death to the AD model mice. Upon application of TBIs via an apparatus to effectively induce extremely mild to mild TBIs, wild-type (WT) mice and AD mouse models were tested for cognition, neuromuscular control, olfactory ability, blood biomarkers, and brain imaging. Experiments are currently still in process, and more results are therefore forthcoming. Preliminary data suggest that neuromotor control diminishes as well as olfactory function for both AD and WT mice after the administration of five consecutive mild TBIs. Also, seizure activity increases significantly for both AD and WT after the administration of the five TBI treatment. 
If future data support these findings, important implications about the effect of TBI on those at risk for AD might be possible.
Keywords: Alzheimer's disease, blood biomarker, neurodegeneration, neuromuscular control, olfaction, traumatic brain injury
Procedia PDF Downloads 141
6012 How to Enhance Performance of Universities by Implementing Balanced Scorecard with Using FDM and ANP
Authors: Neda Jalaliyoon, Nooh Abu Bakar, Hamed Taherdoost
Abstract:
The present research recommends a balanced scorecard (BSC) framework to appraise the performance of universities. As the original balanced scorecard model has four perspectives, the present research implements BSC with the same model, using the perspectives "financial", "customer", "internal process", and "learning and growth". By applying the fuzzy Delphi method (FDM) and a questionnaire, sixteen measures of performance were identified. Moreover, the weights of the selected indicators were determined using the analytic network process (ANP). Results indicated that the most important BSC aspects were internal process (0.3149), customer (0.2769), learning and growth (0.2049), and financial (0.2033), respectively. The proposed BSC framework can help universities to enhance their efficiency in a competitive environment.
Keywords: balanced scorecard, higher education, fuzzy delphi method, analytic network process (ANP)
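ANP derives priorities from pairwise comparison matrices via the principal eigenvector. A minimal sketch of that single step, using a hypothetical perfectly consistent comparison matrix reconstructed from the reported priorities; the full ANP supermatrix computation, which models interdependencies between criteria, is considerably more involved:

```python
import numpy as np

def priority_weights(pairwise, iters=100):
    """Priorities from a pairwise comparison matrix: the normalized
    principal eigenvector, computed here by power iteration."""
    n = pairwise.shape[0]
    w = np.ones(n) / n
    for _ in range(iters):
        w = pairwise @ w
        w = w / w.sum()
    return w
```

For a perfectly consistent matrix (entry i,j equals w_i / w_j), the recovered priorities match the generating weights exactly; real expert judgments are only approximately consistent.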
Procedia PDF Downloads 426
6011 Intensifying Approach for Separation of Bio-Butanol Using Ionic Liquid as Green Solvent: Moving Towards Sustainable Biorefinery
Authors: Kailas L. Wasewar
Abstract:
Biobutanol has been considered a potential and alternative biofuel relative to the most popular biodiesel and bioethanol. End-product toxicity is the major problem in commercialization of fermentation-based processes, which can be reduced to some extent by removing biobutanol simultaneously. Several techniques have been investigated for removing butanol from fermentation broth, such as stripping, adsorption, liquid–liquid extraction, pervaporation, and membrane solvent extraction. Liquid–liquid extraction can be performed with high selectivity, and it is possible to carry it out inside the fermenter. Conventional solvents have a few drawbacks, including toxicity, loss of solvent, high cost, etc. Hence alternative solvents must be explored. Room temperature ionic liquids (RTILs), composed entirely of ions, are liquid at room temperature and have negligible vapor pressure, non-flammability, and physiochemical properties tunable for a particular application, which terms them "designer solvents". Ionic liquids (ILs) have recently gained much attention as alternatives to organic solvents in many processes. In particular, ILs have been used as alternative solvents for liquid–liquid extraction. Their negligible vapor pressure allows the extracted products to be separated from ILs by conventional low-pressure distillation, with the potential for saving energy. Morpholinium-, imidazolium-, ammonium-, and phosphonium-based ionic liquids, among others, have been employed for the separation of biobutanol. In the present chapter, basic concepts of ionic liquids and their application in separation are presented. Further, types of ionic liquids, including conventional, functionalized, polymeric, supported membrane, and other ionic liquids, are explored. The effect of various performance parameters on the separation of biobutanol by ionic liquids is also discussed and compared for different cation- and anion-based ionic liquids.
A typical investigation methodology was adopted: equal amounts of biobutanol solution and ionic liquid were contacted for a specific time, say 30 minutes, to ensure equilibrium. The biobutanol phase was then analyzed using GC to determine the concentration of biobutanol, and a material balance was used to find the concentration in the ionic liquid.
Keywords: biobutanol, separation, ionic liquids, sustainability, biorefinery, waste biomass
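The material-balance step described above can be sketched as follows. The function name and the illustrative concentrations are assumptions; a real study would report concentrations in consistent units (e.g., g/L) and phase volumes measured after equilibration:

```python
def extraction_metrics(c0, c_aq, v_aq, v_il):
    """Distribution coefficient D and extraction efficiency (%) from a
    material balance on the aqueous phase after equilibration.
    c0, c_aq: initial and final aqueous biobutanol concentrations;
    v_aq, v_il: aqueous and ionic-liquid phase volumes (same units)."""
    c_il = (c0 - c_aq) * v_aq / v_il  # solute transferred into the ionic liquid
    distribution = c_il / c_aq
    efficiency = (c0 - c_aq) / c0 * 100.0
    return distribution, efficiency
```

A higher distribution coefficient means the ionic liquid pulls more biobutanol out of the broth per contact stage.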
Procedia PDF Downloads 91
6010 Overview of a Quantum Model for Decision Support in a Sensor Network
Authors: Shahram Payandeh
Abstract:
This paper presents an overview of a model which can be used as a part of a decision support system when fusing information from multiple sensing environments. Data fusion has been widely studied in the past few decades, and numerous frameworks have been proposed to facilitate the decision-making process under uncertainties. Multi-sensor data fusion technology plays an increasingly significant role in people tracking and activity recognition. This paper presents an overview of a quantum model as a part of a decision-making process in the context of multi-sensor data fusion. The paper presents basic definitions and relationships associating the decision-making process and the quantum model formulation in the presence of uncertainties.
Keywords: quantum model, sensor space, sensor network, decision support
Procedia PDF Downloads 227
6009 Fault Ride Through Management in Renewable Power Park
Authors: Mohd Zamri Che Wanik
Abstract:
This paper presents the management of the Fault Ride Through event within a Solar Farm during a grid fault. The modeling and simulation of a photovoltaic (PV) system with battery energy storage connected to the power network are described, together with the modeling approach and the study analysis performed. The model and operation scenarios are simulated using a digital simulator for different scenarios. The dynamic response of the system when subjected to a sudden self-clearing temporary fault is presented. The capability of the PV system and battery storage of riding through the power system fault and, at the same time, supporting the local grid by injecting fault current is demonstrated. For each case, the different control methods to achieve the objective of supporting the grid according to grid code requirements are presented and explained. The inverter modeling approach is presented and described.
Keywords: fault ride through, solar farm, grid code, power network
Procedia PDF Downloads 51
6008 Complex Network Analysis of Seismicity and Applications to Short-Term Earthquake Forecasting
Authors: Kahlil Fredrick Cui, Marissa Pastor
Abstract:
Earthquakes are complex phenomena, exhibiting complex correlations in space, time, and magnitude. Recently, the concept of complex networks has been used to shed light on the statistical and dynamical characteristics of regional seismicity. In this work, we study the relationships and interactions of seismic regions in Chile, Japan, and the Philippines through weighted and directed complex network analysis. Geographical areas are digitized into cells of fixed dimensions, which in turn become the nodes of the network when an earthquake has occurred therein. Nodes are linked if a correlation exists between them, as determined and measured by a correlation metric. The networks are found to be scale-free, exhibiting power-law behavior in the distributions of their different centrality measures: the in- and out-degree and the in- and out-strength. Evidence is also found of preferential interaction between seismically active regions through their degree-degree correlations, suggesting that seismicity is dictated by the activity of a few active regions. The importance of a seismic region to the overall seismicity is measured using a generalized centrality metric taken to be an indicator of its activity or passivity. The spatial distribution of earthquake activity indicates the areas where strong earthquakes have occurred in the past, while the passivity distribution points toward the likely locations where an earthquake would occur whenever another one happens elsewhere. Finally, we propose a method that projects the location of the next possible earthquake using the generalized centralities coupled with correlations calculated between the latest earthquakes and a geographical point in the future.
Keywords: complex networks, correlations, earthquake, hazard assessment
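The weighted, directed network construction described above can be sketched in a few lines. The edge list below is invented; in the actual study, nodes are grid cells and edge weights come from the correlation metric between cells:

```python
from collections import defaultdict

def network_metrics(edges):
    """In/out degree and in/out strength for a weighted directed network.
    edges: iterable of (source_cell, target_cell, correlation_weight)."""
    out_deg = defaultdict(int)
    in_deg = defaultdict(int)
    out_str = defaultdict(float)
    in_str = defaultdict(float)
    for src, dst, w in edges:
        out_deg[src] += 1   # one more outgoing link from src
        in_deg[dst] += 1    # one more incoming link to dst
        out_str[src] += w   # strength accumulates the edge weights
        in_str[dst] += w
    return out_deg, in_deg, out_str, in_str
```

Fitting the tail of these degree and strength distributions is what reveals the power-law (scale-free) behavior reported in the abstract.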
Procedia PDF Downloads 212
6007 GIS Based Public Transport Accessibility of Lahore using PTALs Model
Authors: Naveed Chughtai, Salman Atif, Azhar Ali Taj, Murtaza Asghar Bukhari
Abstract:
Accessible transport systems play a crucial role in infrastructure management and ease of access to destinations. Knowledge of service coverage and service-deprived areas is thus a prerequisite for devising policies. Integration of the PTALs model with GIS network analysis models (service area analysis, closest facility analysis) facilitates the analysis of deprived areas. The models presented in this research determine accessibility. The empirical evidence suggests that the current bus network caters to only 18.5% of the whole population. Using the network analysis results as inputs for PTALs, it is seen that the excellently accessible index bands cover a limited area, while 78.8% of the area is totally deprived of any service. To cater to the unserved catchment, new route alignments are proposed while keeping in focus the socio-economic characteristics, land-use type, and net population density of the deprived area. The change in accessibility with the proposed routes shows a 10% increase in service delivery, and the enhancement in terms of served population is up to 20.4%. The PTALs result shows a decrease of 60 km² in the unserved band. The results of this study can be used for planning, transport infrastructure management, allocation of new route alignments in combination with future land-use development, and for adequate spatial distribution of service access points.
Keywords: GIS, public transport accessibility, PTALs, accessibility index, service area analysis, closest facility analysis
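The PTAL computation itself can be sketched roughly as follows. This follows the general TfL-style recipe (an equivalent doorstep frequency derived from walk time plus scheduled waiting time, with the best route counted in full and the rest at half weight), but the reliability margin and the example walk times and frequencies are assumptions for illustration, not values or parameters from this study:

```python
def ptal_accessibility_index(services, reliability_min=2.0):
    """Simplified PTAL-style accessibility index for one point of interest.
    services: list of (walk_time_min, departures_per_hour) per route.
    SWT = half the headway plus an assumed reliability margin;
    EDF = 30 / total access time;
    AI = best EDF + 0.5 * sum of the remaining EDFs."""
    edfs = []
    for walk_time, freq in services:
        swt = 0.5 * (60.0 / freq) + reliability_min
        edfs.append(30.0 / (walk_time + swt))
    edfs.sort(reverse=True)
    return edfs[0] + 0.5 * sum(edfs[1:])
```

Computing this index on a grid of points and binning the values produces the accessibility bands the abstract maps across the city.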
Procedia PDF Downloads 438
6006 Effects of Gamification on Lower Secondary School Students’ Motivation and Engagement
Authors: Goh Yung Hong, Mona Masood
Abstract:
This paper explores the effects of gamification on lower secondary school students' motivation and engagement in the classroom. A two-group posttest-only experimental design was employed to study the influence of the gamification teaching method (GTM) when compared with the conventional teaching method (CTM) on 60 lower secondary school students. The Student Engagement Instrument (SEI) and the Intrinsic Motivation Inventory (IMI) were used to assess students' intrinsic motivation and engagement level towards the respective teaching method. Findings indicate that students who completed the GTM lesson were significantly higher in intrinsic motivation to learn than those from the CTM. Although the result was not significant and there was only a marginal difference in the engagement means, GTM still shows better potential in raising students' engagement in class when compared with CTM. This finding suggests that the GTM is likely to address the current issue of low motivation to learn and low engagement in class among lower secondary school students in Malaysia. On the other hand, despite not being significant, a higher mean indicates that CTM positively contributes to higher peer support for learning and a better teacher-student relationship when compared with GTM. In conclusion, the gamification approach is flexible and can be adapted to much learning content to enhance the intrinsic motivation to learn and, to some extent, encourage better student engagement in class.
Keywords: conventional teaching method, gamification teaching method, motivation, engagement
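Group comparisons in posttest-only designs like this are typically made with an independent-samples t test. A minimal sketch of Welch's t statistic; the scores below are invented for illustration, as the abstract does not report raw data or the software used:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples, suitable for a
    posttest-only two-group comparison with possibly unequal variances."""
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)  # sample variance
    var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))
```

The statistic is then compared against the t distribution (with Welch-Satterthwaite degrees of freedom) to decide significance, which is the step behind phrases like "significantly higher" in the abstract.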
Procedia PDF Downloads 526
6005 Applying Multiplicative Weight Update to Skin Cancer Classifiers
Authors: Animish Jain
Abstract:
This study deals with using Multiplicative Weight Update within artificial intelligence and machine learning to create models that can diagnose skin cancer using microscopic images of cancer samples. In this study, the multiplicative weight update method is used to take the predictions of multiple models to try and acquire more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the Multiplicative Weight Update system. These models are trained on pictures of skin cancer from the ISIC-Archive, to look for patterns to label unseen scans as either benign or malignant. These models are utilized in a multiplicative weight update algorithm which takes into account the precision and accuracy of each model through each successive guess to apply weights to their guess. These guesses and weights are then analyzed together to try and obtain the correct predictions. The research hypothesis for this study stated that there would be a significant difference in the accuracy of the three models and the Multiplicative Weight Update system. The SVMC model had an accuracy of 77.88%. The CNN model had an accuracy of 85.30%. The Logistic Regression model had an accuracy of 79.09%. Using Multiplicative Weight Update, the algorithm received an accuracy of 72.27%. The final conclusion that was drawn was that there was a significant difference in the accuracy of the three models and the Multiplicative Weight Update system. The conclusion was made that using a CNN model would be the best option for this problem rather than a Multiplicative Weight Update system. This is due to the possibility that Multiplicative Weight Update is not effective in a binary setting where there are only two possible classifications. 
In a categorical setting with multiple classes and groupings, a Multiplicative Weight Update system might become more proficient as it takes into account the strengths of multiple different models to classify images into multiple categories rather than only two categories, as shown in this study. This experimentation and computer science project can help to create better algorithms and models for the future of artificial intelligence in the medical imaging field.
Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer
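The weighting scheme the study describes follows the classic multiplicative weights pattern: every expert (model) starts with equal weight, and each wrong prediction multiplies that expert's weight by a penalty factor. A minimal sketch with invented predictions and a simple 0/1 loss; the study's exact update rule and learning rate are not given in the abstract:

```python
def mwu_combine(predictions, labels, eta=0.5):
    """Multiplicative weight update over a set of expert models.
    predictions: dict model_name -> list of 0/1 guesses, one per round.
    labels: true 0/1 labels. Returns the final normalized weights."""
    weights = {m: 1.0 for m in predictions}
    for t, y in enumerate(labels):
        for m in weights:
            if predictions[m][t] != y:      # penalize each wrong guess
                weights[m] *= (1.0 - eta)
    total = sum(weights.values())
    return {m: w / total for m, w in weights.items()}
```

After enough rounds the weights concentrate on the most accurate expert, so a weighted-majority vote approaches the best single model; in the binary setting studied here that ceiling is exactly why the standalone CNN could outperform the ensemble.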
Procedia PDF Downloads 79
6004 Technical Analysis of Combined Solar Water Heating Systems for Cold Climate Regions
Authors: Hossein Lotfizadeh, André McDonald, Amit Kumar
Abstract:
Renewable energy resources, which can supplement space and water heating for residential buildings, can have a noticeable impact on natural gas consumption and air pollution. This study considers a technical analysis of a combined solar water heating system with evacuated tube solar collectors for different solar coverage, ranging from 20% to 100% of the total roof area of a typical residential building located in Edmonton, Alberta, Canada. The alternative heating systems were conventional (non-condensing) and condensing tankless water heaters and condensing boilers that were coupled to solar water heating systems. The performance of the alternative heating systems was compared to a traditional heating system, consisting of a conventional boiler, applied to houses of various gross floor areas. A comparison among the annual natural gas consumption, carbon dioxide (CO2) mitigation, and emissions for the various house sizes indicated that the combined solar heating system can reduce the natural gas consumption and CO2 emissions, and increase CO2 mitigation for all the systems that were studied. The results suggest that solar water heating systems are potentially beneficial for residential heating system applications in terms of energy savings and CO2 mitigation.
Keywords: CO2 emissions, CO2 mitigation, natural gas consumption, solar water heating system
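The link between gas savings and CO2 mitigation in studies like this comes down to an emission-factor calculation. A minimal sketch; the heating load, solar fraction, boiler efficiency, gas energy content (~10.3 kWh/m³), and emission factor (~1.9 kg CO2 per m³ of natural gas) are assumed typical values for illustration, not figures from this study:

```python
def solar_gas_savings_m3(annual_load_kwh, solar_fraction,
                         boiler_eff=0.85, gas_energy_kwh_per_m3=10.3):
    """Natural gas no longer burned when a solar fraction of the
    heating load is met by the collectors instead of the boiler."""
    return annual_load_kwh * solar_fraction / (boiler_eff * gas_energy_kwh_per_m3)

def co2_avoided_kg(gas_saved_m3, emission_factor_kg_per_m3=1.9):
    """CO2 mitigation from the displaced natural gas."""
    return gas_saved_m3 * emission_factor_kg_per_m3
```

Note that a lower boiler efficiency increases the gas displaced per kWh of solar heat, which is why pairing solar collectors with conventional (non-condensing) equipment yields the largest relative savings.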
Procedia PDF Downloads 324
6003 Single Cell Sorter Driven by Resonance Vibration of Cell Culture Substrate
Authors: Misa Nakao, Yuta Kurashina, Chikahiro Imashiro, Kenjiro Takemura
Abstract:
The Research Goal: With the growing demand for regenerative medicine, an effective mass cell culture process is required. In a repetitive subculture process for proliferating cells, preparing single cell suspension which does not contain any cell aggregates is highly required because cell aggregates often raise various undesirable phenomena, e.g., apoptosis and decrease of cell proliferation. Since cell aggregates often occur in cell suspension during conventional subculture processes, this study proposes a single cell sorter driven by a resonance vibration of a cell culture substrate. The Method and the Result: The single cell sorter is simply composed of a cell culture substrate and a glass pipe vertically placed against the cell culture substrate with a certain gap corresponding to a cell diameter. The cell culture substrate is made of biocompatible stainless steel with a piezoelectric ceramic disk glued to the bottom side. Applying AC voltage to the piezoelectric ceramic disk, an out-of-plane resonance vibration with a single nodal circle of the cell culture substrate can be excited at 5.5 kHz. By doing so, acoustic radiation force is emitted, and then cell suspension containing only single cells is pumped into the pipe and collected. This single cell sorter is effective to collect single cells selectively in spite of its quite simple structure. We collected C2C12 myoblast cell suspension by the single cell sorter with the vibration amplitude of 12 µmp-p and evaluated the ratio of single cells in number against the entire cells in the suspension. Additionally, we cultured the collected cells for 72 hrs and measured the number of cells after the cultivation in order to evaluate their proliferation. As a control sample, we also collected cell suspension by conventional pipetting, and evaluated the ratio of single cells and the number of cells after the 72-hour cultivation. 
The ratio of single cells in the cell suspension collected by the single cell sorter was 98.2%. This ratio was 9.6% higher than that collected by conventional pipetting (statistically significant). Moreover, the number of cells cultured for 72 hrs after the collection by the single cell sorter yielded statistically more cells than that collected by pipetting, resulting in a 13.6% increase in proliferated cells. These results suggest that the cell suspension collected by the single cell sorter driven by the resonance vibration hardly contains cell aggregates whose diameter is larger than the gap between the cell culture substrate and the pipe. Consequently, the cell suspension collected by the single cell sorter maintains high cell proliferation. Conclusions: In this study, we developed a single cell sorter capable of sorting and pumping single cells by a resonance vibration of a cell culture substrate. The experimental results show the single cell sorter collects single cell suspension which hardly contains cell aggregates. Furthermore, the collected cells show higher proliferation than that of cells collected by conventional pipetting. This means the resonance vibration of the cell culture substrate can benefit us with the increase in efficiency of mass cell culture process for clinical applications.Keywords: acoustic radiation force, cell proliferation, regenerative medicine, resonance vibration, single cell sorter
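The statistically significant 9.6 percentage-point difference in single-cell ratio reported above can be illustrated with a standard two-proportion z-test. A minimal sketch; the sample sizes below are hypothetical, since the abstract does not report raw cell counts:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test statistic (pooled) for comparing single-cell ratios."""
    x1, x2 = p1 * n1, p2 * n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Reported ratios: 98.2% (sorter) vs 88.6% (pipetting, i.e., 9.6 points lower).
# The counts of 500 cells per condition are illustrative assumptions.
z = two_proportion_z(0.982, 500, 0.886, 500)
```

With these assumed sample sizes the statistic far exceeds the 1.96 threshold for significance at the 5% level, consistent with the abstract's claim.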
Procedia PDF Downloads 263
6002 Comparative Study of Essential Oils Extracted from Algerian Citrus fruits Using Microwaves and Hydrodistillation
Authors: Ferhat Mohamed Amine, Boukhatem Mohamed Nadjib, Chemat Farid
Abstract:
Solvent-free microwave extraction (SFME) is a combination of microwave heating and distillation, performed at atmospheric pressure without adding any solvent or water. Isolation and concentration of volatile compounds are performed in a single stage. SFME extraction of orange essential oil was studied using fresh orange peel from Valencia late cultivar oranges as the raw material. SFME has been compared with a conventional technique, which used a Clevenger apparatus with hydro-distillation (HD). SFME and HD were compared in terms of extraction time, yield, chemical composition and quality of the essential oil, and efficiency and costs of the process. Extraction of essential oils from orange peels with SFME was better in terms of energy saving, extraction time (30 min versus 3 h), oxygenated fraction (11.7% versus 7.9%), product yield (0.42% versus 0.39%) and product quality. Orange peels treated by SFME and HD were observed by scanning electron microscopy (SEM). Micrographs provide evidence of more rapid opening of essential oil glands treated by SFME, in contrast to conventional hydro-distillation.Keywords: hydro-distillation, essential oil, microwave, orange peel, solvent-free microwave extraction (SFME)
Procedia PDF Downloads 485
6001 Numerical Modelling of Hydrodynamic Drag and Supercavitation Parameters for Supercavitating Torpedoes
Authors: Sezer Kefeli, Sertaç Arslan
Abstract:
In this paper, supercavitation phenomena and parameters are explained, and hydrodynamic design approaches are investigated for supercavitating torpedoes. In addition, drag force calculation methods for supercavitating vehicles are obtained. Basically, conventional heavyweight torpedoes reach up to ~50 knots by classic hydrodynamic techniques; supercavitating torpedoes, on the other hand, may theoretically reach up to ~200 knots. However, in order to reach such high speeds, hydrodynamic viscous forces have to be reduced or eliminated completely. This necessity revived the supercavitation phenomenon, which is implemented on conventional torpedoes. Supercavitation is a type of cavitation that is more stable and continuous than other cavitation types. The general principle of supercavitation is to separate the underwater vehicle from the water phase by surrounding the vehicle with cavitation bubbles. This allows the torpedo to operate at high speeds through the water in fully developed cavitation. Conventional torpedoes are called supercavitating torpedoes when the torpedo moves in a cavity envelope due to a cavitator in the nose section and a solid fuel rocket engine in the rear section. There are two types of supercavitation phase: natural and artificial. In this study, natural cavitation is investigated on disk cavitators based on numerical methods. Once the supercavitation characteristics and drag reduction of natural cavitation are studied on a CFD platform, the results are verified with the empirical equations. The supercavitation parameters investigated and compared with empirical results are the cavitation number (σ), the pressure distribution along the axial axis, the drag coefficient (C_d) and drag force (D), the cavity wall velocity (U_c), and the dimensionless cavity shape parameters, namely cavity length (L_c/d_c), cavity diameter (d_m/d_c) and cavity fineness ratio (L_c/d_m).
This paper has the characteristics of a feasibility study, carrying out numerical solutions of the supercavitation phenomena and comparing them with empirical equations.Keywords: CFD, cavity envelope, high speed underwater vehicles, supercavitating flows, supercavitation, drag reduction, supercavitation parameters
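The cavitation number and disk-cavitator drag referenced in the abstract follow standard relations. A minimal sketch, assuming the widely quoted empirical disk-cavitator law C_d = C_d0(1 + σ) with C_d0 ≈ 0.82; the operating-point values (depth, speed, cavitator diameter) are illustrative, not taken from the paper:

```python
import math

def cavitation_number(p_inf, p_c, rho, v):
    """sigma = (p_inf - p_c) / (0.5 * rho * v^2): free-stream vs cavity pressure."""
    return (p_inf - p_c) / (0.5 * rho * v * v)

def disk_cavitator_drag(sigma, rho, v, d_n, c_d0=0.82):
    """Empirical disk-cavitator drag law C_d = C_d0 * (1 + sigma) (assumed here)."""
    c_d = c_d0 * (1.0 + sigma)
    area = math.pi * d_n ** 2 / 4.0           # frontal area of the disk cavitator
    drag = 0.5 * rho * v ** 2 * area * c_d    # drag force D
    return c_d, drag

# Illustrative: ~200 knots (~103 m/s), 10 m depth (hydrostatic + atmospheric),
# water vapor pressure ~2.34 kPa, 10 cm cavitator diameter.
p_inf = 101325 + 1000 * 9.81 * 10
sigma = cavitation_number(p_inf, 2340, 1000, 103.0)
c_d, drag = disk_cavitator_drag(sigma, 1000, 103.0, 0.10)
```

At these values σ is well below 0.1, the regime usually associated with a fully developed supercavity.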
Procedia PDF Downloads 173
6000 Communication in a Heterogeneous Ad Hoc Network
Authors: C. Benjbara, A. Habbani
Abstract:
Wireless networks are used more and more in every new technology or feature, especially those without infrastructure (ad hoc mode), which provide a low-cost alternative to infrastructure-mode wireless networks and great flexibility for application domains such as environmental monitoring, smart cities, precision agriculture, and so on. These application domains share a common characteristic: the need for coexistence and intercommunication between modules belonging to different types of ad hoc networks, like wireless sensor networks, mesh networks, mobile ad hoc networks, vehicular ad hoc networks, etc. Bringing such heterogeneous networks to life will make many human tasks easier, but the development path is full of challenges. One of these challenges is the communication complexity between components due to the lack of a common or compatible protocol standard. This article proposes a new patented routing protocol based on the OLSR standard in order to resolve the heterogeneous ad hoc network communication issue. This new protocol is applied to a specific network architecture composed of MANET, VANET, and FANET.Keywords: Ad hoc, heterogeneous, ID-Node, OLSR
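The abstract names an ID-Node keyword but, as a patented protocol, does not detail the mechanism. A purely hypothetical sketch of one way such an identifier could tag each node with its sub-network type (MANET/VANET/FANET) so that a forwarding node can apply per-network handling; all names and the bit layout below are illustrative assumptions, not the paper's design:

```python
# Hypothetical sub-network type codes for a heterogeneous ad hoc deployment.
NETWORK_TYPES = {"MANET": 1, "VANET": 2, "FANET": 3}

def make_id_node(net_type, node_id):
    """Pack the sub-network type (upper bits) and node id (lower 16 bits)."""
    return (NETWORK_TYPES[net_type] << 16) | node_id

def decode_id_node(id_node):
    """Recover (sub-network name, node id) from a packed identifier."""
    name = {v: k for k, v in NETWORK_TYPES.items()}[id_node >> 16]
    return name, id_node & 0xFFFF

ident = make_id_node("VANET", 42)
```

A router receiving an OLSR-style message could then branch on the decoded type, e.g., applying mobility-aware timers for VANET neighbors and 3D link metrics for FANET ones.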
Procedia PDF Downloads 215
5999 Impact of Legs Geometry on the Efficiency of Thermoelectric Devices
Authors: Angel Fabian Mijangos, Jaime Alvarez Quintana
Abstract:
Key concepts like waste heat recycling or waste heat recovery are the basic ideas in thermoelectricity for designing new solid state sources of energy, enabling a stable supply of electricity and environmental protection. According to several theoretical predictions, at the device level the geometry and configuration of the thermoelectric legs are crucial to the thermoelectric performance of thermoelectric modules. Thus, in this work, we study the effect of leg geometry on the thermoelectric figure of merit ZT of the device. First, asymmetrical legs are proposed in order to reduce the overall thermal conductance of the device, so as to increase the temperature gradient in the legs, and to harness the Thomson effect, which is generally neglected in conventional symmetrical thermoelectric legs. We developed a novel design of a thermoelectric module having asymmetrical legs and, for the first time, validated its thermoelectric performance experimentally with a proof-of-concept device, which shows almost twice the thermoelectric figure of merit of a conventional one. Moreover, we also varied the length of the thermoelectric legs in order to analyze its effect on the thermoelectric performance of the device. Along with this, we studied the impact of contact resistance in these systems. Experimental results show that device architecture can improve the thermoelectric performance of the device up to twofold.Keywords: asymmetrical legs, heat recovery, heat recycling, thermoelectric module, Thomson effect
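The figure of merit ZT referenced above is conventionally defined as ZT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity, and T the absolute temperature. A minimal sketch with illustrative Bi₂Te₃-like material values (order-of-magnitude assumptions, not figures from the paper):

```python
def figure_of_merit(seebeck, sigma, kappa, t):
    """ZT = S^2 * sigma * T / kappa (dimensionless thermoelectric figure of merit)."""
    return seebeck ** 2 * sigma * t / kappa

# Illustrative room-temperature values for a Bi2Te3-like material:
# S ~ 200 uV/K, sigma ~ 1e5 S/m, kappa ~ 1.5 W/(m*K), T = 300 K.
zt = figure_of_merit(200e-6, 1e5, 1.5, 300)  # ~0.8
```

The equation makes the design trade-off in the abstract concrete: lowering the device's overall thermal conductance (smaller effective κ) raises ZT, which is exactly what the asymmetrical-leg geometry targets.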
Procedia PDF Downloads 241
5998 Improving Sustainability of the Apparel Industry with Joining the Forces among the Brand Owners: The Case Study of Digital Textile Printing
Authors: Babak Mohajeri, Elina Ilen, Timo Nyberg
Abstract:
Sustainability has become an important topic in contemporary business. The apparel industry is a good example for assessing sustainability in practice. Value chains in the apparel industry face various challenges regarding sustainability issues. Apparel companies pay most attention to economic sustainability issues, while environmental and social sustainability issues of the apparel industry are often underrated. In this paper, we analyze the role of the different players in the value chain of the apparel industry in terms of sustainability. We find that the brand owners have the highest impact on improving the sustainability of the apparel industry. We design a collaborative business model to join forces among the brand owners to improve the sustainability of the apparel industry throughout the value chain. We have conducted a case study of shifting from conventional screen printing to more environmentally sustainable digital textile printing. We suggest that this shift can be accelerated if the brand owners join forces to shift from conventional printing to digital printing technology in the apparel industry. Based on the proposed business model, we suggest future directions for joining forces among the brand owners in the case of sustainability.Keywords: sustainability, digital textile printing, joining forces, apparel industry
Procedia PDF Downloads 421
5997 Comparison of Different Machine Learning Algorithms for Solubility Prediction
Authors: Muhammet Baldan, Emel Timuçin
Abstract:
Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms—linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks—for predicting molecular solubility using the AqSolDB dataset. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted for every SMILES representation in the dataset. A total of 189 features were used for training and testing for every molecule. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.Keywords: random forest, machine learning, comparison, feature extraction
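The train-on-a-subset, evaluate-by-accuracy loop described above can be sketched without the chemistry toolkits. The paper derives its 189 features from RDKit (MACCS keys plus descriptors); the toy features and helper names below are stand-ins, not the study's pipeline:

```python
import random

def train_test_split(xs, ys, test_frac=0.2, seed=0):
    """Shuffle and split a dataset, as done before fitting each model."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    tr, te = idx[:cut], idx[cut:]
    return ([xs[i] for i in tr], [ys[i] for i in tr],
            [xs[i] for i in te], [ys[i] for i in te])

def accuracy(y_true, y_pred):
    """Fraction of correctly predicted labels, the metric used in the study."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy stand-ins for the 189-dimensional fingerprint vectors in the paper.
X = [[i] for i in range(100)]
y = [i % 2 for i in range(100)]
Xtr, ytr, Xte, yte = train_test_split(X, y)
```

In practice each of the five models would be fit on `Xtr`/`ytr` (e.g., with scikit-learn) and scored with `accuracy` on the held-out split, timing each fit to compare efficiency.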
Procedia PDF Downloads 41
5996 The Efficacy of Psychological Interventions for Psychosis: A Systematic Review and Network Meta-Analysis
Authors: Radu Soflau, Lia-Ecaterina Oltean
Abstract:
Background: Increasing evidence supports the efficacy of psychological interventions for psychosis. However, it is unclear which of these interventions is most likely to address negative psychotic symptoms and related outcomes. We aimed to determine the relative efficacy of psychological and psychosocial interventions for negative symptoms, overall psychotic symptoms, and related outcomes. Methods: To attain this goal, we conducted a systematic review and network meta-analysis. We searched for potentially eligible trials in PubMed, EMBASE, PsycInfo, Cochrane Central Register of Controlled Trials, and ClinicalTrials.gov databases up until February 08, 2022. We included randomized controlled trials that investigated the efficacy of psychological interventions for adults with psychosis. We excluded interventions for prodromal or “at risk” individuals, as well as patients with serious co-morbid medical or psychiatric conditions (other than depressive and/or anxiety disorders). Two researchers conducted study selection and performed data extraction independently. Analyses were run using the STATA network and mvmeta packages, applying a random effects model under a frequentist framework in order to compute standardized mean differences or risk ratios. Findings: We identified 47844 records and screened 29466 records for eligibility. The majority of eligible interventions were delivered in addition to pharmacological treatment. Treatment as usual (TAU) was the most frequent common comparator. Theoretically driven psychological interventions generally outperformed TAU at post-test and follow-up, displaying small and small-to-medium effect sizes.
Conclusion: While the efficacy of some psychological interventions is promising, there is a need for more high-quality studies, as well as more trials directly comparing psychological treatments for negative psychotic symptoms.Keywords: psychosis, network meta-analysis, psychological interventions, efficacy, negative symptoms
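The standardized mean differences pooled in the network meta-analysis above are conventionally computed as Cohen's d with a pooled standard deviation. A minimal sketch; the symptom scores below are hypothetical, not values from the review:

```python
import math

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Cohen's d with pooled SD)."""
    pooled = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                       / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled

# Hypothetical negative-symptom scores (lower is better): intervention vs TAU.
d = smd(12.0, 4.0, 50, 14.0, 4.0, 50)  # a "small-to-medium" effect of -0.5
```

Each trial contributes such an SMD (often bias-corrected to Hedges' g), and the network model then combines direct and indirect comparisons across the shared TAU comparator.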
Procedia PDF Downloads 103
5995 Use Cloud-Based Watson Deep Learning Platform to Train Models Faster and More Accurate
Authors: Susan Diamond
Abstract:
Machine learning workloads have traditionally been run in high-performance computing (HPC) environments, where users log in to dedicated machines and utilize the attached GPUs to run training jobs on huge datasets. Training of large neural network models is very resource intensive, and even after exploiting parallelism and accelerators such as GPUs, a single training job can still take days. Consequently, the cost of hardware is a barrier to entry. Even when upfront cost is not a concern, the lead time to set up such an HPC environment takes months, from acquiring hardware to setting it up with the right firmware and software installed and configured. Furthermore, scalability is hard to achieve in a rigid traditional lab environment, making it slow to react to dynamic change in the artificial intelligence industry. Watson Deep Learning as a service is a cloud-based deep learning platform that mitigates the long lead time and high upfront investment in hardware. It enables robust and scalable sharing of resources among the teams in an organization and is designed for on-demand cloud environments. Providing a similar user experience in a multi-tenant cloud environment comes with its own unique challenges regarding fault tolerance, performance, and security. Watson Deep Learning as a service tackles these challenges and presents a deep learning stack for cloud environments in a secure, scalable and fault-tolerant manner. It supports a wide range of deep learning frameworks such as TensorFlow, PyTorch, Caffe, Torch, Theano, and MXNet. These frameworks reduce the effort and skillset required to design, train, and use deep learning models. Deep Learning as a service is used at IBM by AI researchers in areas including machine translation, computer vision, and healthcare.Keywords: deep learning, machine learning, cognitive computing, model training
Procedia PDF Downloads 209
5994 Improved Multi-Objective Firefly Algorithms to Find Optimal Golomb Ruler Sequences for Optimal Golomb Ruler Channel Allocation
Authors: Shonak Bansal, Prince Jain, Arun Kumar Singh, Neena Gupta
Abstract:
Recently, nature-inspired algorithms have found widespread use in tough and time-consuming multi-objective scientific and engineering design optimization problems. In this paper, we present extended forms of the firefly algorithm to find optimal Golomb ruler (OGR) sequences. One of the major applications of OGRs is as an unequally spaced channel-allocation algorithm in optical wavelength division multiplexing (WDM) systems, in order to minimize the adverse four-wave mixing (FWM) crosstalk effect. The simulation results conclude that the proposed optimization algorithm has superior performance compared to the existing conventional computing and nature-inspired optimization algorithms for finding OGRs in terms of ruler length, total optical channel bandwidth and computation time.Keywords: channel allocation, conventional computing, four-wave mixing, nature-inspired algorithm, optimal Golomb ruler, Lévy flight distribution, optimization, improved multi-objective firefly algorithms, Pareto optimal
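A Golomb ruler is a set of integer marks in which every pairwise difference is distinct; an optimal ruler is the shortest one for a given number of marks. A minimal validity check, usable as the feasibility test inside any search such as the firefly algorithm (the 4-mark example is the known optimal ruler of length 6):

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """A ruler is Golomb iff all pairwise mark differences are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

def ruler_length(marks):
    """Ruler length = largest mark minus smallest mark (one optimization objective)."""
    return max(marks) - min(marks)

ogr4 = [0, 1, 4, 6]  # optimal 4-mark Golomb ruler, length 6
```

In the WDM setting the marks map to channel positions, so distinct differences guarantee that no FWM mixing product falls exactly on an allocated channel.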
Procedia PDF Downloads 322
5993 Integrating High-Performance Transport Modes into Transport Networks: A Multidimensional Impact Analysis
Authors: Sarah Pfoser, Lisa-Maria Putz, Thomas Berger
Abstract:
In the EU, the transport sector accounts for roughly one fourth of total greenhouse gas emissions, making it one of the main contributors. Climate protection targets aim to reduce the negative effects of greenhouse gas emissions (e.g. climate change, global warming) worldwide. Achieving a modal shift towards environmentally friendly modes of transport such as rail and inland waterways is an important strategy for fulfilling the climate protection targets. The present paper goes beyond these conventional transport modes and reflects upon currently emerging high-performance transport modes that have the potential to complement future transport systems in an efficient way. We define which properties describe high-performance transport modes, which types of technology are included, and what their potential is to contribute to a sustainable future transport network. The first step of this paper is to compile state-of-the-art information about high-performance transport modes to find out which technologies are currently emerging. A multidimensional impact analysis will be conducted afterwards to evaluate which of the technologies is most promising. This analysis will be performed from a spatial, social, economic and environmental perspective. Frequently used instruments such as cost-benefit analysis and SWOT analysis will be applied for the multidimensional assessment. The estimations for the analysis will be derived based on desktop research and discussions in an interdisciplinary team of researchers. For the purpose of this work, high-performance transport modes are characterized as transport modes with very fast and very high throughput connections that could act as an efficient extension to the existing transport network. The recently proposed hyperloop system represents a potential high-performance transport mode which might be an innovative supplement for current transport networks.
The idea of hyperloops is that persons and freight are shipped in a tube at more than airline speed. Another innovative technology consists in drones for freight transport. Amazon already tests drones for their parcel shipments, they aim for delivery times of 30 minutes. Drones can, therefore, be considered as high-performance transport modes as well. The Trans-European Transport Networks program (TEN-T) addresses the expansion of transport grids in Europe and also includes high speed rail connections to better connect important European cities. These services should increase competitiveness of rail and are intended to replace aviation, which is known to be a polluting transport mode. In this sense, the integration of high-performance transport modes as described above facilitates the objectives of the TEN-T program. The results of the multidimensional impact analysis will reveal potential future effects of the integration of high-performance modes into transport networks. Building on that, a recommendation on the following (research) steps can be given which are necessary to ensure the most efficient implementation and integration processes.Keywords: drones, future transport networks, high performance transport modes, hyperloops, impact analysis
Procedia PDF Downloads 332
5992 An Approach to Secure Mobile Agent Communication in Multi-Agent Systems
Authors: Olumide Simeon Ogunnusi, Shukor Abd Razak, Michael Kolade Adu
Abstract:
The inter-agent communication manager facilitates communication among mobile agents via a message passing mechanism. Until now, all Foundation for Intelligent Physical Agents (FIPA) compliant agent systems have been capable of exchanging messages following the standard format for sending and receiving messages. Previous works tend to secure messages exchanged among a community of collaborative agents commissioned to perform specific tasks using cryptosystems. However, that approach is characterized by computational complexity due to the encryption and decryption processes required at the two ends. The proposed approach to secure agent communication allows only agents that are created by the host agent server to communicate via the agent communication channel provided by the host agent platform. These agents are assumed to be harmless. Therefore, to secure communication of legitimate agents from intrusion by external agents, a 2-phase policy enforcement system was developed. The first phase constrains the external agent to run only on the network server, while the second phase confines the activities of the external agent to its execution environment. To implement the proposed policy, a controller agent was charged with the task of screening any external agent entering the local area network and preventing it from migrating to the agent execution host where the legitimate agents are running. On arrival of the external agent at the host network server, an introspector agent was charged to monitor and restrain its activities. This approach secures legitimate agent communication from Man-in-the-Middle and Replay attacks.Keywords: agent communication, introspective agent, isolation of agent, policy enforcement system
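The 2-phase policy above can be sketched as a simple admission check: phase one decides where an arriving agent may run, phase two confines anything external to a monitored environment. A minimal sketch; the class and method names are illustrative, not the paper's implementation:

```python
class ControllerAgent:
    """Screens arriving agents: only agents created by the host agent server may
    migrate to the execution host; external agents stay on the network server,
    where an introspector agent monitors and restrains their activities."""

    def __init__(self, host_created_ids):
        self.host_created = set(host_created_ids)  # legitimate, host-created agents

    def admit_to_execution_host(self, agent_id):
        # Phase 1: migration control at the network boundary.
        return agent_id in self.host_created

    def placement(self, agent_id):
        if self.admit_to_execution_host(agent_id):
            return "execution-host"
        # Phase 2: confinement; the introspector agent would watch this agent.
        return "network-server"

ctrl = ControllerAgent({"agentA", "agentB"})
```

Because no cryptographic operations are needed on the message path, the check adds only a set lookup per arriving agent, which is the efficiency argument the abstract makes against cryptosystem-based schemes.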
Procedia PDF Downloads 297