Search results for: artificial magnetic conduction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3784


154 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids

Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje

Abstract:

Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining importance as artificial lift equipment in the heavy oil field. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed experimentally and with a Computational Fluid Dynamic (CFD) approach using a DCAB031 model pump installed in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate, with a flow control valve installed at the outlet of the pump. The flow rate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump’s rotational speed and power input were controlled using an Invertek Optidrive E3 frequency driver. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in Star-CCM+ using an overset mesh that captures the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations provide detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations agree well with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) was calculated for the validation of the mesh, yielding a value of 2.5%. Three different rotational speeds were evaluated (200, 300, and 400 rpm), showing a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate. The maximum production rates at the different speeds were 3.8 GPM, 4.3 GPM, and 6.1 GPM for water, and 1.8 GPM, 2.5 GPM, and 3.8 GPM for the oils tested, respectively. Likewise, an inversely proportional relationship between the viscosity of the fluid and pump performance was observed, since the viscous oils showed the lowest pressure increase and the lowest volumetric flow pumped, with a degradation of around 30% in pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant across the speeds evaluated; however, it decreased between fluids as viscosity increased.
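
A minimal sketch of how a Grid Convergence Index like the 2.5% reported above is typically computed (Roache's method); the solver outputs, refinement ratio, and safety factor below are illustrative assumptions, not values from the study:

```python
def grid_convergence_index(f_fine, f_coarse, r, p, Fs=1.25):
    """Roache's GCI for the fine grid.

    f_fine, f_coarse: solution values (e.g., pressure rise) on fine/coarse meshes
    r: grid refinement ratio; p: observed order of accuracy; Fs: safety factor
    """
    e_a = abs((f_coarse - f_fine) / f_fine)  # relative difference between grids
    return Fs * e_a / (r**p - 1)

# Hypothetical pressure-rise values on two meshes (psi), r=2 refinement, 2nd order
gci = grid_convergence_index(f_fine=45.2, f_coarse=46.1, r=2.0, p=2.0)
print(f"GCI = {gci * 100:.1f}%")  # reports the index as a percentage
```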

Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise

Procedia PDF Downloads 127
153 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach

Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier

Abstract:

Emotion plays a key role in many applications, such as healthcare, where it helps to gather patients’ emotional behavior. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said,' it is equally important to understand 'how it was said.' Certain emotions are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised Machine Learning models, including state-of-the-art Deep Learning classification methods, rely on the availability of clean and labelled data. One of the problems in affective computing is the limited amount of annotated data, and existing labelled emotion datasets are highly subjective to the perception of the annotator. We address the first issue, feature selection, by exploiting traditional MFCC (Mel-Frequency Cepstral Coefficients) features in a Convolutional Neural Network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem, the subjectivity of stress labels, we use Lovheim’s cube, a 3-dimensional projection of emotions. Monoamine neurotransmitters are chemical messengers in the brain that transmit signals on perceiving emotions, and the cube aims to explain the relationship between these neurotransmitters and the positions of emotions in 3D space. The emotion representations learnt by Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim’s cube. We believe that this work is a first step towards creating a connection between Artificial Intelligence and the chemistry of human emotions.
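
A minimal sketch of the kind of three-component PCA mapping described above, assuming Emo-CNN embeddings have already been extracted; the array shapes, variable names, and the nearest-corner stress score are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical Emo-CNN embeddings: one 256-d vector per utterance
embeddings = np.random.rand(500, 256)

# Project to three components, analogous to the axes of Lovheim's cube
pca = PCA(n_components=3)
coords = pca.fit_transform(embeddings)

# Rescale each axis to [0, 1] so points live inside a unit cube
cube = (coords - coords.min(axis=0)) / np.ptp(coords, axis=0)

# Distance to a chosen vertex (here the origin, taken as a 'distress' corner)
# could then serve as a continuous stress score
stress_score = 1.0 - np.linalg.norm(cube, axis=1) / np.sqrt(3)
```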

Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube

Procedia PDF Downloads 153
152 Advances in Design Decision Support Tools for Early-stage Energy-Efficient Architectural Design: A Review

Authors: Maryam Mohammadi, Mohammadjavad Mahdavinejad, Mojtaba Ansari

Abstract:

The main driving forces behind the growing movement towards the design of High-Performance Buildings (HPB) are building codes and rating systems that address the various components of the building and their impact on the environment and energy conservation, through methods such as prescriptive or simulation-based approaches. The methods and tools developed to meet these needs, which are often based on building performance simulation tools (BPST), have limitations in terms of compatibility with the integrated design process (IDP) and HPB design, as well as use by architects in the early stages of design (when the most important decisions are made). To overcome these limitations, efforts have been made in recent years to develop Design Decision Support Systems, which are often based on artificial intelligence. The literature lists numerous needs and steps for designing and developing a Decision Support System (DSS) that complies with the early stages of energy-efficient architectural design, consisting of combinations of different methods in an integrated package. While various review studies have examined each of these techniques (such as optimization, sensitivity and uncertainty analysis, etc.) and their integration for specific targets, this article is a critical and holistic review of the research that leads to the development of applicable systems or the introduction of a comprehensive framework for developing models that comply with the IDP. Information resources such as Science Direct and Google Scholar were searched using specific keywords, and the results are divided into two main categories: simulation-based DSSs and meta-simulation-based DSSs. The strengths and limitations of different models are highlighted, two general conceptual models are introduced for each category, and the degree of compliance of these models with the IDP framework is discussed. The research shows a movement towards Multi-Level of Development (MOD) models that combine well with the early stages of integrated design (the schematic design and design development stages), which are heuristic, hybrid, and meta-simulation-based and rely on big real data (such as Building Energy Management Systems data or web data). Obtaining, using, and combining these data with simulation data to create models that capture higher uncertainty and are more dynamic and more sensitive to context and culture, as well as models that can generate economy- and energy-efficient design scenarios using local data (to be more harmonized with circular economy principles), are important research areas in this field. The results of this study provide a roadmap for researchers and developers of these tools.

Keywords: integrated design process, design decision support system, meta-simulation based, early stage, big data, energy efficiency

Procedia PDF Downloads 161
151 The French Ekang Ethnographic Dictionary: The Quantum Approach

Authors: Henda Gnakate Biba, Ndassa Mouafon Issa

Abstract:

Dictionaries modeled on the Western model [tonic-accent languages] are not suitable for tonal languages and do not account for them phonologically, which is why this [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows a non-speaker of the language to pronounce the words as if they were a native. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, following Jean-Jacques Rousseau's 1776 hypothesis that 'to say and to sing were once the same thing.' Each word in the French dictionary finds its corresponding word in Ekaη, and each Ekaη word is written on a musical staff. This ethnographic dictionary is an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation, and artificial intelligence. When this theory is applied to any folk-song text in a tonal language, one pieces together not only the exact melody, rhythm, and harmonies of that song, as if it were known in advance, but also the exact speech of the language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. The experimentation confirming the theorization produced a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, the author uses music reading and writing software to collect data extracted from his mother tongue, already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: the user types a structured song text (chorus-verse) on the computer and requests a melody in blues, jazz, world music, variety, etc. The software runs, offering a choice of harmonies, and the user then selects a melody.

Keywords: music, language, entanglement, science, research

Procedia PDF Downloads 69
150 Optimizing Weight Loss with AI (GenAISᵀᴹ): A Randomized Trial of Dietary Supplement Prescriptions in Obese Patients

Authors: Evgeny Pokushalov, Andrey Ponomarenko, John Smith, Michael Johnson, Claire Garcia, Inessa Pak, Evgenya Shrainer, Dmitry Kudlay, Sevda Bayramova, Richard Miller

Abstract:

Background: Obesity is a complex, multifactorial chronic disease that poses significant health risks. Recent advancements in artificial intelligence (AI) offer the potential for more personalized and effective dietary supplement (DS) regimens to promote weight loss. This study aimed to evaluate the efficacy of AI-guided DS prescriptions compared to standard physician-guided DS prescriptions in obese patients. Methods: This randomized, parallel-group pilot study enrolled 60 individuals aged 40 to 60 years with a body mass index (BMI) of 25 or greater. Participants were randomized to receive either AI-guided DS prescriptions (n = 30) or physician-guided DS prescriptions (n = 30) for 180 days. The primary endpoints were the percentage change in body weight and the proportion of participants achieving a ≥5% weight reduction. Secondary endpoints included changes in BMI, fat mass, visceral fat rating, systolic and diastolic blood pressure, lipid profiles, fasting plasma glucose, hsCRP levels, and postprandial appetite ratings. Adverse events were monitored throughout the study. Results: Both groups were well balanced in terms of baseline characteristics. Significant weight loss was observed in the AI-guided group, with a mean reduction of -12.3% (95% CI: -13.1 to -11.5%) compared to -7.2% (95% CI: -8.1 to -6.3%) in the physician-guided group, resulting in a treatment difference of -5.1% (95% CI: -6.4 to -3.8%; p < 0.01). At day 180, 84.7% of the AI-guided group achieved a weight reduction of ≥5%, compared to 54.5% in the physician-guided group (Odds Ratio: 4.3; 95% CI: 3.1 to 5.9; p < 0.01). Significant improvements were also observed in BMI, fat mass, and visceral fat rating in the AI-guided group (p < 0.01 for all). Postprandial appetite suppression was greater in the AI-guided group, with significant reductions in hunger and prospective food consumption, and increases in fullness and satiety (p < 0.01 for all). Adverse events were generally mild-to-moderate, with higher incidences of gastrointestinal symptoms in the AI-guided group, but these were manageable and did not impact adherence. Conclusion: The AI-guided dietary supplement regimen was more effective in promoting weight loss, improving body composition, and suppressing appetite compared to the physician-guided regimen. These findings suggest that AI-guided, personalized supplement prescriptions could offer a more effective approach to managing obesity. Further research with larger sample sizes is warranted to confirm these results and optimize AI-based interventions for weight loss.
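
As an illustration of the standard odds-ratio arithmetic behind a responder comparison like the one above, here is a sketch from a 2x2 table; the participant counts are assumptions back-calculated from the stated percentages, not data from the trial, and a Wald interval from such a small sample is wider than the interval the abstract reports:

```python
import math

# Assumed responders (>=5% weight loss) per arm, n = 30 each,
# back-calculated from the reported 84.7% and 54.5%
a, b = 25, 5    # AI-guided: responders, non-responders
c, d = 16, 14   # physician-guided: responders, non-responders

odds_ratio = (a / b) / (c / d)

# Wald 95% confidence interval on the log-odds scale
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```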

Keywords: obesity, AI-guided, dietary supplements, weight loss, personalized medicine, metabolic health, appetite suppression

Procedia PDF Downloads 4
149 The Impact of Artificial Intelligence on the Food Industry

Authors: George Hanna Abdelmelek Henien

Abstract:

Quality and safety issues are common in Ethiopia's food processing industry and can negatively impact consumers' health and livelihoods. The country is known for various agricultural products that are important to the economy. However, food quality and safety policies and management practices in the food processing industry have led to many health problems, foodborne illnesses, and economic losses. This article aims to show the causes and consequences of food safety and quality problems in the food processing industry in Ethiopia and to discuss possible solutions. One of the main reasons for poor food quality and safety in Ethiopia's food processing industry is the lack of adequate regulation and enforcement mechanisms. Inadequate food safety and quality policies have led to inefficiencies in food production. Additionally, the failure to monitor and enforce existing regulations has created an opportunity for unscrupulous companies to engage in harmful practices that endanger the lives of citizens. The impact on food quality and safety is significant, involving loss of life, high medical costs, and loss of consumer confidence in the food processing industry. Foodborne diseases such as diarrhoea, typhoid, and cholera are common in Ethiopia, and food quality and safety play an important role in their prevention. Additionally, food recalls due to contamination often cause significant economic losses in the food processing industry. To solve these problems, the Ethiopian government has begun taking measures to improve food quality and safety in the food processing industry. One of the most prominent initiatives is the Ethiopian Food and Drug Administration (EFDA), established in 2010 to monitor and control the quality and safety of food and beverage products in the country. EFDA has implemented many measures to improve food safety, such as carrying out routine inspections, monitoring the import of food products, and implementing labeling requirements. Another solution that can improve food quality and safety in the food processing industry in Ethiopia is the implementation of a food safety management system (FSMS). An FSMS is a set of procedures and policies designed to identify, assess, and control food safety risks during food processing. Implementing an FSMS can help companies in the food processing industry identify and address potential risks before they harm consumers, and it can also help companies comply with current safety regulations. Consequently, improving food safety policy and management systems in Ethiopia's food processing industry is important to protect people's health and improve the country's economy. This requires addressing the root causes of poor food quality and safety and implementing practical solutions, such as establishing regulatory bodies and implementing food safety management systems, that can help improve overall food safety and quality in the country.

Keywords: food quality, food safety, policy, management system, food processing industry, food traceability, industry 4.0, internet of things, blockchain, best worst method, MARCOS

Procedia PDF Downloads 60
148 Cardiac Biosignal and Adaptation in Confined Nuclear Submarine Patrol

Authors: B. Lefranc, C. Aufauvre-Poupon, C. Martin-Krumm, M. Trousselard

Abstract:

Isolated and confined environments (ICE) present several challenges which may adversely affect human psychology and physiology. Submariners on Sub-Surface Ballistic Nuclear (SSBN) missions exposed to these environmental constraints must be able to perform complex tasks as part of their normal duties, as well as during crisis periods when emergency actions are required or imminent. The operational and environmental constraints they face challenge human adaptability. The impact of such a constrained environment has yet to be explored. Establishing a knowledge framework is a determining factor, particularly in view of coming long space voyages. Ensuring that crews are maintained in optimal operational condition is a real challenge because the success of the mission depends on them. This study evaluated the impact of stress on mental health and sensory degradation of submariners during an SSBN mission using cardiac biosignal (heart rate variability, HRV) clustering. This is a pragmatic exploratory study of a prospective cohort including 19 submariner volunteers. HRV was recorded at baseline to classify the submariners by clustering according to their stress level, based on parasympathetic (Pa) activity. The impacts of a high Pa (HPa) versus low Pa (LPa) level at baseline on emotional state and sensory perception (interoception and exteroception) were assessed through the cardiac biosignal during the patrol and at a recovery time one month after. At no time point was a significant difference found in mental health between groups. There are significant differences in interoceptive, exteroceptive, and physiological functioning during the patrol and at recovery time. To sum up, compared to the LPa group, the HPa group maintains a higher level of psychosensory functioning during the patrol and at recovery but exhibits a decrease in Pa level. The HPa group has less adaptable HRV characteristics, with less unpredictability and flexibility of cardiac biosignals, while the LPa group increases them during the patrol and at recovery time. This dissociation between psychosensory and physiological adaptation suggests two treatment modalities for ICE environments. To the best of our knowledge, our results are the first to highlight the impact of physiological differences in the HRV profile on the adaptability of submariners. Further studies are needed to evaluate the negative emotional and cognitive effects of ICEs based on the cardiac profile. Artificial intelligence offers a promising future for maintaining a high level of operational conditions. These perspectives will not only allow submariners to be better prepared but also help design feasible countermeasures for analog environments that bring us closer to a trip to Mars.
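
A minimal sketch of baseline HRV clustering by parasympathetic activity as described above, assuming standard parasympathetic indices (RMSSD and HF power) have already been computed per subject; the feature choice, toy values, and two-cluster setup are assumptions, not the study's exact protocol:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical baseline features per submariner: [RMSSD (ms), HF power (ms^2)]
rng = np.random.default_rng(0)
hrv = np.vstack([rng.normal([45, 900], [8, 150], (10, 2)),   # higher-Pa-like
                 rng.normal([25, 400], [6, 100], (9, 2))])   # lower-Pa-like

# Two clusters separate high-Pa (HPa) from low-Pa (LPa) subjects
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(hrv))

# Call the cluster containing the largest-RMSSD subject the HPa group
hpa = labels == labels[hrv[:, 0].argmax()]
print("HPa group size:", hpa.sum(), "LPa group size:", (~hpa).sum())
```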

Keywords: adaptation, exteroception, HRV, ICE, interoception, SSBN

Procedia PDF Downloads 182
147 Insecticidal Activity of Bacillus Thuringiensis Strain AH-2 Against the Hemiptera Insect Pest Aphis gossypii and the Lepidoptera Insect Pests Plutella xylostella and Hyphantria cunea

Authors: Ajuna B. Henry

Abstract:

In recent decades, climate change has increased the demand for biological pesticides; more Bt strains are being discovered worldwide, some containing novel insecticidal genes while others have been modified through molecular approaches for increased yield, toxicity, and a wider host range. In this study, B. thuringiensis strain AH-2 (Bt-2) was isolated from soil and tested for insecticidal activity against Aphis gossypii (Hemiptera: Aphididae) and the Lepidoptera insect pests fall webworm (Hyphantria cunea) and diamondback moth (Plutella xylostella). A commercial strain, B. thuringiensis subsp. kurstaki (Btk), and a chemical pesticide, imidacloprid (for Hemiptera) or chlorantraniliprole (for Lepidoptera), were used as positive controls, and the same media (without bacterial inoculum) as a negative control. In aphicidal assays, Bt-2 caused mortality rates of 70.2%, 78.1%, and 88.4% in third instar nymphs of A. gossypii (3N) at 10%, 25%, and 50% culture concentrations, respectively. Moreover, Bt-2 was effectively produced in a cost-effective medium (PB) supplemented with either glucose (PBG) or sucrose (PBS) and maintained high aphicidal efficacy, with 3N mortality rates of 85.9%, 82.9%, and 82.2% in TSB, PBG, and PBS media, respectively, at 50% culture concentration. Bt-2 also suppressed adult fecundity by 98.3%, compared to only 65.8% suppression by Btk at similar concentrations, though slightly lower than the chemical treatment, which caused 100% suppression. Partial purification of the 60-80% (NH4)2SO4 fraction of Bt-2 aphicidal proteins on an anion exchange (DEAE-FF) column revealed a 105 kDa aphicidal protein with LC50 = 55.0 ng/µL. For the Lepidoptera pests, the chemical pesticide, Bt-2, and Btk cultures caused mortality of 86.7%, 60%, and 60% in 3rd instar larvae of P. xylostella, and 96.7%, 80.0%, and 93.3% in 6th instar larvae of H. cunea, after 72 h of exposure. When the entomopathogenic strains were cultured in the cost-effective PBG or PBS, the insecticidal activity of all strains was not significantly different from that obtained with a commercial medium (TSB). Bt-2 caused mortality rates of 60.0%, 63.3%, and 50.0% against P. xylostella larvae and 76.7%, 83.3%, and 73.3% against H. cunea when grown in TSB, PBG, and PBS media, respectively. Bt-2 (grown in the cost-effective PBG medium) caused dose-dependent toxicity of 26.7%, 40.0%, and 63.3% against P. xylostella and 46.7%, 53.3%, and 76.7% against H. cunea at 10%, 25%, and 50% culture concentrations, respectively. The partially purified Bt-2 insecticidal protein fractions F1, F2, F3, and F4 (extracted at different ratios of organic solvent) caused low toxicity (50.0%, 40.0%, 36.7%, and 30.0%) against P. xylostella and relatively high toxicity (56.7%, 76.7%, 66.7%, and 63.3%) against H. cunea at 100 µg/g of artificial diet. SDS-PAGE analysis revealed that a 128 kDa protein is associated with the toxicity of Bt-2. Our results demonstrate medium and strong larvicidal activity of Bt-2 against P. xylostella and H. cunea, respectively. Moreover, Bt-2 could potentially be produced in the cost-effective PBG medium, which makes it an effective alternative biocontrol strategy to reduce chemical pesticide application.
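
A minimal sketch of how an LC50 such as the 55.0 ng/µL reported above is commonly estimated, via probit regression of dose-mortality data; the counts below are hypothetical illustrations, not the study's data:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical dose-mortality data: dose (ng/uL), insects tested, insects killed
dose   = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
tested = np.array([30, 30, 30, 30, 30])
killed = np.array([4, 9, 13, 21, 27])

# Probit regression of mortality on log10(dose)
X = sm.add_constant(np.log10(dose))
glm = sm.GLM(np.column_stack([killed, tested - killed]), X,
             family=sm.families.Binomial(link=sm.families.links.Probit()))
fit = glm.fit()

# LC50 is the dose where the probit-scale linear predictor crosses zero (50% kill)
b0, b1 = fit.params
lc50 = 10 ** (-b0 / b1)
print(f"LC50 = {lc50:.1f} ng/uL")
```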

Keywords: biocontrol, insect pests, larvae/nymph mortality, cost-effective media, aphis gossypii, plutella xylostella, hyphantria cunea, bacillus thuringiensis

Procedia PDF Downloads 18
146 Neural Synchronization - The Brain’s Transfer of Sensory Data

Authors: David Edgar

Abstract:

To understand how the brain's subconscious and conscious functions work, we must conquer the physics of Unity, which leads to duality's algorithm, where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence. We use terms like 'time is relative,' but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers, and these different processes experience time at different rates. A sensory system such as the eyes cycles its measurements around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds: three different observers experiencing time differently. To bridge the observers, the thalamus, the fastest of the processes, maintains a synchronous state and entangles the different components of the brain's physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain's linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components: only unpredictable motion is transferred through the synchronous state, because predictable motion already exists in the shared framework. The brain's synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So the eyes dump their sensory data into the thalamus every 33 milliseconds, and the thalamus performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick: the thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation: basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel, where observation time is tunneled through the synchronous process and reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It all occurs in the time available, because other observation times are slower than thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What's interesting is that time dilation is not the problem; it's the solution. Einstein said there was no universal time.

Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)

Procedia PDF Downloads 123
145 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning

Authors: Hossein Havaeji, Tony Wong, Thien-My Dao

Abstract:

1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) uses BT to drive SCS transparency, security, durability, and process integrity, as SCS data is not always visible, available, or trusted. The costs of operating BT in an SCS are a common problem in several organizations, and they must be estimated because they can impact existing cost control strategies. To account for system and deployment costs, one hurdle must be overcome: the costs of developing and running BT in an SCS are not yet clear in most cases. Many industries aiming to use BT pay special attention to the BT installation cost, which has a direct impact on the total costs of the SCS. Predicting the BT installation cost in an SCS may help managers decide whether BT offers an economic advantage. The first purpose of the research is to identify the main BT installation cost components in SCS needed for deeper cost analysis; we then identify and categorize the main groups of cost components in more detail to utilize them in the prediction process. The second objective is to determine a suitable Supervised Learning technique to predict the costs of developing and running BT in an SCS in a particular case study. The last aim is to investigate how the running BT cost can be incorporated into the total cost of the SCS. 2. Work Performed: Applied successfully in various fields, Supervised Learning is a method for framing the data, treating it, and training the chosen model; it is a learning model directed at predicting an outcome measurement from a set of previously unseen input data. The following steps are conducted to pursue the objectives of this work. The first step is a literature review to identify the different cost components of BT installation in SCS. Based on the literature review, we choose Supervised Learning methods suitable for BT installation cost prediction in SCS. According to the literature, Supervised Learning algorithms that provide a powerful tool to classify BT installation components and predict BT installation cost include the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). Choosing a case study to feed data into the models is the third step. Finally, we will propose the model with the best predictive performance to find the minimum BT installation costs in SCS. 3. Expected Results and Conclusion: This study proposes a cost prediction of BT installation in SCS with the help of Supervised Learning algorithms. We will first select a case study in the field of BT-enabled SCS and then use Supervised Learning algorithms to predict BT installation cost in SCS, continuing until we find the best predictive performance for developing and running BT in SCS. Finally, the paper will be presented at the conference.
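
A minimal sketch of the SVR approach named above, applied to hypothetical installation-cost records; the feature names, synthetic data, and hyperparameters are illustrative assumptions, not the study's case-study data:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(42)

# Hypothetical cost components per project (k$):
# [hardware, software licensing, integration effort, staff training, energy]
X = rng.uniform([20, 10, 50, 5, 2], [200, 120, 500, 60, 30], (80, 5))
y = X @ [1.0, 1.2, 0.9, 1.1, 1.5] + rng.normal(0, 25, 80)  # total installation cost

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Feature scaling matters for SVR; RBF kernel with modest regularization
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=5.0))
model.fit(X_tr, y_tr)

print(f"MAPE: {mean_absolute_percentage_error(y_te, model.predict(X_te)):.2%}")
```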

Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning

Procedia PDF Downloads 119
144 A Case of Prosthetic Vascular-Graft Infection Due to Mycobacterium fortuitum

Authors: Takaaki Nemoto

Abstract:

Case presentation: A 69-year-old Japanese man presented with a low-grade fever and fatigue that had persisted for one month. The patient had had an aortic dissection of the aortic arch 13 years prior, an abdominal aortic aneurysm seven years prior, and an aortic dissection of the distal aortic arch one year prior, all treated with artificial blood-vessel replacement surgery. Laboratory tests revealed an inflammatory response (CRP 7.61 mg/dl), high serum creatinine (Cr 1.4 mg/dL), and elevated transaminases (AST 47 IU/L, ALT 45 IU/L). The patient was admitted to our hospital on suspicion of prosthetic vascular graft infection. Following further workup of the inflammatory response, enhanced chest computed tomography (CT) and non-enhanced chest DWI (MRI) were performed, and the patient was diagnosed with a pulmonary fistula and a prosthetic vascular graft infection on the distal aortic arch. After admission, the patient was administered Ceftriaxone and Vancomycin for 10 days, but his fever and inflammatory response did not improve. On day 13 of hospitalization, lung fistula repair surgery and an omental filling operation were performed, and Meropenem and Vancomycin were administered. The fever and inflammatory response continued, so we took repeated blood cultures. M. fortuitum was detected in a blood culture on day 16 of hospitalization. As a result, we changed the treatment regimen to Amikacin (400 mg/day), Meropenem (2 g/day), and Cefmetazole (4 g/day), and the fever and inflammatory response began to decrease gradually. Susceptibility testing of the Mycobacterium fortuitum isolate showed a low MIC for fluoroquinolone antibacterial agents. The clinical course was good, and the patient was discharged after a total of 8 weeks of intravenous drug administration. At discharge, we changed the treatment regimen to Levofloxacin (500 mg/day) and Clarithromycin (800 mg/day) and prescribed these two drugs as long-term suppressive therapy. Discussion: There are few reported cases of prosthetic vascular graft infection caused by mycobacteria, and a standard therapy remains to be established. For prosthetic vascular graft infections, it is ideal to provide surgical and medical treatment in parallel, but in this case, surgical treatment was difficult, and a conservative treatment was therefore chosen. We attempted to increase the treatment success rate of this refractory disease by conducting susceptibility testing for mycobacteria and treating with different combinations of antimicrobial agents, which was ultimately effective. With our treatment approach, a good clinical course was obtained and continues at the present stage. Conclusion: Although prosthetic vascular graft infection caused by mycobacteria is a refractory infectious disease, it may be curable with appropriate antibiotics based on susceptibility testing in addition to surgical treatment.

Keywords: prosthetic vascular graft infection, lung fistula, Mycobacterium fortuitum, conservative treatment

Procedia PDF Downloads 154
143 Effect of Cerebellar High Frequency rTMS on the Balance of Multiple Sclerosis Patients with Ataxia

Authors: Shereen Ismail Fawaz, Shin-Ichi Izumi, Nouran Mohamed Salah, Heba G. Saber, Ibrahim Mohamed Roushdi

Abstract:

Background: Multiple sclerosis (MS) is a chronic, inflammatory, mainly demyelinating disease of the central nervous system, more common in young adults. Cerebellar involvement is one of the most disabling lesions in MS and is usually a sign of disease progression. The cerebellum plays a major role in the planning, initiation, and organization of movement via its influence on the motor cortex and corticospinal outputs; it therefore contributes to movement control, motor adaptation, and motor learning, in addition to its vast connections with other major pathways controlling balance, such as the cerebellopropriospinal and cerebellovestibular pathways. Hence, stimulating the cerebellum with facilitatory protocols should add to motor control and balance function. Non-invasive brain stimulation, both repetitive transcranial magnetic stimulation (rTMS) and transcranial direct current stimulation (tDCS), has recently emerged as an effective neuromodulator of motor and nonmotor functions of the brain. Anodal tDCS has been shown to improve motor skill learning and motor performance beyond the training period. Similarly, rTMS, when used at high frequency (>5 Hz), has a facilitatory effect on the motor cortex. Objective: Our aim was to determine the effect of high-frequency rTMS over the cerebellum in improving balance and functional ambulation of multiple sclerosis patients with ataxia. Patients and methods: This was a randomized, single-blinded, placebo-controlled prospective trial on 40 patients. The active group (N=20) received real rTMS sessions, and the control group (N=20) received sham rTMS using a placebo program designed for this treatment. Both groups received 12 sessions over the cerebellum, followed by an intensive exercise training program; sessions were given three times per week for four weeks. The active group protocol used 10 Hz rTMS over the cerebellar vermis, with a work period of 5 s, 25 trains, and an intertrain interval of 25 s, for a total of 1250 pulses per session. The exercise program for both groups included generalized strengthening exercises, endurance and aerobic training, trunk and abdominal exercises, generalized balance training exercises, and task-oriented training such as boxing. The Modified ICARS was used as the primary outcome measure, and static posturography was performed with patients tested with both open and closed eyes. Secondary outcome measures included the Expanded Disability Status Scale (EDSS) and the 8-Meter Walk Test (8MWT). Results: The active group showed significant improvements in all the functional scales (Modified ICARS, EDSS, and 8-Meter Walk Test), in addition to significant differences in static posturography with open eyes, while the control group did not show such differences. Conclusion: Cerebellar high-frequency rTMS could be effective in the functional improvement of balance in MS patients with ataxia.

Keywords: brain neuromodulation, high frequency rTMS, cerebellar stimulation, multiple sclerosis, balance rehabilitation

Procedia PDF Downloads 89
142 Product Life Cycle Assessment of Generatively Designed Furniture for Interiors Using Robot Based Additive Manufacturing

Authors: Andrew Fox, Qingping Yang, Yuanhong Zhao, Tao Zhang

Abstract:

Furniture is a very significant subdivision of architecture and its inherent interior design activities. The furniture industry has developed from an artisan-driven craft industry, whose forerunners saw themselves manifested in their crafts and treasured a sense of pride in the creativity of their designs, into what is these days largely an anonymous, collective, mass-produced output. Although a very conservative industry, there is great potential for the implementation of collaborative digital technologies, allowing a reconfigured artisan experience to be reawakened in a new and exciting form. The furniture manufacturing industry, in general, has been slow to adopt new design methodologies using artificial-intelligence-driven and rule-based generative design. This tardiness has meant the loss of potential to enhance its capabilities in producing sustainable, flexible, and mass-customizable 'right first time' designs. This paper aims to demonstrate a concept methodology for the creation of alternative and inspiring aesthetic structures for robot-based additive manufacturing (RBAM). These technologies enable the economic creation of previously unachievable structures, which traditionally would not have been commercially economic to manufacture. The integration of these technologies with the computing power of generative design provides the tools for practitioners to create concepts well beyond the insight of even the most accomplished traditional design teams. This paper addresses the problem by introducing generative design methodologies employing the Autodesk Fusion 360 platform. Examination of alternative methods for its use has the potential to significantly reduce the initial design phase's estimated 80% contribution to environmental impact. Though predominantly a design methodology, generative design combined with RBAM has the potential to leverage many lean manufacturing and quality assurance benefits, enhancing the efficiency and agility of modern furniture manufacturing. Through a case study examination of a furniture artifact, the results will be compared to a traditionally designed and manufactured product employing the Ecochain Mobius product life cycle analysis (LCA) platform. This will highlight the benefits of both generative design and robot-based additive manufacturing from an environmental impact and manufacturing efficiency standpoint. These step changes in design methodology and environmental assessment have the potential to revolutionise the design-to-manufacturing workflow, giving momentum to the concept of a reconceived pre-industrial model of manufacturing, with the global demand for a circular economy and bespoke sustainable design at its heart.

Keywords: robot, manufacturing, generative design, sustainability, circular economy, product life cycle assessment, furniture

Procedia PDF Downloads 139
141 Assessment of Current and Future Opportunities of Chemical and Biological Surveillance of Wastewater for Human Health

Authors: Adam Gushgari

Abstract:

The SARS-CoV-2 pandemic has catalyzed the rapid adoption of wastewater-based epidemiology (WBE) methodologies both domestically and internationally. To support the rapid scale-up of pandemic-response wastewater surveillance systems, multiple federal agencies (e.g., the US CDC), non-government organizations (e.g., the Water Environment Federation), and private charities (e.g., the Bill and Melinda Gates Foundation) have provided over $220 million USD to support the development of, and expand equitable access to, surveillance methods. Funds were primarily distributed directly to municipalities under the CARES Act (90.6%), followed by academic projects (7.6%) and initiatives developed by private companies (1.8%). In addition to federally funded wastewater monitoring conducted primarily at wastewater treatment plants, state and local governments and private companies have leveraged wastewater sampling to obtain health and lifestyle data on student, prison inmate, and employee populations. We explore the viable paths for expansion of the WBE methodology across a variety of analytical methods; the development of WBE-specific samplers and real-time wastewater sensors; and their application to various government and private sector industries. Considerable investment in, and public acceptance of, WBE suggests the methodology will be applied to other notifiable diseases and health risks in the future. Early research suggests that WBE methods can be applied to a host of additional biological insults, including communicable diseases and pathogens such as influenza, Cryptosporidium, Giardia, mycotoxin exposure, hepatitis, dengue, West Nile, Zika, and yellow fever. Interest in chemical insults is also likely, providing community health and lifestyle data on narcotics consumption; use of pharmaceutical and personal care products (PPCP); PFAS and hazardous chemical exposure; and microplastic exposure. Successful application of WBE to monitor analytes correlated with carcinogen exposure, community stress prevalence, and dietary indicators has also been shown. Additionally, technological developments in in situ wastewater sensors, WBE-specific wastewater samplers, and the integration of artificial intelligence will drastically change the landscape of WBE through the development of 'smart sewer' networks. The rapid expansion of the WBE field is creating significant business opportunities for professionals across the scientific, engineering, and technology industries, ultimately focused on community health improvement.

Keywords: wastewater surveillance, wastewater-based epidemiology, smart cities, public health, pandemic management, substance abuse

Procedia PDF Downloads 108
140 The Gut Microbiome in Cirrhosis and Hepatocellular Carcinoma: Characterization of Disease-Related Microbial Signature and the Possible Impact of Life Style and Nutrition

Authors: Lena Lapidot, Amir Amnon, Rita Nosenko, Veitsman Ella, Cohen-Ezra Oranit, Davidov Yana, Segev Shlomo, Koren Omry, Safran Michal, Ben-Ari Ziv

Abstract:

Introduction: Hepatocellular carcinoma (HCC) is one of the leading causes of cancer-related mortality worldwide, and liver cirrhosis is the main predisposing risk factor for its development. The factor(s) influencing disease progression from cirrhosis to HCC remain unknown. The gut microbiota has recently emerged as a major player in different liver diseases; however, its association with HCC is still a mystery. Moreover, there might be an important association between the gut microbiota, nutrition, lifestyle, and the progression of cirrhosis and HCC. The aim of our study was to characterize the gut microbial signature, in association with lifestyle and nutrition, of patients with cirrhosis, HCC-cirrhosis, and healthy controls. Design: Stool samples were collected from 95 individuals (30 patients with HCC, 38 patients with cirrhosis, and 27 age-, gender-, and BMI-matched healthy volunteers). All participants answered lifestyle and food frequency questionnaires. 16S rRNA sequencing of fecal DNA was performed (MiSeq, Illumina). Results: There was a significant decrease in alpha diversity in patients with cirrhosis (q-value=0.033) and in patients with HCC-cirrhosis (q-value=0.032) compared to healthy controls. The microbiota of patients with HCC-cirrhosis, compared to patients with cirrhosis, was characterized by a significant overrepresentation of Clostridium (p-value=0.024) and CF231 (p-value=0.010) and lower expression of Alphaproteobacteria (p-value=0.039) and Verrucomicrobia (p-value=0.036) at several taxonomic levels: Verrucomicrobiae, Verrucomicrobiales, Verrucomicrobiaceae, and the genus Akkermansia (p-value=0.039). Furthermore, an analysis of predicted metabolic pathways (KEGG level 2) showed a significant decrease in the diversity of metabolic pathways in patients with HCC-cirrhosis (q-value=0.015) compared to controls, one of which was amino acid metabolism. Investigating the lifestyle and nutrition habits of patients with HCC-cirrhosis, we found correlations between intake of artificial sweeteners and Verrucomicrobia (q-value=0.12), high sugar intake and Synergistetes (q-value=0.021), and high BMI and the pathogen Campylobacter (q-value=0.066). Furthermore, overweight in patients with HCC-cirrhosis modified bacterial diversity (q-value=0.023) and composition (q-value=0.033). Conclusions: To the best of our knowledge, we present the first report of the gut microbial composition in patients with HCC-cirrhosis compared with cirrhotic patients and healthy controls. We have demonstrated that there are significant differences in the gut microbiome of patients with HCC-cirrhosis compared to cirrhotic patients and healthy controls. Our findings are all the more pronounced because the significantly increased bacteria Clostridium and CF231 in HCC-cirrhosis were not influenced by diet and lifestyle, implying that this change is due to the development of HCC. Further studies are needed to confirm these findings and assess causality.
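
A minimal sketch of the kind of alpha-diversity comparison reported above, assuming per-sample taxon count tables are already available; the toy counts and the choice of Shannon index with a Mann-Whitney test are illustrative assumptions, not the study's pipeline:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def shannon(counts):
    """Shannon alpha diversity H = -sum(p * ln p) over nonzero taxa."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(1)
# Toy taxon-count tables: rows are samples, columns are taxa
even = np.full(60, 1 / 60)                          # relatively even community
skew = np.linspace(1, 20, 60); skew /= skew.sum()   # skewed, lower-diversity community
controls = rng.multinomial(10_000, even, size=27)
cirrhosis = rng.multinomial(10_000, skew, size=38)

h_ctrl = [shannon(s) for s in controls]
h_cirr = [shannon(s) for s in cirrhosis]
stat, p = mannwhitneyu(h_ctrl, h_cirr, alternative="two-sided")
print(f"median H: controls {np.median(h_ctrl):.2f}, "
      f"cirrhosis {np.median(h_cirr):.2f}, p = {p:.3g}")
```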

Keywords: cirrhosis, hepatocellular carcinoma, lifestyle, liver disease, microbiome, nutrition

Procedia PDF Downloads 127
139 Classification of Foliar Nitrogen in Common Bean (Phaseolus vulgaris L.) Using Deep Learning Models and Images

Authors: Marcos Silva Tavares, Jamile Raquel Regazzo, Edson José de Souza Sardinha, Murilo Mesquita Baesso

Abstract:

Common beans are a widely cultivated and consumed legume globally, serving as a staple food, especially in developing countries, due to their nutritional characteristics. Nitrogen (N) is the nutrient that most limits productivity, and foliar analysis is crucial to ensure balanced nitrogen fertilization. Excessive N applications can, either in isolation or cumulatively, cause soil and water contamination and plant toxicity, and increase susceptibility to diseases and pests. However, the quantification of N using conventional methods is time-consuming and costly, demanding new technologies to optimize the adequate supply of N to plants. It thus becomes necessary to establish constant monitoring of the foliar content of this macronutrient, mainly at the V4 stage, aiming at precision management of nitrogen fertilization. In this work, the objective was to evaluate the performance of a deep learning model, ResNet-50, in the classification of foliar nitrogen in common beans using RGB images. The BRS Estilo cultivar was sown in a greenhouse in a completely randomized design with four nitrogen doses (T1 = 0 kg N ha-1, T2 = 25 kg N ha-1, T3 = 75 kg N ha-1, and T4 = 100 kg N ha-1) and 12 replications. Pots with 5 L capacity were used with a substrate composed of 43% soil (Neossolo Quartzarênico), 28.5% crushed sugarcane bagasse, and 28.5% cured bovine manure. The plants received 5 mm of water per day. The application of urea (45% N) and the acquisition of images occurred 14 and 32 days after sowing, respectively. Code developed in Matlab© R2022b was used to cut the original images into smaller blocks, creating an image bank composed of four folders representing the four classes, labeled T1, T2, T3, and T4, each containing 500 images of 224x224 pixels obtained from plants cultivated under the different N doses. Matlab© R2022b was also used for the implementation and performance analysis of the model. Efficiency was evaluated with a set of metrics, including accuracy (AC), F1-score (F1), specificity (SP), area under the curve (AUC), and precision (P). ResNet-50 showed high performance in the classification of foliar N levels in common beans, with an AC value of 85.6%. The F1 for classes T1, T2, T3, and T4 was 76%, 72%, 74%, and 77%, respectively. This study revealed that the use of RGB images combined with deep learning can be a promising alternative to slow laboratory analyses, capable of optimizing the estimation of foliar N. This can allow rapid intervention by the producer to achieve higher productivity and less fertilizer waste. Future work is encouraged to develop mobile devices capable of handling images using deep learning for in situ classification of the nutritional status of plants.
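
A minimal sketch of the transfer-learning setup described above, written in Python/PyTorch rather than the Matlab toolchain the study used; the folder layout (T1-T4 class folders of 224x224 RGB crops) follows the abstract, while the paths and hyperparameters are assumptions:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# ImageFolder expects one subfolder per class: data/train/{T1,T2,T3,T4}
tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_ds = datasets.ImageFolder("data/train", transform=tf)  # hypothetical path
loader = DataLoader(train_ds, batch_size=32, shuffle=True)

# Pretrained ResNet-50 with its final layer replaced by a 4-class head
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 4)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):          # illustrative epoch count
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```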

Keywords: convolutional neural network, residual network 50, nutritional status, artificial intelligence

Procedia PDF Downloads 16
138 Investigation of Xanthomonas euvesicatoria on Seed Germination and Seed to Seedling Transmission in Tomato

Authors: H. Mayton, X. Yan, A. G. Taylor

Abstract:

Infested tomato seeds were used to investigate the influence of Xanthomonas euvesicatoria on germination and seed-to-seedling transmission in controlled environment and greenhouse assays, in an effort to develop effective seed treatments and characterize seed-borne transmission of bacterial leaf spot of tomato. Bacterial leaf spot of tomato, caused by four distinct Xanthomonas species, X. euvesicatoria, X. gardneri, X. perforans, and X. vesicatoria, is a serious disease worldwide. In the United States, disease prevention is expensive for commercial growers in warm, humid regions of the country, and crop losses can be devastating. In this study, four different infested tomato seed lots were extracted from tomato fruits infected with bacterial leaf spot from a field in New York State in 2017 that had been inoculated with X. euvesicatoria. In addition, vacuum infiltration at 61 kilopascals for 1, 5, 10, and 15 minutes and seed soaking for 5, 10, 15, and 30 minutes with different bacterial concentrations were used to artificially infest seed in the laboratory. For controlled environment assays, infested tomato seeds from the field and laboratory were placed on moistened blue blotter in square plastic boxes (10 cm x 10 cm) and incubated at 20/30 ˚C with an 8/16-hour light cycle. Infested tomato seeds from the field and laboratory were also planted in soil (peat-lite medium) in small plastic trays and placed in a greenhouse with 24/18 ˚C day/night temperatures and a 14-hour photoperiod. Seed germination was assessed after eight days in the laboratory and 14 days in the greenhouse. Polymerase chain reaction (PCR) using the hrpB7 primers (RST65 [5’-GTCGTCGTTACGGCAAGGTGGTG-3’] and RST69 [5’-TCGCCCAGCGTCATCAGGCCATC-3’]) was performed to confirm the presence or absence of the bacterial pathogen in seed lots collected from the field and in germinating seedlings in all experiments. For infested seed lots from the field, germination was lowest (84%) in the seed lot with the highest level of bacterial infestation (55%) and ranged from 84-98%. No adverse effect on germination was observed for artificially infested seeds at any bacterial concentration or method of infiltration when compared to a non-infested control; germination in laboratory assays for artificially infested seeds ranged from 82-100%. In controlled environment assays, 2.5% of seedlings were PCR positive for the pathogen, and in the greenhouse assays, no infected seedlings were detected. From these experiments, X. euvesicatoria does not appear to adversely influence germination. The lower germination rate of field-collected seed may be due to contamination with multiple pathogens and saprophytic organisms, as no effect of artificial bacterial seed infestation on germination was observed in the laboratory. No evidence of systemic movement from seed to seedling was observed in the greenhouse assays; however, in the controlled environment assays, some seedlings were PCR positive. Additional experiments are underway with green fluorescent protein-expressing isolates to further characterize seed-to-seedling transmission of the bacterial leaf spot pathogen in tomato.

Keywords: bacterial leaf spot, seed germination, tomato, Xanthomonas euvesicatoria

Procedia PDF Downloads 133
137 Audit and Assurance Program for AI-Based Technologies

Authors: Beatrice Arthur

Abstract:

The rapid development of artificial intelligence (AI) has transformed various industries, enabling faster and more accurate decision-making processes. However, with these advancements come increased risks, including data privacy issues, systemic biases, and challenges related to transparency and accountability. As AI technologies become more integrated into business processes, there is a growing need for comprehensive auditing and assurance frameworks to manage these risks and ensure ethical use. This paper provides a literature review on AI auditing and assurance programs, highlighting the importance of adapting traditional audit methodologies to the complexities of AI-driven systems. Objective: The objective of this review is to explore current AI audit practices and their role in mitigating risks, ensuring accountability, and fostering trust in AI systems. The study aims to provide a structured framework for developing audit programs tailored to AI technologies while also investigating how AI impacts governance, risk management, and regulatory compliance in various sectors. Methodology: This research synthesizes findings from academic publications and industry reports from 2014 to 2024, focusing on the intersection of AI technologies and IT assurance practices. The study employs a qualitative review of existing audit methodologies and frameworks, particularly the COBIT 2019 framework, to understand how audit processes can be aligned with AI governance and compliance standards. The review also considers real-time auditing as an emerging necessity for influencing AI system design during early development stages. Outcomes: Preliminary findings indicate that while AI auditing is still in its infancy, it is rapidly gaining traction as both a risk management strategy and a potential driver of business innovation. Auditors are increasingly being called upon to develop controls that address the ethical and operational risks posed by AI systems. The study highlights the need for continuous monitoring and adaptable audit techniques to handle the dynamic nature of AI technologies. Future Directions: Future research will explore the development of AI-specific audit tools and real-time auditing capabilities that can keep pace with evolving technologies. There is also a need for cross-industry collaboration to establish universal standards for AI auditing, particularly in high-risk sectors like healthcare and finance. Further work will involve engaging with industry practitioners and policymakers to refine the proposed governance and audit frameworks. Funding/Support Acknowledgements: This research is supported by the Information Systems Assurance Management Program at Concordia University of Edmonton.

Keywords: AI auditing, assurance, risk management, governance, COBIT 2019, transparency, accountability, machine learning, compliance

Procedia PDF Downloads 22
136 Modeling Competition Between Subpopulations with Variable DNA Content in Resource-Limited Microenvironments

Authors: Parag Katira, Frederika Rentzeperis, Zuzanna Nowicka, Giada Fiandaca, Thomas Veith, Jack Farinhas, Noemi Andor

Abstract:

Resource limitations shape the outcome of competitions between genetically heterogeneous pre-malignant cells. One example of such heterogeneity is in the ploidy (DNA content) of pre-malignant cells. A whole-genome duplication (WGD) transforms a diploid cell into a tetraploid one and has been detected in 28-56% of human cancers. If a tetraploid subclone expands, it consistently does so early in tumor evolution, when cell density is still low and competition for nutrients is comparatively weak, an observation confirmed for several tumor types. WGD+ cells need more resources to synthesize increasing amounts of DNA, RNA, and proteins. To quantify resource limitations and how they relate to ploidy, we performed a pan-cancer analysis of WGD, PET/CT, and MRI scans. Segmentation of >20 different organs from >900 PET/CT scans was performed with MOOSE. We observed a strong correlation between organ-wide population-average estimates of oxygen and the average ploidy of cancers growing in the respective organ (Pearson R = 0.66; P = 0.001). In-vitro experiments using near-diploid and near-tetraploid lineages derived from a breast cancer cell line supported the hypothesis that DNA content influences glucose- and oxygen-dependent proliferation, death, and migration rates. To model how subpopulations with variable DNA content compete in the resource-limited environment of the human brain, we developed a stochastic state-space model of the brain (S3MB). The model discretizes the brain into voxels, whereby the state of each voxel is defined by 8+ variables that are updated over time: stiffness, oxygen, phosphate, glucose, vasculature, dead cells, migrating and proliferating cells of various DNA content, and treatment conditions such as radiotherapy and chemotherapy. Well-established Fokker-Planck partial differential equations govern the distribution of resources and cells across voxels. We applied S3MB to sequencing and imaging data obtained from a primary GBM patient. We performed whole genome sequencing (WGS) of four surgical specimens collected during the first and second surgeries of the GBM and used HATCHET to quantify its clonal composition and how it changed between the two surgeries. HATCHET identified two aneuploid subpopulations of ploidy 1.98 and 2.29, respectively. The low-ploidy clone was dominant at the time of the first surgery and became even more dominant upon recurrence. MRI images were available before and after each surgery and were registered to MNI space. The S3MB domain was initiated from 4 mm³ voxels of the MNI space. T1 post-contrast and T2 FLAIR scans acquired after the first surgery informed tumor cell densities per voxel. Magnetic resonance elastography and PET/CT scans informed stiffness and glucose access per voxel. We performed a parameter search to recapitulate the GBM's tumor cell density and ploidy composition before the second surgery. Results suggest that the high-ploidy subpopulation had a higher glucose-dependent proliferation rate (0.70 vs. 0.49) but a lower glucose-dependent death rate (0.47 vs. 1.42). These differences resulted in spatial differences in the distribution of the two subpopulations. Our results contribute to a better understanding of how genomics and microenvironments interact to shape cell fate decisions and could help pave the way to therapeutic strategies that mimic prognostically favorable environments.
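As a rough illustration of the kind of voxel update rule the S3MB description implies, the sketch below evolves two subpopulations competing for glucose on a 2D grid, with simple diffusion standing in for the Fokker-Planck transport terms. The grid size, diffusion and consumption coefficients, and the functional forms are assumptions for demonstration only; the proliferation and death coefficients are the illustrative values reported above (0.70 vs. 0.49 and 0.47 vs. 1.42).

```python
import numpy as np

# Toy 2D version of a voxel-based competition model in the spirit of
# S3MB; all update rules and coefficients here are assumptions for
# demonstration, not the authors' calibrated model.

N = 64                      # voxels per side
dt, dx = 0.1, 1.0           # time step and voxel spacing
D_cell, D_glc = 0.01, 0.1   # diffusion coefficients (assumed)

glucose = np.ones((N, N))                  # normalized glucose field
low  = np.random.rand(N, N) * 0.01         # low-ploidy cell density
high = np.random.rand(N, N) * 0.01         # high-ploidy cell density

# Glucose-dependent rates (illustrative values from the abstract:
# proliferation 0.49 vs. 0.70, death 1.42 vs. 0.47).
prolif = {"low": 0.49, "high": 0.70}
death  = {"low": 1.42, "high": 0.47}

def laplacian(f):
    """Five-point stencil with edge-replicated (no-flux) boundaries."""
    fp = np.pad(f, 1, mode="edge")
    return (fp[:-2, 1:-1] + fp[2:, 1:-1] +
            fp[1:-1, :-2] + fp[1:-1, 2:] - 4 * f) / dx**2

for step in range(1000):
    starv = np.clip(1.0 - glucose, 0.0, 1.0)     # death pressure when glucose is scarce
    cap = np.clip(1.0 - (low + high), 0.0, 1.0)  # shared carrying capacity
    d_low  = dt * (D_cell * laplacian(low)
                   + prolif["low"]  * glucose * cap * low
                   - death["low"]   * starv * low)
    d_high = dt * (D_cell * laplacian(high)
                   + prolif["high"] * glucose * cap * high
                   - death["high"]  * starv * high)
    # Glucose diffuses, is replenished by vasculature, and is consumed;
    # higher-ploidy cells are assumed to consume more per cell.
    d_glc = dt * (D_glc * laplacian(glucose)
                  + 0.05 * (1.0 - glucose)
                  - 0.1 * (low + 2.0 * high) * glucose)
    low, high = low + d_low, high + d_high
    glucose = np.clip(glucose + d_glc, 0.0, None)

print("final subpopulation fractions:",
      low.sum() / (low + high).sum(), high.sum() / (low + high).sum())
```

Varying the coefficients and the vasculature term then lets one explore how resource gradients redistribute the two populations across voxels, which is the question the full S3MB parameter search addresses.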

Keywords: tumor evolution, intra-tumor heterogeneity, whole-genome doubling, mathematical modeling

Procedia PDF Downloads 70
135 Semantic Differential Technique as a Kansei Engineering Tool to Enquire Public Space Design Requirements: The Case of Parks in Tehran

Authors: Nasser Koleini Mamaghani, Sara Mostowfi

Abstract:

The complexity of public space design makes it difficult for designers to simultaneously consider all issues in thorough decision-making. Among public spaces, the public space around people's homes is the most prominent, affecting people's daily lives. For recreational public spaces in cities, the main purpose is to design for experiences that enable a deep feeling of peace and a moment away from hectic daily life. Respecting human emotions and restoring natural environments, although difficult and to some extent out of reach, are key issues in designing such spaces. In this paper, we propose to analyse the structure of recreational public spaces and the related emotional impressions, and to investigate how these structures influence people's choice of public spaces by using differential semantics. According to Kansei methodology, in order to evaluate a situation appropriately, the assessment variables must be adapted to the user's mental scheme; this means that the first step is to identify a space's conceptual scheme. In our case study, 32 Kansei words and 4 different locations, each offering a different sensory experience, were selected. The 4 locations were all parks in the city of Tehran (Iran), each with a unique combination and structure of environmental and artificial elements such as fountains, lighting, sculptures, and music (sound). The first, Park No. 1, has a natural environment; the selected space was a fountain with moving lights and a sculpture. In the second, Park No. 2, construction styles from different countries are represented; the selected space featured traditional Iranian architecture with a fountain and trees. The third, Park No. 3, has a modern environment and spaces; the selected space included a fountain that moved in time with music and lighting. The fourth, Park No. 4, combines the four elements of water, fire, earth, and wind; the selected space was fountains squirting water from the ground up. 80 participants (55 male and 25 female) aged 20-60 years took part in this experiment, each completing the questionnaire in the park they were visiting. A five-point semantic differential scale was used to determine the relation between space details and adjectives (Kansei words). The data were analyzed by a multivariate statistical technique (factor analysis using SPSS Statistics). Finally, the results of this analysis yield criteria that can inspire future space design aimed at creating pleasant feelings in users.
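For readers unfamiliar with the analysis step, the sketch below shows how semantic differential ratings of this shape (participants x Kansei words) can be reduced to a small number of latent factors, mirroring the factor analysis performed in SPSS. The data here are randomly generated stand-ins for the questionnaire responses, and the choice of four factors is an assumption; a real study would retain factors based on eigenvalue or scree criteria.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Synthetic stand-in for the survey: 80 participants rating 32 Kansei
# words on a five-point semantic differential scale (1..5). Real values
# would come from the park questionnaires.
n_participants, n_words = 80, 32
ratings = rng.integers(1, 6, size=(n_participants, n_words)).astype(float)

# Standardize each Kansei-word column before factor extraction.
z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)

fa = FactorAnalysis(n_components=4, random_state=0)  # factor count is an assumption
scores = fa.fit_transform(z)      # participant scores on each latent factor
loadings = fa.components_.T       # (32 words x 4 factors) loading matrix

# Kansei words loading strongly on the same factor point to one
# underlying design criterion for the space.
for f in range(loadings.shape[1]):
    top = np.argsort(-np.abs(loadings[:, f]))[:5]
    print(f"factor {f}: top Kansei-word indices {top.tolist()}")
```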

Keywords: environmental design, differential semantics, Kansei engineering, subjective preferences, space

Procedia PDF Downloads 407
134 Investigation of Permeate Flux through DCMD Module by Inserting S-Ribs Carbon-Fiber Promoters with Ascending and Descending Hydraulic Diameters

Authors: Chii-Dong Ho, Jian-Har Chen

Abstract:

The decline in permeate flux across membrane modules is attributed to the increase in temperature polarization resistance in flat-plate Direct Contact Membrane Distillation (DCMD) modules for pure water productivity. Researchers have found that this effect can be diminished by embedding turbulence promoters, which augment turbulence intensity at the cost of increased power consumption, thereby improving vapor permeate flux. The device performance of DCMD modules for permeate flux was further enhanced by shrinking the hydraulic diameters of the inserted S-ribs carbon-fiber promoters, while also accounting for the increment in energy consumption. A mass-balance formulation based on the resistance-in-series model, with energy conservation expressed in one-dimensional governing equations, was developed theoretically and tested experimentally on a flat-plate polytetrafluoroethylene/polypropylene (PTFE/PP) membrane module to predict permeate flux and temperature distributions. The ratio of permeate flux enhancement to energy consumption increment, serving as an assessment of economic and technical feasibility, was calculated to determine suitable design parameters for DCMD operations with inserted S-ribs carbon-fiber turbulence promoters. An economic analysis was also performed, weighing permeate flux improvement against energy consumption increment for modules with promoter-filled channels of different array configurations and various hydraulic diameters of turbulence promoters. Results showed that the ratio of permeate flux improvement to energy consumption increment in descending hydraulic-diameter modules is higher than in uniform hydraulic-diameter modules. The fabrication details of the S-ribs carbon-fiber filaments and the schematic configuration of the flat-plate DCMD experimental setup, with acrylic plates as external walls, are presented in this study. The S-ribs carbon fibers act as turbulence promoters in the hot saline feed stream, which was prepared by adding inorganic salts (NaCl) to distilled water. Theoretical predictions agreed well with experimental results, confirming the considerable permeate flux enhancement achieved by the new DCMD module design with inserted S-ribs carbon-fiber promoters. Additionally, the Nusselt number for the membrane module with inserted S-ribs carbon-fiber promoters was generalized into a simplified expression to predict the heat transfer coefficient and, in turn, the permeate flux.
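To make the resistance-in-series idea concrete, the following minimal sketch estimates DCMD permeate flux by coupling vapor transport across the membrane (driven by the saturation-pressure difference at the two membrane surfaces, via the Antoine equation for water) with film heat-transfer resistances on each side. All coefficient values are illustrative assumptions, not fitted parameters of the module studied here.

```python
# Minimal resistance-in-series sketch of permeate flux in a flat-plate
# DCMD module. All coefficient values below are illustrative assumptions.

C_m   = 3.0e-7   # membrane permeation coefficient, kg/(m^2 s Pa) (assumed)
h_h   = 2500.0   # hot-side film heat-transfer coefficient, W/(m^2 K) (assumed)
h_c   = 2500.0   # cold-side film coefficient, W/(m^2 K) (assumed)
k_m_d = 600.0    # membrane conduction term k_m/delta, W/(m^2 K) (assumed)
H_v   = 2.33e6   # latent heat of vaporization of water, J/kg

def p_sat(T_c):
    """Antoine equation for water (T in deg C, result in Pa)."""
    return 133.322 * 10 ** (8.07131 - 1730.63 / (233.426 + T_c))

def dcmd_flux(T_hot, T_cold, n_iter=200):
    """Fixed-point iteration on the membrane-surface temperatures T1, T2."""
    T1, T2 = T_hot, T_cold
    for _ in range(n_iter):
        J = C_m * (p_sat(T1) - p_sat(T2))   # vapor flux across the membrane
        q = k_m_d * (T1 - T2) + J * H_v     # conduction + latent heat flux
        # Under-relax the film balances q = h_h (T_hot - T1) and
        # q = h_c (T2 - T_cold) for stable convergence.
        T1 = 0.5 * T1 + 0.5 * (T_hot - q / h_h)
        T2 = 0.5 * T2 + 0.5 * (T_cold + q / h_c)
    return J, T1, T2

J, T1, T2 = dcmd_flux(T_hot=60.0, T_cold=25.0)
print(f"flux = {3600 * J:.2f} kg/(m^2 h), surface temps = {T1:.1f}/{T2:.1f} C")
```

Raising the film coefficients h_h and h_c, which is what the inserted turbulence promoters do, pulls T1 and T2 toward the bulk temperatures, i.e., it reduces exactly the temperature polarization that the module design targets.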

Keywords: permeate flux, Nusselt number, DCMD module, temperature polarization, hydraulic diameters

Procedia PDF Downloads 7
133 Toxicity Evaluation of Reduced Graphene Oxide on First Larval Stages of Artemia sp.

Authors: Roberta Pecoraro

Abstract:

The focus of this work was to investigate the potential toxic effect of titanium dioxide-reduced graphene oxide (TiO₂-rGO) nanocomposites on nauplii of the microcrustacean Artemia sp. In order to assess the nanocomposite's toxicity, a short-term test was performed by exposing nauplii to solutions containing TiO₂-rGO. To prepare the TiO₂-rGO nanocomposites, a green procedure based on solar photoreduction was proposed; it allows the photocatalysts to be obtained by exploiting the photocatalytic properties of titania activated by solar irradiation, avoiding the high temperatures and pressures required for standard hydrothermal synthesis. Powders of TiO₂-rGO supplied by the Department of Chemical Sciences (University of Catania) are indicated as TiO₂-rGO at 1% and TiO₂-rGO at 2%. Starting from a stock solution (1 mg rGO-TiO₂/10 ml ASPM water) of each type, we tested four different concentrations (serial dilutions ranging from 10⁻¹ to 10⁻⁴ mg/ml). All solutions were sonicated for 12 min prior to use. Artificial seawater (called ASPM water) was prepared to guarantee the hatching of the cysts and to maintain the nauplii; the durable cysts used in this study, marketed by JBL (JBL GmbH & Co. KG, Germany), were hydrated with ASPM water to obtain nauplii (instar II-III larvae). The cysts were hatched in the laboratory by immersing them in ASPM water in a 500 ml beaker, kept constantly oxygenated by an aerator supplying microbubbles of air; after 24-48 hours, the cysts hatched and the nauplii appeared. Nauplii in the second and third stages of development were collected individually under a stereomicroscope and transferred into 96-well microplates, one nauplius per well. Each well was then filled with 300 µl of the specific solution concentration under test, while control samples were incubated in ASPM water only. Replication was performed for each concentration. Finally, the microplates were placed on an orbital shaker, and the tests were read 24 and 48 hours after inoculating the solutions to assess the endpoint (immobility/death) for the larvae. Nauplii that appeared motionless were counted as dead, and the percentage of mortality was calculated for each treatment. The results showed a low percentage of immobilization for all concentrations tested: for TiO₂-rGO at 1%, it was below 12% after 24 h and below 15% after 48 h; for TiO₂-rGO at 2%, below 8% after 24 h and below 12% after 48 h. In agreement with other studies in the literature, the results showed neither mortality nor toxic effects on the development of larvae after exposure to rGO. Finally, it is important to highlight that the TiO₂-rGO catalysts were tested in the solar photodegradation of a toxic herbicide (2,4-dichlorophenoxyacetic acid, 2,4-D), obtaining a high percentage of degradation; this alternative approach can therefore be considered a good strategy for obtaining high-performing photocatalysts.

Keywords: nauplii, photocatalytic properties, reduced GO, short-term toxicity test, titanium dioxide

Procedia PDF Downloads 182
132 Discovering the Effects of Meteorological Variables on the Air Quality of Bogota, Colombia, by Data Mining Techniques

Authors: Fabiana Franceschi, Martha Cobo, Manuel Figueredo

Abstract:

Bogotá, the capital of Colombia, is its largest city and one of the most polluted in Latin America due to fast economic growth over the last ten years. Bogotá has been affected by high pollution events leading to high concentrations of PM10 and NO2 that exceed the local 24-hour legal limits (100 and 150 µg/m³, respectively). The most important pollutants in the city are PM10 and PM2.5 (which are associated with respiratory and cardiovascular problems), and it is known that their concentrations in the atmosphere depend on local meteorological factors. Therefore, it is necessary to establish a relationship between the meteorological variables and the concentrations of atmospheric pollutants such as PM10, PM2.5, CO, SO2, NO2, and O3. This study aims to determine the interrelations between meteorological variables and air pollutants in Bogotá using data mining techniques. Data from 13 monitoring stations were collected from the Bogotá Air Quality Monitoring Network for the period 2010-2015. The Principal Component Analysis (PCA) algorithm was applied to obtain preliminary relations between all the parameters, and afterwards the K-means clustering technique was implemented to corroborate those relations and to find patterns in the data. PCA was also applied on a per-shift basis (morning, afternoon, night, and early morning) to check for variation of the previous trends, and on a per-year basis to verify that the identified trends persisted throughout the study period. Results demonstrated that wind speed, wind direction, temperature, and NO2 are the factors with the strongest influence on PM10 concentrations. Furthermore, it was confirmed that high-humidity episodes increased PM2.5 levels. Directly proportional relationships were found between O3 levels and wind speed and radiation, while there is an inverse relationship between O3 levels and humidity. Concentrations of SO2 increase in the presence of PM10 and decrease with wind speed and wind direction. The results also showed a decreasing trend in pollutant concentrations over the last five years, and in rainy periods (March-June and September-December), some precipitation-related trends were stronger. Results obtained with K-means demonstrated that it was possible to find patterns in the data, and showed similar conditions and data distributions among the Carvajal, Tunal, and Puente Aranda stations, and also between Parque Simón Bolívar and Las Ferias. Applying the same technique per year verified that the aforementioned trends prevailed during the study period. It was concluded that the PCA algorithm is useful for establishing preliminary relationships among variables, and K-means clustering for finding patterns in the data and understanding its distribution. The discovery of patterns in the data allows these clusters to be used as input to an Artificial Neural Network prediction model.
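The two-step pipeline described above can be sketched as follows. The file name and column names are hypothetical placeholders for a monitoring-network export, and the numbers of components and clusters are assumptions to be tuned (e.g., by scree plots or silhouette scores).

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical sketch of the PCA + K-means pipeline described above.
# File and column names are assumptions; the real data came from 13
# stations of the Bogota Air Quality Monitoring Network (2010-2015).
cols = ["PM10", "PM2.5", "CO", "SO2", "NO2", "O3",
        "wind_speed", "wind_dir", "temperature", "humidity", "precipitation"]
df = pd.read_csv("bogota_air_quality.csv")[cols].dropna()

X = StandardScaler().fit_transform(df)   # z-score so units don't dominate

# Step 1: PCA for preliminary relationships between variables.
pca = PCA(n_components=3)
scores = pca.fit_transform(X)
print("explained variance:", pca.explained_variance_ratio_)
# Variables with large same-sign weights on a component co-vary; e.g.,
# wind speed loading opposite PM10 would indicate dispersion.
print(pd.DataFrame(pca.components_, columns=cols).round(2))

# Step 2: K-means on the PCA scores to find patterns / station regimes.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scores)
df["cluster"] = km.labels_
print(df.groupby("cluster")[["PM10", "wind_speed", "humidity"]].mean())
```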

Keywords: air pollution, air quality modelling, data mining, particulate matter

Procedia PDF Downloads 258
131 Enhanced Furfural Extraction from Aqueous Media Using Neoteric Hydrophobic Solvents

Authors: Ahmad S. Darwish, Tarek Lemaoui, Hanifa Taher, Inas M. AlNashef, Fawzi Banat

Abstract:

This research reports a systematic top-down approach for designing neoteric hydrophobic solvents, particularly deep eutectic solvents (DES) and ionic liquids (IL), as furfural extractants from aqueous media for the application of sustainable biomass conversion. The first stage of the framework entailed screening 32 neoteric solvents to determine their efficacy against toluene as the application's conventional benchmark for comparison. The selection criteria for the best solvents encompassed not only their efficiency in extracting furfural but also low viscosity and minimal toxicity levels. Additionally, for the DESs, their natural origins, availability, and biodegradability were also taken into account. From the screening pool, two neoteric solvents were selected: thymol:decanoic acid 1:1 (Thy:DecA) and trihexyltetradecyl phosphonium bis(trifluoromethylsulfonyl) imide [P₁₄,₆,₆,₆][NTf₂]. These solvents outperformed the toluene benchmark, achieving efficiencies of 94.1% and 97.1%, respectively, compared to toluene's 81.2%, while also possessing the desired properties. These solvents were then characterized thoroughly in terms of their physical properties, thermal properties, critical properties, and cross-contamination solubilities. The selected neoteric solvents were then extensively tested under various operating conditions and exhibited exceptionally stable performance, maintaining high efficiency across a broad range of temperatures (15-100 °C), pH levels (1-13), and furfural concentrations (0.1-2.0 wt%), with a remarkable equilibrium time of only 2 minutes; most notably, they demonstrated high efficiencies even at low solvent-to-feed ratios. The durability of the neoteric solvents was also validated, remaining stable over multiple extraction-regeneration cycles with limited leachability to the aqueous phase (≈0.1%). Moreover, the extraction performance of the solvents was modeled through machine learning, specifically multiple non-linear regression (MNLR) and artificial neural networks (ANN). The models demonstrated high accuracy, indicated by their low absolute average relative deviations: 2.74% and 2.28% for Thy:DecA and [P₁₄,₆,₆,₆][NTf₂], respectively, using MNLR, and 0.10% for Thy:DecA and 0.41% for [P₁₄,₆,₆,₆][NTf₂] using ANN, highlighting the significantly enhanced predictive accuracy of the ANN. The neoteric solvents presented herein offer noteworthy advantages over traditional organic solvents, including high efficiency in both extraction and regeneration processes, stability, and minimal leachability, making them particularly suitable for applications involving aqueous media. Moreover, these solvents are more environmentally friendly, incorporating renewable and sustainable components like thymol and decanoic acid. The exceptional efficacy of the newly developed neoteric solvents marks a significant advancement, providing a green and sustainable alternative for furfural production from biowaste.
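As a sketch of how such models are built and compared, the snippet below trains a small ANN regressor on synthetic extraction data spanning the operating ranges named above and scores it with the absolute average relative deviation (AARD), the metric reported here. The generating function, network size, and feature set are assumptions for illustration, not the study's fitted model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for the extraction dataset: features are the
# operating conditions named in the abstract (temperature, pH, furfural
# feed concentration, solvent-to-feed ratio); the target is extraction
# efficiency (%). The underlying function is an assumption.
n = 500
T  = rng.uniform(15, 100, n)     # temperature, deg C
pH = rng.uniform(1, 13, n)
c0 = rng.uniform(0.1, 2.0, n)    # furfural feed concentration, wt%
sf = rng.uniform(0.1, 1.0, n)    # solvent-to-feed ratio
X = np.column_stack([T, pH, c0, sf])
y = 90 + 5 * np.tanh(5 * sf) - 0.02 * (T - 50) ** 2 / 50 + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)

def aard(y_true, y_pred):
    """Absolute average relative deviation, in percent."""
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))

print(f"ANN AARD on held-out data: {aard(y_te, ann.predict(X_te)):.2f}%")
```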

Keywords: sustainable biomass conversion, furfural extraction, ionic liquids, deep eutectic solvents

Procedia PDF Downloads 68
130 Using ANN in Emergency Reconstruction Projects Post Disaster

Authors: Rasha Waheeb, Bjorn Andersen, Rafa Shakir

Abstract:

Purpose: The purpose of this study is to help avoid the delays that occur in emergency reconstruction projects, especially in post-disaster circumstances, whether natural or man-made, given their particular national and humanitarian importance. We present theoretical and practical concepts for project management in the construction industry that draw on a range of global and local experience. The study aimed to identify the factors that most effectively cause delay in construction projects in Iraq, affecting time, cost, and quality, and to find the best solutions by setting parameters that restore balance. Thirty projects in different areas of construction were selected as the sample for this study. Design/methodology/approach: This study discusses reconstruction strategies and the delays in time and cost caused by different delay factors in selected projects in Iraq, with Baghdad as a case study. A case study approach was adopted, with thirty construction projects of different types and sizes selected from the Baghdad region. Participants from the case projects provided data through a data collection instrument distributed as a survey. A mixed-methods approach was applied. Mathematical data analysis was used to construct models that predict delay in project time and cost before projects start; artificial neural network (ANN) analysis was selected as the mathematical approach. These models are intended mainly to help decision-makers in construction projects find solutions to delays before they cause any inefficiency in the project being implemented, and to remove obstacles so as to develop this industry in Iraq. The approach was applied using the data collected through the survey questionnaire. Findings: The most important delay factors identified as leading to schedule overruns were contractor failure, redesigning of designs/plans and change orders, security issues, selection of low-price bids, weather factors, and owner failures. Some of these are quite in line with findings from similar studies in other countries and regions, but some are unique to the Iraqi project sample, such as security issues and low-price bid selection. Originality/value: ANN analysis was selected because ANNs have rarely been used in project management and had never been used in Iraq to find solutions for problems in the construction industry. The methodology is also applicable to complicated problems for which there is no ready interpretation or solution: where statistical analysis showed that a problem did not follow a linear equation or exhibited only weak correlations, ANNs were suggested because they handle nonlinear problems and find the relationship between input and output data.
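A minimal sketch of the kind of ANN model the study describes is shown below: survey-derived scores on the identified delay factors are mapped to predicted time and cost overruns for a project before it starts. The factor scoring scheme, network architecture, and synthetic data-generating rule are all assumptions; the real model would be trained on the data from the 30 surveyed Baghdad projects.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)

# Synthetic stand-in for the 30-project survey: each project is scored
# on the six delay factors identified above (here on a 1-5 scale), and
# the targets are time overrun (%) and cost overrun (%). The generating
# rule below is an assumption for illustration only.
factors = ["contractor_failure", "redesign_change_orders", "security",
           "low_price_bid", "weather", "owner_failure"]
X = rng.integers(1, 6, size=(30, len(factors))).astype(float)
w_time = np.array([8, 6, 9, 5, 3, 4])   # assumed factor weights
w_cost = np.array([7, 5, 6, 8, 2, 5])
y = np.column_stack([X @ w_time, X @ w_cost]) + rng.normal(0, 5, (30, 2))

# One small multi-output ANN jointly predicts time and cost overruns
# before a project starts, as the study proposes.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=20000, random_state=0),
)
model.fit(X, y)

new_project = np.array([[4, 3, 5, 2, 1, 3]])  # hypothetical factor scores
t_pct, c_pct = model.predict(new_project)[0]
print(f"predicted time overrun: {t_pct:.0f}%, cost overrun: {c_pct:.0f}%")
```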

Keywords: construction projects, delay factors, emergency reconstruction, innovation, ANN, post disasters, project management

Procedia PDF Downloads 165
129 Analysis of Digital Transformation in Banking: The Hungarian Case

Authors: Éva Pintér, Péter Bagó, Nikolett Deutsch, Miklós Hetényi

Abstract:

The process of digital transformation has a profound influence on all sectors of the worldwide economy and the business environment. The influence of blockchain technology can be observed in the digital economy and e-government, rendering it an essential element of a nation's growth strategy. The banking industry is experiencing significant expansion and development of financial technology firms. Utilizing emerging technologies such as artificial intelligence (AI), machine learning (ML), and big data (BD), these entrants are offering more streamlined financial solutions, promptly addressing client demands, and presenting a challenge to incumbent institutions. The advantages of digital transformation are evident in the corporate realm, and firms that resist its adoption put their survival at risk. The advent of digital technologies has revolutionized the business environment, streamlining processes and creating opportunities for enhanced communication and collaboration. With the aid of digital technologies, businesses can now swiftly retrieve vast quantities of information while accelerating the creation of new and improved products and services. Big data analytics is generally recognized as a transformative force in business, considered the fourth paradigm of science, and seen as the next frontier for innovation, competition, and productivity. Big data, an emerging technology that is shaping the future of the banking sector, offers numerous advantages to banks. It enables them to effectively track consumer behavior and make informed decisions, thereby enhancing their operational efficiency. Banks may embrace big data technologies to identify fraud promptly and efficiently, as well as to gain insights into client preferences, which can then be leveraged to create better-tailored products and services. Moreover, the utilization of big data technology empowers banks to develop more intelligent and streamlined models for accurately identifying and targeting suitable clientele with pertinent offers. There is a scarcity of research on big data analytics in the banking industry, with the majority of existing studies only examining the advantages and prospects associated with big data. Although big data technologies are crucial, there is a dearth of empirical evidence about the role of big data analytics (BDA) capabilities in bank performance. This research addresses a gap in the existing literature by introducing a model that combines the resource-based view (RBV), the technology-organization-environment (TOE) framework, and dynamic capability theory (DC). This study investigates the influence of BDA utilization on the performance of market and risk management, supported by a comparative examination of Hungarian mobile banking services.

Keywords: big data, digital transformation, dynamic capabilities, mobile banking

Procedia PDF Downloads 64
128 Challenges of Blockchain Applications in the Supply Chain Industry: A Regulatory Perspective

Authors: Pardis Moslemzadeh Tehrani

Abstract:

Due to the emergence of blockchain technology and the benefits of cryptocurrencies, intelligent or smart contracts are gaining traction. Artificial intelligence (AI) is transforming our lives, and it is being embraced by a wide range of sectors. Smart contracts, which are at the heart of blockchains, incorporate AI characteristics. Such contracts are referred to as "smart" because the underlying technology allows contracting parties to agree on terms expressed in computer code that defines machine-readable instructions for computers to follow under specific conditions; execution happens automatically when those conditions are met. Initially utilised for financial transactions, blockchain applications have since expanded to the insurance and medical sectors, as well as supply networks. Raw material acquisition by suppliers, design and fabrication by manufacturers, delivery of final products to consumers, and even post-sales logistics assistance are all part of supply chains. Managing supply chains involves many issues from the planning and coordination stages onward which, due to their complexity, can be implemented in a smart contract on a blockchain. Manufacturing delays and constraints on third-party supplies of product components have raised concerns about the integrity and accountability of supply chains for food and pharmaceutical items. Other concerns include regulatory compliance in multiple jurisdictions and transportation circumstances (for instance, many products must be kept in temperature-controlled environments to ensure their effectiveness). Products are handled by several providers before reaching customers in modern economic systems, and information is sent between suppliers, shippers, distributors, and retailers at every stage of the production and distribution process; information travels more effectively when intermediaries are eliminated from the equation. The usage of blockchain technology could be a viable solution to these coordination issues. In blockchains, smart contracts allow for the rapid transmission of production data, logistical data, inventory levels, and sales data. This research investigates the legal and technical advantages and disadvantages of AI-blockchain technology in the supply chain business. It aims to uncover the applicable legal problems and barriers to the use of AI-blockchain technology in supply chains, particularly in the food industry. It also discusses the essential legal and technological issues and impediments to supply chain implementation for stakeholders, as well as methods for overcoming them before releasing the technology to clients. Because little research has been done on this topic, it is difficult for industrial stakeholders to grasp how blockchain technology could be used in their respective operations. As a result, the focus of this research is on building advanced and complex contractual terms in supply chain smart contracts on blockchains to cover unforeseen supply chain challenges.

Keywords: blockchain, supply chain, IoT, smart contract

Procedia PDF Downloads 125
127 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology

Authors: Sanjeev Kumar Appicharla

Abstract:

This paper presents the results of the modelling and analysis of a European Rail Traffic Management System (ERTMS) safety-critical incident on the Cambrian Railway in the UK, using the RAIB 17/2019 report as a primary input, to raise awareness of biases in the systems engineering process. The RAIB, the UK's independent accident investigator, published Report RAIB 17/2019 detailing its investigation of the focal event in the form of the immediate cause, causal factors, and underlying factors, with recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. The Systems for Investigation of Railway Interfaces (SIRI) methodology is used to model and analyze the safety-critical incident. The SIRI methodology uses the Swiss Cheese Model to model the incident and identifies latent failure conditions (potentially less-than-adequate conditions) by means of the management oversight and risk tree technique. The benefits of the SIRI methodology are threefold. First, it incorporates the "heuristics and biases" approach, advanced by the 2002 Nobel laureate in Economic Sciences, Prof. Daniel Kahneman, into the management oversight and risk tree technique to identify systematic errors. Civil engineering and programme management railway professionals are aware of the role "optimism bias" plays in programme cost overruns and are familiar with bow-tie (fault and event tree) model-based safety risk modelling techniques, but the role of systematic errors due to heuristics and biases is not yet appreciated; incorporating them overcomes the omission of human and organizational factors from accident analysis. Second, the scope of the investigation includes all levels of the socio-technical system, including government, regulatory and railway safety bodies, duty holders, signalling firms and transport planners, and front-line staff, such that lessons are learned at the decision-making and implementation levels as well. Third, the author's past accident case studies are supplemented with evidence drawn from practitioner and academic publications. This serves to discuss the role of systems thinking in improving decision-making and risk management processes and practices in the IEC 15288 systems engineering standard and in industrial contexts such as GB railways and artificial intelligence (AI).

Keywords: accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach

Procedia PDF Downloads 188
126 Cytotoxic Effects of Ag/TiO2 Nanoparticles on the Unicellular Organism Paramecium tetraurelia

Authors: Juan Bernal-Martinez, Zoe Quinones-Jurado, Miguel Waldo-Mendoza, Elias Perez

Abstract:

Introduction and Objective: Ag-TiO2 nanoparticles (NP) have been characterized as effective antibacterial compounds against S. aureus, E. coli, Salmonella, and others. Because these nanoparticles have been used in plastic food containers, there is concern about the toxicity of Ag-TiO2 NP for higher organisms, from protozoa to invertebrates and mammals. The objective of this study is to evaluate the cytotoxic effect of Ag-TiO2 NP on the survival and swimming behavior of the unicellular organism Paramecium tetraurelia. Material and Methods: Preparation of metallic silver on the TiO2 surface was based on the chemical reduction of AgNO3. An aqueous suspension of TiO2 nanoparticles was prepared by adding 5 g of TiO2 to 250 ml of deionized water, followed by sonication for 10 min. The required amount of AgNO3 solution was added to the TiO2 suspension under continued heating and stirring. Silver concentrations were 0.5, 1.5, 5.0, 25, 35, and 45% w/w relative to TiO2. Paramecium tetraurelia (Carolina Biological, Cat. # 131560) was used as the biological preparation. It was cultured in an artificial culture medium made as follows: stigmasterol 5 mg/ml in ethanol; casamino acids 0.3 g/l; KCl 4 mM; CaCl2 1 mM; MgCl2 100 µM; MOPS 1 mM; pH 7.3. This medium was inoculated with Enterobacter sp. Paramecium was concentrated by centrifugation after 24 hours of incubation. The cell pellet was resuspended in 4.1.1 solution prepared as follows: KCl 4 mM; CaCl2 1 mM; Trizma 1 mM; pH 7.3. Transmission electron microscopy (TEM) studies were performed to evaluate the dispersion and topographic distribution of AgNPs deposited on TiO2. The experimental solutions were prepared as follows: 50 mg of polyvinylpyrrolidone was added to 5 ml of 4.1.1 solution; then 50 mg of 25-Ag-TiO2 powder was added, mixed for 10 min, and sonicated for 60 min. Survival of Paramecium and possible toxic effects after 25-Ag-TiO2 treatment were observed through an inverted microscope. The Paramecium swimming behavior and possible dead cells were recorded for periods of approximately 20-50 seconds using a digital USB camera adapted to the microscope. Results and Discussion: TEM micrographs demonstrated the topographic distribution of AgNPs deposited on TiO2. 25Ag-TiO2 NP was efficiently dispersed in 4.1.1 solution at concentrations of 0.1, 1, and 10 mg/ml. When Paramecium was treated with 25Ag-TiO2 NP at 100 µg/ml, the cells started swimming backwards. This backward swimming is the typical avoiding reaction of the ciliate in response to a noxious stimulus. After 10 min of incubation, the Paramecium stopped swimming backwards and burst. We can argue that this toxic effect of 25Ag-TiO2 NP is probably due to calcium influx and calcium accumulation during the long-lasting backward swimming. Conclusions: Here we have demonstrated that 25Ag-TiO2 NP has a specific toxic effect on an organism higher than bacteria, the protozoan Paramecium. These toxic phenomena could probably also be expected in higher organisms such as invertebrates and mammals.

Keywords: Ag-TiO2, calcium permeability, cytotoxicity, paramecium

Procedia PDF Downloads 288
125 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means the digital elevation model (DEM) has to be resampled to the scale of the landform features of interest, and any higher resolution is lost in this resampling. When the topographic features are computed through regression performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point; the number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction takes only a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. The relevant length scale is taken to be half of the window size over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm: the resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost compared with resampling the DEM and computing the slope over 3x3 images in ESRI ArcMap for each resolution. In summary, the proposed approach extracts slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
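A minimal sketch of the additive-aggregation idea is given below: sufficient statistics for a plane fit (sums of x, y, z and their products) are kept per window and summed over non-overlapping 2x2 blocks at each doubling, so each level costs one pass; the slope is then reported at the scale with minimal residual variance. The sketch simplifies the method described above by using non-overlapping rather than sliding windows, and the test surface is a synthetic tilted plane with noise rather than a real DEM.

```python
import numpy as np

def block_sum(S):
    """Aggregate non-overlapping 2x2 blocks (additivity of the sums)."""
    return S[0::2, 0::2] + S[0::2, 1::2] + S[1::2, 0::2] + S[1::2, 1::2]

def plane_fit(stats):
    """Solve the 3x3 normal equations per window for z = a + b*x + c*y;
    return slope magnitude and residual variance per degree of freedom."""
    n, sx, sy, sz, sxx, syy, sxy, sxz, syz, szz = stats
    A = np.stack([np.stack([n,  sx,  sy],  -1),
                  np.stack([sx, sxx, sxy], -1),
                  np.stack([sy, sxy, syy], -1)], -2)   # (..., 3, 3)
    rhs = np.stack([sz, sxz, syz], -1)
    beta = np.linalg.solve(A, rhs[..., None])[..., 0]  # a, b, c per window
    rss = szz - (beta * rhs).sum(-1)                   # residual sum of squares
    slope = np.hypot(beta[..., 1], beta[..., 2])
    return slope, np.maximum(rss, 0) / (n - 3)

size = 256                                 # power of two for clean halving
yy, xx = np.mgrid[0:size, 0:size].astype(float)
dem = 0.05 * xx + 0.02 * yy + np.random.default_rng(0).normal(0, 0.5, (size, size))

# Sufficient statistics per 1x1 cell; coarser levels come from summation.
stats = [np.ones_like(dem), xx, yy, dem,
         xx * xx, yy * yy, xx * yy, xx * dem, yy * dem, dem * dem]

best_var = np.full((size, size), np.inf)
best_slope = np.zeros((size, size))
w = 1
while stats[0].shape[0] >= 2:
    stats = [block_sum(S) for S in stats]  # one pass per doubling
    w *= 2
    if w >= 4:  # need enough points for a stable 3-parameter fit
        slope, var = plane_fit(stats)
        up = lambda a: np.repeat(np.repeat(a, w, 0), w, 1)  # back to full grid
        mask = up(var) < best_var
        best_var[mask] = up(var)[mask]
        best_slope[mask] = up(slope)[mask]

print("mean reported slope:", best_slope.mean(),
      "(true gradient magnitude 0.054)")
```

The minimum-variance selection is what makes the reported slope scale-adaptive: on the noisy plane above, coarse windows win because aggregation averages out the noise, while on real terrain the winning window shrinks wherever the landform changes over short distances.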

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 126