Search results for: neural style transfer for edge
1335 A Comprehensive Review on Health Hazards and Challenges for Microbial Remediation of Persistent Organic Pollutants
Authors: Nisha Gaur, K.Narasimhulu, Pydi Setty Yelamarthy
Abstract:
Persistent organic pollutants (POPs) have become a great concern due to their toxicity, transformation and bioaccumulation properties. This review therefore highlights the types, sources, classification, health hazards and mobility of organochlorine pesticides, industrial chemicals and their by-products. Moreover, with the signing of the Aarhus and Stockholm conventions on POPs, there is an increased demand to identify and characterise chemicals from industry and the environment that are toxic to existing biota. Because of their long life and persistent nature, POPs enter the body through food and transfer to all trophic levels of the ecological unit. In addition, POPs are lipophilic and accumulate in lipid-containing tissues and organs, producing adverse symptoms once a threshold concentration is exceeded. Several potential enzymes have been reported from various categories of microorganisms, and their interaction with POPs may break down these complex compounds through biodegradation, biostimulation or bioaugmentation; technological advances have also opened possibilities for genetically modified organisms, metagenomics and metabolomics. Although many studies have sought to develop low-cost, effective and reliable methods for the detection, determination and removal of ultra-trace concentrations of POPs, insufficient knowledge and the limited feasibility of these techniques mean that the safe management of POPs remains a global challenge.
Keywords: persistent organic pollutants, bioaccumulation, biostimulation, microbial remediation
Procedia PDF Downloads 296
1334 COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline
Authors: Kenan Morani, Esra Kaya Ayana
Abstract:
This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of UNet-based segmentation, lung extraction, and a classification part, with the addition of optional slice removal techniques following the segmentation part. In this work, batch normalization was added to the original UNet model to produce a lighter model with better localization, which is then utilized to build a full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of their performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, resulting in a higher dice score on a publicly available dataset. Moreover, at the slice level, the proposed pipeline demonstrated high validation accuracy, indicating the efficiency of predicting 2D slices. At the patient level, the full approach exhibited higher validation accuracy and macro F1 score compared to other alternatives, surpassing the baseline. The classification component of the proposed pipeline utilizes a convolutional neural network (CNN) to make final diagnosis decisions. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices, rigorously annotated for COVID-19 detection, was utilized for classification. The proposed pipeline outperformed many other alternatives on the dataset.
Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation
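The patient-level aggregation and the macro F1 evaluation described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the majority-vote rule and the 0/1 labels are assumptions.

```python
from collections import Counter

def patient_diagnosis(slice_preds):
    """Aggregate per-slice predictions (0 = non-COVID, 1 = COVID)
    into one patient-level label by majority vote."""
    return Counter(slice_preds).most_common(1)[0][0]

def macro_f1(y_true, y_pred, labels=(0, 1)):
    """Macro F1: unweighted mean of the per-class F1 scores."""
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

Macro F1 weights both classes equally, which is why it is reported alongside accuracy for imbalanced COVID/non-COVID datasets.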
Procedia PDF Downloads 129
1333 Numerical Modelling of Dust Propagation in the Atmosphere of Tbilisi City in Case of Western Background Light Air
Authors: N. Gigauri, V. Kukhalashvili, A. Surmava, L. Intskirveli, L. Gverdtsiteli
Abstract:
Tbilisi, a large city of the South Caucasus, is a junction point connecting Asia and Europe, Russia and the republics of Asia Minor. Over recent years, its atmosphere has experienced an increasing anthropogenic load. Numerical modeling is used to study air pollution in Tbilisi. By means of a 3D non-linear, non-steady numerical model, the peculiarities of city atmosphere pollution are investigated during background western light air. Spatial and temporal changes in dust concentration are determined. Zones of high, average and low pollution, dust accumulation areas, transfer directions, etc. are identified. Numerical modeling shows that air pollution by dust proceeds in four stages, which depend on the intensity of motor traffic, the micro-relief of the city, and the location of city mains. The dust concentration grows intensively between 06:00 and 09:00, remains constant or decreases weakly between 09:00 and 15:00, increases between 18:00 and 21:00, and decreases from 21:00 to 06:00. The most polluted areas are located in the vicinity of the city center and at some peripheral territories of the city, where the maximum dust concentration at 9 PM equals twice the maximum allowable concentration. Similar investigations conducted for various meteorological situations will enable us to compile a map of background urban pollution and to elaborate practical measures for ambient air protection.
Keywords: air pollution, dust, numerical modeling, urban
Procedia PDF Downloads 181
1332 Conceptual Model of a Residential Waste Collection System Using ARENA Software
Authors: Bruce G. Wilson
Abstract:
The collection of municipal solid waste at the curbside is a complex operation that is repeated daily under varying circumstances around the world. There have been several attempts to develop Monte Carlo simulation models of the waste collection process dating back almost 50 years. Despite this long history, the use of simulation modeling as a planning or optimization tool for waste collection is still extremely limited in practice. Historically, simulation modeling of waste collection systems has been hampered by the limitations of computer hardware and software and by the availability of representative input data. This paper outlines the development of a Monte Carlo simulation model that overcomes many of the limitations contained in previous models. The model uses a general purpose simulation software program that is easily capable of modeling an entire waste collection network. The model treats the stops on a waste collection route as a queue of work to be processed by a collection vehicle (or server). Input data can be collected from a variety of sources including municipal geographic information systems, global positioning system recorders on collection vehicles, and weigh scales at transfer stations or treatment facilities. The result is a flexible model that is sufficiently robust that it can model the collection activities in a large municipality, while providing the flexibility to adapt to changing conditions on the collection route.
Keywords: modeling, queues, residential waste collection, Monte Carlo simulation
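The queueing view described above, stops as jobs processed by a vehicle acting as a server, can be sketched as a minimal Monte Carlo simulation. This is illustrative only (not the ARENA model); the exponential distributions and their means are assumed values standing in for data fitted from GPS recorders and weigh scales.

```python
import random

def simulate_route(n_stops, mean_drive_s=20.0, mean_service_s=30.0, seed=1):
    """Monte Carlo sketch of one collection route: each stop is a queued
    job for the vehicle (server); drive and service times are drawn from
    exponential distributions. Returns route duration in hours."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_stops):
        total += rng.expovariate(1.0 / mean_drive_s)    # travel to stop
        total += rng.expovariate(1.0 / mean_service_s)  # empty the bin
    return total / 3600.0

# Repeat the simulation to estimate the spread of route durations.
durations = [simulate_route(800, seed=s) for s in range(100)]
```

Replacing the assumed distributions with ones fitted to GIS, GPS and weigh-scale data is what turns a sketch like this into a planning tool of the kind the paper describes.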
Procedia PDF Downloads 399
1331 Efficient Liquid Desiccant Regeneration for Fresh Air Dehumidification Application
Authors: M. V. Rane, Tareke Tekia
Abstract:
A fresh air dehumidifier with a capacity of 1 TR has been developed by the Heat Pump Laboratory at IITB. This fresh air dehumidifier is based on a potassium formate liquid desiccant. The regeneration of the liquid desiccant can be done in two stages. The first stage involves boiling the liquid desiccant inside evacuated-glass-type solar thermal collectors. Further regeneration can be achieved using a Low Temperature Regenerator (LTR). The coefficient of performance of the fresh air dehumidifier greatly depends on the performance of the major components, such as the high temperature regenerator, low temperature regenerator, fresh air dehumidifier, and solution heat exchangers. A high-effectiveness solution heat exchanger has been developed and tested. The solution heat exchanger is based on a patented aluminium extrusion with a special passage geometry to enhance the heat transfer rate. Effectiveness up to 90% was achieved. Before final testing of the dehumidifier, the major components were tested individually. Testing of the solar thermal collector as a hot water and steam generator reveals that efficiencies up to 55% can be achieved. In this paper, the development of the 1 TR fresh air dehumidifier is presented, with special focus on the solution heat exchangers and solar thermal collector performance.
Keywords: solar, liquid desiccant, dehumidification, air conditioning, regeneration, coefficient of performance
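The reported 90% solution heat exchanger effectiveness corresponds, in the standard definition, to the ratio of actual heat transfer to the thermodynamic maximum. A minimal sketch of that calculation follows; the flow rates and temperatures are assumed illustrative values, not the authors' test data.

```python
def effectiveness(m_dot_h, cp_h, t_h_in, t_h_out, m_dot_c, cp_c, t_c_in):
    """Heat-exchanger effectiveness: actual heat transfer divided by
    the maximum possible (C_min times the inlet temperature difference)."""
    c_h = m_dot_h * cp_h                      # hot-side capacity rate, W/K
    c_c = m_dot_c * cp_c                      # cold-side capacity rate, W/K
    c_min = min(c_h, c_c)
    q_actual = c_h * (t_h_in - t_h_out)       # heat given up by hot stream
    q_max = c_min * (t_h_in - t_c_in)         # thermodynamic limit
    return q_actual / q_max
```

With balanced capacity rates, an effectiveness of 0.9 means the hot stream is cooled through 90% of the inlet temperature difference.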
Procedia PDF Downloads 190
1330 Causes Analysis of Vacuum Consolidation Failure to Soft Foundation Filled by Newly Dredged Mud
Authors: Bao Shu-Feng, Lou Yan, Dong Zhi-Liang, Mo Hai-Hong, Chen Ping-Shan
Abstract:
For soft foundations filled with newly dredged mud and improved by the Vacuum Preloading Technology (VPT), the soil strength increased only slightly, the effective improvement depth was small, and the ground bearing capacity remained low. To analyze the causes in depth, several comparative single-well model experiments of VPT were conducted in the laboratory. It was concluded that: (1) serious clogging and poor drainage performance in the vertical drains were mainly caused by the high content of fine soil particles and strongly hydrophilic minerals in the dredged mud, by too fast a loading rate at the early stage of vacuum preloading (namely, rapidly reaching -80 kPa), and by too small a characteristic opening size of the filter of the existing vertical drains; (2) the drainage efficiency of the drainage system was commonly reduced, which in turn weakened the vacuum pressure in the soil and the soil improvement effect, by the large partial and friction losses of vacuum pressure caused by the large curvature of the vertical drains and the large transfer resistance of vacuum pressure in the horizontal drain.
Keywords: newly dredged mud, single well model experiments of vacuum preloading technology, poor drainage performance of vertical drains, poor soil improvement effect, causes analysis
Procedia PDF Downloads 285
1329 Intelligent Fault Diagnosis for the Connection Elements of Modular Offshore Platforms
Authors: Jixiang Lei, Alexander Fuchs, Franz Pernkopf, Katrin Ellermann
Abstract:
Within the Space@Sea project, funded by the Horizon 2020 program, an island consisting of multiple platforms was designed. The platforms are connected by ropes and fenders. The connection is critical with respect to the safety of the whole system. Therefore, fault detection systems are investigated that could detect early warning signs of a possible failure in the connection elements. Previously, a model-based method using an Extended Kalman Filter was developed to detect reductions in rope stiffness. This method detected several types of faults reliably, but some types of faults were much more difficult to detect. Furthermore, the model-based method is sensitive to environmental noise: when the wave height is low, a long time is needed to detect a fault and the accuracy is not always satisfactory. It is therefore necessary to develop a more accurate and robust technique that can detect all rope faults under a wide range of operational conditions. Building on this work on the Space@Sea design, we introduce a fault diagnosis method based on deep neural networks. Our method can not only detect rope degradation using the acceleration data from each platform but also estimate the contributions of the specific acceleration sensors using methods from explainable AI. In order to adapt to different operational conditions, the domain adaptation technique DANN is applied. The proposed model can accurately estimate rope degradation under a wide range of environmental conditions and help users understand the relationship between the output and the contributions of each acceleration sensor.
Keywords: fault diagnosis, deep learning, domain adaptation, explainable AI
Procedia PDF Downloads 179
1328 Fight against Money Laundering with Optical Character Recognition
Authors: Saikiran Subbagari, Avinash Malladhi
Abstract:
Anti Money Laundering (AML) regulations are designed to prevent money laundering and terrorist financing activities worldwide. Financial institutions around the world are legally obligated to identify, assess and mitigate the risks associated with money laundering and to report any suspicious transactions to governing authorities. With increasing volumes of data to analyze, financial institutions seek to automate their AML processes. With the rise of financial crimes, optical character recognition (OCR), in combination with machine learning (ML) algorithms, serves as a crucial tool for automating AML processes by extracting data from documents and identifying suspicious transactions. In this paper, we examine the utilization of OCR for AML and delve into the various OCR techniques employed in AML processes. These techniques encompass template-based, feature-based and neural-network-based approaches, natural language processing (NLP), hidden Markov models (HMMs), conditional random fields (CRFs), binarization, pattern matching and the stroke width transform (SWT). We evaluate each technique, discussing its strengths and constraints. We also emphasize how OCR can improve the accuracy of customer identity verification by comparing the extracted text with the Office of Foreign Assets Control (OFAC) watchlist, and discuss how OCR helps to overcome language barriers in AML compliance. Finally, we address the implementation challenges that OCR-based AML systems may face and offer recommendations for financial institutions based on data from previous research studies, which illustrate the effectiveness of OCR-based AML.
Keywords: anti-money laundering, compliance, financial crimes, fraud detection, machine learning, optical character recognition
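The watchlist-screening step described above can be sketched with a toy watchlist and a simple similarity ratio from Python's standard library. This is illustrative only, not a production AML matcher and not the OFAC tooling itself; the entries, threshold, and scoring choice are all assumptions.

```python
import difflib

WATCHLIST = ["JOHN DOE", "ACME TRADING LLC"]  # illustrative entries only

def screen_name(ocr_text, threshold=0.85):
    """Compare an OCR-extracted customer name against a sanctions-style
    watchlist and flag close matches despite OCR character errors."""
    name = ocr_text.strip().upper()
    hits = []
    for entry in WATCHLIST:
        score = difflib.SequenceMatcher(None, name, entry).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 3)))
    return hits
```

Fuzzy rather than exact matching is the point here: an OCR misread such as "0" for "O" should still trigger review, which a plain string comparison would miss.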
Procedia PDF Downloads 144
1327 Challenges of New Technologies in the Field of Criminal Law: The Protection of the Right to Privacy in the Spanish Penal Code
Authors: Deborah Garcia-Magna
Abstract:
The use of new technologies has become widespread in the last decade, giving rise to various risks associated with the transfer of personal data and the publication of sensitive material on social media. There are already several supranational instruments that seek to protect the citizens involved in this growing traffic of personal information and, especially, the most vulnerable people, such as minors, who are also the ones who make the most intense use of these new means of communication. In this sense, the configuration of the concept of privacy as a legal right has necessarily been influenced by these new social uses and supranational instruments. The researcher considers correct the decision to introduce sexting as a new criminal behaviour in the Penal Code in 2015 but questions the concrete manner in which it has been done. To this end, an updated review of the various options that the Spanish legal system already offered is made, assessing whether those options adequately addressed the new social needs and the guidelines from jurisprudence and other supranational instruments. Some important issues emerge as to whether the principles of fragmentarity and subsidiarity may be violated, since the new article 197.7 of the Spanish Penal Code could cover very varied behaviours and protect not only particularly vulnerable persons. In this sense, the research focuses on issues such as the concept of 'seriousness' of the infringement of privacy, the possibly reckless conduct of the victim, who hands over their own private material to third parties, the effect on other legal rights such as freedom and sexual indemnity, possible problems of concurrent offences, etc.
Keywords: criminal law reform, ECHR jurisprudence, right to privacy, sexting
Procedia PDF Downloads 192
1326 Control Power in Doubly Fed Induction Generator Wind Turbine with SVM Control Inverter
Authors: Zerzouri Nora, Benalia Nadia, Bensiali Nadia
Abstract:
This paper presents a grid-connected wind power generation scheme using a Doubly Fed Induction Generator (DFIG), which can supply power at constant voltage and constant frequency while the rotor speed varies. This makes it suitable for variable-speed wind energy applications. The DFIG system consists of a wind turbine, an asynchronous wound-rotor induction generator, and an inverter with a Space Vector Modulation (SVM) controller, in which the stator is connected directly to the grid and the rotor winding interfaces with the rotor-side and grid-side converters. The use of a back-to-back SVM converter in the rotor circuit results in low-distortion current, reactive power control, and variable-speed operation. The DFIG is modeled mathematically in order to analyze the performance of the system, and simulations are carried out in MATLAB. The simulation results show that the system can operate at variable speed with low harmonic current distortion. The objective is to track and extract maximum power from the wind energy system and transfer it to the grid for useful work.
Keywords: Doubly Fed Induction Generator, Wind Energy Conversion Systems, Space Vector Modulation, distortion harmonics
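As a small illustration of one step inside an SVM controller, the sketch below determines which 60° sector of the alpha-beta plane the reference voltage vector falls in. This is a generic textbook step, not the authors' implementation; the sector numbering convention is an assumption.

```python
import math

def svm_sector(v_alpha, v_beta):
    """Return the space-vector sector (1-6) of the reference voltage
    vector in the alpha-beta plane; each sector spans 60 degrees,
    with sector 1 starting at the positive alpha axis."""
    angle = math.degrees(math.atan2(v_beta, v_alpha)) % 360.0
    return int(angle // 60.0) + 1
```

Sector selection is the first step of SVM; the dwell times of the two adjacent active vectors and the zero vectors are then computed per sector to synthesize the reference voltage.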
Procedia PDF Downloads 482
1325 Learning the C-A-Bs: Resuscitation Training at Rwanda Military Hospital
Authors: Kathryn Norgang, Sarah Howrath, Auni Idi Muhire, Pacifique Umubyeyi
Abstract:
Description: A group of nurses addressed the shortage of staff trained to respond to critical patients at Rwanda Military Hospital (RMH) by developing a training program and a resuscitation response team. Members of the group who received the training when it first launched are now trainers of trainers; all components of the training program are organized and delivered by RMH staff, with the clinical mentor providing only adjunct support. The two-day training is held quarterly at RMH; basic life support and exposure to interventions for advanced care are included in the test and skills sign-off. Seventy staff members have received the training this year alone. An increased number of admissions/transfers to the ICU due to successful resuscitation attempts is noted. Lessons learned:
- Number of staff trained 2012-2014 (to be verified).
- Staff who train together practice with greater collaboration during actual resuscitation events.
- Staff are more likely to initiate BLS if peer support is present; more staff trained equals more support.
- More access to Advanced Cardiac Life Support training is necessary now that the cadre of BLS-trained staff is growing.
Conclusions: Increased access to training, peer support, and collaborative practice are effective strategies for strengthening resuscitation capacity within a hospital.
Keywords: resuscitation, basic life support, capacity building, resuscitation response teams, nurse trainer of trainers
Procedia PDF Downloads 302
1324 Investigation on the Cooling Performance of Cooling Channels Fabricated via Selective Laser Melting for Injection Molding
Authors: Changyong Liu, Junda Tong, Feng Xu, Ninggui Huang
Abstract:
In the injection molding process, the performance of the cooling channels is crucial to part quality. Through the application of conformal cooling channels fabricated via metal additive manufacturing, part distortion and warpage can be greatly reduced and cycle time can be greatly shortened. However, the properties of additively manufactured conformal cooling channels are quite different from those produced by conventional drilling, with poorer dimensional accuracy and greater surface roughness. These features have a significant influence on cooling performance. In this study, test molds with cooling channel diameters of φ2 mm, φ3 mm and φ4 mm were fabricated via selective laser melting (SLM) and a conventional drilling process, respectively. A test system was designed and manufactured to measure the pressure difference between the channel inlet and outlet, the coolant flow rate and the temperature variation during the heating process. It was found that the cooling performance of SLM-fabricated channels was poorer than that of drilled cooling channels, due to the smaller sectional area of the cooling channels resulting from low dimensional accuracy and unmolten particles adhering to the channel surface. Theoretical models were established to determine the friction factor and heat transfer coefficient of SLM-fabricated cooling channels. These findings may provide guidance for the design of conformal cooling channels.
Keywords: conformal cooling channels, selective laser melting, cooling performance, injection molding
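The friction factor mentioned above can be backed out of a measured inlet-outlet pressure drop with the Darcy-Weisbach relation, f = ΔP·D / (L·½ρv²), with the mean velocity v computed from the volumetric flow rate. The sketch below does this for assumed channel dimensions and water properties; the numbers are illustrative, not the paper's measurements.

```python
import math

def darcy_friction_factor(dp_pa, d_m, length_m, flow_m3s, rho=998.0):
    """Back out the Darcy friction factor from a measured pressure drop
    over a circular channel of diameter d_m and length length_m."""
    area = math.pi * d_m ** 2 / 4.0      # channel cross-section, m^2
    v = flow_m3s / area                  # mean coolant velocity, m/s
    return dp_pa * d_m / (length_m * 0.5 * rho * v ** 2)
```

A rougher SLM channel shows up directly as a larger f at the same flow rate, which is one way the measured pressure differences quantify the surface-quality penalty.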
Procedia PDF Downloads 148
1323 Functional Gene Expression in Human Cells Using Linear Vectors Derived from Bacteriophage N15 Processing
Authors: Kumaran Narayanan, Pei-Sheng Liew
Abstract:
This paper adapts the bacteriophage N15 protelomerase enzyme to assemble linear chromosomes as vectors for gene expression in human cells. Phage N15 has the unique ability to replicate as a linear plasmid with telomeres in E. coli during the prophage stage of its life cycle. The virus-encoded protelomerase enzyme cuts its circular genome and caps the ends to form hairpin telomeres, resulting in a linear, human-chromosome-like structure in E. coli. In mammalian cells, however, no enzyme with TelN-like activities has been found. In this work, we show for the first time the transfer of the protelomerase from phage into human and mouse cells and demonstrate recapitulation of its activity in these hosts. The function of this enzyme is assayed by demonstrating cleavage of its target DNA, followed by detecting telomere formation based on resistance to recBCD enzyme digestion. We show that protelomerase expression persists for at least 60 days, which indicates limited silencing of its expression. Next, we show that an intact human β-globin gene delivered on this linear chromosome accurately retains its expression in the human cellular environment for at least 60 hours, demonstrating its stability and potential as a vector. These results demonstrate that the N15 protelomerase is able to function in mammalian cells to cut and heal DNA to create telomeres, which provides a new tool for creating novel structures by DNA resolution in these hosts.
Keywords: chromosome, beta-globin, DNA, gene expression, linear vector
Procedia PDF Downloads 190
1322 Response of Caldeira De Tróia Saltmarsh to Sea Level Rise, Sado Estuary, Portugal
Authors: A. G. Cunha, M. Inácio, M. C. Freitas, C. Antunes, T. Silva, C. Andrade, V. Lopes
Abstract:
Saltmarshes are essential ecosystems from both an ecological and a biological point of view. Furthermore, they constitute an important social niche, providing valuable economic and protective functions. Understanding their rates and patterns of sedimentation is therefore critical for functional management and rehabilitation, especially under a sea-level rise (SLR) scenario. The Sado estuary is located 40 km south of Lisbon. It is a bar-built estuary, separated from the sea by a large sand spit, the Tróia barrier. Caldeira de Tróia is located on the free edge of this barrier and encompasses a salt marsh with ca. 21,000 m². Sediment cores were collected in the high and low marshes and in the mudflat area of the north bank of Caldeira de Tróia. From the low marsh core, fifteen samples were chosen for ²¹⁰Pb and ¹³⁷Cs determination at the University of Geneva. The cores from the high marsh and the mudflat are still being analyzed. A sedimentation rate of 2.96 mm/year was derived from ²¹⁰Pb using the Constant Flux Constant Sedimentation model. The ¹³⁷Cs profile shows a peak in activity (1963) between 15.50 and 18.50 cm, giving a 3.1 mm/year sedimentation rate for the past 53 years. The adopted sea-level rise scenario was based on a model built with an initial SLR rate of 2.1 mm/year in 2000 and an acceleration of 0.08 mm/year². Based on the harmonic analysis of the 2005 data from the Setubal-Tróia tide gauge, the tide model was estimated and used to build tide tables for the period 2000-2016. With these tables, the mean water levels were determined for the same time span. A digital terrain model was created from LIDAR scanning with 2 m horizontal resolution (APA-DGT, 2011) and validated with altimetric data obtained with DGPS-RTK. The response model calculates a new elevation for each pixel of the DTM for 2050 and 2100, based on the sedimentation rates specific to each environment.
At this stage, theoretical values were chosen for the high marsh and the mudflat (respectively, equal to and double the low marsh rate of 2.92 mm/year). These values will be rectified once sedimentation rates are determined for the other environments. For both projections, the total surface of the marsh decreases: by 2% in 2050 and by 61% in 2100. Additionally, high marsh coverage diminishes significantly, indicating a regression in terms of maturity.
Keywords: ¹³⁷Cs, ²¹⁰Pb, saltmarsh, sea level rise, response model
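Two of the calculations above can be sketched as follows: a ¹³⁷Cs-based sedimentation rate from the depth of the 1963 activity peak, and the cumulative SLR model with initial rate 2.1 mm/year and acceleration 0.08 mm/year². This is illustrative only; the midpoint depth and core year are assumptions, and the paper's own depth model yields the reported 3.1 mm/year rather than the midpoint estimate computed here.

```python
def cs137_rate_mm_per_yr(peak_depth_cm, core_year, peak_year=1963):
    """Sedimentation rate from the depth of the 1963 Cs-137 fallout peak."""
    return peak_depth_cm * 10.0 / (core_year - peak_year)

def slr_mm(year, r0=2.1, accel=0.08, t0=2000):
    """Cumulative sea-level rise since t0, for an initial rate r0 (mm/yr)
    and a constant acceleration (mm/yr^2)."""
    t = year - t0
    return r0 * t + 0.5 * accel * t ** 2

# Midpoint of the 15.50-18.50 cm peak interval, core assumed taken in 2016.
rate = cs137_rate_mm_per_yr(17.0, 2016)
```

Comparing a rate like this against the projected SLR at 2050 and 2100 is what decides, pixel by pixel, whether a marsh surface keeps pace with the rising water or drowns.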
Procedia PDF Downloads 247
1321 Experimental Quantification and Modeling of Dissolved Gas during Hydrate Crystallization: CO₂ Hydrate Case
Authors: Amokrane Boufares, Elise Provost, Veronique Osswald, Pascal Clain, Anthony Delahaye, Laurence Fournaison, Didier Dalmazzone
Abstract:
Gas hydrates have long been considered problematic for flow assurance in natural gas and oil transportation. On the other hand, they are now seen as promising future materials for various applications (i.e., desalination of seawater, natural gas and hydrogen storage, gas sequestration, gas combustion separation, and cold storage and transport). Nonetheless, a better understanding of the crystallization mechanism of gas hydrates and of their formation kinetics is still needed for better comprehension and control of the process. For that purpose, measuring the real-time evolution of the dissolved gas concentration in the aqueous phase during hydrate formation is required. In this work, CO₂ hydrates were formed in a stirred reactor equipped with an Attenuated Total Reflection (ATR) probe coupled to a Fourier Transform InfraRed (FTIR) spectroscopy analyzer. A method was first developed to continuously measure in situ the CO₂ concentration in the liquid phase during the solubilization, supersaturation, hydrate crystallization and dissociation steps. Thereafter, the measured concentration data were compared with equilibrium concentrations. It was observed that equilibrium is reached almost instantly in the liquid phase due to the fast consumption of dissolved gas by hydrate crystallization. Consequently, it was shown that the hydrate crystallization kinetics are limited by gas transfer at the gas-liquid interface. Finally, we noticed that the liquid-hydrate equilibrium during hydrate crystallization is governed by the temperature of the experiment under the tested conditions.
Keywords: gas hydrate, dissolved gas, crystallization, infrared spectroscopy
Procedia PDF Downloads 280
1320 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications
Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon
Abstract:
The focus in the automotive industry is to reduce human operator and machine interaction, so that manufacturing becomes more automated and safer. The aim is to lower part cost and construction time, as well as defects in the parts, which sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes that are placed in position by a robotic deposition head, a process described as Automated Fibre Placement (AFP). AFP is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together involves a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand-stitch method which, as the name suggests, requires human input. This investigation explores three methods for automated splicing: adhesive, binding and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding uses the binding agent already impregnated into the tape, activated through the application of heat. The stitching method is used as a baseline against which to compare the new splicing methods. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the splices have to meet certain specifications: (a) the splice must be able to endure a load of 50 N in tension applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlaps, alignment and splicing parameters, and were then tested in tension using a tensile testing machine.
Initial analysis explored the use of the binding agent impregnated in the tape, as in the binding splicing technique, examining the effect of temperature and overlap on the strength of the splice. It was found that the optimum splicing temperature was at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was found to be 25 mm; there was no improvement in bond strength from 25 mm to 30 mm overlap. The final analysis compared the different splicing methods to the baseline of a stitched bond. The addition of an adhesive was found to be the best splicing method, achieving a maximum load of over 500 N, compared to the 26 N achieved by a stitched splice and 94 N by the binding method.
Keywords: analysis, automated fibre placement, high speed, splicing
Procedia PDF Downloads 153
1319 Increment of Panel Flutter Margin Using Adaptive Stiffeners
Authors: S. Raja, K. M. Parammasivam, V. Aghilesh
Abstract:
Fluid-structure interaction is a crucial consideration in the design of many engineering systems, such as flight vehicles and bridges. Aircraft lifting surfaces and turbine blades can fail due to oscillations caused by fluid-structure interaction, which is therefore the focus of the present research. First, the free-vibration behaviour of the panel is studied. It is well known that the deformation of a panel and the flow-induced forces affect one another. The selected panel has a span of 300 mm, a chord of 300 mm and a thickness of 2 mm. The effects of stiffener cross-sectional area and stiffener location are studied for this panel. The stiffener spacing is varied along both the chordwise and spanwise directions, and for the optimal location the ideal stiffener length is identified. The effect of stiffener cross-section shape (T, I, hat, Z) on flutter velocity has also been investigated. The flutter velocities of the selected panel with two rectangular stiffeners in a cantilever configuration are estimated using the MSC NASTRAN software package. As the flow passes over the panel, deformation takes place, which further changes the flow structure over it. With increasing velocity the deformation grows, but the stiffness of the system tries to damp the excitation and maintain equilibrium. Beyond a critical velocity, however, the system damping suddenly becomes ineffective and the panel loses its equilibrium. This is estimated in NASTRAN using the PK method. The first 10 modal frequencies of a simple panel and a stiffened panel are estimated numerically and validated against the open literature. A grid independence study is also carried out; the modal frequency values remain the same for element lengths below 20 mm. The current investigation concludes that spanwise stiffener placement is more effective than chordwise placement.
The maximum flutter velocity achieved for chordwise placement is 204 m/s, while for a spanwise arrangement it is augmented to 963 m/s with the stiffeners located at ¼ and ¾ of the chord from the panel edge (50% of the chord on either side of the mid-chord line). The flutter velocity is directly proportional to the stiffener cross-sectional area. A significant increment in flutter velocity, from 218 m/s to 1024 m/s, is observed as the stiffener length varies from 50% to 60% of the span; a maximum flutter velocity above Mach 3 is achieved. It is also observed that, for a stiffened panel, the full effect of a stiffener is obtained only when the stiffener end is clamped. Stiffeners with a Z cross-section increased the flutter velocity from 142 m/s (panel with no stiffener) to 328 m/s, 2.3 times that of the simple panel. Keywords: stiffener placement, stiffener cross-sectional area, stiffener length, stiffener cross-sectional area shape
Procedia PDF Downloads 291
1318 Forecasting Nokoué Lake Water Levels Using Long Short-Term Memory Network
Authors: Namwinwelbere Dabire, Eugene C. Ezin, Adandedji M. Firmin
Abstract:
The prediction of hydrological flows (rainfall-depth or rainfall-discharge) is becoming increasingly important in the management of hydrological risks such as floods. In this study, the Long Short-Term Memory (LSTM) network, a state-of-the-art algorithm dedicated to time series, is applied to predict the daily water level of Nokoué Lake in Benin. This paper aims to provide an effective and reliable method capable of reproducing the future daily water level of Nokoué Lake, which is influenced by a combination of two phenomena: rainfall and river flow (runoff from the Ouémé River, the Sô River, the Porto-Novo lagoon, and the Atlantic Ocean). Performance analysis based on the forecasting horizon indicates that the LSTM can predict the water level of Nokoué Lake up to a horizon of t+10 days. Performance metrics such as the Root Mean Square Error (RMSE), the coefficient of determination (R²), the Nash-Sutcliffe Efficiency (NSE), and the Mean Absolute Error (MAE) agree on a forecast horizon of up to t+3 days, and their values remain stable for horizons of t+1, t+2, and t+3 days. The values of R² and NSE are greater than 0.97 during both the training and testing phases in the Nokoué Lake basin. Based on these evaluation indices, a forecast horizon of t+3 days is chosen for predicting future daily water levels. Keywords: forecasting, long short-term memory cell, recurrent artificial neural network, Nokoué lake
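The four skill scores used above to compare forecast horizons can be sketched as follows. This is a minimal illustration of the metric definitions only; the water-level arrays are invented round numbers, not the Nokoué Lake data.

```python
import numpy as np

def forecast_metrics(obs, pred):
    """Compute the four scores used to compare forecast horizons."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    rmse = np.sqrt(np.mean(err ** 2))                               # Root Mean Square Error
    mae = np.mean(np.abs(err))                                      # Mean Absolute Error
    nse = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe Efficiency
    r2 = np.corrcoef(obs, pred)[0, 1] ** 2                          # coefficient of determination
    return {"RMSE": rmse, "MAE": mae, "NSE": nse, "R2": r2}

# Illustrative daily water levels in metres (invented values).
observed  = [2.10, 2.15, 2.20, 2.18, 2.25, 2.30, 2.28]
predicted = [2.12, 2.14, 2.22, 2.17, 2.26, 2.28, 2.29]
scores = forecast_metrics(observed, predicted)
```

A forecast horizon would be deemed acceptable when, as in the study, all four scores stay stable and NSE and R² remain high.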
Procedia PDF Downloads 62
1317 Adjustment with Changed Lifestyle at Old Age Homes: A Perspective of Elderly in India
Authors: Priyanka V. Janbandhu, Santosh B. Phad, Dhananjay W. Bansod
Abstract:
The changing structure of the family is compelling the aged not only to live alone in a nuclear family but also to join old age institutions. The consequences are a feeling of being neglected or left alone by the children, compounded by helplessness in the absence of expected care and support. The accumulation of these feelings and unpleasant events ignites a question in their minds: who is there for me? Efforts have been made to highlight the issues faced by the elderly after joining an old age home and their perception of their current life as institutional inmates. This study covers the condition, adjustment, changed lifestyle, and perspective of the elderly in connection with several issues that have an important effect on their well-being. Information about institutionalized elderly was collected with the help of a semi-structured questionnaire. The study interviewed 500 respondents from 22 old age homes in Pune city, Maharashtra State, India. Multi-stage random sampling was adopted: stratified random sampling for the selection of old age homes and sample-size determination, sample selection with probability proportional to size, and simple random sampling techniques. The study finds that around five percent of the elderly shifted to an old age home along with their spouse, whereas ten percent are staying away from their spouse. More than 71 percent of the elderly have children and are involuntary inmates of the institution, and fewer than one-third consulted the institution before joining it. More than sixty percent of the elderly have children but joined an institution because of their children's unpleasant response. Around half of the elderly responded that there were issues while adjusting to this environment, many of which still persist.
At least one elderly person in ten suffers from loneliness and the feeling of being left out by children and other family members. In contrast, around 97 percent of the elderly are very happy or satisfied with the institutional facilities, which illustrates that the issues are associated with their children and other family members, even though they left their homes a year or more ago. When asked about this loneliness, a few reported suffering from it before leaving their homes, owing to a lack of interaction with children who were too busy to have time for their aged parents. Additionally, conflicts within the family due to the presence of old persons contributed to a feeling of insignificance among elderly parents. According to these elderly, who account for more than 70 percent of responses, the children are ready to spend money on them indirectly through these institutions, but are not prepared to give them time, or to spend even a small part of this expenditure on them directly. Keywords: elderly, old age homes, life style changes and adjustment, India
Procedia PDF Downloads 133
1316 Weighted Data Replication Strategy for Data Grid Considering Economic Approach
Authors: N. Mansouri, A. Asadi
Abstract:
Data Grid is a geographically distributed environment that deals with data-intensive applications in scientific and enterprise computing. Data replication is a common method used to achieve efficient and fault-tolerant data access in Grids. In this paper, a dynamic data replication strategy called Enhanced Latest Access Largest Weight (ELALW) is proposed, an enhanced version of the Latest Access Largest Weight strategy. Replication must be used wisely because the storage capacity of each Grid site is limited, so it is important to design an effective strategy for the replica replacement task. ELALW replaces replicas based on the expected number of future requests, the size of the replica, and the number of copies of the file. It also improves access latency by selecting the best replica when multiple sites hold replicas. The proposed replica selection chooses the best replica location from among the many replicas based on a response time determined by the data transfer time, the storage access latency, the replica requests waiting in the storage queue, and the distance between nodes. Simulation results using OptorSim show that our replication strategy achieves better overall performance than other strategies in terms of job execution time, effective network usage, and storage resource usage. Keywords: data grid, data replication, simulation, replica selection, replica placement
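The replica selection idea described above, which picks the site with the smallest estimated response time, can be sketched as follows. The site records, the per-request wait, and the per-hop delay are invented illustrative numbers, not ELALW's actual cost model.

```python
# Hypothetical replica records: (site, transfer_s, storage_latency_s, queued_requests, hops)
replicas = [
    ("site_A", 4.0, 0.5, 3, 2),
    ("site_B", 2.5, 1.0, 6, 1),
    ("site_C", 3.0, 0.3, 1, 4),
]

PER_REQUEST_WAIT = 0.8   # assumed seconds added per request waiting in the storage queue
PER_HOP_DELAY = 0.2      # assumed seconds added per network hop between nodes

def response_time(rec):
    """Estimate response time from transfer time, storage latency, queue, and distance."""
    _, transfer, latency, queued, hops = rec
    return transfer + latency + queued * PER_REQUEST_WAIT + hops * PER_HOP_DELAY

# Select the replica location with the lowest estimated response time.
best = min(replicas, key=response_time)
```

With these numbers site_C wins despite its larger hop count, because its transfer time and queue are short, which is the kind of trade-off the response-time model captures.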
Procedia PDF Downloads 260
1315 Angiomotin Regulates Integrin Beta 1-Mediated Endothelial Cell Migration and Angiogenesis
Authors: Yuanyuan Zhang, Yujuan Zheng, Giuseppina Barutello, Sumako Kameishi, Kungchun Chiu, Katharina Hennig, Martial Balland, Federica Cavallo, Lars Holmgren
Abstract:
Angiogenesis is the process by which new blood vessels migrate from pre-existing ones, form 3D lumenized structures, and remodel. During directional migration toward a gradient of pro-angiogenic factors, endothelial cells, especially the tip cells, need filopodia to sense the environment and exert pulling force. Of particular interest are the integrin proteins, which play an essential role in focal adhesion at the connection between migrating cells and the extracellular matrix (ECM). Understanding how these biomechanical complexes orchestrate intrinsic and extrinsic forces is important for our understanding of the underlying mechanisms driving angiogenesis. We have previously identified Angiomotin (Amot), a member of the Amot scaffold protein family, as a promoter of endothelial cell migration in vitro and in zebrafish models. Hence, we established inducible endothelial-specific Amot knock-out mice to study normal retinal angiogenesis as well as tumor angiogenesis. We found that the migration ratio of the blood vessel network to the edge was significantly decreased in Amotec- retinas at postnatal day 6 (P6), and almost all Amot-deficient tip cells had lost their migration advantage at P7. Consistent with the dramatic morphological defect of the tip cells, there was a non-autonomous defect in astrocytes, as well as a correspondingly disorganized fibronectin expression pattern at the migration front. Furthermore, the growth of transplanted LLC tumors was inhibited in Amot knockout mice owing to reduced vascularization. In the MMTV-PyMT transgenic mouse model, there was a significantly longer period before tumors arose when Amot was specifically knocked out in blood vessels. In vitro evidence showed that Amot bound to beta-actin, Integrin beta 1 (ITGB1), Fibronectin, FAK, and Vinculin, major focal adhesion molecules, and that ITGB1 and stress fibers were distinctly induced by Amot transfection.
Via traction force microscopy, the total energy (a force indicator) was found to be significantly decreased in Amot knockdown cells. Taken together, we propose that Amot is a novel partner of the ITGB1/Fibronectin protein complex at focal adhesions and is required for force transmission between endothelial cells and the extracellular matrix. Keywords: angiogenesis, angiomotin, endothelial cell migration, focal adhesion, integrin beta 1
Procedia PDF Downloads 235
1314 Effect of Lifestyle Modification for Two Years on Obesity and Metabolic Syndrome Components in Elementary Students: A Community-Based Trial
Authors: Bita Rabbani, Hossein Chiti, Faranak Sharifi, Saeedeh Mazloomzadeh
Abstract:
Background: Lifestyle modifications, especially improving nutritional patterns and increasing physical activity, are the most important factors in preventing obesity and metabolic syndrome in children and adolescents. The following interventional study was designed to investigate the effects of educational programs for students, as well as changes in diet and physical activity, on obesity and the components of the metabolic syndrome. Methods: This study is part of an interventional research project (elementary school) conducted on all students of Sama schools in Zanjan and Abhar at three levels (elementary, middle, and high school), including 1000 individuals in Zanjan (intervention group) and 1000 individuals in Abhar (control group) in 2011. Interventions were based on educating students, teachers, and parents, changes in food services, and physical activity. We measured anthropometric indices, fasting blood sugar, lipid profiles, and blood pressure, and completed standard nutrition and physical activity questionnaires; blood insulin levels were also measured in a random subset of students. Data analysis was done with SPSS software version 16.0. Results: Overall, 589 individuals (252 male, 337 female) entered the case group, and 803 individuals (344 male, 459 female) entered the control group. After two years of intervention, mean waist circumference (63.8 ± 10.9) and diastolic BP (63.8 ± 10.4) were significantly lower, whereas mean systolic BP (10.1.0 ± 12.5), food score (25.0 ± 5.0), and drinking score (12.1 ± 2.3) were higher in the intervention group (p<0.001). Comparing components of the metabolic syndrome between the second year and the time of recruitment within the intervention group showed that although the numbers of overweight/obese individuals and of individuals with hypertriglyceridemia and high LDL increased, abdominal obesity, high BP, hyperglycemia, and insulin resistance decreased (p<0.001).
In the control group, on the other hand, the number of individuals with high BP increased significantly. Conclusion: The prevalence of abdominal obesity and hypertension, two major components of the metabolic syndrome, is much higher in our study than in other regions of the country. However, interventions to modify diet and increase physical activity are effective in lowering their prevalence. Keywords: metabolic syndrome, obesity, life style, nutrition, hypertension
Procedia PDF Downloads 66
1313 Explainable Deep Learning for Neuroimaging: A Generalizable Approach for Differential Diagnosis of Brain Diseases
Authors: Nighat Bibi
Abstract:
The differential diagnosis of brain diseases by magnetic resonance imaging (MRI) is a crucial step in the diagnostic process, and deep learning (DL) has the potential to significantly improve the accuracy and efficiency of these diagnoses. This study focuses on creating an ensemble learning (EL) model that utilizes the ResNet50, DenseNet121, and EfficientNetB1 architectures to concurrently and accurately classify various brain conditions from MRI images. The proposed ensemble learning model identifies a range of brain disorders that encompass different types of brain tumors, as well as multiple sclerosis. The proposed model was trained on two open-source datasets, consisting of MRI images of glioma, meningioma, pituitary tumors, and multiple sclerosis. Central to this research is the integration of gradient-weighted class activation mapping (Grad-CAM) for model interpretability, aligning with the growing emphasis on explainable AI (XAI) in medical imaging. The application of Grad-CAM improves the transparency of the model's decision-making process, which is vital for clinical acceptance and trust in AI-assisted diagnostic tools. The EL model achieved an impressive 99.84% accuracy in classifying these various brain conditions, demonstrating its potential as a versatile and effective tool for differential diagnosis in neuroimaging. The model's ability to distinguish between multiple brain diseases underscores its significant potential in the field of medical imaging. Additionally, Grad-CAM visualizations provide deeper insights into the neural network's reasoning, contributing to a more transparent and interpretable AI-driven diagnostic process in neuroimaging. Keywords: brain tumour, differential diagnosis, ensemble learning, explainability, grad-cam, multiple sclerosis
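One common way to combine several backbones into an ensemble is soft voting: average the per-class probability vectors of each network and take the argmax. The abstract does not state which combination rule the authors used, so the sketch below is illustrative only, with invented probability vectors for a single MRI slice.

```python
import numpy as np

# Hypothetical per-model class probabilities for one MRI slice, over four
# classes; the values are invented for illustration.
resnet50       = np.array([0.70, 0.10, 0.10, 0.10])
densenet121    = np.array([0.60, 0.20, 0.10, 0.10])
efficientnetb1 = np.array([0.55, 0.25, 0.05, 0.15])

CLASSES = ["glioma", "meningioma", "pituitary", "multiple_sclerosis"]

# Soft voting: average the probability vectors, then take the argmax.
ensemble_probs = np.mean([resnet50, densenet121, efficientnetb1], axis=0)
prediction = CLASSES[int(np.argmax(ensemble_probs))]
```

Averaging tends to smooth out one backbone's overconfident mistakes, which is one reason ensembles of diverse architectures often beat any single member.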
Procedia PDF Downloads 6
1312 Open Education Resources a Gateway for Accessing Hospitality and Tourism Learning Materials
Authors: Isiya Shinkafi Salihu
Abstract:
Open education resources (OER) are open learning materials in different formats, course contents, and contexts that support learning globally. This study investigated the level of awareness of Hospitality and Tourism OER among students in the Department of Tourism and Hotel Management at a university; specifically, it investigated students' awareness, use, and accessibility of OER in learning. The research used a quantitative design with an online questionnaire. The thesis research shows that respondents frequently use OER but with little knowledge of the content and context of the material; most respondents have little knowledge of the concept even though they use it. Information and communication technologies are tools for information gathering, social networking, and knowledge sharing and transfer. OER are open education materials accessible online, such as curricula, maps, course materials, and videos, that users create, adapt, and reuse for learning and research. The few respondents who used OER in learning faced challenges such as the high cost of data, poor connectivity, and a lack of proper guidance. The results suggest a lack of awareness of OER among students in the faculty of tourism and a need for support from teachers in the utilization of OER. The thesis also reveals that some international students are accessing the internet as beginners in their studies and require guidance. The research recommends that further studies be conducted in other faculties. Keywords: creative commons, open education resources, open licenses, information and communication technology
Procedia PDF Downloads 176
1311 Optimizing Fire Tube Boiler Design for Efficient Saturated Steam Production at 2000kg/h
Authors: Yoftahe Nigussie Worku
Abstract:
This study focused on designing a fire tube boiler to generate saturated steam at a capacity of 2000 kg/h and a design pressure of 12 bar. The primary goal is to achieve efficient steam production while minimizing costs. This involves selecting suitable materials for component parts, employing cost-effective construction methods, and optimizing various parameters. The analysis phase employs iterative processes and relevant formulas to determine the key design parameters, including optimizing the tube diameter for the overall heat transfer coefficient, adopting a two-pass configuration dictated by the tube and shell sizes, and using No. 6 heavy fuel oil with its specific heating value. The designed boiler consumes 140.37 kg/h of fuel, producing 1610 kW of heat at an efficiency of 85.25%. The fluid flow is configured as cross flow, leveraging its inherent advantages. The tubes are welded inside the shell, which is connected to the tube sheet using a combination of gaskets and welding. The design of the shell adheres to the European Standard code for pressure vessels, accounting for weight and supplementary accessories, with detailed drawings provided for components such as lifting lugs, openings, ends, manholes, and supports. Keywords: efficiency, coefficient, saturated steam, fire tube
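A first-pass energy balance for a duty like the one above can be sketched as follows. The steam and feedwater enthalpies and the fuel heating value are assumed round numbers (not figures from the study), so the resulting fuel rate is only a plausibility check, not the authors' 140.37 kg/h result.

```python
# Rough energy balance for a 2000 kg/h, 12 bar saturated-steam duty.
steam_rate = 2000.0 / 3600.0      # kg/s of steam raised
h_steam = 2784.0                  # kJ/kg, saturated vapour near 12 bar (assumed)
h_feed = 84.0                     # kJ/kg, feedwater near 20 C (assumed)
efficiency = 0.8525               # boiler efficiency reported in the abstract
heating_value = 41_000.0          # kJ/kg, typical No. 6 heavy fuel oil (assumed)

# Heat absorbed by the water, and the fuel rate needed to supply it.
duty_kw = steam_rate * (h_steam - h_feed)
fuel_rate_kg_h = duty_kw / efficiency / heating_value * 3600.0
```

With these assumed properties the balance lands near 1500 kW and roughly 155 kg/h of fuel, the same order as the abstract's 1610 kW and 140.37 kg/h; the gap comes entirely from the assumed enthalpies and heating value.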
Procedia PDF Downloads 57
1310 Multi-Criteria Decision-Making in Ranking Drinking Water Supply Options (Case Study: Tehran City)
Authors: Mohsen Akhlaghi, Tahereh Ebrahimi
Abstract:
Considering the increasing demand for water and limited resources, there is a possibility of a water crisis in the not-so-distant future. To prevent this crisis, other options for drinking water supply should be examined. In this regard, the application of multi-criteria decision-making methods to various aspects of water resource management and planning has long been of great interest to researchers. In this report, six options for supplying drinking water to Tehran City were considered. Experts' opinions were collected through matrices and questionnaires, then calculated and analyzed using TOPSIS, one of the multi-criteria decision-making methods. In TOPSIS, the options are ranked by calculating their proximity to the ideal (Ci); the closer the numerical value of Ci is to one, the more desirable the option. On this basis, the option of optimizing the pattern of water consumption, with Ci = 0.9787, is the best of the proposed options for supplying drinking water to Tehran City. The other options, in order of priority, are rainwater harvesting, wastewater reuse, increasing current water supply sources, desalination and its transfer, and transferring water from freshwater sources between basins. In conclusion, the findings of this study highlight the importance of exploring alternative drinking water supply options and utilizing multi-criteria decision-making approaches to address the potential water crisis. Keywords: multi-criteria decision, sustainable development, topsis, water supply
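The TOPSIS closeness coefficient Ci described above can be sketched in a few lines. The decision matrix, criteria, and weights below are invented for illustration (three options, three equally weighted benefit criteria), not the study's actual expert scores.

```python
import numpy as np

# Hypothetical decision matrix: 3 water-supply options scored on 3 benefit criteria.
X = np.array([
    [7.0, 8.0, 9.0],   # consumption optimization
    [6.0, 7.0, 8.0],   # rainwater harvesting
    [5.0, 6.0, 4.0],   # inter-basin transfer
])
w = np.array([1/3, 1/3, 1/3])           # equal criterion weights (assumed)

R = X / np.linalg.norm(X, axis=0)       # vector normalization, column-wise
V = R * w                               # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)  # ideal / anti-ideal (all benefit criteria)
d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to the ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)    # distance to the anti-ideal solution
Ci = d_neg / (d_pos + d_neg)            # closeness to the ideal, in [0, 1]
ranking = np.argsort(-Ci)               # best option first
```

Here the first option dominates every criterion, so its Ci is exactly 1, mirroring how the study's top option scored a Ci of 0.9787, very close to the ideal.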
Procedia PDF Downloads 64
1309 From Research to Practice: Upcycling Cinema Icons
Authors: Mercedes Rodriguez Sanchez, Laura Luceño Casals
Abstract:
With the rise of social media, creative people and brands everywhere are constantly generating content. Students on the Bachelor's Degree in Fashion Design use platforms such as Instagram or TikTok to look for inspiration and entertainment, as well as to develop their own ideas and share them with a wide audience. Information and Communications Technologies (ICT) have become a central aspect of higher education, affecting virtually every aspect of the student experience. Following this trend, during the first semester of the second year, a collaborative project across two subjects, Design Management and History of Fashion Design, was implemented. After an introductory class on the relationship between fashion and cinema, as well as a brief history of 20th-century fashion, the students freely chose a work team and an iconic look from a movie costume. They researched the selected movie and its sociocultural context, analyzed the costume and the work of the designer, and studied the style, fashion magazines, and most popular films of the time. Students then redesigned and recreated the costume, for which they were required, as an unavoidable condition of the activity, to recycle materials they had available at home. Once the garment was completed, students delivered in-class, team-based presentations supported by the final design, a project summary poster, and a making-of video, which documented the costume design process. The methodologies used include Challenge-Based Learning (CBL), debates, Internet research, the application of Information and Communications Technologies, and the viewing of clips from classic films, among others. After finishing the projects, students completed two electronic surveys measuring the acquisition of the transversal and specific competencies of each subject.
Results reveal that the activity helped students' knowledge acquisition, deepened their understanding of both subjects, and developed their skills. The classroom dynamic changed: the multidisciplinary approach encouraged students to collaborate with their peers, while educators were better able to hold students' interest and promote an engaging learning process. As a result, the activity discussed in this paper confirmed the research hypothesis: it is positive to propose innovative teaching projects that combine academic research with playful learning environments. Keywords: cinema, cooperative learning, fashion design, higher education, upcycling
Procedia PDF Downloads 77
1308 Progressive Damage Analysis of Mechanically Connected Composites
Authors: Şeyma Saliha Fidan, Ozgur Serin, Ata Mugan
Abstract:
While performing verification analyses for the static and dynamic loads to which composite structures used in aviation are exposed, it is necessary to obtain the bearing strength limit value for mechanically connected composite structures. For this purpose, various tests are carried out in accordance with aviation standards. Many companies worldwide perform these tests to aviation standards, but the test costs are very high. In addition, because coupons must be produced, coupon materials are expensive, and test times are long, it is necessary to simulate these tests on the computer. To this end, various test coupons were produced using the reinforcement and alignment angles of the composite radomes integrated into the aircraft. Glass-fiber-reinforced and quartz prepreg are used in the production of the coupons. Simulations of the tests performed according to the American Society for Testing and Materials (ASTM) D5961 Procedure C standard were carried out on the computer. The analysis model was created in three dimensions to model the bolt-hole contact surface realistically and obtain an accurate bearing strength value. The finite element model was built in the Analysis System (ANSYS). Since a physical fracture cannot occur in analyses carried out in a virtual environment, a hypothetical failure is realized by reducing the material properties. The material property reduction coefficient was set to 10%, which the literature states gives the most realistic approach. This method, called progressive failure analysis, encompasses various theories; because the Hashin theory did not match our experimental results, the Puck progressive damage method was used in all coupon analyses.
When the experimental and numerical results are compared, the initial damage and resulting force-drop points, the maximum damage load values, and the bearing strength value are very close. Furthermore, low error rates and similar damage patterns were obtained in both the test and simulation models. In addition, the effects of various parameters, such as pre-stress, the use of bushings, the ratio of the distance between the bolt-hole center and the plate edge to the hole diameter (E/D), the ratio of plate width to hole diameter (W/D), and hot-wet environmental conditions, were investigated with respect to the bearing strength of the composite structure. Keywords: puck, finite element, bolted joint, composite
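The stiffness knock-down idea behind progressive failure analysis can be illustrated in one dimension: when the stress in an element exceeds its strength, its modulus is multiplied by a degradation factor and the load is redistributed. This is a toy sketch of the general technique, not the Puck criterion or the coupon properties; the strength, modulus, and 10% retained-stiffness factor are assumed numbers.

```python
DEGRADATION = 0.10          # retained stiffness fraction after failure (assumed)
STRENGTH = 600.0            # MPa, assumed element strength

# Four identical 1D elements sharing the applied strain.
elements = [{"E": 50_000.0, "failed": False} for _ in range(4)]  # moduli in MPa

def apply_strain(strain):
    """Return total stress carried; degrade any element whose stress exceeds strength."""
    total = 0.0
    for el in elements:
        stress = el["E"] * strain
        if not el["failed"] and stress > STRENGTH:
            el["E"] *= DEGRADATION   # stiffness knock-down at first failure
            el["failed"] = True
            stress = el["E"] * strain
        total += stress
    return total

load_low = apply_strain(0.005)    # below failure: all elements carry full stress
load_high = apply_strain(0.02)    # above failure: elements degrade, load drops
```

The sudden drop in carried load after degradation is the analogue of the force-drop points that the simulations matched against the tests.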
Procedia PDF Downloads 101
1307 High Power Thermal Energy Storage for Industrial Applications Using Phase Change Material Slurry
Authors: Anastasia Stamatiou, Markus Odermatt, Dominic Leemann, Ludger J. Fischer, Joerg Worlitschek
Abstract:
The successful integration of thermal energy storage into industrial processes is expected to play an important role in the energy turnaround. Latent heat storage technologies can offer more compact thermal storage at a constant temperature level than conventional sensible thermal storage technologies. The focus of this study is the development of latent heat storage solutions based on the Phase Change Slurry (PCS) concept. Such systems promise higher energy densities both as refrigerants and as storage media, while presenting better heat transfer characteristics than conventional latent heat storage technologies. This technology is expected to deliver high thermal power and high temperature stability, which makes it ideal for storing process heat. An evaluation of important batch processes in industrial applications set the focus on materials with a melting point in the range of 55-90 °C. Aluminium ammonium sulfate dodecahydrate (NH₄Al(SO₄)₂·12H₂O) was chosen as the first PCM of interest for the next steps of this study. The ability of this material to form slurries at the relevant temperatures was demonstrated in continuous mode in a laboratory test rig, and critical operational and design parameters were identified. Keywords: esters, latent heat storage, phase change materials, thermal properties
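The compactness claim for latent storage can be made concrete with a back-of-the-envelope comparison of the heat stored per kilogram over the same narrow temperature window. The PCM latent heat and specific heat below are assumed round numbers for a hydrate-type PCM, not measured values from this study.

```python
# Heat stored by 1 kg of material over a 10 K usable temperature window.
mass = 1.0            # kg
delta_t = 10.0        # K usable temperature swing around the melting point
cp_water = 4.186      # kJ/(kg K), sensible store (hot water)
latent_heat = 250.0   # kJ/kg, assumed latent heat for a hydrate PCM
cp_pcm = 2.0          # kJ/(kg K), assumed PCM specific heat

q_sensible = mass * cp_water * delta_t                  # hot-water store
q_latent = mass * (latent_heat + cp_pcm * delta_t)      # PCM store, same window
ratio = q_latent / q_sensible
```

Under these assumptions the PCM stores roughly six times as much heat per kilogram over the same 10 K window, which is why latent storage stays compact when the process demands a nearly constant temperature.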
Procedia PDF Downloads 296
1306 Implementation of an Image Processing System Using Artificial Intelligence for the Diagnosis of Malaria Disease
Authors: Mohammed Bnebaghdad, Feriel Betouche, Malika Semmani
Abstract:
Image processing has become more sophisticated over time due to technological advances, especially artificial intelligence (AI). Currently, AI image processing is used in many areas, including surveillance, industry, science, and medicine. In medical image processing, AI can help doctors diagnose diseases faster, with minimal mistakes and less effort. Among these diseases is malaria, which remains a major public health challenge in many parts of the world, affecting millions of people every year, particularly in tropical and subtropical regions. Early detection of malaria is essential to prevent serious complications and reduce the burden of the disease. In this paper, we propose and implement a scheme based on AI image processing to enhance malaria diagnosis through automated analysis of blood smear images. The scheme is based on the convolutional neural network (CNN) method: we developed a model that classifies infected and uninfected single red cells using images available on Kaggle, as well as real blood smear images obtained from the Central Laboratory of Medical Biology EHS Laadi Flici (formerly El Kettar) in Algeria. The real images were segmented into individual cells using the watershed algorithm in order to match the images from the Kaggle dataset. The model was trained and tested, achieving an accuracy of 99%, and 97% accuracy on new real images. This validates that the model performs well on new real images, although with slightly lower accuracy. Additionally, the model has been embedded in a Raspberry Pi 4, and a graphical user interface (GUI) was developed to visualize the malaria diagnostic results and facilitate user interaction. Keywords: medical image processing, malaria parasite, classification, CNN, artificial intelligence
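The evaluation step behind accuracy figures like the 99% and 97% above can be sketched as a label-by-label comparison of predictions against ground truth. The tiny label lists below are invented for illustration; the study's datasets are far larger, and in real smears sensitivity matters because infected cells are rare.

```python
# Ground truth and model predictions for infected (1) vs. uninfected (0) cells.
# Labels are invented for illustration.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]

correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)

# Per-class counts: sensitivity shows how many infected cells were caught,
# which plain accuracy can hide when infections are rare.
tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
sensitivity = tp / (tp + fn)
```

Reporting sensitivity alongside accuracy is one way to check that a high headline score is not driven by the majority (uninfected) class.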
Procedia PDF Downloads 18