Search results for: SPE’s comparative solution projects
1152 Evidence of Microplastics Ingestion in Two Commercial Cephalopod Species: Octopus Vulgaris and Sepia Officinalis
Authors: Federica Laface, Cristina Pedà, Francesco Longo, Francesca de Domenico, Riccardo Minichino, Pierpaolo Consoli, Pietro Battaglia, Silvestro Greco, Teresa Romeo
Abstract:
Plastic pollution represents one of the most important threats to marine biodiversity. In recent decades, different species have been investigated to evaluate the extent of the plastic ingestion phenomenon. Although cephalopods play an important role in the food chain, they are still poorly studied. The aim of this research was to investigate plastic ingestion in two commercial cephalopod species from the southern Tyrrhenian Sea: the common octopus, Octopus vulgaris (n=6; mean mantle length ML 10.7 ± 1.8), and the common cuttlefish, Sepia officinalis (n=13; mean ML 13.2 ± 1.7). Plastics were extracted from the filters obtained by chemical digestion of the cephalopods' gastrointestinal tracts (GITs), using a 10% potassium hydroxide (KOH) solution in a 1:5 (w/v) ratio. Once isolated, particles were photographed and measured, and their size class, shape and color were recorded. A total of 81 items was isolated from 16 of the 19 examined GITs, representing a total occurrence (%O) of 84.2% with a mean value of 4.3 ± 8.6 particles per individual. In particular, 62 plastics were found in 6 specimens of O. vulgaris (%O=100) and 19 particles in 10 S. officinalis (%O=94.7). In both species, the microplastic size class was the most abundant (93.8%). Plastic items found in O. vulgaris were mainly fibers (61%), while fragments were the most frequent in S. officinalis (53%). Transparent was the most common color in both species. The analysis will be completed by Fourier transform infrared (FT-IR) spectroscopy in order to identify the nature of the polymers. This study reports preliminary data on plastic ingestion events in two cephalopod species and represents the first record of plastic ingestion by the common octopus. Microplastic items detected in both the common octopus and the common cuttlefish could derive from secondary and/or accidental ingestion events, probably due to their behavior, feeding habits and anatomical features.
Further studies will be required to assess the effect of marine litter pollution on these ecologically and commercially important species.
Keywords: cephalopods, GIT analysis, marine pollution, Mediterranean Sea, microplastics
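As a quick illustration of the occurrence metric used above, the reported %O and the mean particle load follow directly from the stated counts; the short sketch below (plain Python, reproducing only the arithmetic given in the abstract) shows the calculation:

```python
# Occurrence (%O) and mean particle load, using the counts stated in the abstract.
# The function name is illustrative, not from the paper.

def occurrence_percent(positive, examined):
    """Share of examined gastrointestinal tracts containing plastics, as a percentage."""
    return 100.0 * positive / examined

total_O = occurrence_percent(16, 19)   # 16 of 19 GITs contained items
print(round(total_O, 1))               # 84.2

mean_items = 81 / 19                   # 81 items across 19 examined individuals
print(round(mean_items, 1))            # 4.3
```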
Procedia PDF Downloads 257
1151 Wireless FPGA-Based Motion Controller Design by Implementing 3-Axis Linear Trajectory
Authors: Kiana Zeighami, Morteza Ozlati Moghadam
Abstract:
Designing a high-accuracy and high-precision motion controller is one of the important issues in today's industry. There are effective solutions available in the industry, but the real-time performance, smoothness and accuracy of the movement can be further improved. This paper discusses a complete solution to carry out the movement of three stepper motors in three dimensions. The objective is to provide a method to design a fully integrated System-on-Chip (SoC)-based motion controller to reduce the cost and complexity of production by incorporating a Field Programmable Gate Array (FPGA) into the design. In the proposed method, the FPGA receives its commands from a host computer via wireless internet communication and calculates the motion trajectory for three axes. A profile generator module is designed to realize the interpolation algorithm by translating the position data into real-time pulses. This paper discusses an approach to implementing the linear interpolation algorithm, since it is one of the fundamentals of robot movement and is highly applicable in motion control industries. Along with the full-profile trajectory, a triangular drive is implemented to eliminate error at small distances. To integrate the parallelism and real-time performance of the FPGA with the power of a Central Processing Unit (CPU) in executing complex and sequential algorithms, the NIOS II soft-core processor was added to the design. This paper presents different operating modes, such as absolute and relative positioning, reset and velocity modes, to fulfill the user requirements. The proposed approach was evaluated by designing a custom-made FPGA board along with a mechanical structure. As a result, a precise and smooth movement of the stepper motors was observed, which proved the effectiveness of this approach.
Keywords: 3-axis linear interpolation, FPGA, motion controller, micro-stepping
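The linear interpolation at the heart of such a profile generator can be sketched as an integer error-accumulation (DDA/Bresenham-style) scheme that emits at most one step pulse per axis per tick; the Python below is an illustrative model only, and the function name and tick loop are assumptions, not the paper's HDL implementation:

```python
def linear_interpolate_steps(target):
    """Generate per-tick step pulses (0/1 per axis) for a straight 3-axis move.
    The dominant axis steps every tick; the other axes accumulate an integer
    error and step whenever it overflows, keeping the path on the ideal line."""
    major = max(abs(v) for v in target)   # tick count = longest axis travel
    err = [0, 0, 0]
    pos = [0, 0, 0]
    pulses = []
    for _ in range(major):
        tick = []
        for i in range(3):
            err[i] += abs(target[i])
            if err[i] >= major:           # error overflow -> emit a step
                err[i] -= major
                pos[i] += 1 if target[i] >= 0 else -1
                tick.append(1)
            else:
                tick.append(0)
        pulses.append(tuple(tick))
    return pos, pulses

pos, pulses = linear_interpolate_steps((5, 3, -2))
print(pos)   # [5, 3, -2]: every axis reaches its target in 5 ticks
```

In hardware, each iteration of the inner loop would be one comparator-and-adder stage clocked at the pulse rate, which is why the scheme maps naturally onto an FPGA.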
Procedia PDF Downloads 208
1150 PLO-AIM: Potential-Based Lane Organization in Autonomous Intersection Management
Authors: Berk Ecer, Ebru Akcapinar Sezer
Abstract:
Traditional management models of intersections, such as no-light or signalized intersections, are not the most effective way of passing intersections if the vehicles are intelligent. To this end, Dresner and Stone proposed a new intersection control model called Autonomous Intersection Management (AIM). In their AIM simulation, they examined the problem from a multi-agent perspective, demonstrating that intelligent intersection control can be made more efficient than existing control mechanisms. In this study, autonomous intersection management has been investigated. We extended their work and added a potential-based lane organization layer. In order to distribute vehicles evenly across lanes, this layer triggers vehicles to analyze nearby lanes, and they change lane if another lane offers an advantage. We can observe this behavior in real life, where drivers change lanes based on intuition. The basic intuition in selecting the correct lane is to choose a less crowded one in order to reduce delay. We model that behavior without any change to the AIM workflow. Experiment results show that intersection performance is directly connected with the distribution of vehicles across the lanes of the roads entering the intersection. We see the advantage of handling lane management with a potential-based approach in performance metrics such as the average delay of the intersection and the average travel time. Therefore, lane management and intersection management are problems that need to be handled together. This study shows that the lane through which vehicles enter the intersection is an effective parameter for intersection management. Our study draws attention to this parameter and suggests a solution for it. We observed that regulating the inputs of AIM, namely the vehicles in each lane, was an effective contribution to AIM intersection management.
The PLO-AIM model outperforms AIM in evaluation metrics such as the average delay of the intersection and the average travel time for reasonable traffic rates, i.e., between 600 and 1300 vehicles/hour per lane. The proposed model reduced the average travel time by 0.2% to 17.3% and the average delay of the intersection by 1.6% to 17.1% for the 4-lane and 6-lane scenarios.
Keywords: AIM project, autonomous intersection management, lane organization, potential-based approach
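The "move to a less crowded lane" intuition can be sketched with a toy potential function; the code below is a hypothetical illustration, where the lane capacity, the potential formula, and the function names are assumptions, not the PLO-AIM model itself:

```python
def lane_potential(occupancy, capacity):
    """Toy potential: crowded lanes have higher potential; vehicles drift
    toward the reachable lane of lowest potential."""
    return occupancy / capacity

def choose_lane(current, occupancies, capacity=10):
    # A vehicle may stay in its lane or shift to an adjacent one.
    candidates = [i for i in (current - 1, current, current + 1)
                  if 0 <= i < len(occupancies)]
    return min(candidates, key=lambda i: lane_potential(occupancies[i], capacity))

print(choose_lane(1, [7, 7, 2]))  # 2: the right-hand lane is least crowded
```

Repeating this decision for every approaching vehicle evens out the lane distribution before vehicles request intersection reservations.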
Procedia PDF Downloads 140
1149 Eco-Efficient Cementitious Materials for Construction Applications in Ireland
Authors: Eva Ujaczki, Rama Krishna Chinnam, Ronan Courtney, Syed A. M. Tofail, Lisa O'Donoghue
Abstract:
Concrete is the second most widely used material in the world and is made of cement, sand, and aggregates. Cement is a hydraulic binder which reacts with water to form a solid material. In the cement manufacturing process, the right mix of minerals from mined natural rocks, e.g., limestone, is heated in a kiln at 1450 °C to form a new compound, clinker. In the final stage, the clinker is milled into a fine cement powder. The principal cement types manufactured in Ireland are: 1) CEM I – Portland cement; 2) CEM II/A – Portland-fly ash cement; 3) CEM II/A – Portland-limestone cement; and 4) CEM III/A – Portland-ground granulated blast furnace slag (GGBS) cement. The production of eco-efficient, blended cement (CEM II, CEM III) reduces CO₂ emissions and improves energy efficiency compared to traditional cements. Blended cements are produced locally in Ireland, and more than 80% of the cement produced is blended. These eco-efficient, blended cements are a relatively new class of construction materials and a kind of geopolymer binder. From a terminological point of view, geopolymer cement is a binding system that is able to harden at room temperature. Geopolymers do not require calcium-silicate-hydrate gel but utilize the polycondensation of SiO₂ and Al₂O₃ precursors to achieve a superior strength level. Geopolymer materials are usually synthesized using an aluminosilicate raw material and an activating solution which is mainly composed of NaOH or KOH and Na₂SiO₃. Cement is the essential ingredient in concrete, which is vital for the economic growth of countries. The challenge for the global cement industry is to meet increasing demand while at the same time recognizing the need for sustainable usage of resources. Therefore, in this research, we investigated the potential for Irish wastes to be used in geopolymer-cement-type applications through a national stakeholder workshop with the Irish construction sector and relevant stakeholders.
This paper aims at summarizing Irish stakeholders' perspectives on introducing new secondary raw materials, e.g., bauxite residue, or increasing the fly ash addition in cement for eco-efficient cement production.
Keywords: eco-efficient, cement, geopolymer, blending
Procedia PDF Downloads 166
1148 Preparation and in vivo Assessment of Nystatin-Loaded Solid Lipid Nanoparticles for Topical Delivery against Cutaneous Candidiasis
Authors: Rawia M. Khalil, Ahmed A. Abd El Rahman, Mahfouz A. Kassem, Mohamed S. El Ridi, Mona M. Abou Samra, Ghada E. A. Awad, Soheir S. Mansy
Abstract:
Solid lipid nanoparticles (SLNs) have gained great attention for the topical treatment of skin-associated fungal infections, as they facilitate the skin penetration of loaded drugs. Our work deals with the preparation of nystatin-loaded solid lipid nanoparticles (NystSLNs) using the hot homogenization and ultrasonication method. The prepared NystSLNs were characterized in terms of entrapment efficiency, particle size, zeta potential, transmission electron microscopy, differential scanning calorimetry, rheological behavior and in vitro drug release. A stability study over 6 months was performed. A microbiological study was conducted in male rats infected with Candida albicans, by counting the colonies and examining the histopathological changes induced on the skin of the infected rats. The results showed that the SLN dispersions are spherical in shape, with particle sizes ranging from 83.26±11.33 to 955.04±1.09 nm, entrapment efficiencies ranging from 19.73±1.21 to 72.46±0.66%, zeta potentials ranging from -18.9 to -38.8 mV, and shear-thinning rheological behavior. The stability studies over 6 months showed that nystatin (Nyst) is a good candidate for topical SLN formulations. The lowest number of colony-forming units/ml (cfu/ml) was recorded for the selected NystSLN compared to the drug solution and the commercial Nystatin® cream on the market. It can be concluded from this work that SLNs provide a good skin-targeting effect and may represent a promising carrier for topical delivery of Nyst, offering sustained release and maintaining a localized effect, resulting in an effective treatment of cutaneous fungal infection.
Keywords: candida infections, hot homogenization, nystatin, solid lipid nanoparticles, stability, topical delivery
Procedia PDF Downloads 393
1147 Development of Ketorolac Tromethamine Encapsulated Stealth Liposomes: Pharmacokinetics and Bio Distribution
Authors: Yasmin Begum Mohammed
Abstract:
Ketorolac tromethamine (KTM) is a non-steroidal anti-inflammatory drug with potent analgesic and anti-inflammatory activity due to its prostaglandin-related inhibitory effect. It is a non-selective cyclo-oxygenase inhibitor. The drug is currently used orally and intramuscularly in multiple divided doses, clinically for the management of arthritis, cancer pain, post-surgical pain, and in the treatment of migraine. KTM has a short biological half-life of 4 to 6 hours, which necessitates frequent dosing to retain its action. The frequent occurrence of gastrointestinal bleeding, perforation, peptic ulceration, and renal failure has led to the development of other drug delivery strategies for the appropriate delivery of KTM. The ideal solution would be to target the drug only to the cells or tissues affected by the disease. Drug targeting could be achieved effectively by liposomes, which are biocompatible and biodegradable. The aim of the study was to develop a parenteral liposome formulation of KTM with improved efficacy and reduced side effects by targeting the inflammation due to arthritis. PEG-anchored (stealth) and non-PEG-anchored liposomes were prepared by the thin film hydration technique followed by an extrusion cycle, and characterized in vitro and in vivo. Stealth liposomes (SLs) exhibited an increased encapsulation efficiency (94%) and retained 52% of the drug during release studies over 24 h, with good stability for a period of 1 month at -20°C and 4°C. SLs showed a maximum of about 55% edema inhibition, with a significant analgesic effect. SLs produced marked differences over non-SL formulations, with an increase in the area under the plasma concentration-time curve, t₁/₂ and mean residence time, and reduced clearance. 0.3% of the drug was detected in the arthritic paw, with significantly reduced drug localization in the liver, spleen, and kidney for SLs when compared to conventional liposomes.
Thus, SLs help to increase the therapeutic efficacy of KTM by increasing its targeting potential at the inflammatory region.
Keywords: biodistribution, ketorolac tromethamine, stealth liposomes, thin film hydration technique
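The pharmacokinetic quantity mentioned above, the area under the plasma concentration-time curve, is conventionally computed with the linear trapezoidal rule; the sketch below uses hypothetical sampling data for illustration, not the study's measurements:

```python
def auc_trapezoid(times, conc):
    """Area under the plasma concentration-time curve (linear trapezoidal rule)."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for (t1, c1), (t2, c2) in zip(zip(times, conc),
                                             zip(times[1:], conc[1:])))

# Hypothetical sampling points (h, µg/ml), illustrative only, not study data.
t = [0, 1, 2, 4, 8, 24]
c = [0.0, 4.2, 3.6, 2.5, 1.2, 0.1]
print(round(auc_trapezoid(t, c), 2))  # 29.9
```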
Procedia PDF Downloads 295
1146 Amrita Bose-Einstein Condensate Solution Formed by Gold Nanoparticles Laser Fusion and Atmospheric Water Generation
Authors: Montree Bunruanses, Preecha Yupapin
Abstract:
In this work, the quantum material called Amrita (elixir) is made by turning gold into nanometer particles top-down, fusing 99% gold with a laser and mixing it with drinking water produced by an atmospheric water generation (AWG) system, which makes water from air. The high laser power is said to destroy the four natural force bindings, the gravitational, weak, electromagnetic and strong coupling forces, until the purified Bose-Einstein condensate (BEC) state remains. With this method, gold atoms in the form of spherical single crystals with a diameter of 30-50 nanometers are obtained and used. They were modulated (activated) with a frequency generator into various matrix structures mixed with AWG water to be used in the upstream conversion (quantum reversible) process, which can be applied to humans both internally and externally, by drinking or by applying it to treated surfaces. Addressing both space (body) and time (mind) is said to return to the origin and start again from the coupling of space-time on both sides of time, fusing (strong coupling force) and pushing out (Big Bang) at the equilibrium point (singularity), which occurs as strings and DNA with neutrinos as coupling energy. There is no distortion (purification), which is the point where time and space have not yet been determined and there is infinite energy; therefore, the upstream conversion is performed, reforming DNA to purify it. The use of Amrita is a method for people who cannot meditate (quantum meditation). It was applied in various cases, and the results show that Amrita can return the body and the mind to their pure origins and begin the downstream process with the Big Bang movement, quantum communication in all dimensions, DNA reformation, frequency filtering, crystal body forming, broadband quantum communication networks, black hole forming, quantum consciousness, body and mind healing, etc.
Keywords: quantum materials, quantum meditation, quantum reversible, Bose-Einstein condensate
Procedia PDF Downloads 78
1145 Fabrication of Electrospun Microbial Siderophore-Based Nanofibers: A Wound Dressing Material to Inhibit the Wound Biofilm Formation
Authors: Sita Lakshmi Thyagarajan
Abstract:
Nanofibers will leave no field untouched by their scientific innovations; the medical field is no exception. Electrospinning has proven to be an excellent method for the synthesis of nanofibers, which have attracted interest for many biomedical applications. The formation of biofilms in wounds often leads to chronic infections that are difficult to treat with antibiotics. In order to minimize biofilms and enhance wound healing, this study focused on the preparation of potential nanofibers. Siderophore-incorporated nanofibers were electrospun using biocompatible polymers onto a collagen scaffold and fabricated into a biomaterial suitable for the inhibition of biofilm formation. The purified microbial siderophore was blended with poly-L-lactide (PLLA) and poly(ethylene oxide) (PEO) in a suitable solvent. Fabrication of the siderophore-blended nanofibers onto the collagen surface was done using standard protocols. The fabricated scaffold was subjected to physicochemical characterization. The results indicated that the nanofibrous scaffold possessed the characteristics expected of a potential scaffold, with nanoscale morphology and microscale arrangement. The influence of the PLLA/PEO solution concentration, applied voltage, tip-to-collector distance, feeding rate, and collector speed was studied, and the optimal values of these parameters were finalized based on trial-and-error experiments. The fibers were found to have a uniform diameter with an aligned morphology.
The overall study suggests that the prepared siderophore-entrapped nanofibers could be used as a potent wound dressing material for the inhibition of biofilm formation.
Keywords: biofilms, electrospinning, nano-fibers, siderophore, tissue engineering scaffold
Procedia PDF Downloads 124
1144 Healthcare-SignNet: Advanced Video Classification for Medical Sign Language Recognition Using CNN and RNN Models
Authors: Chithra A. V., Somoshree Datta, Sandeep Nithyanandan
Abstract:
Sign Language Recognition (SLR) is the process of interpreting and translating sign language into spoken or written language using technological systems. It involves recognizing the hand gestures, facial expressions, and body movements that make up sign language communication. The primary goal of SLR is to facilitate communication between the hearing- and speech-impaired communities and those who do not understand sign language. Due to increased awareness and greater recognition of the rights and needs of the hearing- and speech-impaired community, sign language recognition has gained significant importance over the past 10 years. Technological advancements in the fields of Artificial Intelligence and Machine Learning have made it more practical and feasible to create accurate SLR systems. This paper presents a distinct approach to SLR by framing it as a video classification problem using Deep Learning (DL), whereby a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) is used. This research targets the integration of sign language recognition into healthcare settings, aiming to improve communication between medical professionals and patients with hearing impairments. The spatial features of each video frame are extracted using a CNN, which captures essential elements such as hand shapes, movements, and facial expressions. These features are then fed into an RNN, which learns the temporal dependencies and patterns inherent in sign language sequences. The INCLUDE dataset has been enhanced with more videos from the healthcare domain, and the model is evaluated on it. Our model achieves 91% accuracy, representing state-of-the-art performance in this domain. The results highlight the effectiveness of treating SLR as a video classification task with the CNN-RNN architecture.
This approach not only improves recognition accuracy but also offers a scalable solution for real-time SLR applications, significantly advancing the field of accessible communication technologies.
Keywords: sign language recognition, deep learning, convolutional neural network, recurrent neural network
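The CNN-RNN pipeline can be sketched at toy scale: per-frame feature vectors (standing in for CNN outputs) are folded through a vanilla RNN whose final hidden state summarizes the clip and would feed a classifier. This pure-Python sketch shows only the data flow; the dimensions, weights, and update rule are illustrative assumptions, not the paper's model:

```python
import math

def rnn_step(h, x, Wh, Wx, b):
    """One vanilla RNN update: h' = tanh(Wh*h + Wx*x + b)."""
    return [math.tanh(sum(Wh[i][j] * h[j] for j in range(len(h)))
                      + sum(Wx[i][k] * x[k] for k in range(len(x)))
                      + b[i])
            for i in range(len(h))]

def classify_video(frame_features, Wh, Wx, b):
    """Fold per-frame 'CNN' feature vectors through the RNN; the final hidden
    state summarizes the whole sign and would feed a softmax classifier."""
    h = [0.0] * len(b)
    for x in frame_features:
        h = rnn_step(h, x, Wh, Wx, b)
    return h

# Toy dimensions: 2-d hidden state, 3-d per-frame features.
Wh = [[0.1, 0.0], [0.0, 0.1]]
Wx = [[0.5, -0.2, 0.1], [0.3, 0.4, -0.1]]
b = [0.0, 0.0]
frames = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
h = classify_video(frames, Wh, Wx, b)
print(len(h))  # 2
```

A production system would replace the hand-rolled loop with a deep learning framework, but the recurrence over frames is the same.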
Procedia PDF Downloads 31
1143 Energy Trading for Cooperative Microgrids with Renewable Energy Resources
Authors: Ziaullah, Shah Wahab Ali
Abstract:
Micro-grids equipped with heterogeneous energy resources present the idea of small-scale distributed energy management (DEM). DEM helps in minimizing transmission and operation costs and in managing power and peak load demands. Micro-grids are collections of small, independently controllable power-generating units and renewable energy resources. Micro-grids also enable active customer participation by giving customers access to real-time information and control. The capability of fast restoration after faults, the integration of renewable energy resources, and Information and Communication Technologies (ICT) make the micro-grid an ideal system for distributed power systems. Micro-grids can have a bank of energy storage devices. The energy management system of a micro-grid can perform real-time energy forecasting of renewable resources, energy storage elements and controllable loads to make proper short-term schedules that minimize total operating costs. We present a review of existing micro-grid optimization objectives/goals, constraints, solution approaches and tools used for energy management in micro-grids. Cost-benefit analysis of micro-grids reveals that cooperation among different micro-grids can play a vital role in reducing imported energy cost and improving system stability. Cooperative micro-grid energy trading is an approach to electrical distribution energy resources that gives local energy demands more control over the optimization of power resources and their use. Cooperation among different micro-grids brings interconnectivity and power trading issues. The literature shows that cooperative micro-grid energy trading remains an open area of research. In this paper, we propose and formulate an efficient energy management/trading module for interconnected micro-grids.
It is believed that this research will open new directions for future energy trading in cooperative/interconnected micro-grids.
Keywords: distributed energy management, information and communication technologies, microgrid, energy management
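One minimal way to sketch energy trading between cooperative micro-grids is a greedy matching of surpluses to deficits; the code below is a simplified illustration only, where the micro-grid names, quantities, and largest-first policy are invented, and the paper's actual trading formulation is not reproduced:

```python
def match_trades(surplus, deficit):
    """Greedily match micro-grid energy surpluses to deficits (kWh),
    largest first. Returns a list of (seller, buyer, amount) trades."""
    sellers = sorted(([k, v] for k, v in surplus.items()), key=lambda s: -s[1])
    buyers = sorted(([k, v] for k, v in deficit.items()), key=lambda b: -b[1])
    trades, si, bi = [], 0, 0
    while si < len(sellers) and bi < len(buyers):
        amount = min(sellers[si][1], buyers[bi][1])
        if amount > 0:
            trades.append((sellers[si][0], buyers[bi][0], amount))
        sellers[si][1] -= amount
        buyers[bi][1] -= amount
        if sellers[si][1] == 0:   # seller exhausted
            si += 1
        if buyers[bi][1] == 0:    # buyer satisfied
            bi += 1
    return trades

print(match_trades({"MG1": 30, "MG2": 10}, {"MG3": 25, "MG4": 15}))
```

A real formulation would add prices, line constraints and losses; this sketch only shows why local trades can displace imported energy.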
Procedia PDF Downloads 375
1142 Heterogeneous-Resolution and Multi-Source Terrain Builder for CesiumJS WebGL Virtual Globe
Authors: Umberto Di Staso, Marco Soave, Alessio Giori, Federico Prandi, Raffaele De Amicis
Abstract:
The increasing availability of information about earth surface elevation (Digital Elevation Models, DEMs) generated from different sources (remote sensing, aerial images, lidar) poses the question of how to integrate this huge amount of data and make it available to the widest possible audience. In order to exploit the potential of 3D elevation representation, the quality of data management plays a fundamental role. Due to the high acquisition costs and the huge amount of generated data, high-resolution terrain surveys tend to be small or medium sized and available only for limited portions of the earth. Here comes the need to merge the large-scale height maps that are typically available for free at worldwide level with very specific, highly resolute datasets. On the other hand, the third dimension increases the user experience and the data representation quality, unlocking new possibilities in data analysis for civil protection, real estate, urban planning, environment monitoring, etc. The open-source 3D virtual globes, which are trending topics in Geovisual Analytics, aim at improving the visualization of geographical data provided by standard web services or in proprietary formats. Typically, 3D virtual globes do not offer an open-source tool that allows the generation of a terrain elevation data structure starting from heterogeneous-resolution terrain datasets. This paper describes a technological solution aimed at setting up a so-called “Terrain Builder”. This tool is able to merge heterogeneous-resolution datasets and to provide a multi-resolution worldwide terrain service fully compatible with CesiumJS and therefore accessible via the web using a traditional browser without any additional plug-in.
Keywords: Terrain Builder, WebGL, Virtual Globe, CesiumJS, Tiled Map Service, TMS, Height-Map, Regular Grid, Geovisual Analytics, DTM
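Tile addressing in a global-geodetic TMS scheme, and the choice of the finest source dataset covering a tile, can be sketched as follows; the source list, bounding boxes, and function names are illustrative assumptions, not the Terrain Builder's actual implementation:

```python
def geodetic_tile(lon, lat, z):
    """Tile indices in a global-geodetic TMS scheme: two root tiles at level 0,
    each tile spanning 180/2**z degrees, with y counted from the south."""
    size = 180.0 / (1 << z)
    x = int((lon + 180.0) // size)
    y = int((lat + 90.0) // size)
    return x, y

def covers(bbox, lon, lat):
    w, s, e, n = bbox
    return w <= lon <= e and s <= lat <= n

def best_source(lon, lat, z, sources):
    """Among sources covering the point and able to serve this level,
    prefer the finest one. Each source: (name, (W, S, E, N), max_level)."""
    usable = [(name, zmax) for name, bbox, zmax in sources
              if covers(bbox, lon, lat) and zmax >= z]
    return max(usable, key=lambda t: t[1])[0] if usable else None

# Hypothetical source catalogue: a global low-resolution DEM plus a local lidar survey.
sources = [
    ("SRTM-global", (-180, -90, 180, 90), 10),
    ("Lidar-city", (11.0, 46.0, 11.2, 46.1), 16),
]
print(best_source(11.1, 46.05, 12, sources))  # Lidar-city
print(best_source(2.0, 48.8, 8, sources))     # SRTM-global
```

Resolving every tile this way, with coarser data resampled where no fine survey exists, is the essence of merging heterogeneous-resolution datasets into one worldwide pyramid.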
Procedia PDF Downloads 427
1141 Trip Reduction in Turbo Machinery
Authors: Pranay Mathur, Carlo Michelassi, Simi Karatha, Gilda Pedoto
Abstract:
Industrial plant uptime is of topmost importance for reliable, profitable and sustainable operation. Trips and failed starts have a major impact on plant reliability, and all plant operators focus their efforts on minimising trips and failed starts. The performance of these CTQs is measured with two metrics: MTBT (mean time between trips) and SR (starting reliability). These metrics help to identify top failure modes and the units that need more effort to improve plant reliability. The Baker Hughes trip reduction program is structured to reduce these unwanted trips:
1. Real-time machine operational parameters available remotely, capturing the signature of the malfunction including the related boundary conditions.
2. Real-time alerting system based on analytics, available remotely.
3. Remote access to trip logs and alarms from the control system to identify the cause of events.
4. Continuous support to field engineers by remotely connecting them with subject matter experts.
5. Live tracking of key CTQs.
6. Benchmarking against the fleet.
7. Breaking down the cause of failure to component level.
8. Investigating top contributors and identifying design and operational root causes.
9. Implementing corrective and preventive actions.
10. Assessing the effectiveness of implemented solutions using reliability growth models.
11. Developing analytics for predictive maintenance.
With this approach, the Baker Hughes team is able to support customers in achieving their reliability key performance indicators for monitored units, with huge cost savings for plant operators. This presentation explains this approach while providing successful case studies; in particular, 12 LNG and pipeline operators with about 140 gas compression line-ups have adopted these techniques and significantly reduced the number of trips and improved MTBT.
Keywords: reliability, availability, sustainability, digital infrastructure, weibull, effectiveness, automation, trips, fail start
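The two metrics named above can be sketched directly; the numbers below are illustrative, not fleet data:

```python
def mtbt(operating_hours, trips):
    """Mean time between trips: accumulated operating hours per trip event."""
    return operating_hours / trips if trips else float("inf")

def starting_reliability(successful_starts, attempted_starts):
    """SR: share of start attempts that succeed, as a percentage."""
    return 100.0 * successful_starts / attempted_starts

print(mtbt(8760, 4))                  # 2190.0 hours between trips
print(starting_reliability(97, 100))  # 97.0
```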
Procedia PDF Downloads 77
1140 Cupric Oxide Thin Films for Optoelectronic Application
Authors: Sanjay Kumar, Dinesh Pathak, Sudhir Saralch
Abstract:
Copper oxide is a semiconductor that has been studied for several reasons, such as the natural abundance of the starting material copper (Cu), the ease of production by Cu oxidation, its non-toxic nature and its reasonably good electrical and optical properties. Copper oxide is well known in its cuprite form, a p-type semiconductor having a band gap energy of 1.21 to 1.51 eV. As a p-type semiconductor, conduction arises from the presence of holes in the valence band (VB) due to doping/annealing. CuO is attractive as a selective solar absorber since it has high solar absorbency and low thermal emittance. CuO is a very promising candidate for solar cell applications, as it is a suitable material for photovoltaic energy conversion. It has been demonstrated that the dip technique can be used to deposit CuO films in a simple manner using a metallic chloride (CuCl₂.2H₂O) as the starting material. Copper oxide films were prepared using a methanolic solution of cupric chloride (CuCl₂.2H₂O) at three baking temperatures. We made three samples, which after heating turned black in colour. XRD data confirm that the films are of CuO phase at a particular temperature. The optical band gap of the CuO films calculated from optical absorption measurements is 1.90 eV, which is quite comparable to the reported value. The dip technique is a very simple and low-cost method which requires no sophisticated specialized setup. Coating of substrates with a large surface area can easily be obtained by this technique, compared to physical evaporation techniques and spray pyrolysis. Another advantage of the dip technique is that it is very easy to coat both sides of the substrate instead of only one, and to deposit on otherwise inaccessible surfaces. This method is well suited for applying coatings on the inner and outer surfaces of tubes of various diameters and shapes.
The main advantage of the dip coating method lies in the fact that it is possible to deposit a variety of layers having good homogeneity and mechanical and chemical stability with a very simple setup. In this paper, the preparation of CuO thin films by the dip coating method and their characterization will be presented.
Keywords: absorber material, cupric oxide, dip coating, thin film
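The optical band gap quoted above is conventionally extracted from absorption data via a Tauc plot, extrapolating the linear region of (αhν)² versus hν (direct gap) to the energy axis; the sketch below uses synthetic linear-region points consistent with 1.90 eV, not the paper's measured data:

```python
def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def tauc_gap(hv, tauc_y):
    """Fit the linear part of a Tauc plot and return where the line
    crosses zero: that intercept with the energy axis is E_g."""
    slope, intercept = linfit(hv, tauc_y)
    return -intercept / slope

# Synthetic linear-region points (hν in eV) consistent with E_g ≈ 1.90 eV.
hv = [2.0, 2.1, 2.2, 2.3]
y = [3.0 * (e - 1.90) for e in hv]
print(round(tauc_gap(hv, y), 2))  # 1.9
```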
Procedia PDF Downloads 310
1139 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads
Authors: Gaurav Kumar Sinha
Abstract:
In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments.
It acknowledges that big data processing requires distributed and parallel computing capabilities that span cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.
Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies
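A route selection decision of the kind discussed can be sketched as minimizing a simple cost model over candidate transfer paths; the route list, egress prices, and latency penalty below are invented for illustration and do not reflect any provider's actual pricing:

```python
def cheapest_route(size_gb, routes):
    """Pick the transfer route minimizing a toy cost model:
    egress price per GB plus a latency penalty (both illustrative)."""
    def cost(r):
        return size_gb * r["egress_per_gb"] + r["latency_ms"] * r["penalty_per_ms"]
    return min(routes, key=cost)["name"]

# Hypothetical routes between clouds A and B, one direct and one via an edge cache.
routes = [
    {"name": "A->B direct", "egress_per_gb": 0.09, "latency_ms": 40,  "penalty_per_ms": 0.01},
    {"name": "A->edge->B",  "egress_per_gb": 0.05, "latency_ms": 120, "penalty_per_ms": 0.01},
]
print(cheapest_route(500, routes))  # A->edge->B: cheaper egress wins for bulk data
print(cheapest_route(10, routes))   # A->B direct: latency dominates for small transfers
```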
Procedia PDF Downloads 69
1138 Environmental Performance Measurement for Network-Level Pavement Management
Authors: Jessica Achebe, Susan Tighe
Abstract:
The recent Canadian infrastructure report card reveals the unhealthy state of municipal infrastructure and the intensified challenges faced by municipalities in maintaining adequate infrastructure performance thresholds and meeting users' required service levels. For a road agency, the huge funding gap is inflated by growing concerns about the environmental repercussions of road construction, operation, and maintenance activities. Reducing material consumption and greenhouse gas emissions when maintaining and rehabilitating road networks can achieve added benefits, including improved life cycle performance of pavements, reduced climate change impacts and human health effects due to less air pollution, improved productivity due to optimal allocation of resources, and reduced road user costs. Incorporating environmental sustainability measures into pavement management is a widely cited and studied solution. However, measuring the environmental performance of a road network is still far from established practice in road network management; moreover, an explicit agency-wide environmental sustainability or sustainable maintenance specification is missing. To address this challenge, the present research focuses on the environmental sustainability performance of network-level pavement management. The ultimate goal is to develop a framework to incorporate environmental sustainability into pavement management systems for network-level maintenance programming. In order to achieve this goal, this study reviewed previous studies that employed environmental performance measures, as well as the suitability of environmental performance indicators for evaluating the sustainability of network-level pavement maintenance strategies. Through an industry practice survey, this paper provides a brief overview of pavement managers' motivations and barriers to making more sustainable decisions, and the data needed to support network-level environmental sustainability.
The trends in network-level sustainable pavement management are also presented, existing gaps are highlighted, and ideas are proposed for sustainable network-level pavement management.
Keywords: pavement management, sustainability, network-level evaluation, environment measures
Procedia PDF Downloads 212
1137 Integration of the Battery Passport into the eFTI Platform to Improve Digital Data Exchange in the Context of Battery Transport
Authors: Max Plotnikov, Arkadius Schier
Abstract:
To counteract climate change, the European Commission adopted the European Green Deal (EGD) in 2019. Some of the main objectives of the EGD are climate neutrality by 2050, decarbonization, sustainable mobility, and the shift from a linear economy to a circular economy in the European Union. The mobility turnaround envisages, among other things, the switch from classic internal combustion vehicles to electromobility. The aforementioned goals are therefore accompanied by increased demand for lithium-ion batteries (LIBs) and the associated logistics. However, this inevitably gives rise to challenges that need to be addressed. Depending on whether the LIB is transported by road, rail, air, or sea, there are different regulatory frameworks in the European Union that relevant players in the value chain must adhere to. LIBs are classified as Dangerous Goods Class 9, and against this backdrop, there are various restrictions that the different actors must adhere to when transporting them. Currently, the exchange of information in the value chain between the various actors is almost entirely paper-based. Especially in the transport of dangerous goods, this often leads to delays in transport or to incorrect data. The exchange of information with the authorities is particularly essential in this context. A solution for the digital exchange of information is currently being developed. Electronic freight transport information (eFTI) enables fast and secure exchange of information between the players in the freight transport process. This concept is to be used within the supply chain from 2025. Another initiative that is expected to improve the monitoring of LIBs in this context, among other things, is the battery passport. In July 2023, the latest battery regulation was adopted in the Official Journal of the European Union. This battery passport gives different actors static as well as dynamic information about the batteries, depending on their access rights.
This includes master data, such as battery weight or battery category, as well as information on the state of health or the number of negative events that the battery has experienced. The integration of the battery passport with the eFTI platform will be investigated for synergy effects that benefit the actors involved in battery transport.
Keywords: battery logistics, battery passport, data sharing, eFTI, sustainability
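The role-based access idea (static master data versus dynamic state data, visible according to access rights) can be illustrated with a toy record; the field names and roles are assumptions for illustration, not the regulation's actual data model:

```python
# Toy battery-passport record split into static master data and dynamic
# state data, with per-role read access. All names/values are hypothetical.

BATTERY_PASS = {
    "static":  {"battery_weight_kg": 450.0, "battery_category": "EV"},
    "dynamic": {"state_of_health_pct": 97.5, "negative_events": 1},
}

ACCESS_RIGHTS = {
    "carrier":   {"static"},             # transport actor: master data only
    "authority": {"static", "dynamic"},  # regulator: full view
}

def read_passport(role):
    """Return only the sections the given role is allowed to see."""
    allowed = ACCESS_RIGHTS.get(role, set())
    return {section: data for section, data in BATTERY_PASS.items()
            if section in allowed}
```

In an eFTI integration, such a filtered view is what a carrier or authority would receive when querying the shared platform.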
Procedia PDF Downloads 81
1136 Utilizing Dowel-Laminated Mass Timber Components in Residential Multifamily Structures: A Case Study
Authors: Theodore Panton
Abstract:
As cities in the United States experience critical housing shortages, mass timber presents the opportunity to address this crisis in housing supply while taking advantage of the carbon-positive benefits of sustainably forested wood fiber. Mass timber, however, currently has a low level of adoption in residential multifamily structures due to the risk-averse nature of the construction financing and Architecture/Engineering/Contracting (AEC) communities, as well as various agency approval challenges. This study demonstrates how mass timber can be used within the cost and feasibility parameters of a typical multistory residential structure and ultimately address the need for dense urban housing. This study utilizes The Garden District, a mixed-use market-rate housing project in Woodinville, Washington, as a case study to illuminate the potential of mass timber in this application. The Garden District is currently in the final stages of permit approval and will commence construction in 2023. It will be the tallest dowel-laminated timber (DLT) residential structure in the United States when completed. This case study includes economic, technical, and design reference points to demonstrate the relevance of this system and its ability to deliver “triple bottom line” results. In terms of results, the study establishes scalable and repeatable approaches to project design and delivery of mass timber in multifamily residential uses and includes economic data, technical solutions, and a summary of end-user advantages. This study discusses the third-party-tested systems for satisfying acoustical requirements within dwelling units, a key to resolving the use of mass timber within multistory residential buildings.
Lastly, the study compares the mass timber solution with a comparable cold-formed steel (CFS) system with a similar program, indicating a net carbon savings of over three million tons over the life cycle of the building.
Keywords: DLT, dowel-laminated timber, mass timber, market-rate multifamily
Procedia PDF Downloads 122
1135 Enhancing Robustness in Federated Learning through Decentralized Oracle Consensus and Adaptive Evaluation
Authors: Peiming Li
Abstract:
This paper presents an innovative blockchain-based approach to enhance the reliability and efficiency of federated learning systems. By integrating a decentralized oracle consensus mechanism into the federated learning framework, we address key challenges of data and model integrity. Our approach utilizes a network of redundant oracles, functioning as independent validators within an epoch-based training system in the federated learning model. In federated learning, data is decentralized, residing on various participants' devices. This scenario often leads to concerns about data integrity and model quality. Our solution employs blockchain technology to establish a transparent and tamper-proof environment, ensuring secure data sharing and aggregation. The decentralized oracles, a concept borrowed from blockchain systems, act as unbiased validators. They assess the contributions of each participant using a Hidden Markov Model (HMM), which is crucial for evaluating the consistency of participant inputs and safeguarding against model poisoning and malicious activities. Our methodology's distinct feature is its epoch-based training. An epoch here refers to a specific training phase where data is updated and assessed for quality and relevance. The redundant oracles work in concert to validate data updates during these epochs, enhancing the system's resilience to security threats and data corruption. The effectiveness of this system was tested using the MNIST dataset, a standard in machine learning for benchmarking. Results demonstrate that our blockchain-oriented federated learning approach significantly boosts system resilience, addressing the common challenges of federated environments. This paper aims to make these advanced concepts accessible, even to those with a limited background in blockchain or federated learning.
We provide a foundational understanding of how blockchain technology can revolutionize data integrity in decentralized systems and explain the role of oracles in maintaining model accuracy and reliability.
Keywords: federated learning system, blockchain, decentralized oracles, hidden markov model
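A minimal sketch of the epoch-based validation and aggregation idea, substituting a simple median-distance outlier rule for the paper's HMM-based consistency check (the updates and threshold are toy values):

```python
# Toy validate-then-aggregate epoch: an outlier rule stands in for the
# oracle's HMM scoring, and surviving client updates are averaged.

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def oracle_votes(updates, threshold):
    """Flag updates whose first weight strays too far from the median."""
    m = median([u[0] for u in updates])
    return [abs(u[0] - m) > threshold for u in updates]

def aggregate(updates, votes):
    """Federated averaging over the updates that passed validation."""
    kept = [u for u, flagged in zip(updates, votes) if not flagged]
    return [sum(ws) / len(ws) for ws in zip(*kept)]

updates = [[0.1, 0.2], [0.2, 0.1], [5.0, 9.0]]  # third client is poisoned
votes = oracle_votes(updates, threshold=1.0)
model = aggregate(updates, votes)  # poisoned update excluded
```

The paper's system additionally records the votes of redundant oracles on chain, so no single validator can bias which updates enter the aggregate.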
Procedia PDF Downloads 64
1134 The Science of Health Care Delivery: Improving Patient-Centered Care through an Innovative Education Model
Authors: Alison C. Essary, Victor Trastek
Abstract:
Introduction: The current state of the health care system in the U.S. is characterized by an unprecedented number of people living with multiple chronic conditions, an unsustainable rise in health care costs, inadequate access to care, and wide variation in health outcomes throughout the country. An estimated two-thirds of Americans are living with two or more chronic conditions, contributing to 75% of all health care spending. In 2013, the School for the Science of Health Care Delivery (SHCD) was charged with redesigning the health care system through education and research. Faculty in business, law, and public policy, and thought leaders in health care delivery, administration, public health, and health IT created undergraduate, graduate, and executive academic programs to address this pressing need. Faculty and students work across disciplines, and with community partners and employers, to improve care delivery and increase value for patients. Methods: Curricula apply content in health care administration and operations within the clinical context. Graduate modules are team-taught by faculty across academic units to model team-based practice. Seminars, team-based assignments, faculty mentoring, and applied projects are integral to student success. Cohort-driven models enhance networking and collaboration. This observational study evaluated two years of admissions data and one year of graduate data to assess program outcomes and inform the current graduate-level curricula. Descriptive statistics include means and percentages. Results: In fall 2013, the program received 51 applications. The mean GPA of the entering class of 37 students was 3.38. Ninety-seven percent of the fall 2013 cohort successfully completed the program (n=35). Sixty-six percent are currently employed in the health care industry (n=23).
Of the remaining 12 graduates, two successfully matriculated to medical school; one works in the original field of study; four await results on the MCAT or DAT, and five were lost to follow-up. Attrition of one student was attributed to non-academic reasons. In fall 2014, the program expanded to include both on-ground and online cohorts. Applications were evenly distributed between on-ground (n=70) and online (n=68). Thirty-eight students enrolled in the on-ground program. The mean GPA was 3.95. Ninety-five percent of students successfully completed the program (n=36). Thirty-six students enrolled in the online program. The mean GPA was 3.85. Graduate outcomes are pending. Discussion: Challenges include demographic variability between online and on-ground students; yet, both profiles are similar in that students intend to become change agents in the health care system. In the past two years, on-ground applications increased by 31%, persistence to graduation is > 95%, mean GPA is 3.67, graduates report admission to six U.S. medical schools, the Mayo Medical School integrates SHCD content within its curricula, and there is national interest in collaborating on industry and academic partnerships. This places SHCD at the forefront of developing innovative curricula to improve high-value, patient-centered care.
Keywords: delivery science, education, health care delivery, high-value care, innovation in education, patient-centered
Procedia PDF Downloads 284
1133 Ni-W-P Alloy Coating as an Alternate to Electroplated Hard Cr Coating
Authors: S. K. Ghosh, C. Srivastava, P. K. Limaye, V. Kain
Abstract:
Electroplated hard chromium is widely used in coatings and surface finishing and in the automobile and aerospace industries because of its excellent hardness, wear resistance, and corrosion properties. However, its precursor, Cr⁶⁺, is highly carcinogenic in nature, and a consensus has been adopted internationally to phase out this coating technology in favor of an alternative one. The search for alternative coatings to electroplated hard chrome is continuing worldwide. Various alloys and nanocomposites, such as Co-W alloys, Ni-graphene, and Ni-diamond nanocomposites, have already shown promising results in this regard. In this study, electroless Ni-P alloys with excellent corrosion resistance were taken as the base matrix, and incorporation of tungsten as a third alloying element was considered to improve the hardness and wear resistance of the resultant alloy coating. The present work is focused on the preparation of Ni-W-P coatings by electrodeposition with different contents of phosphorous and its effect on the electrochemical, mechanical, and tribological performance. The results were also compared with Ni-W alloys. Composition analysis by EDS showed deposition of Ni-32.85 wt% W-3.84 wt% P (designated as Ni-W-LP) and Ni-18.55 wt% W-8.73 wt% P (designated as Ni-W-HP) alloy coatings from electrolytes containing 0.006 and 0.01 M sodium hypophosphite, respectively. Inhibition of tungsten deposition in the presence of phosphorous was noted. SEM investigation showed cauliflower-like growth along with a few microcracks. The as-deposited Ni-W-P alloy coating was amorphous in nature, as confirmed by XRD investigation, and step-wise crystallization was noticed upon annealing at higher temperatures. For all the coatings, the nanohardness was found to increase after heat treatment, and typical nanohardness values obtained for samples annealed at 400°C were 18.65±0.20 GPa, 20.03±0.25 GPa, and 19.17±0.25 GPa for the Ni-W, Ni-W-LP, and Ni-W-HP alloy coatings, respectively.
Therefore, the nanohardness data show very promising results. Wear and coefficient of friction data were recorded by applying different normal loads in reciprocating motion using a ball-on-plate geometry. Post experiment, the wear mechanism was established by detailed investigation of the wear-scar morphology. Potentiodynamic measurements showed that the coating with a high content of phosphorous was the most corrosion resistant in 3.5 wt% NaCl solution.
Keywords: corrosion, electrodeposition, nanohardness, Ni-W-P alloy coating
Procedia PDF Downloads 348
1132 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model
Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson
Abstract:
The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is a nascent but rapidly growing one. This is in part driven by the Sustainable Development Goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation sufficiently swift to gain an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and thus have seen limited downstream application, as humans are generally apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a deep learning (DL) model using different resolutions of satellite imagery to estimate the welfare levels of Demographic and Health Survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural.
The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6 m per pixel at zoom level 18, while that for the machine learning model was sourced from the comparatively lower-resolution Sentinel-2 10 m per pixel data for the same cluster locations. Rank correlation coefficients between 0.31 and 0.32 achieved by the human readers were much lower than those attained by the machine learning model (0.69-0.79). This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-meter resolution satellite data, while the human readers estimated welfare levels from the higher 0.6 m spatial resolution data, in which key markers of poverty and slums, such as roofing and road quality, are discernible. It is important to note, however, that the human readers did not receive any training before the ratings, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall of limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship, namely eXplainable Artificial Intelligence, through a collaborative rather than a comparative framework.
Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania
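The rank correlation statistic being compared (Spearman's rho) is straightforward to reproduce; a pure-Python sketch on made-up toy ratings, not the study's data:

```python
# Spearman rank correlation: Pearson correlation of the ranks, with
# average ranks assigned to ties. Toy wealth-quintile ratings below.

def ranks(xs):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

truth   = [1, 2, 3, 4, 5]  # survey wealth quintiles (ground truth)
guesses = [2, 1, 3, 5, 4]  # one reader's estimates
rho = spearman(truth, guesses)  # 0.8 for this toy example
```

In the study, this coefficient is computed between each reader's (or the model's) cluster ratings and the survey's wealth quintiles.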
Procedia PDF Downloads 107
1131 X-Ray Crystallographic Studies on BPSL2418 from Burkholderia pseudomallei
Authors: Mona Alharbi
Abstract:
Melioidosis has emerged as a lethal disease. Unfortunately, the molecular mechanisms of virulence and pathogenicity of Burkholderia pseudomallei remain unknown. However, proteomics research has selected putative targets in B. pseudomallei that might play roles in B. pseudomallei virulence. The putative protein BPSL2418 has been predicted to be a free methionine sulfoxide reductase, and interestingly, there is a link between the level of methionine sulfoxide in pathogen tissues and virulence. Therefore, in this work we describe the cloning, expression, purification, and crystallization of BPSL2418 and the solution of its 3D structure using X-ray crystallography. We also aimed to identify the substrate-bound and reduced forms of the enzyme to understand the role of BPSL2418. The gene encoding BPSL2418 from B. pseudomallei was amplified by PCR, recloned into the pETBlue-1 vector, and transformed into E. coli Tuner DE3 pLacI. BPSL2418 was overexpressed in E. coli Tuner DE3 pLacI, induced by 300 μM IPTG for 4 h at 37°C, and then purified to better than 95% purity. The pure BPSL2418 was crystallized with PEG 4000 and PEG 6000 as precipitants in several conditions. Diffraction data were collected to 1.2 Å resolution. The crystals belonged to space group P2 21 21 with unit-cell parameters a = 42.24 Å, b = 53.48 Å, c = 60.54 Å, α = β = γ = 90°. The MES-bound structure of BPSL2418 was solved by molecular replacement with the known structure 3ksf using the PHASER program. The structure is composed of six antiparallel β-strands, four α-helices, and two loops. BPSL2418 shows high homology with the GAF-domain fRMsr enzymes, which suggests that BPSL2418 might act as a methionine sulfoxide reductase. The amino acid alignment of the fRMsrs, including BPSL2418, shows that the three cysteines thought to catalyze the reduction are fully conserved. BPSL2418 contains the three conserved cysteines (Cys⁷⁵, Cys⁸⁵ and Cys¹⁰⁹).
The active site contains the six antiparallel β-strands and two loops, where the disulfide bond is formed between Cys⁷⁵ and Cys¹⁰⁹. X-ray structures of the free-methionine-sulfoxide-bound and native forms of BPSL2418 were solved to increase the understanding of the BPSL2418 catalytic mechanism.
Keywords: X-Ray Crystallography, BPSL2418, Burkholderia pseudomallei, Melioidosis
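As a quick numerical aside (our calculation, not part of the abstract): because all cell angles are 90°, the unit-cell volume follows from the general triclinic formula as simply a·b·c:

```python
# Unit-cell volume from the general triclinic formula; for the reported
# orthorhombic cell (alpha = beta = gamma = 90 deg) it reduces to a*b*c.

import math

def cell_volume(a, b, c, alpha=90.0, beta=90.0, gamma=90.0):
    """Cell volume in cubic angstroms; lengths in angstroms, angles in degrees."""
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * math.sqrt(
        1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg)

# reported parameters: a = 42.24, b = 53.48, c = 60.54 (angstroms)
v = cell_volume(42.24, 53.48, 60.54)  # roughly 1.37e5 cubic angstroms
```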
Procedia PDF Downloads 248
1130 Series Connected GaN Resonant Tunneling Diodes for Multiple-Valued Logic
Authors: Fang Liu, JunShuai Xue, JiaJia Yao, XueYan Yang, ZuMao Li, GuanLin Wu, HePeng Zhang, ZhiPeng Sun
Abstract:
The III-nitride resonant tunneling diode (RTD) is one of the most promising candidates for multiple-valued logic (MVL) elements. Here, we report a monolithic integration of GaN resonant tunneling diodes to realize multiple negative differential resistance (NDR) regions for MVL applications. GaN RTDs, composed of a 2 nm quantum well embedded between two 1 nm quantum barriers, are grown by plasma-assisted molecular beam epitaxy on free-standing c-plane GaN substrates. A negative differential resistance characteristic with a peak current density of 178 kA/cm², in conjunction with a peak-to-valley current ratio (PVCR) of 2.07, is observed. Statistical properties exhibit high consistency, showing a peak current density standard deviation of almost 1%, laying the foundation for the monolithic integration. After complete electrical isolation, two diodes designed with the same area are connected in series. By solving the Poisson equation and the Schrödinger equation in one dimension, the energy band structure is calculated to explain the transport mechanism behind the negative differential resistance phenomenon. Resonant tunneling events in sequence in the series-connected RTD pair (SCRTD) form multiple NDR regions with nearly equal peak currents, yielding three stable operating states corresponding to ternary logic. A frequency multiplier circuit achieved using this integration is demonstrated, attesting to the robustness of this multiple-peak feature. This article presents a monolithic integration of SCRTDs with multiple NDR regions driven by the resonant tunneling mechanism, which can be applied to the multiple-valued logic field, promising fast operation speed and a great reduction of circuit complexity and demonstrating a new solution for nitride devices to break through the limitations of binary logic.
Keywords: GaN resonant tunneling diode, multiple-valued logic system, frequency multiplier, negative differential resistance, peak-to-valley current ratio
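The quoted peak-to-valley current ratio (PVCR) can be extracted from a swept I-V trace as sketched below; the trace is synthetic, chosen so the peak value and PVCR roughly match the reported figures:

```python
# Extracting the PVCR from a current trace of an NDR device: locate the
# resonance peak, then the valley that follows it. Trace values are synthetic.

def pvcr(currents):
    """Peak current divided by the valley current that follows the peak."""
    peak_idx = max(range(len(currents)), key=lambda i: currents[i])
    valley = min(currents[peak_idx:])  # minimum after the peak
    return currents[peak_idx] / valley

# synthetic I-V: current rises to the resonance peak (178, matching the
# reported peak density in kA/cm^2), drops through the NDR region, rises again
trace = [0.0, 40.0, 120.0, 178.0, 130.0, 86.0, 95.0, 140.0]
ratio = pvcr(trace)  # about 2.07 for this toy trace
```

For the series-connected pair, the same extraction would be applied to each of the multiple NDR regions in turn.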
Procedia PDF Downloads 82
1129 Surface Water Flow of Urban Areas and Sustainable Urban Planning
Authors: Sheetal Sharma
Abstract:
Urban planning is associated with land transformation from natural areas to modified and developed ones, which leads to modification of the natural environment. Basic knowledge of the relationship between the two should be ascertained before proceeding with the development of natural areas. Changes on the land surface due to built-up pavements, roads, and similar land cover affect surface water flow. There is a gap between urban planning and the basic knowledge of hydrological processes that should be known to planners. The paper aims to identify these variations in surface flow due to urbanization over a temporal scale of 40 years using the Storm Water Management Model (SWMM), and then correlates these findings with the urban planning guidelines in the study area, along with the geological background, to find suitable combinations of land cover, soil, and guidelines. For the purpose of identifying the changes in surface flows, 19 catchments were identified with different geology and growth over 40 years, facing different groundwater level fluctuations. The increasing built-up area and varying surface runoff are studied using ArcGIS and SWMM modeling, with regression analysis for runoff. The resulting runoff for various land covers and soil groups with varying built-up conditions was observed. The modeling procedures also included observations for varying precipitation and constant built-up area in all catchments. All these observations were combined for each catchment, and a single regression curve was obtained for runoff. Thus, it was observed that alluvium with suitable land cover was better for infiltration and generated the least runoff, but excess built-up area could not be sustained on alluvial soil. Similarly, basalt had the least recharge and the most runoff, demanding maximum vegetation over it. Sandstone resulted in good recharge if planned with more open spaces and natural soils with intermittent vegetation.
Hence, these observations form a keystone base for planners when planning various land uses on different soils. This paper contributes and provides a solution to the basic knowledge gap that urban planners face during the development of natural surfaces.
Keywords: runoff, built up, roughness, recharge, temporal changes
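The qualitative conclusion that built-up cover increases runoff can be illustrated with the textbook rational method Q = C·i·A; this is a simplification of the SWMM routing and regression used in the study, and the runoff coefficients are typical reference values, not the paper's results:

```python
# Rational-method peak runoff for contrasting land covers.
# C values are typical textbook runoff coefficients (assumed, not fitted).

RUNOFF_COEFF = {"pavement": 0.90, "residential": 0.50, "vegetation": 0.15}

def peak_runoff(c, intensity_mm_per_hr, area_ha):
    """Q in cubic meters per hour: C * i [mm/h -> m/h] * A [ha -> m^2]."""
    return c * (intensity_mm_per_hr / 1000.0) * (area_ha * 10_000.0)

# same 25 mm/h storm over 2 ha: paved cover sheds six times more water
built_up = peak_runoff(RUNOFF_COEFF["pavement"], 25.0, 2.0)    # 450 m^3/h
natural  = peak_runoff(RUNOFF_COEFF["vegetation"], 25.0, 2.0)  # 75 m^3/h
```

The study's regression curves play the role of C here, folding in soil group and geology as well as land cover.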
Procedia PDF Downloads 278
1128 A Method for Solid-Liquid Separation of Cs+ from Radioactive Waste by Using Ionic Liquids and Extractants
Authors: J. W. Choi, S. Y. Cho, H. J. Lee, W. Z. Oh, S. J. Choi
Abstract:
Ionic liquids (ILs), which are alternatives to conventional organic solvents, were used for the extraction of Cs ions. ILs, as useful, environmentally friendly green solvents, have recently been applied as replacements for traditional volatile organic compounds (VOCs) in the liquid/liquid extraction of heavy metal ions as well as organic and inorganic species and pollutants. Thus, ionic liquids were used for the extraction of Cs ions from liquid radioactive waste. In most cases, Cs ions are present in radioactive wastes at very low concentrations, approximately less than 1 ppm. Therefore, unlike in established extraction systems, the required amount of ILs as extractant is comparatively very small. This extraction method involves a cation exchange mechanism in which a Cs ion transfers to the organic phase and binds to one crown ether by chelation in exchange for a single IL cation, IL_cation+, transferring to the aqueous phase. This extraction system showed solid-liquid separation, in which the ionic liquid 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide (C2mimTf2N) and the crown ether dicyclohexano-18-crown-6 (DCH18C6) were both used in very small amounts as solvent and extractant, respectively. A 30 mM CsNO3 solution was used to simulate cesium ions in the waste solution. Generally, in liquid-liquid extraction, the molar ratio of CE:Cs+:ILs is 1:5~10:>100, while our applied molar ratio of CE:Cs+:ILs was 1:2:1~10. The quantities of CE and Cs ions were fixed at 0.6 and 1.2 mmol, respectively. The phenomenon of precipitation showed two kinds of separation: solid-liquid separation at the ratios of 1:2:1 and 1:2:2, and solid-liquid-liquid separation (3 phases) at the ratios of 1:2:5 and 1:2:10. In the last system, the 3 phases were precipitate-ionic liquid-aqueous. The precipitate was verified to consist of Cs+, DCH18C6, and Tf2N-, based on the cation exchange mechanism.
We analyzed the precipitate using scanning electron microscopy with X-ray microanalysis (SEM-EDS), an elemental analyser, Fourier transform infrared spectroscopy (FT-IR), and differential scanning calorimetry (DSC). The experimental results demonstrated an easy extraction method and confirmed the composition of the solid precipitate. We also found that the complex formation ratio of Cs+ to DCH18C6 is 0.88:1, regardless of the C2mimTf2N quantity.
Keywords: extraction, precipitation, solid-liquid separation, ionic liquid, precipitate
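The amounts behind the tested ratios follow from simple stoichiometric arithmetic, assuming the fixed 0.6 mmol of CE and 1.2 mmol of Cs+ stated above:

```python
# Amount of ionic liquid needed for each tested CE:Cs+:IL molar ratio,
# with the crown ether fixed at 0.6 mmol and Cs+ at 1.2 mmol (1:2 CE:Cs).

CE_MMOL, CS_MMOL = 0.6, 1.2

def il_amount_mmol(ratio_ce, ratio_cs, ratio_il):
    """IL (mmol) needed to reach a CE:Cs:IL ratio with CE fixed at 0.6 mmol."""
    # sanity check: the fixed amounts must already satisfy the CE:Cs part
    assert abs(CS_MMOL / CE_MMOL - ratio_cs / ratio_ce) < 1e-9
    return CE_MMOL * ratio_il / ratio_ce

# ratios tested in the abstract: 1:2:1, 1:2:2, 1:2:5, 1:2:10
tested = {(1, 2, r): il_amount_mmol(1, 2, r) for r in (1, 2, 5, 10)}
```

So the IL charge ranges from 0.6 mmol (1:2:1, solid-liquid) up to 6.0 mmol (1:2:10, solid-liquid-liquid).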
Procedia PDF Downloads 423
1127 Microfluidic Device for Real-Time Electrical Impedance Measurements of Biological Cells
Authors: Anil Koklu, Amin Mansoorifar, Ali Beskok
Abstract:
Dielectric spectroscopy (DS) is a noninvasive, label-free technique for long-term, real-time measurements of the impedance spectra of biological cells. DS enables characterization of cellular dielectric properties such as membrane capacitance and cytoplasmic conductivity. We have developed a lab-on-a-chip device that uses an electro-activated microwell array for loading, DS measurements, and unloading of biological cells. We utilized dielectrophoresis (DEP) to capture target cells inside the wells and release them after the DS measurements. DEP is a label-free technique that exploits differences among the dielectric properties of particles. In detail, DEP is the motion of polarizable particles suspended in an ionic solution and subjected to a spatially non-uniform external electric field. To the best of our knowledge, this is the first microfluidic chip that combines DEP and DS to analyze biological cells using electro-activated wells. Device performance was tested using two different prostate cancer cell lines (RV122, PC-3). Impedance measurements were conducted at 0.2 V in the 10 kHz to 40 MHz range with 6 s time resolution. An equivalent circuit model was developed to extract the cell membrane capacitance and cell cytoplasmic conductivity from the impedance spectra. We report the time course of the variations in dielectric properties of PC-3 and RV122 cells suspended in a low conductivity buffer (LCB), which enhances dielectrophoretic and impedance responses, and their response to a sudden pH change from a pH of 7.3 to a pH of 5.8. It is shown that the microfluidic chip allowed online measurements of the dielectric properties of prostate cancer cells and the assessment of cellular-level variations under external stimuli such as different buffer conductivities and pH. Based on these data, we intend to deploy the current device for single cell measurements by fabricating separately addressable N × N electrode platforms.
Such a device will allow time-dependent dielectric response measurements for individual cells, with the ability to selectively release them using negative DEP and pressure-driven flow.
Keywords: microfluidic, microfabrication, lab on a chip, AC electrokinetics, dielectric spectroscopy
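An equivalent-circuit calculation of the kind described can be sketched with a medium resistance in series with a parallel R-C element (a common simplification); the component values are illustrative, not the paper's fitted parameters:

```python
# Impedance spectrum of Rs in series with (Rp parallel to C): at low
# frequency |Z| -> Rs + Rp, at high frequency the capacitor shorts Rp
# and |Z| -> Rs. Component values are illustrative assumptions.

import math

def impedance(freq_hz, rs, rp, c):
    """Complex Z(f) of Rs + (Rp || C)."""
    w = 2 * math.pi * freq_hz
    z_c = 1 / (1j * w * c)           # capacitor impedance
    z_par = (rp * z_c) / (rp + z_c)  # parallel combination
    return rs + z_par

# sweep spanning the 10 kHz - 40 MHz window used in the paper
spectrum = [impedance(f, rs=1e3, rp=1e5, c=1e-10)
            for f in (1e4, 1e5, 1e6, 1e7, 4e7)]
```

Fitting such a model to a measured spectrum is how the membrane capacitance and cytoplasmic conductivity are extracted in practice.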
Procedia PDF Downloads 151
1126 Artificial Neural Networks and Hidden Markov Model in Landslides Prediction
Authors: C. S. Subhashini, H. L. Premaratne
Abstract:
Landslides are the most recurrent and prominent disaster in Sri Lanka. Sri Lanka has been subjected to a number of extreme landslide disasters that resulted in significant loss of life, material damage, and distress. Solutions for preparedness and mitigation need to be explored to reduce the recurrent losses associated with landslides. Artificial Neural Networks (ANNs) and Hidden Markov Models (HMMs) are now widely used in many computer applications spanning multiple domains. This research examines the effectiveness of using Artificial Neural Networks and Hidden Markov Models in landslide prediction and the possibility of applying this modern technology to predict landslides in a prominent geographical area of Sri Lanka. A thorough survey was conducted with the participation of resource persons from several national universities in Sri Lanka to identify and rank the influencing factors for landslides. A landslide database was created using existing topographic, soil, drainage, and land cover maps and historical data. The landslide-related factors, which include external factors (rainfall and number of previous occurrences) and internal factors (soil material, geology, land use, curvature, soil texture, slope, aspect, soil drainage, and soil effective thickness), are extracted from the landslide database. These factors are used to recognize the possibility of landslide occurrence using an ANN and an HMM. Each model acquires the relationship between the landslide factors and the hazard index during the training session. These models, with the landslide-related factors as inputs, are trained to predict three classes, namely 'landslide occurs', 'landslide does not occur', and 'landslide likely to occur'. Once trained, the models are able to predict the most likely class for the prevailing data.
Finally, the two models were compared with regard to prediction accuracy, False Acceptance Rate, and False Rejection Rate. This research indicates that the Artificial Neural Network can be used as a strong decision support system, predicting landslides more efficiently and effectively than the Hidden Markov Model.
Keywords: landslides, influencing factors, neural network model, hidden Markov model
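The three-class classification step described above can be illustrated with a minimal sketch of a one-hidden-layer neural network forward pass. This is a toy illustration, not the authors' trained model: the factor list is taken from the abstract, but the network size, weights (untrained and random here), and input scaling are assumptions.

```python
import math
import random

CLASSES = ["landslide occurs", "landslide does not occur", "landslide likely to occur"]

# Eleven factors from the abstract: two external plus nine internal,
# each assumed to be normalised to [0, 1] before being fed to the network.
FACTORS = ["rainfall", "previous_occurrences", "soil_material", "geology",
           "land_use", "curvature", "soil_texture", "slope", "aspect",
           "soil_drainage", "soil_effective_thickness"]

def softmax(scores):
    """Convert raw class scores into probabilities that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

class TinyANN:
    """One hidden layer: 11 inputs -> `hidden` tanh units -> 3 class scores."""
    def __init__(self, hidden=8, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in FACTORS] for _ in range(hidden)]
        self.w2 = [[rng.uniform(-1, 1) for _ in range(hidden)] for _ in CLASSES]

    def predict(self, x):
        # Hidden activations, then class scores, then the argmax class label.
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in self.w1]
        scores = [sum(w * hi for w, hi in zip(row, h)) for row in self.w2]
        probs = softmax(scores)
        return CLASSES[probs.index(max(probs))]

sample = [0.9, 0.7, 0.5, 0.4, 0.6, 0.3, 0.5, 0.8, 0.2, 0.4, 0.6]
print(TinyANN().predict(sample))
```

In practice the weights would be learned from the landslide database during the training session the abstract describes; only the forward pass is sketched here.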
Procedia PDF Downloads 385
1125 Hand Gesture Detection via EmguCV Canny Pruning
Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae
Abstract:
Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI), with applications in Human-Computer Interaction (HCI), Expert Systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool used mostly by deaf communities and people with speech disorders, and communication barriers arise when these communities interact with others. This research aims to build a hand recognition system for Lesotho’s Sesotho and English language interpretation, helping to bridge the communication problems encountered by the communities mentioned above. The system has several processing modules: a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is the process of identifying an object. The proposed system uses Canny-pruned Haar cascade detection algorithms; Canny pruning applies Canny edge detection, an optimal image processing algorithm used to detect the edges of an object. The system also employs a skin detection algorithm, which performs background subtraction and computes the convex hull and centroid to assist the detection process. Recognition is the process of gesture classification; template matching classifies each hand gesture in real time. The system was tested using various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and ultimately recognition. The detection rate is directly proportional to the distance of the hand from the camera, and of the lighting conditions considered, the higher the light intensity, the faster the detection rate.
Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible route towards a lightweight, inexpensive system that can be used for sign language interpretation.
Keywords: Canny pruning, hand recognition, machine learning, skin tracking
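The skin-detection step described above (background subtraction followed by centroid computation on the resulting mask) can be sketched in pure Python on a tiny synthetic frame. This is an illustrative toy, not the authors' implementation: their system uses EmguCV with Canny-pruned Haar cascades, and the frame, threshold, and pixel values below are made up for the example; the convex-hull step is omitted.

```python
def subtract_background(frame, background, threshold=30):
    """Binary mask of pixels that differ from the background by more than `threshold`."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def centroid(mask):
    """Centroid (row, col) of the foreground pixels, or None if the mask is empty."""
    pts = [(r, c) for r, row in enumerate(mask)
           for c, v in enumerate(row) if v]
    if not pts:
        return None
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)

# A 5x5 grey background and a frame containing a bright 3x3 "hand" patch.
background = [[10] * 5 for _ in range(5)]
frame = [row[:] for row in background]
for r in range(1, 4):
    for c in range(1, 4):
        frame[r][c] = 200

mask = subtract_background(frame, background)
print(centroid(mask))  # -> (2.0, 2.0), the centre of the patch
```

A real pipeline would run this per video frame and pass the masked region to the Haar cascade detector and template matcher.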
Procedia PDF Downloads 185
1124 Challenges of the Implementation of Real Time Online Learning in a South African Context
Authors: Thifhuriwi Emmanuel Madzunye, Patricia Harpur, Ephias Ruhode
Abstract:
A review of the pertinent literature identified a gap concerning the hindrances and opportunities accompanying the implementation of real-time online learning systems (RTOLs) in rural areas. Whilst RTOLs present a possible solution to teaching and learning issues in rural areas, little is known about the implementation of digital strategies among schools in isolated communities. This study explores associated guidelines that have the potential to inform decision-making where Internet-based education could improve educational opportunities. A systematic literature review, which consolidates and focuses otherwise disparate literature, served to collect interlinked data from specific sources in a structured manner. During qualitative data analysis (QDA) of selected publications via a QDA tool, ATLAS.ti, the following overarching themes emerged: digital divide, educational strategy, human factors, and support. Furthermore, findings from data collection and the literature review suggest that significant factors, including a lack of digital knowledge, infrastructure shortcomings such as a shortage of computers, poor Internet connectivity, and constrained real-time online access, may limit students’ progress. The study recommends that timeous consideration should be given to the influence of the digital divide. Additionally, the evolution of an educational strategy that adopts digital approaches, a focus on training role-players and stakeholders concerning human factors, and the seeking of governmental funding and support are essential to the implementation and success of RTOLs.
Keywords: communication, digital divide, digital skills, distance, educational strategy, government, ICT, infrastructures, learners, Limpopo, lukalo, network, online learning systems, political unrest, real-time, real-time online learning, real-time online learning system, pass rate, resources, rural area, school, support, teachers, teaching and learning and training
Procedia PDF Downloads 337
1123 Computational Modelling of Epoxy-Graphene Composite Adhesive towards the Development of Cryosorption Pump
Authors: Ravi Verma
Abstract:
A cryosorption pump is the best solution to achieve a clean, vibration-free ultra-high vacuum, and its operation is free from the influence of electric and magnetic fields. Owing to these attributes, this pump is used in space simulation chambers to create an ultra-high vacuum. The cryosorption pump comprises three parts: (a) a panel cooled with the help of a cryogen or cryocooler, (b) an adsorbent used to adsorb gas molecules, and (c) an epoxy that holds the adsorbent and the panel together, thereby aiding heat transfer from the adsorbent to the panel. The performance of a cryosorption pump depends on the temperature of the adsorbent and hence on the thermal conductivity of the epoxy. Therefore, we have made an attempt to increase the thermal conductivity of the epoxy adhesive by mixing in nano-sized graphene filler particles. The thermal conductivity of the epoxy-graphene composite adhesive is measured with the help of an indigenously developed experimental setup over the temperature range from 4.5 K to 7 K, which is generally the operating range of a cryosorption pump for the efficient pumping of hydrogen and helium gas. In this article, we present the experimental results for the epoxy-graphene composite adhesive in this temperature range. We also propose an analytical heat conduction model to find the thermal conductivity of the composite, in which the filler particles, such as graphene, are randomly distributed in a base matrix of epoxy; the model considers a completely spatially random distribution of filler particles, described by a binomial distribution. The results obtained from the model have been compared with the experimental results as well as with other established models. The developed model is able to predict the thermal conductivity in both the isotropic and anisotropic regions over the required temperature range from 4.5 K to 7 K.
Owing to the non-empirical nature of the proposed model, it will also be useful for predicting other properties of composite materials involving a filler in a base matrix. The present studies will aid the understanding of low-temperature heat transfer, which in turn will be useful for the development of high-performance cryosorption pumps.
Keywords: composite adhesive, computational modelling, cryosorption pump, thermal conductivity
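The idea of a binomially distributed filler can be illustrated with a toy one-dimensional version of such a model. This is a sketch under simple assumptions, not the authors' actual model: the heat-flow path is divided into cells in series, each cell's filler occupancy is a binomial draw, and a parallel rule of mixtures gives the local conductivity. All parameter values (cell counts, conductivities) are illustrative, not measured.

```python
import random

def cell_conductivity(filler_count, n_sites, k_epoxy, k_graphene):
    """Parallel rule of mixtures for one cell with `filler_count` of `n_sites` filled."""
    phi = filler_count / n_sites            # local filler volume fraction
    return phi * k_graphene + (1.0 - phi) * k_epoxy

def effective_conductivity(n_cells, n_sites, p_fill, k_epoxy, k_graphene, seed=0):
    """Cells in series along the heat-flow direction.

    Each cell's occupancy is a Binomial(n_sites, p_fill) draw, mirroring the
    spatially random filler distribution; series resistances are then summed.
    """
    rng = random.Random(seed)
    resistances = []
    for _ in range(n_cells):
        filled = sum(rng.random() < p_fill for _ in range(n_sites))  # binomial draw
        resistances.append(1.0 / cell_conductivity(filled, n_sites, k_epoxy, k_graphene))
    return n_cells / sum(resistances)       # series combination, unit cell length

k_eff = effective_conductivity(n_cells=1000, n_sites=50, p_fill=0.05,
                               k_epoxy=0.05, k_graphene=100.0)
print(round(k_eff, 3))
```

The series (harmonic) combination is dominated by filler-poor cells, which is one way a random distribution can depress the effective conductivity below the simple mixture value; the authors' full model additionally treats the isotropic and anisotropic regimes over 4.5 K to 7 K.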
Procedia PDF Downloads 90