Search results for: grid-interactive efficient buildings (GEB)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6585

1845 Development of Drug Delivery Systems for Endoplasmic Reticulum Amino Peptidases Modulators Using Electrospinning

Authors: Filipa Vasconcelos

Abstract:

The administration of endoplasmic reticulum aminopeptidase (ERAP1 or ERAP2) inhibitors can be used in therapeutic approaches against cancer and autoimmune diseases. However, one of the main shortcomings of drug delivery systems (DDS) is off-target drug distribution, which can increase side effects in the patient's body. To overcome such limitations, the encapsulation of four representative ERAP inhibitors into polycaprolactone (PCL), polyvinyl alcohol (PVA), crosslinked PVA, and PVA with nanoparticles (liposomes) electrospun fibrous meshes is proposed as a safe and controlled drug release system. The use of electrospun fibrous meshes as a DDS allows efficient solvent evaporation, leaving the encapsulated drug little time to recrystallize; continuous delivery of the drug while the fibers degrade; prevention of initial burst release (sustained release); tunable dosages; and the encapsulation of other agents. This is possible due to the fibers' small diameters and resemblance to the extracellular matrix (confirmed by scanning electron microscopy), high specific surface area, and good mechanical strength/stability. Furthermore, release studies conducted on PCL, PVA, crosslinked PVA, and PVA with liposome electrospun fibrous meshes, each with an ERAP compound encapsulated, demonstrated that they were capable of releasing >60%, 50%, 40%, and 45% of the total ERAP concentration, respectively. Fibrous meshes with the ERAP_E compound encapsulated achieved higher released concentrations (75.65%, 62.41%, 56.05%, and 65.39%, respectively). The toxicity of the fibrous meshes with encapsulated compounds is currently being assessed in vitro, along with pharmacokinetic and pharmacodynamic studies. The last step will be implantation of the drug-loaded fibrous meshes in vivo.

Keywords: drug delivery, electrospinning, ERAP inhibitors, liposomes

Procedia PDF Downloads 108
1844 Balancing Justice: A Critical Analysis of Plea Bargaining's Impact on Uganda's Criminal Justice System

Authors: Mukisa Daphine Letisha

Abstract:

Plea bargaining, a practice often associated with more developed legal systems, has emerged as a significant tool within Uganda's criminal justice system despite its absence in formal legal structures inherited from its colonial past. Initiated in 2013 with the aim of reducing case backlogs, expediting trials, and addressing prison congestion, plea bargaining reflects a pragmatic response to systemic challenges. While rooted in international statutes and domestic constitutional provisions, its implementation relies heavily on the Judicature (Plea Bargain) Rules of 2016, which outline procedural requirements and safeguards. Advocates argue that plea bargaining has yielded tangible benefits, including a reduction in case backlog and efficient allocation of resources, with notable support from judicial and prosecutorial authorities. Case examples demonstrate successful outcomes, with accused individuals benefitting from reduced sentences in exchange for guilty pleas. However, challenges persist, including procedural irregularities, inadequate statutory provisions, and concerns about coercion and imbalance of power between prosecutors and accused individuals. To enhance efficacy, recommendations focus on establishing monitoring mechanisms, stakeholder training, and public sensitization campaigns. In conclusion, while plea bargaining offers potential advantages in streamlining Uganda's criminal justice system, addressing its challenges requires careful consideration of procedural safeguards and stakeholder engagement to ensure fairness and integrity in the administration of justice.

Keywords: plea-bargaining, criminal-justice system, uganda, efficacy

Procedia PDF Downloads 63
1843 Accounting for Rice Productivity Heterogeneity in Ghana: The Two-Step Stochastic Metafrontier Approach

Authors: Franklin Nantui Mabe, Samuel A. Donkoh, Seidu Al-Hassan

Abstract:

Rice yields among agro-ecological zones are heterogeneous. Farmers, researchers and policy makers are making frantic efforts to bridge rice yield gaps between agro-ecological zones through the promotion of improved agricultural technologies (IATs). Farmers are also modifying these IATs and blending them with indigenous farming practices (IFPs) to form farmer innovation systems (FISs). Different metafrontier models have been used to estimate productivity performance and its drivers. This study used the two-step stochastic metafrontier model to estimate the productivity performance of rice farmers and its determinants in the Guinea Savannah Zone (GSZ), Forest-Savannah Transition Zone (FSTZ) and Coastal Savannah Zone (CSZ). The study used both primary and secondary data. Farmers in CSZ are the most technically efficient. Farmers' technical inefficiency is negatively influenced by age, sex, household size, years of education, extension visits, contract farming, access to improved seeds, access to irrigation, high rainfall, less lodging of rice, and well-coordinated, synergized adoption of technologies. Although farmers in CSZ perform well in terms of rice yield, they also have the greatest potential to increase yield, since they had the lowest technology gap ratio (TGR). It is recommended that the government, through the Ministry of Food and Agriculture, development partners and private companies promote the adoption of IATs and educate farmers on how to coordinate and synergize adoption of the whole package. Contract farming and agricultural extension intensification should be vigorously pursued.
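The second step of the approach rests on a compact decomposition: a farmer's efficiency against the pooled metafrontier is the product of the zone-level technical efficiency and the zone's technology gap ratio. A minimal sketch (the numbers are illustrative, not from the study):

```python
def metafrontier_te(te_zone, tgr):
    """Two-step metafrontier decomposition: TE* = TE_zone * TGR, where the
    technology gap ratio (TGR) measures how close a zone's frontier lies to
    the pooled metafrontier."""
    return te_zone * tgr

# Illustrative only: a farmer at 85% efficiency relative to the zone frontier,
# in a zone whose frontier attains 70% of the metafrontier.
te_star = metafrontier_te(0.85, 0.70)
```

A low TGR, as reported for CSZ, means the zone frontier itself has room to move toward the metafrontier, which is why those farmers have the largest yield-gain potential.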

Keywords: efficiency, farmer innovation systems, improved agricultural technologies, two-step stochastic metafrontier approach

Procedia PDF Downloads 271
1842 Low-Cost Monitoring System for Hydroponic Urban Vertical Farms

Authors: Francesco Ruscio, Paolo Paoletti, Jens Thomas, Paul Myers, Sebastiano Fichera

Abstract:

This paper presents the development of a low-cost monitoring system for a hydroponic urban vertical farm, enabling its automation and a quantitative assessment of the farm performance. Urban farming has seen increasing interest in the last decade thanks to the development of energy efficient and affordable LED lights; however, the optimal configuration of such systems (i.e. amount of nutrients, light-on time, ambient temperature etc.) is mostly based on the farmers’ experience and empirical guidelines. Moreover, even if simple, the maintenance of such systems is labor intensive as it requires water to be topped-up periodically, mixing of the nutrients etc. To unlock the full potential of urban farming, a quantitative understanding of the role that each variable plays in the growth of the plants is needed, together with a higher degree of automation. The low-cost monitoring system proposed in this paper is a step toward filling this knowledge and technological gap, as it enables collection of sensor data related to water and air temperature, water level, humidity, pressure, light intensity, pH and electric conductivity without requiring any human intervention. More sensors and actuators can also easily be added thanks to the modular design of the proposed platform. Data can be accessed remotely via a simple web interface. The proposed platform can be used both for quantitatively optimizing the setup of the farms and for automating some of the most labor-intensive maintenance activities. Moreover, such monitoring system can also potentially be used for high-level decision making, once enough data are collected.
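As a rough illustration of the modular, unattended sensor polling such a platform performs, here is a minimal sketch; the sensor names and the simulated readings are purely hypothetical, not taken from the paper's hardware:

```python
import random
import time

# Hypothetical sensor registry. On real hardware each read() would wrap an
# ADC or I2C driver; here the readings are simulated for illustration.
SENSORS = {
    "water_temp_c": lambda: 21.0 + random.uniform(-0.5, 0.5),
    "ph": lambda: 6.0 + random.uniform(-0.2, 0.2),
    "ec_ms_cm": lambda: 1.8 + random.uniform(-0.1, 0.1),
}

def poll_once():
    """Collect one timestamped reading from every registered sensor."""
    return {"t": time.time(), **{name: read() for name, read in SENSORS.items()}}

sample = poll_once()
```

New sensors are added by registering another entry in the dictionary, which mirrors the modular design the abstract describes.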

Keywords: automation, hydroponics, internet of things, monitoring system, urban farming

Procedia PDF Downloads 163
1841 Soybean-Based Farming System Assessment in Pasuruan, East Java, Indonesia

Authors: Mohammad Saeri, Noor Rizkiyah, Kambang Vetrani Asie, Titin Apung Atikah

Abstract:

The study aims to assess an efficient, location-specific soybean farming technology package by assisting farmers in applying the suggested technology. A superimposed trial was conducted to determine the effect of NPK fertilizer on soybean growth and yield, together with an improved-variety test for the dissemination of improved varieties. The assessment was conducted with the Sumber Rejeki farmers' group, Kepulungan Village, Gempol Sub-district, Pasuruan Regency, the soybean center of the Pasuruan area. The study involved 38 farmers with 25 ha of soybean area and was held from July to October 2012. The recommended technology package, agreed upon at the socialization stage and used in this research, was: Argomulyo variety seed at 40 kg/ha, planting by drilling at a spacing of 40x10 cm with 2-3 seeds per hole, and fertilization according to the East Java AIAT recommendation of 50 kg Urea, 100 kg SP-36 and 50 kg KCl. Farmers around the research location served as a control group. The assessed soybean farming system was considered effective because it increased production by up to 38%. The farming analysis showed that collaborator farmers obtained higher returns than non-collaborator farmers, with R/C ratios of 2.03 and 1.54, respectively. The Argomulyo variety has good prospects for development due to its high yield of about 2 tons/ha and its larger seeds. The NPK fertilization test showed that fertilization had only a minor effect on yield.

Keywords: farming system, soybean, variety, location specific

Procedia PDF Downloads 182
1840 Production of Organic Solvent Tolerant Hydrolytic Enzymes (Amylase and Protease) by Bacteria Isolated from Soil of a Dairy Farm

Authors: Alok Kumar, Hari Ram, Lebin Thomas, Ved Pal Singh

Abstract:

Organic solvent tolerant amylases and proteases of microbial origin are in great demand for their application in the transglycosylation of water-insoluble flavonoids and in peptide-synthesizing reactions in organic media. Most amylases and proteases are unstable in the presence of organic solvents. In the present work, two bacterial strains, M-11 and VP-07, were isolated from a soil sample of a dairy farm in Delhi, India, for the efficient production of extracellular amylase and protease through screening on starch agar (SA) and skimmed milk agar (SMA) plates, respectively. Both strains were identified based on morphological, biochemical and 16S rRNA gene sequencing methods. After analysis with the EzTaxon software, strains M-11 and VP-07 were found to have maximum pairwise similarities of 98.63% and 100% with Bacillus subtilis subsp. inaquosorum BGSC 3A28 and Bacillus anthracis ATCC 14578, and were therefore identified as Bacillus sp. UKS1 and Bacillus sp. UKS2, respectively. A time-course study of enzyme activity and bacterial growth showed that both strains exhibited typical sigmoid growth behavior, and that maximum production of amylase (180 U/ml) and protease (78 U/ml) by these strains (UKS1 and UKS2) occurred during the stationary phase of growth, at 24 and 20 h, respectively. Thereafter, both amylase and protease were tested for their tolerance towards organic solvents and were found to be active as well as stable in p-xylene (130% and 115%), chloroform (110% and 112%), isooctane (119% and 107%), benzene (121% and 104%), n-hexane (116% and 103%) and toluene (112% and 101%, respectively). Owing to such properties, these enzymes can be exploited for potential application in industrial organic synthesis.

Keywords: amylase, enzyme activity, industrial applications, organic solvent tolerant, protease

Procedia PDF Downloads 348
1839 Influence of Random Fibre Packing on the Compressive Strength of Fibre Reinforced Plastic

Authors: Y. Wang, S. Zhang, X. Chen

Abstract:

The longitudinal compressive strength of fibre reinforced plastic (FRP) possesses a large stochastic variability, which limits efficient application of composite structures. This study aims to address how random fibre packing affects the uncertainty of FRP compressive strength. A novel approach is proposed to generate random fibre packing configurations by a combination of Latin hypercube sampling and random sequential expansion. A 3D nonlinear finite element model is built which incorporates both matrix plasticity and fibre geometrical instability. The matrix is modeled by isotropic, ideally elasto-plastic solid elements, and the fibres are modeled by linear-elastic rebar elements. Composites with a series of different nominal fibre volume fractions are studied. Premature fibre waviness of different magnitudes and directions is introduced into the finite element model. Compressive tests on uni-directional CFRP (carbon fibre reinforced plastic) are conducted following ASTM D6641. A comparison of the 3D FE models and compressive tests clearly shows that the stochastic variation of compressive strength is partly caused by the random fibre packing, and that a normal or lognormal distribution tends to be a good fit for the probabilistic compressive strength. Furthermore, it is also observed that different random fibre packings can trigger two different fibre micro-buckling modes under longitudinal compression: out-of-plane buckling and twisted buckling. The out-of-plane buckling mode results in a much larger compressive strength, and this is the major reason why random fibre packing introduces a large uncertainty into the FRP compressive strength. This study contributes to new approaches to the quality control of FRP aimed at higher compressive strength or lower uncertainty.
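The random packing generation can be pictured as sequential placement of non-overlapping fibre centres until a target volume fraction is reached. The sketch below is a simplified stand-in (plain random sequential addition in 2D, without the Latin hypercube layer the paper combines it with; all parameters are illustrative):

```python
import math
import random

def random_fibre_packing(vf_target, r, cell=1.0, seed=0, max_tries=100000):
    """Random sequential addition of non-overlapping fibre centres in a square
    cell, stopping once the target fibre volume (area) fraction is reached.
    A simplified stand-in for the paper's LHS + random-sequential-expansion
    scheme; parameters are illustrative."""
    rng = random.Random(seed)
    centres = []
    fibre_area = math.pi * r * r
    tries = 0
    while len(centres) * fibre_area / cell ** 2 < vf_target and tries < max_tries:
        tries += 1
        x, y = rng.uniform(r, cell - r), rng.uniform(r, cell - r)
        # Accept the candidate only if it overlaps no previously placed fibre.
        if all((x - cx) ** 2 + (y - cy) ** 2 >= (2 * r) ** 2 for cx, cy in centres):
            centres.append((x, y))
    return centres

centres = random_fibre_packing(vf_target=0.2, r=0.02)
```

Each accepted configuration would then seed one finite element realization, which is how the packing randomness propagates into the strength distribution.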

Keywords: compressive strength, FRP, micro-buckling, random fibre packing

Procedia PDF Downloads 275
1838 Design and Synthesis of Copper-Zeolite Composite for Antimicrobial Activity and Heavy Metal Removal From Waste Water

Authors: Feleke Terefe Fanta

Abstract:

Background: The presence of heavy metal and coliform bacteria contaminants in the aquatic system of the Akaki river basin, in a sub-city of Addis Ababa, Ethiopia, has become a public concern as the human population increases and land development continues. Hence, it is the right time to design treatment technologies that can handle multiple pollutants. Results: In this study, we prepared synthetic zeolite and copper-doped zeolite composite adsorbents as a cost-effective and simple approach to simultaneously remove heavy metals and total coliforms from wastewater of the Akaki river. The copper–zeolite X composite was obtained by ion exchange of copper ions into the zeolite framework. An iodine test, XRD, FTIR and an automated gas sorption analyzer (Autosorb iQ) were used to characterize the adsorbents. The mean concentrations of Cd, Cr, and Pb in the untreated sample were 0.795, 0.654 and 0.7025 mg/L, respectively. These concentrations decreased to Cd (0.005 mg/L), Cr (0.052 mg/L) and Pb (below detection limit, BDL) for the sample treated with bare zeolite X, while a further decrease to Cd (0.005 mg/L), Cr (BDL) and Pb (BDL) was observed for the sample treated with the copper–zeolite composite. Zeolite X and copper-modified zeolite X showed complete elimination of total coliforms after 90 and 50 min of contact time, respectively. Conclusion: The results obtained in this study show the high antimicrobial disinfection and heavy metal removal efficiencies of the synthesized adsorbents. Furthermore, these sorbents significantly reduce physical parameters such as electrical conductivity, turbidity, BOD and COD.
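The reported concentration drops translate directly into removal efficiencies via (C0 − Ct)/C0 × 100. A minimal check using the Cd figures quoted in the abstract:

```python
def removal_efficiency(c0, ct):
    """Percentage of a contaminant removed, given the initial (c0) and
    treated (ct) concentrations in mg/L."""
    return (c0 - ct) / c0 * 100.0

# Cd figures from the abstract: 0.795 mg/L untreated, 0.005 mg/L after
# treatment with zeolite X -> about 99.4% removal.
cd_removal = removal_efficiency(0.795, 0.005)
```

Concentrations at or below the detection limit (Cr and Pb on the composite) correspond to removal efficiencies of effectively 100% by the same formula.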

Keywords: wastewater, copper-doped zeolite X, adsorption, heavy metal, disinfection, Akaki river

Procedia PDF Downloads 75
1837 Buffer Allocation and Traffic Shaping Policies Implemented in Routers Based on a New Adaptive Intelligent Multi-Agent Approach

Authors: M. Taheri Tehrani, H. Ajorloo

Abstract:

In this paper, an intelligent multi-agent framework is developed for each router, in which agents positioned at the router's ports have two vital functionalities: traffic shaping and buffer allocation. With the traffic-shaping functionality, agents shape the forwarded traffic by dynamic, real-time allocation of the token generation rate in a Token Bucket algorithm; with the buffer-allocation functionality, agents share their buffer capacity with each other based on their needs and the conditions of the network. This dynamic and intelligent framework allows some ports to work better under bursty and busier conditions. The agents act intelligently based on a Reinforcement Learning (RL) algorithm and consider the relevant parameters in their decision process. Since RL is limited in how many parameters it can consider, due to the volume of calculations involved, we utilize our novel method, which applies Principal Component Analysis (PCA) to the RL inputs and enables the algorithm to consider as many parameters as needed in its decision process. Compared to our previous work, in which traffic shaping was done without any sharing or dynamic allocation of buffer size for each port, this implementation shows lower packet drop across the whole network, particularly in the source routers. These methods are implemented in our previously proposed intelligent simulation environment so that the performance metrics can be compared more accurately. The results obtained from this simulation environment show efficient and dynamic utilization of the bandwidth and buffer capacities pre-allocated to each port.
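The token-bucket shaping the agents tune can be sketched as follows; here the fill rate is fixed rather than adjusted online by the RL/PCA agent, and all numbers are illustrative:

```python
class TokenBucket:
    """Minimal token-bucket shaper: tokens accrue at `rate` per second up to
    `capacity`; a packet consuming `size` tokens is forwarded only if enough
    tokens are available. In the paper's framework, the agent would adapt
    `rate` dynamically; here it is constant for illustration."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = 0.0

    def allow(self, size, now):
        # Refill in proportion to elapsed time, clamped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=100.0, capacity=200.0)
```

With a 100 tokens/s rate and a 200-token burst capacity, a 150-token packet at t=0 passes, an immediate second 100-token packet is rejected, and after one second of refill it passes.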

Keywords: principal component analysis, reinforcement learning, buffer allocation, multi-agent systems

Procedia PDF Downloads 522
1836 Collagen Silver Lipid Nanoparticles as Matrix and Fillers for Cosmeceuticals: An In-Vitro and In-Vivo Study

Authors: Kumari Kajal, Muthu Kumar Sampath, Hare Ram Singh

Abstract:

In this work, collagen silver lipid nanoparticles (CSLNs) were formulated and characterized for their capacity to serve as filler/matrix materials in cosmeceutical applications. The CSLNs were characterized by a series of techniques: X-ray diffraction (XRD), field-emission scanning electron microscopy (FESEM) coupled with energy-dispersive X-ray spectroscopy (EDS), Fourier-transform infrared spectroscopy (FT-IR), thermogravimetric analysis (TGA), and differential scanning calorimetry (DSC). These studies confirmed the structural integrity of the nanoparticles, their cargo, and their thermal stability. The biological functionality of the CSLNs was examined through in vitro and in vivo studies, in which their antibacterial effect, hemocompatibility and anti-inflammatory characteristics were systematically investigated. The toxicological assays included oral toxicity in mice and aquatic-life tests with the fish model Danio rerio. The morphology of the nanoparticles was confirmed using high-resolution transmission electron microscopy (HR-TEM). The CSLNs showed strong antimicrobial effects, excellent hemocompatibility, and low or absent inflammatory reactions, which makes them good candidates for cosmeceutical applications. The toxicological evaluations indicated a good safety record, without significant adverse effects in either the murine or the Danio rerio model. This research demonstrates the potential of CSLNs as efficient and safe ingredients for cosmeceuticals.

Keywords: collagen silver lipid nanoparticles (CSLNs), cosmeceuticals, antimicrobial activity, hemocompatibility, in vitro assessment, in vivo assessment

Procedia PDF Downloads 22
1835 Peril's Environment of Energetic Infrastructure Complex System, Modelling by the Crisis Situation Algorithms

Authors: Jiří F. Urbánek, Alena Oulehlová, Hana Malachová, Jiří J. Urbánek Jr.

Abstract:

Crisis situation investigation and modelling are introduced and carried out within the complex system of an energetic critical infrastructure operating in a perilous environment. Every crisis situation and peril originates in the occurrence of an emergency/crisis event, and both require an assessment of the critical/crisis interfaces. An emergency event may be expected, in which case crisis scenarios can be pre-prepared by the pertinent organizational crisis management authorities to cope with it; or it may be unexpected, without a pre-prepared scenario. Both cases, however, need operational coping by means of crisis management. The operation, forms, characteristics, behaviour and utilization of crisis management vary in quality, depending on the real perils facing the critical infrastructure organization and on its prevention and training processes. The aim is always better security and continuity of the organization, and achieving it requires finding and investigating the critical/crisis zones and functions in models of the critical infrastructure organization operating in the pertinent perilous environment. Our DYVELOP (Dynamic Vector Logistics of Processes) method is available for this purpose. Here, it is necessary to derive and create an identification algorithm for the critical/crisis interfaces. The locations of the critical/crisis interfaces are the flags of crisis situations in models of the critical infrastructure organization. The model of a crisis situation will then be displayed for a real Czech energetic critical infrastructure organization in a real peril environment. Such efficient measures are necessary for infrastructure protection; they will be derived for peril mitigation, crisis-situation coping, and for the organization's environmentally friendly survival, continuity and advanced possibilities for sustainable development.

Keywords: algorithms, energetic infrastructure complex system, modelling, peril's environment

Procedia PDF Downloads 404
1834 Thermoluminescence Characteristic of Nanocrystalline BaSO4 Doped with Europium

Authors: Kanika S. Raheja, A. Pandey, Shaila Bahl, Pratik Kumar, S. P. Lochab

Abstract:

This paper studies BaSO4 nanophosphor doped with europium, in which mainly the concentration of the rare-earth impurity Eu (0.05, 0.1, 0.2, 0.5, and 1 mol%) has been varied. A comparative study of the thermoluminescence (TL) properties of the nanophosphor has also been carried out against a well-known standard dosimetry material, TLD-100. First, a number of samples were successfully prepared by the chemical co-precipitation method. The whole set was then compared to the standard material (TLD-100) for TL sensitivity. BaSO4:Eu (0.2 mol%) showed the highest sensitivity of the lot. It was also found that, compared to the standard TLD-100, BaSO4:Eu (0.2 mol%) showed surprisingly high sensitivity over a large range of doses. The TL response curve of all prepared samples has also been studied over a wide range of gamma doses, i.e., 10 Gy to 2 kGy. Almost all BaSO4:Eu samples showed remarkable linearity over a broad range of doses, which is a characteristic feature of a good TL dosimeter; the curve remained linear even beyond 1 kGy. Thus, the dopant concentration of the nanophosphor has been successfully optimised for the highest TL sensitivity. Furthermore, the comparative study revealed that the optimised sample shows better TL sensitivity and a linear response curve over a remarkably wide range of gamma (Co-60) doses than the standard TLD-100, which makes the optimised BaSO4:Eu promising as an efficient gamma radiation dosimeter. Lastly, the phosphor's annealing temperature has been optimised for the best results, and its fading and reusability properties have been studied.

Keywords: gamma radiation, nanoparticles, radiation dosimetry, thermoluminescence

Procedia PDF Downloads 436
1833 Selectivity Mechanism of Cobalt Precipitation by an Imidazole Linker From an Old Battery Solution

Authors: Anna-Caroline Lavergne-Bril, Jean-François Colin, David Peralta, Pascale Maldivi

Abstract:

Cobalt is a critical material widely used in Li-ion batteries. Due to the planned electrification of European vehicles, cobalt needs are expanding while resources are limited. To meet the coming needs, it is necessary to develop new, efficient ways to recycle cobalt. One of the biggest sources is old electric vehicle batteries (the batteries sold in 2019 alone represent 500,000 tons of future waste). A closed-loop process for cobalt recycling has been developed, and this presentation examines the selectivity mechanism for cobalt over manganese and nickel in solution. Cobalt precipitation as a ZIF material (zeolitic imidazolate framework) from a starting solution of equimolar nickel, manganese and cobalt is studied. A 2-MeIm (2-methylimidazole) linker is introduced into the multimetallic Ni, Mn, Co solution, and the resulting ZIF-67 is 100% pure Co among its metallic centers. The selectivity of Co over Ni is studied experimentally, and DFT calculations are conducted to understand the geometry of the ligand-metal-solvent complexes in solution. The selectivity of Co over Mn is likewise studied experimentally, with DFT calculations used to understand the link between the pKa of the ligand and the precipitation of Mn impurities in the final material. These calculations open the way to other ligands being used in the same process with more efficiency. Experimental materials are synthesized from bimetallic (Ni²⁺/Co²⁺, Mn²⁺/Co²⁺, Mn²⁺/Ni²⁺) solutions. Their crystallographic structure is analysed by X-ray diffraction (Brüker AXS D8 diffractometer, Cu anticathode), and their morphology is studied by scanning electron microscopy using a LEO 1530 FE-SEM microscope. Chemical analysis is performed using ICP-OES (Agilent Technologies 700 series). The modelling consists of density functional theory (DFT) calculations with the B3LYP functional, conducted with Orca 4.2.

Keywords: MOFs, ZIFs, recycling, closed-loop, cobalt, li-ion batteries

Procedia PDF Downloads 142
1832 Hydrothermal Synthesis of V₂O₅-Carbon Nanotube Composite for Supercapacitor Application

Authors: Mamta Bulla, Vinay Kumar

Abstract:

The transition to renewable energy sources is essential due to the finite limitations of conventional fossil fuels, which contribute significantly to environmental pollution and greenhouse gas emissions. Traditional energy storage solutions, such as batteries and capacitors, are also hindered by limitations, particularly in capacity, cycle life, and energy density. Conventional supercapacitors, while able to deliver high power, often suffer from low energy density, limiting their efficiency in storing and providing renewable energy consistently. Renewable energy sources, such as solar and wind, produce power intermittently, so efficient energy storage solutions are required to manage this variability. Advanced materials, particularly those with high capacity and long cycle life, are critical to developing supercapacitors capable of effectively storing renewable energy. Among various electrode materials, vanadium pentoxide (V₂O₅) offers high theoretical capacitance, but its poor conductivity and cycling stability limit practical applications. This study explores the hydrothermal synthesis of a V₂O₅-carbon nanotube (CNT) composite to overcome these drawbacks, combining the high capacitance of V₂O₅ with the exceptional conductivity and mechanical stability of CNTs. The resulting V₂O₅-CNT composite demonstrates enhanced electrochemical performance, showing high specific capacitance of 890 F g⁻¹ at 0.1 A g⁻¹ current density, excellent rate capability, and improved cycling stability, making it a promising candidate for next-generation supercapacitors, with significant improvements in energy storage efficiency and durability.
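The specific capacitance quoted above is conventionally extracted from galvanostatic charge-discharge data as C = I·Δt/(m·ΔV). A sketch consistent with the reported figure (the discharge time and voltage window below are back-calculated illustrations, not measured values from the paper):

```python
def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Specific capacitance in F/g from a galvanostatic discharge curve:
    C = I * dt / (m * dV)."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

# Illustrative consistency check: at 0.1 A/g (0.1 A per gram of electrode)
# over an assumed 1 V window, the reported 890 F/g implies a discharge
# time of 8900 s.
c_sp = specific_capacitance(current_a=0.1, discharge_time_s=8900.0,
                            mass_g=1.0, voltage_window_v=1.0)
```

The same formula explains why capacitance falls at higher current densities: the discharge time shrinks faster than the current grows.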

Keywords: cyclability, energy density, nanocomposite, renewable energy, supercapacitor

Procedia PDF Downloads 17
1831 Evaluation of Gene Expression after in Vitro Differentiation of Human Bone Marrow-Derived Stem Cells to Insulin-Producing Cells

Authors: Mahmoud M. Zakaria, Omnia F. Elmoursi, Mahmoud M. Gabr, Camelia A. AbdelMalak, Mohamed A. Ghoneim

Abstract:

Many protocols have been published for the differentiation of human mesenchymal stem cells (MSCs) into insulin-producing cells (IPCs), with the aim of secreting insulin for the treatment of diabetes. Our aim is to evaluate relative gene expression for each independent protocol. Human bone marrow cells were derived from three volunteers suffering from diabetes. After expansion of the mesenchymal stem cells, the cells were differentiated by three different protocols (the one-step protocol used conophylline protein, the two-step protocol depended on trichostatin-A, and the three-step protocol started with beta-mercaptoethanol). Gene expression was evaluated by real-time PCR for pancreatic endocrine genes, transcription factors, a glucose transporter, precursor markers, pancreatic enzymes, proteolytic cleavage, extracellular matrix and cell surface proteins. Insulin secretion was quantified by an immunofluorescence technique in 24-well plates. Most of the genes studied were up-regulated in the in vitro differentiated cells, and insulin production was observed with all three independent protocols. The two-step protocol showed slightly higher endocrine mRNA expression and insulin production, and it thus proved more efficient in expressing pancreatic endocrine genes and producing insulin than the other two protocols.
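Relative expression from real-time PCR data of this kind is usually computed with the 2^(−ΔΔCt) method; assuming that convention (the abstract does not name its analysis method, and the Ct values below are invented for illustration):

```python
def fold_change(ct_gene_sample, ct_ref_sample, ct_gene_control, ct_ref_control):
    """Relative expression by the 2^(-ddCt) method: normalize the gene of
    interest to a reference (housekeeping) gene in both the differentiated
    sample and the control, then exponentiate the difference."""
    ddct = (ct_gene_sample - ct_ref_sample) - (ct_gene_control - ct_ref_control)
    return 2.0 ** (-ddct)

# A gene whose normalized Ct drops by 3 cycles after differentiation
# corresponds to a 2^3 = 8-fold up-regulation.
fc = fold_change(22.0, 18.0, 25.0, 18.0)
```

A fold change above 1 indicates up-regulation in the differentiated cells, which is the pattern reported for most of the genes studied.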

Keywords: mesenchymal stem cells, insulin producing cells, conophylline protein, trichostatin-A, beta-mercaptoethanol, gene expression, immunofluorescence technique

Procedia PDF Downloads 221
1830 Using Real Truck Tours Feedback for Address Geocoding Correction

Authors: Dalicia Bouallouche, Jean-Baptiste Vioix, Stéphane Millot, Eric Busvelle

Abstract:

When researchers or logistics software developers deal with vehicle routing optimization, they mainly focus on minimizing the total travelled distance or the total time spent in the tours by the trucks, and maximizing the number of visited customers. They assume that the upstream real data given to carry the optimization of a transporter tours is free from errors, like customers’ real constraints, customers’ addresses and their GPS-coordinates. However, in real transporter situations, upstream data is often of bad quality because of address geocoding errors and the irrelevance of received addresses from the EDI (Electronic Data Interchange). In fact, geocoders are not exempt from errors and could give impertinent GPS-coordinates. Also, even with a good geocoding, an inaccurate address can lead to a bad geocoding. For instance, when the geocoder has trouble with geocoding an address, it returns those of the center of the city. As well, an obvious geocoding issue is that the mappings used by the geocoders are not regularly updated. Thus, new buildings could not exist on maps until the next update. Even so, trying to optimize tours with impertinent customers GPS-coordinates, which are the most important and basic input data to take into account for solving a vehicle routing problem, is not really useful and will lead to a bad and incoherent solution tours because the locations of the customers used for the optimization are very different from their real positions. Our work is supported by a logistics software editor Tedies and a transport company Upsilon. We work with Upsilon's truck routes data to carry our experiments. In fact, these trucks are equipped with TOMTOM GPSs that continuously save their tours data (positions, speeds, tachograph-information, etc.). We, then, retrieve these data to extract the real truck routes to work with. 
The aim of this work is to use the driver's experience and the feedback from real truck tours to validate the GPS coordinates of correctly geocoded addresses and to correct badly geocoded ones. Thereby, when a vehicle makes its tour, it should have trouble finding any given customer's address at most once; in other words, the vehicle is wrong at most once per customer address. Our method significantly improves geocoding quality: on average, 70% of the GPS coordinates of a tour's addresses are corrected automatically. The remaining coordinates are corrected manually, with the system giving the user indications to help with the correction. This study shows the importance of using truck feedback to gradually correct address geocoding errors. Indeed, the accuracy of a customer's address and its GPS coordinates plays a major role in tour optimization, and address writing errors are very frequent. Transporters naturally take this feedback into account (by asking drivers, calling customers, etc.) to learn about their tours and bring corrections to the upcoming ones; our method automates a large part of that process.
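
The validation idea can be sketched as a distance test between the geocoded point and the position where the truck actually stopped; the 200 m threshold and the coordinates below are illustrative assumptions, not values from the study.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    R = 6371000.0  # mean Earth radius (m)
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def correct_geocode(geocoded, observed_stop, threshold_m=200):
    """Validate a geocode if the recorded truck stop lies within the threshold;
    otherwise replace it with the observed stop position."""
    d = haversine_m(*geocoded, *observed_stop)
    return (geocoded, "validated") if d <= threshold_m else (observed_stop, "corrected")
```

A stop recorded several kilometres from the geocoded point (e.g. a city-center fallback) is flagged and replaced; a stop a few dozen metres away confirms the geocode.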

Keywords: driver experience feedback, geocoding correction, real truck tours

Procedia PDF Downloads 676
1829 Fast Switching Mechanism for Multicasting Failure in OpenFlow Networks

Authors: Alaa Allakany, Koji Okamura

Abstract:

Multicast is an efficient and scalable technology for data distribution that optimizes network resources. In IP networks, however, responsibility for managing multicast groups is distributed among routers, which causes limitations such as delays in processing group events, high bandwidth consumption, and redundant tree calculation. Software Defined Networking (SDN), represented by OpenFlow, has been presented as a solution to many of these problems: the control plane and data plane are separated by shifting control and management to a remote centralized controller, and the routers act only as forwarders. In this paper, we propose a fast switching mechanism for handling link failures in the multicast tree, based on the Tabu Search heuristic and on modified OpenFlow switch functions that switch quickly to a backup sub-tree rather than reporting to the controller. We implement a multicasting OpenFlow controller, the core of our approach, which is responsible for (1) constructing the multicast tree, (2) handling multicast group events and maintaining multicast state, and (3) modifying OpenFlow switch functions for fast switching to backup paths. Forwarders forward multicast packets based on routing entries generated by the centralized controller. Tabu Search is used as the heuristic for constructing a near-optimum multicast tree and for keeping it near-optimum when members join or leave the multicast group (group events).
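
To illustrate the heuristic side of the approach, the sketch below runs a tabu search over which relay (Steiner) nodes to include in a multicast tree on a toy five-node topology; the graph, link costs, and single-flip neighbourhood are invented for illustration, not the paper's actual network or move set.

```python
# Tiny undirected graph: node -> {neighbor: link cost}
GRAPH = {
    0: {1: 2, 2: 4},
    1: {0: 2, 2: 1, 3: 7},
    2: {0: 4, 1: 1, 3: 3, 4: 5},
    3: {1: 7, 2: 3, 4: 1},
    4: {2: 5, 3: 1},
}
SOURCE, MEMBERS = 0, {3, 4}   # multicast source and group members
RELAYS = [1, 2]               # optional relay (Steiner) nodes

def tree_cost(selected_relays):
    """Cost of a tree spanning source + members + chosen relays (Prim's MST);
    inf if the induced subgraph is disconnected."""
    nodes = {SOURCE} | MEMBERS | set(selected_relays)
    seen, cost = {SOURCE}, 0
    while seen != nodes:
        best = min(((GRAPH[u][v], v) for u in seen for v in GRAPH[u]
                    if v in nodes and v not in seen), default=None)
        if best is None:
            return float("inf")
        cost += best[0]
        seen.add(best[1])
    return cost

def tabu_search(iters=20, tenure=1):
    """Flip one relay in/out per step; recently flipped relays are tabu."""
    state = frozenset()                       # start with no relays selected
    best, best_cost = state, tree_cost(state)
    tabu = []
    for _ in range(iters):
        moves = [(frozenset(state ^ {r}), r) for r in RELAYS if r not in tabu]
        if not moves:
            break
        state, flipped = min(moves, key=lambda m: tree_cost(m[0]))
        tabu = (tabu + [flipped])[-tenure:]   # short tabu memory
        if tree_cost(state) < best_cost:
            best, best_cost = state, tree_cost(state)
    return best, best_cost
```

On this toy graph the search finds that including both relays yields the cheapest tree (cost 7); the same evaluate-flip-remember loop is what maintains a near-optimum tree as members join or leave.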

Keywords: multicast tree, software define networks, tabu search, OpenFlow

Procedia PDF Downloads 266
1828 A Facile Nanocomposite of Graphene Oxide Reinforced Chitosan/Poly-Nitroaniline Polymer as a Highly Efficient Adsorbent for Extracting Polycyclic Aromatic Hydrocarbons from Tea Samples

Authors: Adel M. Al-Shutairi, Ahmed H. Al-Zahrani

Abstract:

Tea is a popular beverage drunk by millions of people throughout the globe. It has considerable health advantages, including antioxidant, antibacterial, antiviral, chemopreventive, and anticarcinogenic properties. As a result of environmental pollution (atmospheric deposition) and the production process, tea leaves may also contain a variety of dangerous substances, such as polycyclic aromatic hydrocarbons (PAHs). In this study, a graphene oxide reinforced chitosan/poly-nitroaniline polymer was prepared to develop a sensitive and reliable solid-phase extraction (SPE) method for extracting PAH7 from tea samples, followed by high-performance liquid chromatography with fluorescence detection. The prepared adsorbent was validated in terms of linearity, limit of detection, limit of quantification, recovery (%), accuracy (%), and precision (%) for the determination of PAH7 (benzo[a]pyrene, benzo[a]anthracene, benzo[b]fluoranthene, chrysene, benzo[k]fluoranthene, dibenzo[a,h]anthracene, and benzo[g,h,i]perylene) in tea samples. Concentrations were determined in two types of tea commercially available in Saudi Arabia: black tea and green tea. The maximum mean Σ7PAHs was 68.23 ± 0.02 µg kg⁻¹ in black tea samples and 26.68 ± 0.01 µg kg⁻¹ in green tea samples; the minimum mean was 37.93 ± 0.01 µg kg⁻¹ in black tea and 15.26 ± 0.01 µg kg⁻¹ in green tea. The mean benzo[a]pyrene content in black tea samples ranged from 6.85 to 12.17 µg kg⁻¹, with two samples exceeding the standard level (10 µg kg⁻¹) established by the European Union (EU), while in green tea it ranged from 1.78 to 2.81 µg kg⁻¹. Overall, Σ7PAH levels detected in green tea samples were low in comparison with black tea samples.

Keywords: polycyclic aromatic hydrocarbons, CS, PNA and GO, black/green tea, solid phase extraction, Saudi Arabia

Procedia PDF Downloads 101
1827 Bioinformatic Approaches in Population Genetics and Phylogenetic Studies

Authors: Masoud Sheidai

Abstract:

Biologists working in population genetics and phylogenetics face research tasks such as assessing populations' genetic variability and divergence, species relatedness, the evolution of genetic and morphological characters, and the identification of DNA SNPs with adaptive potential. To tackle these problems and reach concise conclusions, they must use proper and efficient statistical and bioinformatic methods, as well as suitable genetic and morphological characteristics. In recent years, bioinformatic and statistical methods based on various well-documented assumptions have become the proper analytical tools in the hands of researchers. Species delineation is usually carried out with clustering methods such as K-means, based on distance measures appropriate to the studied features of the organisms; a well-defined species is assumed to be separated from other taxa by molecular barcodes. Species relationships are studied using molecular markers analyzed by methods such as multidimensional scaling (MDS) and principal coordinate analysis (PCoA). Population structuring and genetic divergence are usually investigated by PCoA and PCA and a network diagram, based on bootstrapping of the data. The association of genes and DNA sequences with ecological and geographical variables is determined by latent factor mixed models (LFMM) and redundancy analysis (RDA), which are based on Bayesian and distance methods, respectively. Molecular and morphological characters that differentiate the studied species may be identified by linear discriminant analysis (DA) and discriminant analysis of principal components (DAPC). We illustrate these methods and the related conclusions with examples from edible and medicinal plant species.
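
The K-means species-delineation step mentioned above can be sketched with a minimal Lloyd's-algorithm implementation; the two-trait "populations" below are invented toy data, not measurements from the study.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: assign each sample to its nearest centroid,
    then recompute centroids, until assignments stabilise."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Two invented 'populations' measured on two morphological traits
X = np.array([[0.0, 0.0], [0.2, 0.9], [0.9, 0.1],
              [9.8, 10.1], [10.3, 9.7], [10.0, 10.4]])
labels, _ = kmeans(X, k=2)
```

With well-separated groups the algorithm recovers the two putative taxa; in practice the distance measure is chosen to suit the genetic or morphometric features being clustered.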

Keywords: GWAS analysis, K-Means clustering, LFMM, multidimensional scaling, redundancy analysis

Procedia PDF Downloads 128
1826 A Folk Theorem with Public Randomization Device in Repeated Prisoner’s Dilemma under Costly Observation

Authors: Yoshifumi Hino

Abstract:

An infinitely repeated prisoner's dilemma is a typical model of a teamwork situation: if both players choose costly actions and contribute to the team, both are better off, yet each player has an incentive to choose a selfish action. We analyze the game under costly observation: a player can observe the opponent's action only by paying an observation cost in that period. In reality, observation in teamwork situations is often costly; members of a team may work in different rooms, areas, or countries, and must spend time and money to see other team members if they want to observe them. The costly observation assumption makes cooperation substantially more difficult, because an equilibrium must satisfy incentive constraints not only on actions but also on the observation decision. Cooperation is hardest when the stage game is the prisoner's dilemma, because players can communicate through only two actions. We examine whether players can cooperate in the prisoner's dilemma under costly observation. Specifically, we check whether symmetric Pareto-efficient payoff vectors of the repeated prisoner's dilemma can be approximated by sequential equilibria (an efficiency result). We show the efficiency result without any randomization device under certain circumstances, meaning that players can cooperate without a randomization device even when observation is costly. Next, assuming that a public randomization device is available, we show that any feasible and individually rational payoff of the prisoner's dilemma can be approximated by sequential equilibria under a specific situation (a folk theorem). This implies that players can achieve asymmetric teamwork, as in leadership situations, when a public randomization device is available.

Keywords: costly observation, efficiency, folk theorem, prisoner's dilemma, private monitoring, repeated games

Procedia PDF Downloads 244
1825 Dynamic Exergy Analysis for the Built Environment: Fixed or Variable Reference State

Authors: Valentina Bonetti

Abstract:

Exergy analysis has successfully helped optimize processes in various sectors. In the built environment, a second-law approach can enhance potential interactions between constructions and their surrounding environment and minimise fossil fuel requirements. Despite the research done in this field over the last decades, practical applications are rare, and few integrated exergy simulators are available to building designers. Undoubtedly, one obstacle to the diffusion of exergy methods is the strong dependency of the results on the definition of the 'reference state', a highly controversial issue. Since exergy combines energy and entropy by means of a reference state (also called the 'reference environment' or 'dead state'), the choice of reference is crucial. Compared to other classical applications, buildings present two challenging elements: they operate very near the reference state, so small variations have relevant impacts, and their behaviour is dynamic in nature. Not surprisingly, the reference state definition for the built environment is still debated, especially for dynamic assessments. Among the several characteristics that need to be defined, a crucial decision for a dynamic analysis is between a fixed reference environment (constant in time) and a variable one whose fluctuations follow the local climate. Even though the latter choice prevails in research and is recommended by recent, widely diffused guidelines, the fixed reference has been analytically demonstrated to be the only choice that defines exergy as a proper state function in a fluctuating environment. This study investigates the impact of that crucial choice: fixed or variable reference. The basic element of the building energy chain, the envelope, is chosen as the object of investigation, as it is common to any building analysis.
Exergy fluctuations in the building envelope of a case study (a typical house located in a Mediterranean climate) are compared at each time-step of a significant summer day, when the building behaviour is highly dynamic. Since exergy efficiencies and fluxes are not familiar quantities, the more intuitive concept of exergy storage is used to summarize the results. Trends obtained with a fixed and a variable reference (outside air) are compared, and their meaning is discussed in light of the underpinning dynamic energy analysis. In conclusion, a fixed reference state is considered the best choice for dynamic exergy analysis. Even though the fixed reference is generally regarded as merely the simpler selection, and the variable state is often claimed to be more accurate without explicit justification, the analytical considerations supporting a fixed reference are confirmed by the usefulness and clarity of interpretation of its results. Further discussion is needed to address the conflict between the evidence supporting a fixed reference state and the wide adoption of a fluctuating one. A more robust theoretical framework, including selection criteria for the reference state in dynamic simulations, could push the development of integrated dynamic tools and thus spread exergy analysis for the built environment across common practice.
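
The difference between the two reference choices can be made concrete with the standard expression for the thermal exergy of a storage mass, Ex = m·c·[(T − T0) − T0·ln(T/T0)]; the mass, heat capacity, and temperatures below are illustrative assumptions, not the case-study values.

```python
from math import log

def thermal_exergy(m, c, T, T0):
    """Exergy (J) of a thermal mass at temperature T (K) w.r.t. reference T0 (K)."""
    return m * c * ((T - T0) - T0 * log(T / T0))

# Illustrative envelope layer: 1000 kg of concrete-like material at 30 degC
m, c, T = 1000.0, 880.0, 303.15

fixed_ref = thermal_exergy(m, c, T, T0=293.15)        # fixed (e.g. seasonal-mean) reference
outdoor_T0 = [288.15, 293.15, 298.15, 303.15]         # fluctuating outdoor-air reference (K)
variable_ref = [thermal_exergy(m, c, T, T0) for T0 in outdoor_T0]
# With the variable reference, the stored exergy vanishes whenever T0 reaches T,
# even though the wall's thermodynamic state has not changed.
```

This is the crux of the comparison: under a fixed reference, stored exergy tracks the wall's state; under a fluctuating reference, it also tracks the environment, which complicates interpreting the storage trends.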

Keywords: exergy, reference state, dynamic, building

Procedia PDF Downloads 231
1824 Role of IT Systems in Corporate Recruitment: Challenges and Constraints

Authors: Brahim Bellali, Fatima Bellali

Abstract:

The integration of information technology systems (ITS) into a company's human resources processes appears to be the appropriate solution to the problem of evolving and adapting its human resources management practices, making them both more strategic and more efficient in terms of costs and service quality. In this context, the aim of this work is to study the impact of ITS on the recruitment process. We targeted candidates who had been recruited using IT tools; the target population consists of 34 candidates based in Casablanca, Morocco. To collect the data, a questionnaire was drawn up, accompanied by a data sheet and divided into several sections to make it more structured and comprehensible. The results show that the majority of respondents say that companies are making greater use of online CV libraries and social networks as digital solutions during the recruitment process. They also show that 50% of candidates say that the use of digital tools by companies would not slow them down when applying for a job and that these IT tools improve manual recruitment processes, while 44.1% think they facilitate recruitment without any human intervention. The majority of respondents (52.9%) think that social networks are the digital solutions most often used by recruiters in the sourcing phase. The constraints of digital recruitment encountered are the dehumanization of human resources (44.1%) and the limited interaction during remote interviews (44.1%), which leaves no room for informal exchanges. Digital recruitment can nonetheless be a highly effective strategy for finding qualified candidates in a variety of fields. Recommendations for optimizing the digital recruitment process include: (1) using online recruitment platforms such as LinkedIn, Twitter, and Facebook; (2) using applicant tracking systems (ATS); and (3) developing a content marketing strategy.

Keywords: IT systems, recruitment, challenges, constraints

Procedia PDF Downloads 43
1823 Comparison of Different Machine Learning Algorithms for Solubility Prediction

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Molecular solubility prediction plays a crucial role in fields such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms for predicting molecular solubility using the AqSolDB dataset: linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted from the SMILES representation of each molecule, giving a total of 189 features for training and testing. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores; computational time for training and testing is also recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors than the ensemble methods. Overall, this study provides valuable insight into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
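
A minimal sketch of the winning pipeline using scikit-learn's random forest. The real study used 189 MACCS/RDKit/structural descriptors from AqSolDB; the features and target below are synthetic stand-ins for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))   # stand-in for the fingerprint/descriptor matrix
# Toy 'solubility' target depending on two of the descriptors plus noise
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
score = r2_score(y_te, model.predict(X_te))   # held-out goodness of fit
```

The same fit/predict/score loop, with a different estimator class, is what makes the five-algorithm comparison straightforward to run under identical train/test splits.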

Keywords: random forest, machine learning, comparison, feature extraction

Procedia PDF Downloads 45
1822 Urban Sprawl Analysis in the City of Thiruvananthapuram and a Framework Formulation to Combat it

Authors: Sandeep J. Kumar

Abstract:

Urbanisation is considered the primary driver of land use and land cover change and is directly linked to population and economic growth. In India, as in other developing countries, cities are urbanising at an alarming rate, and this unprecedented and uncontrolled urbanisation can result in urban sprawl. Urban sprawl is widely recognised as the result of poor planning, inadequate policies, and poor governance, and it may be seen as a threat to the development of sustainable cities; hence, it is essential to manage it. Planning for predicted future growth is critical to avoid the negative effects of urban growth at the local and regional levels. Thiruvananthapuram, the capital city of Kerala, is a city of economic success, challenges, and opportunities, and its urbanisation trends have paved the way for urban sprawl. This study aims to formulate a framework to combat the emerging urban sprawl in the city of Thiruvananthapuram. The first step was to quantify trends of urban growth in Thiruvananthapuram city using Geographical Information System (GIS) and remote sensing techniques; the techniques and results obtained are extremely valuable in analysing land use changes. Secondly, the changes in these trends were analysed through critical factors that helped the study understand the underlying issues of the existing city structure that have resulted in urban sprawl. Anticipating development trends makes it possible to modify the current pattern of growth, which can be productively addressed through regional and municipal planning and management strategies. Hence, efficient strategies to curb sprawl in Thiruvananthapuram city have been formulated in this study, which can be considered recommendations for future planning.

Keywords: urbanisation, urban sprawl, geographical information system (GIS), Thiruvananthapuram

Procedia PDF Downloads 110
1821 A Parking Demand Forecasting Method for Making Parking Policy in the Center of Kabul City

Authors: Roien Qiam, Shoshi Mizokami

Abstract:

Parking demand in the Central Business District (CBD) has grown with the increasing number of private vehicles driven by rapid economic growth and the lack of an efficient public transport and traffic management system. This has resulted in low mobility, poor accessibility, serious congestion, high rates of traffic accident fatalities and injuries, and air pollution, mainly because people have to drive around slowly to find a vacant spot. With a parking pricing and enforcement policy, considerable improvement could be achieved, and on-street parking spaces could be managed efficiently and effectively. To evaluate parking demand and formulate parking policy, it is necessary to understand current parking conditions and driver behaviour: how drivers choose their parking type and location, and how they behave when searching for a vacant spot under given parking charges and search times. This study presents the results of observational, revealed preference, and stated preference surveys and experiments. The data obtained show that the gap between parking supply and demand has reached its maximum. To model the parking decision, a choice model was constructed based on discrete choice modeling theory, and a multinomial logit model was estimated using the SP survey data. The model represents the choice among the alternatives of priced on-street, off-street, and illegal parking: individuals choose a parking type based on their preferences concerning parking charges, search times, access times, and waiting times. The parking assignment model was obtained directly from the behavioral model and is used in the parking simulation. The study concludes with an evaluation of parking policy.
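
The multinomial logit choice step can be sketched as follows; the utility coefficients and the attributes of each alternative are hypothetical illustrations, not the values estimated from the SP survey.

```python
import numpy as np

def mnl_probabilities(V):
    """Multinomial-logit choice probabilities from systematic utilities V."""
    expV = np.exp(V - V.max())   # subtract max for numerical stability
    return expV / expV.sum()

# Hypothetical linear utility: V = -(b_charge*charge + b_search*search + b_access*access)
beta = np.array([0.08, 0.10, 0.05])   # illustrative coefficients, not estimated values
attrs = np.array([                    # [charge, search time (min), access time (min)]
    [30.0, 2.0, 3.0],    # priced on-street
    [20.0, 1.0, 8.0],    # off-street
    [0.0, 10.0, 2.0],    # illegal parking (before any enforcement penalty)
])
V = -(attrs @ beta)
P = mnl_probabilities(V)   # predicted share choosing each parking type
```

Raising the on-street charge or the expected search time lowers that alternative's utility and shifts predicted shares to the others, which is exactly the lever a pricing-and-enforcement policy pulls.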

Keywords: CBD, parking demand forecast, parking policy, parking choice model

Procedia PDF Downloads 199
1820 Anthelminthic Effect of Clitoria Ternatea on Paramphistomum Cervi in Buffalo (Bubalus Bubalis) of Udaipur, Rajasthan, India

Authors: Bhanupriya Sanger, Kiran Roat, Gayatri Swarnakar

Abstract:

Helminths, including Paramphistomum cervi (P. cervi), are a major cause of reduced production in livestock and domestic ruminants. Rajasthan, the largest state of India, has one of the country's largest livestock populations, and the economy of its rural people largely depends on livestock such as cows, buffaloes, goats, and sheep. The prevalence of the P. cervi helminth parasite is extremely high in buffalo (Bubalus bubalis) of Udaipur, causing the disease paramphistomiasis, which mainly affects milk, meat, and wool production and can cause loss of life in buffalo. Chemotherapy is currently the only efficient and effective tool to cure and control P. cervi infection, as efficacious vaccines against helminths have not been developed so far. Veterinary drugs such as Albendazole have been used as the standard treatment for eliminating P. cervi from buffalo, but these drugs are unaffordable and inaccessible for poor livestock farmers. The fruits, leaves, and seeds of Clitoria ternatea Linn., commonly known as 'Aprajita' in India, are known for their ethno-medicinal value. The seed extract of Clitoria ternatea was found to have significant anthelmintic action against P. cervi at a dose of 35 mg/ml. The tegument of treated P. cervi was compared with that of control parasites by light microscopy; treated parasites showed extensive distortion and destruction of the tegument, including ruptured parenchymal cells, disrupted musculature, and swelling and vacuolization in tegumental and sub-tegumental cells. It can therefore be concluded that the seeds of Clitoria ternatea can be used as an anthelmintic agent.

Keywords: buffalo, Clitoria ternatea, Paramphistomiasis, Paramphistomum cervi

Procedia PDF Downloads 232
1819 A Protocol of Procedures and Interventions to Accelerate Post-Earthquake Reconstruction

Authors: Maria Angela Bedini, Fabio Bronzini

Abstract:

Italian post-earthquake experiences, both positive and negative, are conditioned by long timescales and structural bureaucratic constraints, partly motivated by the attempt to contain mafia infiltration and corruption. The transition from the operational emergency phase to the planning phase of the reconstruction project is thus hampered by a series of inefficiencies and delays that are incompatible with the need for rapid recovery of territories in crisis. Intervening in areas affected by seismic events means associating the reconstruction plan with an urban and territorial rehabilitation project based on strategies and tools in which prevention and safety play a leading role in the regeneration of territories in crisis and the return of the population. On the contrary, the earthquakes that took place in Italy have further deprived the affected territories of the minimum requirements for habitability, in terms of accessibility and services, accentuating a depopulation process already underway before the earthquake. The objective of this work is to define, through implementing and programmatic tools, the procedures and strategies to be put in place, today and in the future, in Italy and abroad, to face the challenge of reconstructing activities, sociality, and services and mitigating risk: a protocol of operational intentions and fixed points, open to continuous updating and implementation. The methodology followed is a concise comparison of the different Italian post-earthquake experiences, based on facts rather than intentions, to highlight elements of excellence or, on the contrary, of damage. The main results can be summarized in technical comparison sheets on good and bad practices.
With this comparison, we intend to make a concrete contribution to the reconstruction process, certainly not limited to the reconstruction of buildings but privileging primary social and economic needs. In this context, the strategic urban and territorial instrument recently applied in Italy, the SUM (Minimal Urban Structure), and the strategic monitoring process become dynamic tools for supporting reconstruction. The conclusions establish, point by point, a protocol of interventions and the priorities for integrated socio-economic strategies, multisectoral and multicultural, and highlight the innovative aspect of 'inverting' priorities in the reconstruction process: favoring the take-off of social and economic 'accelerator' interventions and a more up-to-date system of coexistence with risks. In this perspective, reconstruction as a necessary response to a calamitous event can and must become a unique opportunity to raise the level of protection from risks and to rehabilitate and develop the most fragile places in Italy and abroad.

Keywords: an operational protocol for reconstruction, operational priorities for coexistence with seismic risk, social and economic interventions accelerators of building reconstruction, the difficult post-earthquake reconstruction in Italy

Procedia PDF Downloads 129
1818 Real-Time Land Use and Land Information System in Homagama Divisional Secretariat Division

Authors: Kumara Jayapathma J. H. M. S. S., Dampegama S. D. P. J.

Abstract:

Land is a valuable and limited resource that constantly changes with the growth of the population, and an efficient land management system is essential to avoid conflicts over land. This paper aims to design a prototype model of a real-time mobile GIS land use and land information system. Homagama Divisional Secretariat Division, situated in the Western Province of Sri Lanka, was selected as the study area. The prototype model was developed after reviewing related literature; the methodology consisted of designing and modeling the prototype as an application running on a mobile platform. The system architecture mainly consists of Google mapping components for real-time updates, supported by Firebase tools, so the implementation comprises front-end and back-end components. The application was designed in Android Studio with Java, based on a GeoJSON file structure; synchronizing GeoJSON files to Firebase was found to be an effective mobile solution for continuously updating the land use and land information system (LIS) in real time in the present scenario. The mobile-based land use and LIS developed in this study is a multi-user application catering to different hierarchy levels, such as basic users, supervisory managers, and database administrators. This mobile mapping application will help public-sector field officers without GIS expertise to overcome land use planning challenges, with land use updated in real time.
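
The record at the core of the synchronisation, a GeoJSON Feature for a single land parcel, might look like the sketch below (built in Python for brevity; the parcel identifier, property names, and coordinates are hypothetical, not from the Homagama dataset).

```python
import json

# A hypothetical land-use record as a GeoJSON Feature, the kind of payload
# the mobile client would synchronise to the Firebase backend.
parcel = {
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[          # [lon, lat] ring (WGS84), first == last point
            [80.002, 6.844], [80.004, 6.844],
            [80.004, 6.846], [80.002, 6.846],
            [80.002, 6.844],
        ]],
    },
    "properties": {
        "parcel_id": "HOM-0001",             # hypothetical identifiers
        "land_use": "residential",
        "updated_by": "field_officer_12",
        "updated_at": "2021-06-01T09:30:00Z",
    },
}
payload = json.dumps(parcel)   # serialised body for a REST call or Firebase write
```

Because GeoJSON is plain JSON, the same record can be rendered on the map view, versioned per update, and filtered by the `properties` fields for each user role.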

Keywords: Android, Firebase, GeoJSON, GIS, JAVA, JSON, LIS, Mobile GIS, real-time, REST API

Procedia PDF Downloads 235
1817 Epilepsy Seizure Prediction by Effective Connectivity Estimation Using Granger Causality and Directed Transfer Function Analysis of Multi-Channel Electroencephalogram

Authors: Mona Hejazi, Ali Motie Nasrabadi

Abstract:

Epilepsy is a persistent neurological disorder that affects more than 50 million people worldwide; hence, there is a need for an efficient prediction model for making a correct diagnosis of epileptic seizures and an accurate prediction of their type. In this study, we consider how effective connectivity (EC) patterns obtained from intracranial electroencephalographic (EEG) recordings reveal information about the dynamics of the epileptic brain and can be used to predict imminent seizures, enabling patients (and caregivers) to take appropriate precautions. We use this approach because effective connectivity begins to change near seizure onset, so seizures can be predicted from this feature. Results are reported on the standard Freiburg EEG dataset, which contains data from 21 patients suffering from medically intractable focal epilepsy. Six EEG channels from each patient are considered, and effective connectivity is estimated using the Directed Transfer Function (DTF) and Granger Causality (GC) methods. We concentrate on the standard deviation of effective connectivity over time, and feature changes in five brain frequency sub-bands (alpha, beta, theta, delta, and gamma) are compared. The performance obtained by the proposed scheme is: an average prediction time of 50 minutes before seizure onset, a maximum sensitivity of approximately 80%, and a false positive rate of 0.33 FP/h. The DTF method proved more suitable for predicting epileptic seizures, and in general the best results are obtained in the gamma and beta sub-bands. This research is particularly helpful for clinical applications, especially the development of online portable devices.
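
The Granger-causality estimation can be sketched on two synthetic signals (a source series y driving x with a one-sample delay, not Freiburg EEG data): causality y → x is scored as the log ratio of residual variances between an AR model of x alone and one augmented with lags of y.

```python
import numpy as np

def granger_strength(x, y, p=2):
    """Granger causality y -> x: fit AR(p) on x alone (restricted) and with p
    extra lags of y (full); return log of the residual-variance ratio."""
    n = len(x)
    target = x[p:]
    lags = lambda s: [s[p - k : n - k] for k in range(1, p + 1)]
    A_restricted = np.column_stack(lags(x) + [np.ones(n - p)])
    A_full = np.column_stack(lags(x) + lags(y) + [np.ones(n - p)])
    resid_var = lambda A: np.var(target - A @ np.linalg.lstsq(A, target, rcond=None)[0])
    return np.log(resid_var(A_restricted) / resid_var(A_full))

# Toy coupled signals: y drives x with a one-sample delay
rng = np.random.default_rng(0)
y = rng.normal(size=2000)
x = np.concatenate([[0.0], 0.9 * y[:-1]]) + 0.1 * rng.normal(size=2000)
```

The score is large in the driving direction (y → x) and near zero in the reverse; applied channel-by-channel and band-by-band, such scores form the connectivity features whose standard deviation over time is tracked before seizures.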

Keywords: effective connectivity, Granger causality, directed transfer function, epilepsy seizure prediction, EEG

Procedia PDF Downloads 472
1816 Removal of Diesel by Soil Washing Technologies Using a Non-Ionic Surfactant

Authors: Carolina Guatemala, Josefina Barrera

Abstract:

A large number of soils are highly polluted with recalcitrant hydrocarbons, and the limitations of current bioremediation methods remain an obstacle to their efficient recovery under safe conditions. In this regard, soil washing with degradable surfactants is an alternative option, given the capacity of surfactants to desorb oily organic compounds. The aim of this study was to establish the washing conditions for a soil polluted with diesel, using a non-ionic surfactant. The soil, collected near a polluted railway station zone, was dried at room temperature and sieved through a mesh size 10 for physicochemical and biological characterization. The polluted soil was washed with surfactant solutions at a 1:5 ratio (5 g of soil per 25 mL of surfactant solution) at 28 ± 1 °C and 150 rpm for 72 hours. The factors tested were the Tween 80 surfactant concentration (1, 2, 5, and 10%) and the treatment time; residual diesel concentration was determined every 24 h. The soil had a sandy loam texture with a low organic matter content (3.68%) and conductivity (0.016 dS m⁻¹), a slightly alkaline pH of 7.63, and a total petroleum hydrocarbon (TPH) content of 11,600 ± 1058.38 mg/kg. The high TPH content could explain the low microbial count of 1.1 × 10⁵ CFU per gram of dried soil. Within the range of surfactant concentrations tested, TPH removal increased proportionally with the surfactant concentration: the maximum removal, 5080.8 ± 422.2 ppm (43.8 ± 3.64%), was reached after 72 h of contact with the 10% surfactant solution. Despite the high percentage of hydrocarbons removed, it is assumed that an even higher amount could be removed if the washing process were extended or carried out in stages.
Soil washing using surfactants as desorbing agents was found to be a viable and effective technology for the rapid recovery of soils highly polluted with recalcitrant hydrocarbons.

Keywords: diesel, hydrocarbons, soil washing, tween 80

Procedia PDF Downloads 148