Search results for: conventional computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4575

3675 Discerning Divergent Nodes in Social Networks

Authors: Mehran Asadi, Afrand Agah

Abstract:

In data mining, partitioning is used as a fundamental tool for classification. With the help of partitioning, we study the structure of data, which allows us to envision decision rules that can be applied to classification trees. In this research, we used an online social network dataset and all of its attributes (e.g., node features, labels, etc.) to determine what constitutes an above-average chance of being a divergent node. We used the R statistical computing language to conduct the analyses in this report. The data were obtained from the UC Irvine Machine Learning Repository. This research introduces the basic concepts of classification in online social networks. In this work, we address overfitting and describe different approaches for the evaluation and performance comparison of classification methods. In classification, the main objective is to categorize different items and assign them to groups based on their properties and similarities. In data mining, recursive partitioning is used to probe the structure of a data set, which allows us to envision decision rules and apply them to classify data into several groups. Estimating densities is hard, especially in high dimensions with limited data; although the true densities are unknown, they can be estimated using classical techniques. First, we calculated the correlation matrix of the dataset to see whether any predictors are highly correlated with one another. The correlation coefficients for the predictor variables show that density is strongly correlated with transitivity. We initialized a data frame to compare the quality of the resulting classification methods and applied decision trees, with k-fold cross-validation to prune the tree.
A decision tree is a non-parametric classification method that uses a set of rules to predict that each observation belongs to the most commonly occurring class among the training observations in its region. Our method aggregates many decision trees to create an optimized model that is less susceptible to overfitting. When using a decision tree, however, it is important to use cross-validation to prune the tree in order to narrow it down to the most important variables.
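As an illustration of the correlation-screening step described above, the following sketch computes a correlation matrix over synthetic network-level predictors. The variable names (density, transitivity, degree_mean) and the data are invented stand-ins for the study's actual predictors, and the sketch uses Python/NumPy rather than the R code the authors used:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Invented network-level predictors; "transitivity" is constructed to be
# strongly correlated with "density", mirroring what the abstract reports.
density = rng.uniform(0.05, 0.6, n)
transitivity = 0.8 * density + rng.normal(0.0, 0.03, n)
degree_mean = rng.uniform(1.0, 20.0, n)

X = np.column_stack([density, transitivity, degree_mean])
corr = np.corrcoef(X, rowvar=False)   # 3x3 correlation matrix
print(np.round(corr, 2))
```

A pair with a coefficient near 1 (here density vs. transitivity) is a candidate for dropping one of the two predictors before growing the tree.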

Keywords: online social networks, data mining, social cloud computing, interaction and collaboration

Procedia PDF Downloads 158
3674 Adsorption of 17α-Ethinylestradiol on Activated Carbon Based on Sewage Sludge in Aqueous Medium

Authors: Karoline Reis de Sena

Abstract:

Endocrine disruptors are unregulated or not fully regulated compounds, even in the most developed countries, and can be a danger to the environment and human health. They pass untreated through the secondary stage of conventional wastewater treatment plants; the effluent from these plants is then discharged into rivers, upstream and downstream of the drinking water treatment plants that use the same river water as their tributary. Long-term consumption of drinking water containing low concentrations of these compounds can cause health problems; they are persistent in nature and difficult to remove. Accordingly, research on emerging pollutants is expanding and is fueled by progress in finding appropriate methods for treating wastewater. Adsorption is the most common separation process; it is a simple and low-cost operation, but it is not eco-efficient. In this context, biosorption emerges as a subcategory of adsorption in which the biosorbent is biomass. It presents numerous advantages over conventional treatment methods, such as low cost, high efficiency, minimal use of chemicals, no need for additional nutrients, the regeneration capacity of the biosorbent, and the natural abundance of the biomass used to produce biosorbents. Thus, the use of alternative materials, such as sewage sludge, for the synthesis of adsorbents has proved to be an economically viable alternative, combining the valorization of the generated by-product streams with the management of their correct disposal. In this work, an alternative for the management of sewage sludge is proposed: transforming it into activated carbon and using it in the adsorption process of 17α-ethinylestradiol.
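The batch-adsorption bookkeeping behind such experiments is a simple mass balance; the sketch below computes the equilibrium uptake q_e = (C0 - Ce) * V / m and the removal efficiency. The concentrations, solution volume, and adsorbent mass are hypothetical values for illustration, not measurements from the study:

```python
def adsorption_capacity(c0_mg_l, ce_mg_l, volume_l, mass_g):
    """Equilibrium uptake q_e = (C0 - Ce) * V / m,
    in mg of adsorbate per g of adsorbent."""
    return (c0_mg_l - ce_mg_l) * volume_l / mass_g

def removal_percent(c0_mg_l, ce_mg_l):
    """Removal efficiency as a percentage of the initial concentration."""
    return 100.0 * (c0_mg_l - ce_mg_l) / c0_mg_l

# Hypothetical batch test: 50 mL of a 10 mg/L EE2 solution, 0.1 g of
# sludge-based activated carbon, residual 2 mg/L after equilibration.
q_e = adsorption_capacity(10.0, 2.0, 0.05, 0.1)
removal = removal_percent(10.0, 2.0)
print(q_e, removal)
```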

Keywords: 17α-ethinylestradiol, adsorption, activated carbon, sewage sludge, micropollutants

Procedia PDF Downloads 95
3673 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing

Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto

Abstract:

Computational Fluid Dynamics blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consists of a simplified centrifugal blood pump model that contains fluid flow features as they are commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study is composed of six test cases with different volumetric flow rates ranging from 2.5 to 7.0 liters per minute, different pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the framework of this study, different turbulence models were tested, including RANS models (e.g., k-omega, k-epsilon, and a Reynolds Stress Model (RSM)) and LES. The partitioners Hilbert, METIS, ParMETIS, and SCOTCH were used to create an unstructured mesh of 76 million elements and were compared in their efficiency. Computations were performed on the JUQUEEN BG/Q architecture using the highly parallel flow solver Code_Saturne, typically on 32,768 or more processors in parallel. Visualisations were performed with ParaView. All six flow situations could be successfully analysed with the different turbulence models and validated against analytical considerations and comparisons with other databases. The results showed that an RSM represents an appropriate choice for modeling high-Reynolds-number flow cases. In particular, the Rij-SSG (Speziale, Sarkar, Gatski) variant proved to be a good approach. Visualisation of complex flow features could be obtained, and the flow situation inside the pump could be characterized.
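The quoted Reynolds-number range can be reproduced with a rotational Reynolds number Re = rho * omega * D^2 / mu. The fluid properties and impeller diameter below are assumed, commonly used blood-analog values chosen for illustration, not parameters taken from this paper:

```python
import math

def rotational_reynolds(rho, omega_rad_s, diameter_m, mu):
    """Rotational Reynolds number Re = rho * omega * D**2 / mu."""
    return rho * omega_rad_s * diameter_m ** 2 / mu

# Assumed values: blood-analog density 1035 kg/m^3, dynamic viscosity
# 3.5 mPa*s, impeller diameter 52 mm, pump speeds 2500 and 3500 rpm.
rho, mu, diameter = 1035.0, 3.5e-3, 0.052
re_2500 = rotational_reynolds(rho, 2500 * 2 * math.pi / 60, diameter, mu)
re_3500 = rotational_reynolds(rho, 3500 * 2 * math.pi / 60, diameter, mu)
print(round(re_2500), round(re_3500))  # roughly spans 210,000 to 293,000
```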

Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence

Procedia PDF Downloads 382
3672 The Relationship between Environmental Factors and Purchasing Decisions in the Residential Market in Sweden

Authors: Agnieszka Zalejska-Jonsson

Abstract:

The Swedish Green Building Council (SGBC) was established in 2009. Since then, over 1000 buildings have been certified, of which approximately 600 are newly produced and 340 are residential buildings. During that time, approximately 2000 apartment buildings have been built in Sweden. This means that over a five-year period, 17% of residential buildings have been certified according to the environmental building scheme. The certification of a building is not a guarantee of environmental progress, but it gives us an indication of the extent of that progress. The overarching aim of this study is to investigate the factors behind the relatively slow evolution of the green residential housing market in Sweden. The intention is to examine stated willingness to pay (WTP) for green and low-energy apartments, and to explore which factors have a significant effect on stated WTP among apartment owners. A green building was defined as a building certified according to the environmental scheme, and a low-energy building as a building designed and constructed with high energy-efficiency goals. Data for this study were collected through a survey conducted among occupants of comparable apartment buildings: two green and one conventional. The total number of received responses was 429: green A (N=160), response rate 42%; green B (N=138), response rate 35%; and conventional (N=131), response rate 43%. The study applied a quasi-experimental method. Survey responses regarding factors affecting the purchase of an apartment, stated WTP, and environmental literacy were analysed using descriptive statistics, the Mann–Whitney (rank sum) test, and logistic models. Comments received from respondents were used for further interpretation of the results. Results indicate that environmental education has a significant effect on stated WTP.
Occupants who declared higher WTP showed a higher level of environmental literacy and indicated that energy efficiency was one of the important factors that affected their decision to buy an apartment. Generally, the respondents were more likely to pay more for low energy buildings than for green buildings. This is to a great extent a consequence of rational customer behaviour and difficulty in apprehending the meaning of green building certification. The analysis shows that people living in green buildings indicate higher WTP for both green and low energy buildings, the difference being statistically significant. It is concluded that growth in the green housing market in Sweden might be achieved if policymakers and developers engage in active education in the environmental labelling system. The demand for green buildings is more likely to increase when the difference between green and conventional buildings is easily understood and information is not only delivered by the estate agent, but is part of an environmental education programme.
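For readers unfamiliar with the Mann–Whitney (rank sum) test used here, the sketch below computes the U statistic from pooled ranks on invented stated-WTP samples (no tie correction; the numbers are illustrative, not the survey data):

```python
import numpy as np

def mann_whitney_u(x, y):
    """U statistic via the rank-sum definition: rank the pooled sample,
    then U = R_x - n_x * (n_x + 1) / 2 (assumes no tied values)."""
    pooled = np.concatenate([x, y])
    ranks = pooled.argsort().argsort() + 1   # ordinal ranks, 1-based
    r_x = ranks[: len(x)].sum()
    return r_x - len(x) * (len(x) + 1) / 2

# Invented stated-WTP premiums (%) for green vs. conventional occupants
green = np.array([6.0, 8.0, 5.0, 9.0, 7.0, 10.0])
conv = np.array([3.0, 4.0, 2.0, 1.0, 0.5, 6.5])
u = mann_whitney_u(green, conv)
print(u)   # lies between 0 and n_x * n_y = 36
```

In practice one would use scipy.stats.mannwhitneyu, which also handles ties and reports a p-value; the point here is only the rank-sum mechanics.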

Keywords: consumer, environmental education, housing market, stated WTP, Sweden

Procedia PDF Downloads 241
3671 Accelerated Carbonation of Construction Materials by Using Slag from Steel and Metal Production as Substitute for Conventional Raw Materials

Authors: Karen Fuchs, Michael Prokein, Nils Mölders, Manfred Renner, Eckhard Weidner

Abstract:

The production of sand-lime bricks is of great concern because of its high energy consumption and the resulting CO₂ emissions. In particular, the production of quicklime from limestone and the energy consumed by hydrothermal curing contribute to high CO₂ emissions. Hydrothermal curing is carried out under a saturated steam atmosphere at about 15 bar and 200°C for 12 hours. Therefore, we are investigating the possibility of replacing quicklime and sand in the production of building materials with different types of slag, a calcium-rich waste from steel production. We are also investigating the possibility of substituting conventional hydrothermal curing with CO₂ curing. Six different slags (Linz-Donawitz (LD), ferrochrome (FeCr), ladle (LS), stainless steel (SS), ladle furnace (LF), and electric arc furnace (EAF)) provided by "thyssenkrupp MillServices & Systems GmbH" were ground at "Loesche GmbH". Cylindrical blocks with a diameter of 100 mm were pressed at 12 MPa. The composition of the blocks varied between pure slag and mixtures of slag and sand. The effects of pressure, temperature, and time on the CO₂ curing process were studied in a 2-liter high-pressure autoclave. Pressures between 0.1 and 5 MPa, temperatures between 25 and 140°C, and curing times between 1 and 100 hours were considered. The quality of the CO₂-cured blocks was determined by measuring the compressive strength by "Ruhrbaustoffwerke GmbH & Co. KG". The degree of carbonation was determined by total inorganic carbon (TIC) and X-ray diffraction (XRD) measurements. The pH trends in the cross-section of the blocks were monitored using phenolphthalein as a liquid pH indicator. The parameter set that yielded the best-performing material was tested on all slag types. In addition, the method was scaled up to steel-slag-based building blocks (240 mm x 115 mm x 60 mm) provided by "Ruhrbaustoffwerke GmbH & Co. KG" and CO₂-cured in a 20-liter high-pressure autoclave.
The results show that CO₂ curing of building blocks consisting of pure wetted LD slag leads to severe cracking of the cylindrical specimens; the high CO₂ uptake leads to an expansion of the specimens. However, if LD slag is used only in proportion, replacing quicklime completely and sand partially, dimensionally stable bricks with high compressive strength are produced. The tests to determine the optimum pressure and temperature show 2 MPa and 50°C to be promising parameters for the CO₂ curing process. At these parameters, and after 3 h, the compressive strength of LD slag blocks reaches the highest average value of almost 50 N/mm², more than double that of conventional sand-lime bricks. Longer CO₂ curing times do not result in higher compressive strengths. XRD and TIC measurements confirmed the formation of carbonates. All tested slag-based bricks show higher compressive strengths than conventional sand-lime bricks; however, the type of slag has a significant influence on the compressive strength values. The results of the tests in the 20-liter plant agreed well with the results of the 2-liter tests. With its comparatively moderate operating conditions, the CO₂ curing process has a high potential for saving CO₂ emissions.
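The carbonation chemistry underlying the TIC results is the reaction CaO + CO₂ → CaCO₃, which puts a stoichiometric ceiling on CO₂ uptake. The sketch below computes that ceiling from an assumed reactive-CaO fraction (the 45% figure is illustrative, not a measured composition from this work):

```python
M_CAO = 56.08   # g/mol, molar mass of CaO
M_CO2 = 44.01   # g/mol, molar mass of CO2

def max_co2_uptake_g_per_g(cao_mass_fraction):
    """Stoichiometric ceiling for carbonation (CaO + CO2 -> CaCO3):
    grams of CO2 bound per gram of slag, given its reactive CaO fraction."""
    return cao_mass_fraction * M_CO2 / M_CAO

uptake = max_co2_uptake_g_per_g(0.45)   # assumed 45 % reactive CaO
print(round(uptake, 3))
```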

Keywords: CO₂ curing, carbonation, CCU, steel slag

Procedia PDF Downloads 104
3670 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint

Authors: Semachew M. Kassa, Africa M Geremew, Tezera F. Azmatch, Nandyala Darga Kumar

Abstract:

Because landslides can seriously harm both the environment and society, landslide susceptibility maps are commonly developed from past landslide failure points, for example with frequency ratio (FR) and analytical hierarchy process (AHP) methods. However, it is still difficult to select the most efficient method and to correctly identify the main driving factors for particular regions. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, namely Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict landslide susceptibility at a 12.5 m spatial resolution. The performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than that of the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) methods, according to the classification results based on inventory landslide points. The findings also showed that around 35% of the study region consists of areas with high and very high landslide risk (susceptibility greater than 0.5). The very-high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns of landslide susceptibility. The areas with the highest landslide risk include the western and northern parts of Amhara Saint Town and the St. Gebreal Church villages, with mean susceptibility values greater than 0.5. Rainfall, distance to road, and slope were typically among the top leading factors for most villages, although the primary contributing factors to landslide vulnerability varied slightly across the five models. Decision-makers and policy planners can use the information from our study to make informed decisions and establish policies.
The study also suggests that different areas should adopt different safeguards to reduce or prevent serious damage from landslide events.
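The F1 and AUC figures quoted above can be reproduced from first principles; the sketch below implements both metrics (AUC via its Mann-Whitney formulation) on a tiny invented set of susceptibility scores, not the study's data:

```python
def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall for binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def auc(y_true, scores):
    """AUC as the probability that a random positive outranks a random
    negative (Mann-Whitney formulation; ties count as 1/2)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented susceptibility scores against inventory landslide labels
y = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1]
f1 = f1_score(y, [int(s > 0.5) for s in scores])
roc_auc = auc(y, scores)
print(f1, roc_auc)
```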

Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine

Procedia PDF Downloads 82
3669 Investigation and Estimation of State of Health of Battery Pack in Battery Electric Vehicles: Online Battery Characterization

Authors: Ali Mashayekh, Mahdiye Khorasani, Thomas Weyh

Abstract:

The tendency to use battery electric vehicles (BEVs) for low and medium driving ranges, and even for high driving ranges, has been growing. As a result, higher safety, reliability, and durability of the battery pack, a component that accounts for a great share of the cost and weight of the final product, are topics to be considered and investigated. Battery aging can be considered the predominant factor regarding the reliability and durability of a BEV. To better understand the aging process, offline battery characterization has been widely used, which is time-consuming and requires very expensive infrastructure. This paper presents a substitute for the conventional battery characterization methods based on battery Modular Multilevel Management (BM3). In this topology, the battery cells can be discharged and charged according to their capacity, which allows varying battery pack structures. Due to the integration of the power electronics, the output voltage of the battery pack is no longer fixed but can be dynamically adjusted in small steps. In other words, each cell can take three different states, namely series, parallel, and bypass, in connection with the neighboring cells. With the help of MATLAB/Simulink and the BM3 modules, a battery string model is created. This model allows us to switch two cells with different SoC in parallel, which results in internal balancing of the cells. If the parallel switching lasts for just a couple of milliseconds, however, it produces a perturbation pulse that can stimulate the cells out of the relaxation phase. By modeling the voltage response to this pulse, it is possible to characterize the cell. The online EIS method discussed in this paper can be a robust substitute for the conventional battery characterization methods.
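The RLS keyword suggests recursive least squares as the online fitting tool for such voltage responses. The sketch below identifies an assumed first-order cell model V = OCV - R_int * I from synthetic perturbation-pulse data; the model structure, parameter values, and noise level are assumptions for illustration, not the paper's actual cell model:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.99):
    """One recursive least-squares step for the linear model y = x @ theta,
    with forgetting factor lam."""
    Px = P @ x
    k = Px / (lam + x @ Px)               # gain vector
    theta = theta + k * (y - x @ theta)   # correct the estimate
    P = (P - np.outer(k, Px)) / lam       # covariance update
    return theta, P

# Assumed cell model: V = OCV - R_int * I, excited by current pulses
# such as those produced by brief BM3 parallel switching.
rng = np.random.default_rng(1)
ocv_true, r_true = 3.7, 0.05              # V and Ohm (invented values)
theta = np.zeros(2)                       # estimates of [OCV, R_int]
P = np.eye(2) * 1e3
for _ in range(200):
    i = rng.uniform(0.0, 5.0)             # pulse current in A
    v = ocv_true - r_true * i + rng.normal(0.0, 1e-3)
    theta, P = rls_update(theta, P, np.array([1.0, -i]), v)
print(np.round(theta, 3))                 # converges toward [3.7, 0.05]
```

The forgetting factor lets the same loop track slow parameter drift, which is what makes recursive fitting attractive for online state-of-health estimation.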

Keywords: battery characterization, SoH estimation, RLS, BEV

Procedia PDF Downloads 149
3668 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass

Authors: Ricardo Torcato, Helder Morais

Abstract:

The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow parts with complex shapes to be obtained repeatably, and the finishing operations use intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. It is intended to investigate the applicability of different computerized finishing technologies, namely milling and grinding in a CNC machining center with or without ultrasonic assistance, to crystal processing. Research in the field of grinding hard and brittle materials, despite not being extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. However, it can be said that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought about by the use of ultrasound compared to conventional crystal grinding. This presentation is focused on the mechanical characterization and the analysis of the cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness, based on the cracks that appear in the indentation. The mechanical impulse excitation test estimates the Young's modulus, shear modulus, and Poisson ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process.
The tests were designed according to the Taguchi method to correlate the input parameters (feed rate, tool rotation speed, and depth of cut) with the output parameters (surface roughness and cutting forces) and to optimize the process (the best roughness achievable with cutting forces that do not compromise the material structure or the tool life), using ANOVA. This study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding.
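The ANOVA used to rank the Taguchi factors reduces to comparing between-group and within-group variance. The sketch below computes a one-way F statistic for invented cutting-force measurements at three feed-rate levels (the numbers are illustrative, not the study's measurements):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square."""
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Invented cutting forces (N) at three feed-rate levels, three runs each
low = [12.1, 11.8, 12.4]
mid = [15.0, 14.6, 15.3]
high = [19.2, 18.8, 19.5]
f_stat = one_way_anova_f([low, mid, high])
print(round(f_stat, 1))   # a large F suggests feed rate is a dominant factor
```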

Keywords: CNC machining, crystal glass, cutting forces, hardness

Procedia PDF Downloads 154
3667 Energy Efficient Massive Data Dissemination Through Vehicle Mobility in Smart Cities

Authors: Salman Naseer

Abstract:

One of the main challenges of operating a smart city (SC) is collecting the massive data generated by multiple data sources (DS) and transmitting them to the control units (CU) for further data processing and analysis. These ever-increasing data demands not only require more and more capacity in the transmission channels but also result in resource over-provisioning to meet resilience requirements, and thus in unavoidable waste because of the data fluctuations throughout the day. In addition, the high energy consumption (EC) and carbon discharges of these data transmissions pose serious threats to the environment we live in. Therefore, to overcome the issues of intensive EC and carbon emissions (CE) of massive data dissemination in smart cities, we propose an energy-efficient and carbon-reducing approach that utilizes the daily mobility of existing vehicles as an alternative communications channel to accommodate the data dissemination in smart cities. To illustrate the effectiveness and efficiency of our approach, we take Auckland City in New Zealand as an example, assuming massive data generated by various sources geographically scattered throughout the Auckland region and destined for the control centres located in the city centre. The numerical results show that our proposed approach can provide up to 5 times lower delay when transferring large volumes of data via the existing daily mobility of vehicles than the conventional transmission network. Moreover, our proposed approach incurs about 30% less EC and CE than the conventional network transmission approach.
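The delay trade-off at the heart of the proposal can be sketched with a back-of-the-envelope comparison: pushing a bulk payload through a fixed uplink versus letting a vehicle that is already making the trip carry it. All numbers below (data volume, bandwidth, distance, speed) are hypothetical, not the paper's Auckland parameters:

```python
def network_delay_s(data_gb, bandwidth_mbps):
    """Time to push data through a fixed link, ignoring protocol overhead."""
    return data_gb * 8e3 / bandwidth_mbps     # GB -> megabits, then / Mbps

def vehicle_delay_s(distance_km, speed_kmh, load_unload_s=60.0):
    """Time for a vehicle already making the trip to carry the data."""
    return distance_km / speed_kmh * 3600 + load_unload_s

# Hypothetical scenario: 500 GB from a suburban source to a city-centre
# control unit 20 km away, over a 100 Mbps uplink vs. a 40 km/h vehicle.
d_net = network_delay_s(500, 100)    # 40,000 s (about 11 h)
d_veh = vehicle_delay_s(20, 40)      # 1,860 s (about 31 min)
print(d_net, d_veh, d_net / d_veh)
```

The gap grows with payload size, which is why opportunistic vehicle carriage suits bulk, delay-tolerant data rather than latency-critical control traffic.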

Keywords: smart city, delay tolerant network, infrastructure offloading, opportunistic network, vehicular mobility, energy consumption, carbon emission

Procedia PDF Downloads 142
3666 Prototype Development of Knitted Buoyant Swimming Vest for Children

Authors: Nga-Wun Li, Chu-Po Ho, Kit-Lun Yick, Jin-Yun Zhou

Abstract:

The use of buoyant vests incorporated with swimsuits can develop children's confidence in the water, particularly for novice swimmers. Consequently, parents tend to purchase buoyant swimming vests for their children to reduce their anxiety toward water. Although conventional buoyant swimming vests provide a buoyant function to the wearer, their bulkiness and hardness make children feel uncomfortable and unwilling to wear them. This study aimed to apply inlay knitting technology to design new functional buoyant swimming vests for children. The prototype involves a shell and a buoyant knitted layer, which is the main medium providing buoyancy. Polypropylene yarn and 6.4 mm Expandable Polyethylene (EPE) foam were fabricated in full needle stitch with inlay knitting technology and were then linked by sewing to form the buoyant layer. The shell of the knitted buoyant vest was made of polypropylene circular-knitted fabric. The structure of the knitted fabrics makes the buoyant swimsuit inherently stretchable, and the arrangement of the inlaid material was designed around body movement to improve the ease with which the swimmer moves. Further, the shoulder seam is placed at the back to minimize irritation of the wearer. Apart from maintaining the buoyant function, this prototype reduces the bulkiness and improves the softness of the conventional buoyant swimming vest by taking advantage of a knitted garment. The results of this study are significant for the development of buoyant swimming vests for both the textile industry and the fast-growing sportswear industry.
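The buoyancy that the EPE inlay supplies follows directly from Archimedes' principle. The sketch below estimates the net upthrust of a fully submerged foam layer; the panel area and foam density are assumed for illustration and are not measurements from this study:

```python
RHO_WATER = 1000.0   # kg/m^3, fresh water
G = 9.81             # m/s^2

def net_buoyant_force_n(foam_volume_m3, foam_density_kg_m3):
    """Archimedes: upthrust on fully submerged foam minus the foam's own weight."""
    return (RHO_WATER - foam_density_kg_m3) * foam_volume_m3 * G

# Assumed vest: 6.4 mm EPE foam over roughly 0.35 m^2 of panels, foam
# density ~25 kg/m^3 (typical of expanded polyethylene, not measured here).
volume = 0.35 * 0.0064
force = net_buoyant_force_n(volume, 25.0)
print(round(force, 1))   # net lift in newtons (on the order of 2 kgf)
```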

Keywords: knitting technology, buoyancy, inlay, swimming vest, functional garment

Procedia PDF Downloads 112
3665 The DAQ Debugger for iFDAQ of the COMPASS Experiment

Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius

Abstract:

In general, state-of-the-art Data Acquisition Systems (DAQ) in high energy physics experiments must satisfy high requirements in terms of reliability, efficiency, and data rate capability. This paper presents the development and deployment of a debugging tool named DAQ Debugger for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. Utilizing a hardware event builder, the iFDAQ is designed to be able to read out data at the experiment's average maximum rate of 1.5 GB/s. In complex software such as the iFDAQ, with thousands of lines of code, the debugging process is absolutely essential to reveal all software issues. Unfortunately, conventional debugging of the iFDAQ is not possible during real data taking. The DAQ Debugger is a tool for identifying a problem, isolating its source, and then either correcting the problem or determining a way to work around it. It provides a layer that is easy to integrate into any process and has no impact on process performance. Based on the handling of system signals, the DAQ Debugger represents an alternative to the conventional debuggers provided by most integrated development environments. Whenever a problem occurs, it generates reports containing all the information necessary for deeper investigation and analysis. The DAQ Debugger was fully incorporated into all processes in the iFDAQ during the 2016 run. It helped to reveal remaining software issues and significantly improved the stability of the system in comparison with the previous run. In the paper, we present the DAQ Debugger from several perspectives and discuss it in detail.
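The signal-handling idea can be sketched in a few lines: register a handler for the signals a process can intercept, and have it dump a stack report for offline analysis instead of letting the process die silently. This is a minimal Python analogue of the concept, not the iFDAQ implementation (the actual tool lives in the C++/Qt process):

```python
import signal
import traceback

REPORT_PATH = "daq_crash_report.txt"

def report_handler(signum, frame):
    """On a signal, write a report with the current stack so the failure
    can be analysed offline, as the DAQ Debugger concept suggests."""
    with open(REPORT_PATH, "w") as f:
        f.write(f"caught signal {signum} ({signal.Signals(signum).name})\n")
        traceback.print_stack(frame, file=f)

# Register for signals a Python process can realistically intercept
for sig in (signal.SIGTERM, signal.SIGINT):
    signal.signal(sig, report_handler)

# Simulate delivery of SIGTERM to this process, then read back the report
signal.raise_signal(signal.SIGTERM)
first_line = open(REPORT_PATH).read().splitlines()[0]
print(first_line)
```

Because the handler only writes a report and returns, the monitored process keeps running, which mirrors the "no impact on the process" requirement stated in the abstract.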

Keywords: DAQ Debugger, data acquisition system, FPGA, system signals, Qt framework

Procedia PDF Downloads 284
3664 Re-Framing Resilience Turn in Risk and Management with Anti-Positivistic Perspective of Holling's Early Work

Authors: Jose Cañizares

Abstract:

In the last decades, resilience has received much attention in relation to understanding and managing new forms of risk, especially in the context of urban adaptation to climate change. There are abundant concerns, however, about how best to interpret resilience and related ideas, and about whether they can guide ethically appropriate risk-related or adaptation efforts. Narrative creation and framing are critical steps in shaping public discussion and policy in large-scale interventions, since they favor or inhibit early decision and interpretation habits, which can be morally sensitive and then become persistent over time. This article adds to such a framing process by contesting a conventional narrative on resilience and offering an alternative one. Conventionally, present ideas on resilience are traced to the work of ecologist C. S. Holling, especially to his article Resilience and Stability of Ecological Systems. This article is usually portrayed as a contribution of complex systems thinking to theoretical ecology, where Holling appeals to resilience in order to challenge received views on ecosystem stability and the diversity-stability hypothesis. In this regard, resilience is construed as a "purely scientific", precise and descriptive concept, denoting a complex property that allows ecosystems to persist, or to maintain functions, after disturbance. Yet these formal features of resilience supposedly changed with Holling's later work in the 90s, where, it is argued, Holling began to use resilience as a more pragmatic "boundary term", aimed at unifying transdisciplinary research about risks, ecological or otherwise, and at articulating public debate and governance strategies on the issue. In the conventional story, increased vagueness and degrees of normativity are the price to pay for this conceptual shift, which has made the term more widely usable, but also incompatible with scientific purposes and morally problematic (if not completely objectionable).
This paper builds on a detailed analysis of Holling’s early work to propose an alternative narrative. The study will show that the “complexity turn” has often entangled theoretical and pragmatic aims. Accordingly, Holling’s primary aim was to fight what he termed “pathologies of natural resource management” or “pathologies of command and control management”, and so, the terms of his reform of ecosystem science are partly subordinate to the details of his proposal for reforming the management sciences. As regards resilience, Holling used it as a polysemous, ambiguous and normative term: sometimes, as an instrumental value that is closely related to various stability concepts; other times, and more crucially, as an intrinsic value and a tool for attacking efficiency and instrumentalism in management. This narrative reveals the limitations of its conventional alternative and has several practical advantages. It captures well the structure and purposes of Holling’s project, and the various roles of resilience in it. It helps to link Holling’s early work with other philosophical and ideological shifts at work in the 70s. It highlights the currency of Holling’s early work for present research and action in fields such as risk and climate adaptation. And it draws attention to morally relevant aspects of resilience that the conventional narrative neglects.

Keywords: resilience, complexity turn, risk management, positivistic, framing

Procedia PDF Downloads 164
3663 Internet of Things Networks: Denial of Service Detection in Constrained Application Protocol Using Machine Learning Algorithm

Authors: Adamu Abdullahi, On Francisca, Saidu Isah Rambo, G. N. Obunadike, D. T. Chinyio

Abstract:

The paper discusses the potential threat of Denial of Service (DoS) attacks on constrained application protocol (CoAP) networks in the Internet of Things (IoT). As billions of IoT devices are expected to be connected to the internet in the coming years, the security of these devices is vulnerable to attacks that disrupt their functioning. This research aims to tackle this issue by applying mixed qualitative and quantitative methods for feature selection and extraction, together with clustering algorithms, to detect DoS attacks on the Constrained Application Protocol (CoAP) using machine learning algorithms (MLA). The main objective of the research is to enhance the security scheme for CoAP in the IoT environment by analyzing the nature of DoS attacks and identifying a new set of features for detecting them in the IoT network environment. The aim is to demonstrate the effectiveness of the MLA in detecting DoS attacks and to compare it with conventional intrusion detection systems for securing CoAP in the IoT environment. Findings: the research identifies the appropriate node for detecting DoS attacks in the IoT network environment and demonstrates how to detect the attacks through the MLA. The detection accuracy in both the classification and network simulation environments shows that the k-means algorithm scored the highest percentage in the training and testing evaluations; the network simulation platform achieved the highest overall accuracy of 99.93%. This work also reviews conventional intrusion detection systems for securing CoAP in the IoT environment and discusses the DoS security issues associated with CoAP.
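Since k-means is reported as the best-scoring algorithm, the sketch below shows the basic idea on invented CoAP traffic features: cluster request-rate/payload vectors and flag the high-rate cluster as DoS-like. The feature choice, data, and thresholding are assumptions for illustration, not the paper's pipeline:

```python
import numpy as np

def kmeans(X, k=2, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign points to the nearest centroid,
    then recompute centroids; repeat."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return labels, centroids

# Invented features per traffic window: [requests/s, mean payload bytes]
rng = np.random.default_rng(1)
normal = rng.normal([5.0, 40.0], [1.0, 5.0], size=(50, 2))
flood = rng.normal([200.0, 12.0], [20.0, 3.0], size=(10, 2))  # DoS-like burst
X = np.vstack([normal, flood])

labels, centroids = kmeans(X)
attack_cluster = int(np.argmax(centroids[:, 0]))   # high request-rate cluster
flagged = (labels[-10:] == attack_cluster).mean()  # fraction of flood caught
print(flagged)
```

In a real deployment the features would be standardized first, since raw request rates dominate the Euclidean distance; here the scale imbalance is what makes the toy example separate cleanly.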

Keywords: algorithm, CoAP, DoS, IoT, machine learning

Procedia PDF Downloads 80
3662 Impacts of Sociological Dynamics on Entomophagy Practice and Food Security in Nigeria

Authors: O. B. Oriolowo, O. J. John

Abstract:

Empirical findings have shown insects to be nutritious and a good source of food for humans. However, human food preferences are determined not only by the nutritional value of the food consumed but, more importantly, by sociological and economic pressures. This study examined the interrelation between science and sociology in sustaining the acceptance of entomophagy among college students to combat food insecurity. A twenty-item, five-point Likert scale instrument, the College Students Entomophagy Questionnaire (CSEQ), was used to elicit information from the respondents. The reliability coefficient was 0.91, obtained using the Spearman-Brown prophecy formula. Three research questions and three hypotheses were raised. In addition, quantitative nutritional analyses of a few insects and some established conventional protein sources were undertaken in order to compare their nutritional status. The data collected were analyzed using descriptive statistics (percentages) and inferential statistics (correlation and Analysis of Variance, ANOVA). The results showed that entomophagy has a cultural heritage among different tribes in Nigeria and is an acceptable practice; it cuts across every social stratum and is practiced among both major religions. Moreover, the insects compared favourably in terms of nutrient content with the conventional animal protein sources analyzed. However, there is a gradual decline in the practice of entomophagy among students, which may be attributed to the influence of western civilization. This study therefore recommends intensified research and public enlightenment on the usefulness of entomophagy, so as to preserve its cultural heritage as well as boost human food security.
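As a brief illustration of the reliability estimate mentioned above, the Spearman-Brown prophecy formula steps a split-half correlation up to a full-test reliability. The half-test correlation below is hypothetical, chosen only so that the result lands near the reported 0.91; it is not the study's data.

```python
# Worked illustration (hypothetical numbers, not the authors' data) of the
# Spearman-Brown prophecy formula for estimating full-test reliability.

def spearman_brown(r, k=2):
    """Predicted reliability when test length is multiplied by k."""
    return k * r / (1 + (k - 1) * r)

r_half = 0.835                      # hypothetical split-half correlation
r_full = spearman_brown(r_half)     # stepped-up reliability, about 0.91
```

With k=2 this is the classic split-half correction; larger k predicts reliability if the questionnaire were lengthened by that factor.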

Keywords: entomophagy, food security, malnutrition, poverty alleviation, sociology

Procedia PDF Downloads 121
3661 High Performance Concrete Using “BAUT” (Metal Aggregates): The Gateway to New Concrete Technology for Mega Structures

Authors: Arjun, Gautam, Sanjeev Naval

Abstract:

Concrete technology has been changing rapidly and constantly since its discovery. Concrete is the most widely used man-made construction material and the second most consumed material on earth. In this paper, an effort has been made to use metal aggregates in concrete; the metal aggregates, named “BAUT”, have outstanding qualities to resist shear, tension, and compression forces. Coarse BAUT aggregates (C.B.A.) of 10 mm and 20 mm and fine BAUT aggregates (F.B.A.) of 3 mm were used for making high performance concrete (H.P.C.). The BAUT geometry was drafted and designed using AutoCAD, and ANSYS software can be used effectively for analysis. In this research paper, we study high performance concrete with BAUT for a grade of M65 and achieve compressive strengths of 90-95 MPa, suitable for mega structures and irregular structures where the centre of gravity (CG) is not balanced. High performance BAUT concrete offers extraordinary qualities such as long-term performance, no sorptivity of the BAUT aggregates, and better rheological, mechanical, and durability properties than conventional concrete. This high strength BAUT concrete can be applied in the construction of mega structures such as skyscrapers, dams, marine/offshore structures, nuclear power plants, bridges, and blast and impact resistant structures. High performance BAUT concrete is a controlled concrete that possesses invariably high strength, reasonable workability, and negligible permeability compared to conventional concrete, achieved by mixing in superplasticizers (SMF), silica fume, and fly ash.

Keywords: BAUT, High Strength Concrete, High Performance Concrete, Fine BAUT Aggregate, Coarse BAUT Aggregate, metal aggregates, cutting edge technology

Procedia PDF Downloads 502
3660 Assessment of Soil Quality Indicators in Rice Soils Under Rainfed Ecosystem

Authors: R. Kaleeswari

Abstract:

An investigation was carried out to assess soil biological quality parameters in rice soils under a rainfed ecosystem, to compare soil quality indexing methods, viz., principal component analysis (PCA), minimum data set (MDS) and indicator scoring, and to develop soil quality indices for formulating soil and crop management strategies. Soil samples were collected and analyzed for soil biological properties following standard procedures. The biological indicators determined for soil quality assessment were microbial biomass carbon and nitrogen (MBC and MBN), potentially mineralizable nitrogen (PMN), soil respiration and dehydrogenase activity (DHA). Among the methods of rice cultivation, organic nutrition, Integrated Nutrient Management (INM) and the System of Rice Intensification (SRI) registered higher values of MBC, MBN and PMN, while mechanical and conventional rice cultivation registered lower values of these biological quality indicators. Organic nutrient management and INM enhanced the soil respiration rate; SRI and aerobic rice cultivation also increased it, while conventional and mechanical rice farming lowered it. DHA was higher in soils under organic nutrition and INM; SRI and aerobic rice cultivation enhanced DHA, while conventional and mechanical rice cultivation reduced it. The MBC of the rice soils varied from 65 to 244 mg kg⁻¹; among the nutrient management practices, INM registered the highest MBC of 285 mg kg⁻¹. The PMN content of the rice soils varied from 20.3 to 56.8 mg kg⁻¹; aerobic rice farming registered the highest PMN of 78.9 mg kg⁻¹. The soil respiration rate varied from 60 to 125 µg CO₂ g⁻¹; INM registered the highest soil respiration rate of 129 µg CO₂ g⁻¹. The dehydrogenase activity varied from 38.3 to 135.3 µg TPF g⁻¹ day⁻¹; the SRI method of rice cultivation registered the highest DHA of 160.2 µg TPF g⁻¹ day⁻¹. PCA was used to select the representative soil quality indicators: in intensive rice cultivating regions, indicators were selected based on factor loading values and contribution percentages. Variables showing significant differences between production systems, taken from each principal component, were used to prepare the minimum data set (MDS).
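The PCA-to-MDS selection can be sketched as follows. This is a minimal illustration on synthetic data: the indicator names, the synthetic factor structure, and the eigenvalue-greater-than-one (Kaiser) retention rule are assumptions for the example, not taken from the study's measurements.

```python
# Minimal sketch of PCA-based indicator selection on synthetic data.
# Indicator names, the factor structure, and the Kaiser (eigenvalue > 1)
# retention rule are assumptions for illustration, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
names = ["MBC", "MBN", "PMN", "respiration", "DHA"]
n = 60
biomass = rng.normal(size=n)     # latent "microbial biomass" factor
activity = rng.normal(size=n)    # latent "microbial activity" factor
X = np.column_stack([
    biomass + 0.1 * rng.normal(size=n),    # MBC
    biomass + 0.2 * rng.normal(size=n),    # MBN
    activity + 0.2 * rng.normal(size=n),   # PMN
    activity + 0.1 * rng.normal(size=n),   # respiration
    rng.normal(size=n),                    # DHA, mostly independent
])
Z = (X - X.mean(axis=0)) / X.std(axis=0)           # standardize
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]                  # sort PCs by eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
mds = {names[int(np.abs(eigvecs[:, i]).argmax())]  # top loading per PC
       for i in range(len(names)) if eigvals[i] > 1.0}
```

Each retained component contributes its highest-loading indicator, so the resulting `mds` set keeps one representative per underlying factor instead of the full, partly redundant, indicator list.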

Keywords: soil quality, rice, biological properties, PCA analysis

Procedia PDF Downloads 110
3659 Conservation Agriculture under Mediterranean Climate: Effects on Below- and Above-Ground Processes during Wheat Cultivation

Authors: Vasiliki Kolake, Christos Kavalaris, Sofia Megoudi, Maria Maxouri, Panagiotis A. Karas, Aris Kyparissis, Efi Levizou

Abstract:

Conservation agriculture (CA) is a production system approach that can tackle the challenges of climate change, mainly by facilitating carbon storage in the soil and increasing crop resilience. This is extremely important for the vulnerable Mediterranean agroecosystems, which already face adverse environmental conditions. The agronomic practices used in CA, i.e., permanent soil cover and no-tillage, result in reduced soil erosion, increased soil organic matter, preservation of water, and long-term improvement of soil quality and fertility. Thus, the functional characteristics and processes of the soil are considerably affected by the implementation of CA. The aim of the present work was to assess the effects of CA on soil nitrification potential and mycorrhizal colonization in relation to above-ground production in a wheat field. Two adjacent but independent field sites of 1.5 ha each were used (Thessaly plain, Central Greece), comprising the no-till and conventional tillage treatments. The no-tillage site was covered by residues of the previous crop (cotton). Potential nitrification and the nitrate and ammonium content of the soil were measured at two soil depths (3 and 15 cm) at 20-day intervals throughout the growth period. Additionally, the leaf area index (LAI) was monitored over the same time-course. Mycorrhizal colonization was measured at the commencement and end of the experiment. At the final harvest, total yield and plant biomass were also recorded. The results indicate that wheat yield was considerably favored by CA practices, exhibiting a 42% increase compared to the conventional tillage treatment. The superior performance of the CA crop was also reflected in the above-ground plant biomass, where a 26% increase was recorded. LAI, which is considered a reliable growth index, did not show statistically significant differences between treatments throughout the growth period.
On the contrary, significant differences were recorded in endomycorrhizal colonization one day before the final harvest, with CA plants exhibiting 20% colonization, while the conventional tillage plants hardly reached 1%. The ongoing analyses of potential nitrification measurements, as well as nitrate and ammonium determination, will shed light on the effects of CA on key processes in the soil. These results will complete the assessment of CA impact on below- and above-ground processes during wheat cultivation under the Mediterranean climate.

Keywords: conservation agriculture, LAI, mycorrhizal colonization, potential nitrification, wheat, yield

Procedia PDF Downloads 130
3658 Creating Smart and Healthy Cities by Exploring the Potentials of Emerging Technologies and Social Innovation for Urban Efficiency: Lessons from the Innovative City of Boston

Authors: Mohammed Agbali, Claudia Trillo, Yusuf Arayici, Terrence Fernando

Abstract:

The widespread adoption of the Smart City concept has introduced a new era of computing paradigms, with opportunities for city administrators and stakeholders in various sectors to rethink the concept of urbanization and the development of healthy cities. With the world population rapidly becoming urban-centric, especially in the emerging economies, social innovation will greatly assist in deploying emerging technologies to address the development challenges in core sectors of future cities. In this context, sustainable healthcare delivery and improved quality of life are considered to be at the heart of the healthy city agenda. This paper examines the Boston innovation landscape from the perspective of smart services and the innovation ecosystem for sustainable development, especially in transportation and healthcare. It investigates the policy implementation process of the healthy city agenda and eHealth economy innovation based on the experience of the City of Boston, Massachusetts. For this purpose, three emerging areas are emphasized, namely the eHealth concept, innovation hubs, and the emerging technologies that drive innovation. This was carried out through empirical analysis of public sector and industry-wide interviews and surveys about Boston’s current initiatives and the enabling environment. The paper highlights potential research directions for service integration and social innovation for deploying emerging technologies in the healthy city agenda. The study therefore suggests the need to prioritize social innovation as an overarching strategy to build sustainable smart cities and avoid technology lock-in. Finally, it concludes that the Boston innovation economy is unique in view of its existing platforms for innovation and proper understanding of their dynamics, which is imperative in building smart and healthy cities where the quality of life of the citizenry can be improved.

Keywords: computing paradigm, emerging technologies, equitable healthcare, healthy cities, open data, smart city, social innovation

Procedia PDF Downloads 336
3657 Determining Components of Deflection of the Vertical in Owerri West Local Government, Imo State Nigeria Using Least Square Method

Authors: Chukwu Fidelis Ndubuisi, Madufor Michael Ozims, Asogwa Vivian Ndidiamaka, Egenamba Juliet Ngozi, Okonkwo Stephen C., Kamah Chukwudi David

Abstract:

The deflection of the vertical is a quantity used in reducing geodetic measurements related to geoidal networks to the ellipsoidal plane, and it is essential in geoid modelling. Computing the deflection of the vertical components of a point in a given area is necessary for evaluating the standard errors along the north-south and east-west directions. A combined approach to determining the deflection of the vertical components provides improved results, but it is labor intensive without an appropriate method. The least squares method makes use of redundant observations in modelling a set of problems that obey certain geometric conditions. This research work aims to compute the deflection of the vertical components for Owerri West Local Government Area of Imo State using the geometric method as the field technique. In this method, a combination of Global Positioning System observations in static mode and precise leveling was utilized: the geodetic coordinates of points established within the study area were determined by GPS observation, and the orthometric heights through precise leveling. Using least squares in a MATLAB program, the estimated deflection of the vertical components for the common station were -0.0286 and -0.0001 arc seconds for the north-south and east-west components, respectively. The associated standard errors of the processed vectors of the network were computed as 5.5911e-005 and 1.4965e-004 arc seconds for the north-south and east-west components, respectively. Including the derived deflection of the vertical components in the ellipsoidal model will therefore yield higher observational accuracy, since a purely ellipsoidal model is not tenable for high quality work owing to its large observational error. It is thus important to include the determined deflection of the vertical components for Owerri West Local Government Area in Imo State, Nigeria.
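The least squares adjustment can be illustrated generically. The design matrix and observations below are synthetic, built around the component magnitudes reported above rather than the actual Owerri West network, and the sketch is written in Python rather than MATLAB.

```python
# Generic least-squares sketch with synthetic numbers (not the actual
# network data): estimate two parameters standing in for the north-south
# and east-west deflection components from redundant observations
# l = A @ x + v, then derive their standard errors.
import numpy as np

A = np.array([[1.0, 0.0],        # hypothetical design matrix
              [0.0, 1.0],        # (5 observations, 2 unknowns)
              [1.0, 1.0],
              [1.0, -1.0],
              [2.0, 1.0]])
x_true = np.array([-0.0286, -0.0001])               # arc seconds
noise = np.array([1e-4, -2e-4, 5e-5, -5e-5, 1e-4])  # observation errors
l = A @ x_true + noise                              # simulated observations

x_hat = np.linalg.lstsq(A, l, rcond=None)[0]        # least-squares estimate
v = A @ x_hat - l                                   # residuals
n_obs, n_par = A.shape
sigma0_sq = (v @ v) / (n_obs - n_par)               # reference variance
cov = sigma0_sq * np.linalg.inv(A.T @ A)            # parameter covariance
std_err = np.sqrt(np.diag(cov))                     # standard errors
```

The redundancy (five observations for two unknowns) is what makes the reference variance, and hence the standard errors, estimable at all.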

Keywords: deflection of vertical, ellipsoidal height, least square, orthometric height

Procedia PDF Downloads 209
3656 Advanced Magnetic Resonance Imaging in Differentiation of Neurocysticercosis and Tuberculoma

Authors: Rajendra N. Ghosh, Paramjeet Singh, Niranjan Khandelwal, Sameer Vyas, Pratibha Singhi, Naveen Sankhyan

Abstract:

Background: Tuberculoma and neurocysticercosis (NCC) are the two most common intracranial infections in developing countries. They often mimic each other on neuroimaging and, in the absence of typical imaging features, cause significant diagnostic dilemmas. Differentiation is extremely important to avoid empirical exposure to antitubercular medications or nonspecific treatment causing disease progression. Purpose: Better characterization and differentiation of CNS tuberculoma and NCC by using morphological and multiple advanced functional MRI sequences. Material and Methods: Fifty untreated patients (20 tuberculoma and 30 NCC) were evaluated using conventional and advanced sequences: CISS, SWI, DWI, DTI, magnetization transfer (MT), T2 relaxometry (T2R), perfusion and spectroscopy. rCBV, ADC, FA, T2R and MTR values and metabolite ratios were calculated for the lesion and normal parenchyma. Diagnosis was confirmed by typical biochemical, histopathological and imaging features. Results: CISS was the most useful sequence for scolex detection (90% on CISS vs 73% on routine sequences). SWI showed higher scolex detection ability. Mean values of ADC, FA and T2R from the lesion core and rCBV from the lesion wall were significantly different between tuberculoma and NCC (P < 0.05). Mean values of rCBV, ADC, FA and T2R for tuberculoma vs NCC were 3.36 vs 1.3, 1.09x10⁻³ vs 1.4x10⁻³, 0.13 vs 0.09, and 88.65 ms vs 272.3 ms, respectively. Tuberculomas showed a high lipid peak, more choline and lower creatine, with a Ch/Cr ratio > 1. The T2R value was the most significant parameter for differentiation, and cut-off values for each significant parameter have been proposed. Conclusion: Quantitative MRI in combination with conventional sequences can better characterize and differentiate the similar-appearing tuberculoma and NCC, and may be incorporated into the routine protocol, potentially avoiding brain biopsy and empirical therapy.

Keywords: advanced functional MRI, differentiation, neurcysticercosis, tuberculoma

Procedia PDF Downloads 568
3655 Comparative Studies on Spontaneous Imbibition of Surfactant/Alkaline Solution in Carbonate Rocks

Authors: M. Asgari, N. Heydari, N. Shojai Kaveh, S. N. Ashrafizadeh

Abstract:

Chemical flooding methods are gaining importance in enhanced oil recovery for recovering trapped oil after conventional recovery, as conventional oil resources become scarce. The surfactant/alkaline process consists of injecting alkali and synthetic surfactant. The addition of surfactant to the injected water reduces the oil/water interfacial tension (IFT) and/or alters wettability, while the alkali generates soap in situ by reaction between the alkali and naphthenic acids in the crude oil. Oil recovery in fractured reservoirs mostly depends on spontaneous imbibition (SI) of brine into the matrix blocks. Thus far, few efforts have been made toward understanding the relative influence of capillary and gravity forces on the fluid flow. This paper studies the controlling mechanisms of the spontaneous imbibition process in chalk formations, considering the type and concentration of surfactants, the critical micelle concentration (CMC), pH and alkaline reagent concentration. The wetting properties of the carbonate rock were investigated by means of contact-angle measurements, and interfacial tension measurements were conducted using the spinning drop method. Ten imbibition experiments were conducted at atmospheric pressure and various temperatures from 30°C to 50°C, all above the CMC of each surfactant. The experimental results, evaluated in terms of ultimate oil recovery, reveal that wettability alteration was achieved by the nonionic surfactant, which led to imbibition of the brine sample containing that surfactant even though the IFT value was not in the ultra-low range; the displacement of oil was initially dominated by capillary forces. For the cationic surfactant, however, gravity forces were dominant in oil production by the surfactant solution, overcoming the negative capillary pressure.

Keywords: alkaline, capillary, gravity, imbibition, surfactant, wettability

Procedia PDF Downloads 230
3654 Long Term Follow-Up, Clinical Outcomes and Quality of Life after Total Arterial Revascularisation versus Conventional Coronary Surgery: A Retrospective Study

Authors: Jitendra Jain, Cassandra Hidajat, Hansraj Riteesh Bookun

Abstract:

Graft patency underpins long-term prognosis after coronary artery bypass grafting (CABG) surgery. The benefits on long-term clinical outcomes and quality of life of using only the left internal mammary artery and radial arteries, referred to as total arterial revascularisation (TAR), are relatively unknown. The aim of this study was to identify whether there were differences in long-term clinical outcomes between recipients of TAR and a cohort receiving mostly arterial revascularisation involving the left internal mammary artery, at least one radial artery and at least one saphenous vein graft (LIMA/Rad/SVG). A retrospective analysis was performed on all patients who underwent TAR or were revascularised with a supplementary saphenous vein graft from February 1996 to December 2004. Telephone surveys were conducted to obtain clinical outcome parameters, including major adverse cardiac and cerebrovascular events (MACCE) and Short Form (SF-36v2) Health Survey responses. A total of 176 patients were successfully contacted for postoperative follow-up. The mean follow-up length from the time of surgery was 12.4±1.8 years for TAR and 12.6±2.1 years for the conventional group. The PCS score was 45.9±8.8 for TAR vs 44.9±9.2 for LIMA/Rad/SVG (p=0.468), and the MCS score was 52.0±8.9 for TAR vs 52.5±9.3 for LIMA/Rad/SVG (p=0.723). There were no significant differences between groups for NYHA class 3+ (TAR 9.4% vs LIMA/Rad/SVG 6.6%) or CCS class 3+ (TAR 2.35% vs LIMA/Rad/SVG 0%).

Keywords: CABG; MACCEs; quality of life; total arterial revascularisation

Procedia PDF Downloads 218
3653 Development of Hierarchically Structured Tablets with 3D Printed Inclusions for Controlled Drug Release

Authors: Veronika Lesáková, Silvia Slezáková, František Štěpánek

Abstract:

Drug dosage forms consisting of multi-unit particle systems (MUPS) for modified drug release provide a promising route for overcoming the limitations of conventional tablets. Although pellets are conventionally used as the units of MUP systems, 3D printed polymers loaded with a drug are an interesting alternative, given the control over dosing that 3D printing offers. Further, 3D printing offers high flexibility and control over the spatial structuring of a printed object. The final MUPS tablets include PVP and HPC as granulate with other excipients, enabling the compaction of this mixture with the 3D printed inclusions, also termed minitablets. In this study, we have developed a multi-step production process for MUPS tablets that includes 3D printing technology. MUPS tablets with incorporated 3D printed minitablets are a complex drug delivery system providing modified drug release. Such structured tablets promise to reduce drug fluctuations in blood and the risk of local toxicity, and to increase bioavailability, resulting in an improved therapeutic effect due to fast transfer into the small intestine, where the particles are evenly distributed. The drug-loaded 3D printed minitablets were compacted into the excipient mixture, with drug release influenced by varying parameters such as minitablet size, matrix composition and compaction parameters. Further, the mechanical properties and morphology of the final MUPS tablets were analyzed, as properties such as plasticity and elasticity can significantly influence the dissolution profile of the drug.

Keywords: 3D printing, dissolution kinetics, drug delivery, hot-melt extrusion

Procedia PDF Downloads 92
3652 Realizing the Full Potential of Islamic Banking System: Proposed Suitable Legal Framework for Islamic Banking System in Tanzania

Authors: Maulana Ayoub Ali, Pradeep Kulshrestha

Abstract:

The laws of any given secular state contribute greatly to the growth of the Islamic banking system, because the system uses conventional laws to govern its activities. The former should therefore be ready to accommodate the latter in order to make the Islamic banking system work properly without disrupting the current conventional banking system. Islamic financial rules have been practiced since the birth of Islam. Following the recent world economic challenges in the financial sector, a rapid rebirth of contemporary Islamic ethical banking took place, for various reasons including, but not limited to, the failure of the interest-based economy to solve financial problems around the globe. The Islamic banking system has thus been adopted as an alternative banking system to help recover the badly damaged global financial sector. However, the Islamic banking system faces a number of challenges that hinder its smooth operation in different parts of the world. This paper focuses on the legal challenges, discussing others only where it was proper to do so. The most important finding concerns the regulatory and supervisory framework for the Islamic banking system in Tanzania and other nations, which is considered crucial for the development of the Islamic banking industry. This paper analyses what was observed in the study of that area and recommends the necessary actions to be taken in order to allow the Islamic banking system to reach its full potential of serving the larger community by providing an ethical, equitable, affordable, interest-free and society-centred banking system around the globe.

Keywords: Islamic banking, interest free banking, ethical banking, legal framework

Procedia PDF Downloads 149
3651 Cognitive Footprints: Analytical and Predictive Paradigm for Digital Learning

Authors: Marina Vicario, Amadeo Argüelles, Pilar Gómez, Carlos Hernández

Abstract:

In this paper, the Computer Research Network of the National Polytechnic Institute of Mexico proposes a paradigmatic model for the inference of cognitive patterns in digital learning systems. This model leads to a metadata architecture useful for analysis and prediction in online learning systems, especially MOOC architectures. The model is in the design phase and is expected to be tested through an institutional course project to be developed for the MOOC.

Keywords: cognitive footprints, learning analytics, predictive learning, digital learning, educational computing, educational informatics

Procedia PDF Downloads 477
3650 An Intelligent Cloud Radio Access Network (RAN) Architecture for Future 5G Heterogeneous Wireless Network

Authors: Jin Xu

Abstract:

5G network developers need to satisfy the requirements for additional capacity from massive numbers of users and for spectrally efficient wireless technologies. The significant amount of underutilized spectrum in the network is therefore motivating operators to combine long-term evolution (LTE) with intelligent spectrum management technology. This new LTE intelligent spectrum management in the unlicensed band (LTE-U) has the physical layer topology to access spectrum, specifically the 5-GHz band. We propose a new intelligent cloud RAN for 5G.

Keywords: cloud radio access network, wireless network, cloud computing, multi-agent

Procedia PDF Downloads 424
3649 Distributed Key Management with Fewer Transmitted Messages in the Rekeying Process to Secure IoT Wireless Sensor Networks in Smart-Agro

Authors: Safwan Mawlood Hussien

Abstract:

The Internet of Things (IoT) is a promising technology that has received considerable attention in different fields such as health, industry, defence and agriculture. Due to their limited computing, storage and communication capacity, IoT objects are more vulnerable to attacks. Many solutions have been proposed to solve security issues, such as key management using symmetric-key ciphers. This study provides a scalable group key distribution and management scheme based on elliptic curve cryptography, with fewer transmitted messages during the rekeying process. The method has been validated through simulations in OMNeT++.
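The key-agreement primitive underlying such elliptic-curve schemes can be shown in miniature. The sketch below is a toy elliptic-curve Diffie-Hellman exchange over a textbook curve (y² = x³ + 2x + 2 over F₁₇, with generator (5, 1) of prime order 19); it is far too small for real security, it is not the paper's actual protocol, and the private scalars are arbitrary.

```python
# Toy elliptic-curve Diffie-Hellman over a textbook curve: NOT the
# paper's protocol, and far too small for real security. The curve is
# y^2 = x^3 + 2x + 2 over F_17; G = (5, 1) generates a group of order 19.

P_MOD, A_COEF = 17, 2           # field prime and curve coefficient a
G = (5, 1)                      # generator point of prime order 19

def ec_add(P, Q):
    """Affine point addition; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    if P[0] == Q[0] and (P[1] + Q[1]) % P_MOD == 0:
        return None             # P + (-P) = infinity
    if P == Q:                  # tangent slope for doubling
        s = (3 * P[0] ** 2 + A_COEF) * pow(2 * P[1], -1, P_MOD)
    else:                       # chord slope for distinct points
        s = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, P_MOD)
    x = (s * s - P[0] - Q[0]) % P_MOD
    return (x, (s * (P[0] - x) - P[1]) % P_MOD)

def ec_mul(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

# Hypothetical private scalars for two sensor nodes
alice_priv, bob_priv = 7, 13
alice_pub = ec_mul(alice_priv, G)
bob_pub = ec_mul(bob_priv, G)
shared_alice = ec_mul(alice_priv, bob_pub)   # 7 * (13 * G)
shared_bob = ec_mul(bob_priv, alice_pub)     # 13 * (7 * G)
```

Both parties arrive at the same point because scalar multiplication commutes, while recovering a private scalar from a public point is the elliptic-curve discrete logarithm problem mentioned in the keywords.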

Keywords: elliptic curves, Diffie–Hellman, discrete logarithm problem, secure key exchange, WSN security, IoT security, smart-agro

Procedia PDF Downloads 119
3648 The Gold Standard Treatment Plan for Vitiligo: A Review on Conventional and Updated Treatment Methods

Authors: Kritin K. Verma, Brian L. Ransdell

Abstract:

White patches are the hallmark symptom of vitiligo, a chronic autoimmune dermatological condition that causes a loss of pigmentation in the skin. Vitiligo can affect self-esteem and quality of life while also being associated with the development of other autoimmune diseases. Current treatments exist in both allopathy and homeopathy; some have been found to be toxic, whereas others have been helpful. Allopathy offers several treatment options, such as phototherapy, skin lightening preparations, immunosuppressive drugs, combined modality therapy and steroid medications. This presentation will review the FDA-approved topical cream Opzelura, a JAK inhibitor, and its effects on limiting vitiligo progression. Meanwhile, other non-conventional methods, such as Arsenic Sulphuratum Flavum used in homeopathy, will be debunked based on the current literature. Most treatments still serve to arrest progression and induce skin repigmentation, and treatment plans may differ between patients depending on the location of depigmentation on the skin. Since there is no gold standard plan for treating patients with vitiligo, the oral presentation will review all topical and systemic pharmacological therapies that fight depigmentation of the skin and categorize their validity through a systematic review of the literature. As treatment options are limited, an attempt will be made to formulate a gold standard treatment process for these patients.

Keywords: vitiligo, phototherapy, immunosuppressive drugs, skin lightening preparations, combined modality therapy, arsenic sulphuratum flavum, homeopathy, allopathy, golden standard, Opzelura

Procedia PDF Downloads 87
3647 Electrochemical Recovery of Lithium from Geothermal Brines

Authors: Sanaz Mosadeghsedghi, Mathew Hudder, Mohammad Ali Baghbanzadeh, Charbel Atallah, Seyedeh Laleh Dashtban Kenari, Konstantin Volchek

Abstract:

Lithium has recently been used extensively in lithium-ion batteries (LIBs) for electric vehicles and portable electronic devices. The conventional evaporative approach to recovering and concentrating lithium is extremely slow and may take 10-24 months for dilute sources such as geothermal brines. To respond to the increasing industrial demand for lithium, alternative extraction and concentration technologies should be developed to recover lithium from brines with low concentrations. In this study, a combination of electrocoagulation (EC) and electrodialysis (ED) was evaluated for the recovery of lithium from geothermal brines. The brine samples, collected in Western Canada, had lithium concentrations of 50-75 mg/L against a background of much higher (over 10,000 times) concentrations of sodium. This very high sodium-to-lithium ratio poses challenges to conventional direct lithium extraction processes that employ lithium-selective adsorbents. EC was used to co-precipitate lithium using a sacrificial aluminium electrode. The precipitate was then dissolved, and the leachate was treated using ED to separate and concentrate the lithium from other ions. The focus of this paper is the ED study, including a two-step ED process comprising a monovalent-selective stage to separate lithium from multivalent cations, followed by a bipolar ED stage to convert lithium chloride (LiCl) to a LiOH product. Eventually, the ED cell was reconfigured by combining monovalent cation exchange membranes with the bipolar membranes to merge the two ED steps into one. Using this process at optimum conditions, over 95% of the co-existing cations were removed and the purity of the lithium increased to over 90% in the final product.

Keywords: electrochemical separation, electrocoagulation, electrodialysis, lithium extraction

Procedia PDF Downloads 94
3646 Increasing Solubility and Bioavailability of Fluvastatin through Transdermal Nanoemulsion Gel Delivery System for the Treatment of Osteoporosis

Authors: Ramandeep Kaur, Makula Ajitha

Abstract:

Fluvastatin has been reported to increase bone mineral density in osteoporosis over the last decade. The systemically administered drug undergoes extensive hepatic first-pass metabolism, so only an insignificantly small amount of drug reaches the bone tissue. The present study aims to deliver fluvastatin in the form of a nanoemulsion (NE) gel directly to the bone tissue through the transdermal route, thereby bypassing hepatic first-pass metabolism. The NE formulation consisted of isopropyl myristate as the oil, Tween 80 as the surfactant, Transcutol as the co-surfactant and water as the aqueous phase. Pseudoternary phase diagrams were constructed using the aqueous titration method, and the NEs obtained were subjected to thermodynamic-kinetic stability studies. The stable NE formulations were evaluated for droplet size, zeta potential and morphology by transmission electron microscopy (TEM). The nano-sized formulations were incorporated into a 0.5% Carbopol 934 gel matrix. The ex vivo permeation behaviour of selected formulations through rat skin was investigated and compared with conventional formulations (suspension and emulsion). Further, an in vivo pharmacokinetic study was carried out using male Wistar rats. The mean droplet size of the optimized NE formulation was 11.66±3.2 nm with a polydispersity index of 0.117. The permeation flux of the NE gel formulations was significantly higher than that of the conventional suspension and emulsion formulations. The in vivo pharmacokinetic study showed a significant (1.25-fold) increase in the bioavailability of fluvastatin compared with the oral formulation. Thus, it can be concluded that an NE gel was successfully developed for the transdermal delivery of fluvastatin for the treatment of osteoporosis.

Keywords: fluvastatin, nanoemulsion gel, osteoporosis, transdermal

Procedia PDF Downloads 189