Search results for: Reducing sugar
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 979

109 Liquid Chromatography Microfluidics for Detection and Quantification of Urine Albumin Using Linear Regression Method

Authors: Patricia B. Cruz, Catrina Jean G. Valenzuela, Analyn N. Yumang

Abstract:

Nearly a hundred per million of the Filipino population are diagnosed with Chronic Kidney Disease (CKD). The early stage of CKD has no symptoms and can only be discovered once the patient undergoes urinalysis. Over the years, different methods have been used for the quantification of urinary albumin, such as immunochemical assays, most of which require large machinery with high maintenance and resource costs, and the dipstick test, whose reliability in detecting early stages of microalbuminuria is still debated. This study applies the liquid chromatography concept in a microfluidic instrument, with a biosensor, as the means of separation and detection respectively, and uses linear regression to quantify human urinary albumin. The main objective was to create a miniature system that detects and quantifies patients' urinary albumin while reducing the volume consumed per five test samples. For this study, 30 urine samples of unknown albumin concentration were tested with both the VITROS Analyzer and the microfluidic system for comparison. Based on the data from both methods, the actual-versus-predicted regression showed a positive linear relationship with an R2 of 0.9995 and a linear equation of y = 1.09x + 0.07, indicating that the predicted and actual values are approximately equal. Furthermore, the microfluidic instrument uses 75% less total volume (sample and reagents combined) than the VITROS Analyzer per five test samples.

Keywords: Chronic kidney disease, microfluidics, linear regression, VITROS analyzer, urinary albumin.
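
As an illustration of the actual-versus-predicted calibration described above, the short sketch below fits an ordinary least-squares line and computes R2 with NumPy. The paired readings are hypothetical placeholders, not the study's 30 patient samples.

```python
import numpy as np

# Hypothetical paired albumin readings (mg/L): reference analyzer vs. microfluidic system.
# The real study used 30 patient samples; these values are placeholders for illustration.
reference = np.array([10.2, 15.8, 22.4, 30.1, 44.7, 60.3, 81.9, 120.5])
microfluidic = np.array([11.3, 17.4, 24.6, 33.0, 48.9, 66.1, 89.4, 131.6])

# Fit y = slope * x + intercept by ordinary least squares.
slope, intercept = np.polyfit(reference, microfluidic, deg=1)

# Coefficient of determination (R^2) of the fitted line.
predicted = slope * reference + intercept
ss_res = np.sum((microfluidic - predicted) ** 2)
ss_tot = np.sum((microfluidic - microfluidic.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"y = {slope:.2f}x + {intercept:.2f}, R^2 = {r_squared:.4f}")
```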

108 Statistical Analysis and Impact Forecasting of Connected and Autonomous Vehicles on the Environment: Case Study in the State of Maryland

Authors: Alireza Ansariyar, Safieh Laaly

Abstract:

Over the last decades, the vehicle industry has shown increased interest in integrating autonomous, connected, and electrical technologies into vehicle design, with the primary hope of improving mobility and road safety while reducing transportation's environmental impact. Using the State of Maryland (MD) in the United States as a pilot study, this research investigates Connected and Autonomous Vehicles' (CAVs') fuel consumption and air pollutant emissions, including Carbon Monoxide (CO), Particulate Matter (PM), and Nitrogen Oxides (NOx), and uses linear regression models to predict CAVs' environmental effects. The Maryland transportation network was simulated in the VISUM software, and data on a set of variables were collected through a comprehensive survey. Pollutant quantities and fuel consumption were obtained from the macro simulation for the interval 2010 to 2021. Finally, four linear regression models were proposed to predict the amounts of CO, NOx, and PM pollutants and fuel consumption in the future. The results highlighted that CAVs' pollutants and fuel consumption have a significant correlation with the income, age, and race of CAV customers. Furthermore, the reliability of the four statistical models was compared with the reliability of the macro simulation model outputs for the year 2030. The statistical models built in SPSS predicted the three pollutants and fuel consumption with errors of less than 9%. This study is expected to assist researchers and policymakers with planning decisions to reduce CAV environmental impacts in Maryland.

Keywords: Connected and autonomous vehicles, statistical model, environmental effects, pollutants and fuel consumption, VISUM, linear regression models.
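
To make the modeling step concrete, the sketch below fits one of the described linear regression models with scikit-learn, regressing a target such as fuel consumption on income, age, and race. The data and coefficients are synthetic placeholders, not the survey results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)

# Hypothetical survey-derived predictors for CAV adopters (the paper cites income, age, race).
n = 200
X = np.column_stack([
    rng.normal(70_000, 15_000, n),   # household income
    rng.normal(45, 12, n),           # age
    rng.integers(0, 4, n),           # race category (encoded)
])

# Placeholder target: annual fuel consumption; the study fits separate models for CO, NOx, PM too.
y = 0.004 * X[:, 0] - 2.5 * X[:, 1] + rng.normal(0, 50, n) + 900

model = LinearRegression().fit(X, y)
error = mean_absolute_percentage_error(y, model.predict(X))
print("coefficients:", model.coef_, "MAPE: %.1f%%" % (100 * error))
```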

107 A Study on the Effectiveness of Alternative Commercial Ventilation Inlets That Improve Energy Efficiency of Building Ventilation Systems

Authors: Brian Considine, Aonghus McNabola, John Gallagher, Prashant Kumar

Abstract:

Passive air pollution control devices known as aspiration efficiency reducers (AERs) have been developed using aspiration efficiency (AE) concepts. Their purpose is to reduce the concentration of particulate matter (PM) drawn into a building air handling unit (AHU) through alterations in the inlet design, thereby reducing energy consumption. In this paper, an examination is conducted into the effect of installing a deflector system around an AER-AHU inlet for both forward- and rear-facing orientations relative to the wind. The study found that these deflectors are an effective passive control method for reducing AE at various ambient wind speeds over a range of microparticle diameters. For a rear-facing AER-AHU, the deflector system induced a large wake zone at low ambient wind speeds, resulting in significantly lower AE than without deflectors. As the wind speed increased, both configurations produced a wake zone, but concentration gradients were much lower with the deflectors. For the forward-facing models, the deflector system was preferable at low ambient wind speed and higher Stokes numbers, with negligible difference as the Stokes number decreased; similarly, there was no significant difference at higher wind speeds across the Stokes number range tested. The results demonstrate that a deflector system is a viable passive control method for reducing ventilation energy consumption.

Keywords: Aspiration efficiency, energy, particulate matter, ventilation.

106 Reducing Defects through Organizational Learning within a Housing Association Environment

Authors: T. Hopkin, S. Lu, P. Rogers, M. Sexton

Abstract:

Housing Associations (HAs) contribute circa 20% of the UK's housing supply. HAs are, however, under increasing pressure as a result of funding cuts and rent reductions. Due to this pressure, a number of processes are currently being reviewed by HAs, especially how they manage and learn from defects. Learning from defects is considered a useful approach to achieving defect reduction within the UK housebuilding industry. This paper contributes to our understanding of how HAs learn from defects by reporting an initial round table discussion with key HA stakeholders, undertaken as part of an ongoing collaborative research project with the National House Building Council (NHBC) on how house builders and HAs learn from defects to reduce their prevalence. The initial discussion shows that defect information passes through a number of groups, both internal and external to an HA, during both the defects management process and the organizational learning (OL) process. Furthermore, HAs rely on capturing and recording defect data as the foundation for the OL process. During the OL process, defect data analysis is the primary enabler for recognizing a need to change organizational routines. When a need for change has been recognized, new options are typically pursued to design out defects via updates to an HA's Employer's Requirements. Proposed solutions are selected by a review board and committed to organizational routine. After implementing a change, both structured and unstructured feedback is sought to establish the change's success. The findings from the HA discussion demonstrate that OL can achieve defect reduction within the house building sector in the UK. The paper concludes by outlining a potential 'learning from defects' model for the housebuilding industry and describing future work.

Keywords: Defects, new homes, housing associations, organizational learning.

105 Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences

Authors: C. Xavier Mendieta, J. J McArthur

Abstract:

Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have generated a significant amount of publicly available data, giving researchers a unique opportunity to develop location-specific energy and carbon emission benchmarks, which can then be used to develop building archetypes and to inform urban energy models. This study presents the development of such a benchmark from public reporting data. Data from Ontario's Ministry of Energy for Post-Secondary Educational Institutions are used to develop a series of building archetype dynamic building loads and energy benchmarks, filling a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas of Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) careful data screening and outlier identification are essential to develop a valid dataset; (2) the key features used to model the data are building age, size, and occupancy schedules, and these can be used to estimate energy consumption; and (3) policy changes affecting primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical for evaluating the validity of the reported data.

Keywords: Building archetypes, data analysis, energy benchmarks, GHG emissions.
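
A minimal sketch of the data screening step mentioned in finding (1) is shown below: it computes an energy use intensity benchmark with pandas and flags outliers using the 1.5 x IQR rule. Column names and values are illustrative assumptions, not the Ontario reporting data.

```python
import pandas as pd

# Hypothetical reported data for post-secondary residences (column names are illustrative).
df = pd.DataFrame({
    "building": ["Res A", "Res B", "Res C", "Res D", "Res E", "Res F"],
    "floor_area_m2": [8200, 10150, 6900, 12400, 9100, 7800],
    "energy_kwh":   [1.30e6, 1.62e6, 1.05e6, 9.90e6, 1.41e6, 1.18e6],  # Res D looks like a reporting error
})

# Benchmark on energy use intensity (kWh/m2), then screen outliers with the 1.5*IQR rule.
df["eui"] = df["energy_kwh"] / df["floor_area_m2"]
q1, q3 = df["eui"].quantile([0.25, 0.75])
iqr = q3 - q1
mask = df["eui"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)

clean = df[mask]
print("screened out:", df.loc[~mask, "building"].tolist())
print("benchmark median EUI: %.1f kWh/m2" % clean["eui"].median())
```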

104 Effect of Laser Power and Powder Flow Rate on Properties of Laser Metal Deposited Ti6Al4V

Authors: Mukul Shukla, Rasheedat M. Mahamood, Esther T. Akinlabi, Sisa Pityana

Abstract:

Laser Metal Deposition (LMD) is an additive manufacturing process whose capabilities include producing a new part directly from a 3-Dimensional Computer Aided Design (3D CAD) model, building a new part on an existing component, and repairing existing high-value components that would have been discarded in the past. Despite these capabilities and its advantages over other additive manufacturing techniques, the underlying physics of the LMD process is yet to be fully understood, probably because of the strong interaction between the processing parameters; studying many parameters at the same time makes the process even more complex to understand. In this study, the effects of laser power and powder flow rate on the physical properties (deposition height and deposition width), metallurgical property (microstructure), and mechanical property (microhardness) of laser-deposited Ti6Al4V, the most widely used aerospace alloy, are studied. Also, because Ti6Al4V is very expensive and LMD is capable of reducing the buy-to-fly ratio of aerospace parts, material utilization efficiency is studied as well. Four sets of experiments were performed and repeated to establish repeatability, using laser powers of 1.8 kW and 3.0 kW and powder flow rates of 2.88 g/min and 5.67 g/min, while keeping the gas flow rate and scanning speed constant at 2 l/min and 0.005 m/s respectively. The deposition height and width were found to increase with increasing laser power and powder flow rate. Material utilization is favoured by higher power, while a higher powder flow rate reduces material utilization. The results are presented and fully discussed.

Keywords: Laser Metal Deposition, Material Efficiency, Microstructure, Ti6Al4V.

103 Enhancing Warehousing Operations in Cold Supply Chain through the Use of IoT and LiFi Technologies

Authors: S. El-Gamal, P. Hossam, A. Abd El Aziz, R. Mahmoud, A. Hassan, D. Hilal, E. Ayman, H. Haytham, O. Khamis

Abstract:

Several concerns fall upon the supply chain, especially cold supply chains, mainly in the distribution and storage phases. This research focuses on the storage area, which contains several activities, such as picking, that face many obstacles and challenges. Implementing IoT solutions enables businesses to monitor the temperature of food items, which is perhaps the most critical parameter in cold chains. The research at hand therefore proposes a practical solution to eliminate the problems related to ineffective picking of products, especially fish and seafood, by using IoT technology, most notably LiFi technology, thus guaranteeing effective picking, reducing waste, and consequently lowering costs. A prototype was specially designed and examined. This is a single case study research. Two methods of data collection were used: observation and semi-structured interviews. Semi-structured interviews were conducted with managers and a decision maker at Carrefour in Alexandria, Egypt, one of the biggest retail stores, to validate the problem and the proposed practical solution using IoT and LiFi technology. A total of three interviews were conducted. A SWOT analysis was then carried out to highlight the strengths and weaknesses of using the recommended LiFi solution in the picking process. According to the investigation, the use of IoT and LiFi technology is cost effective and efficient, reduces human errors, and minimizes product waste, thereby saving money; as a result, customer satisfaction and profits can be increased.

Keywords: Cold supply chain, IoT, LiFi, warehousing operation, picking process.

102 Feature Point Reduction for Video Stabilization

Authors: Theerawat Songyot, Tham Manjing, Bunyarit Uyyanonvara, Chanjira Sinthanayothin

Abstract:

Corner detection and optical flow are common techniques for feature-based video stabilization. However, these algorithms are computationally expensive and should be performed at a reasonable rate. This paper presents an algorithm for discarding irrelevant feature points and maintaining the rest for future use so as to reduce the computational cost. The algorithm starts by initializing a maintained set. The feature points in the maintained set are examined for their accuracy for modeling. Corner detection is required only when the maintained feature points are insufficiently accurate for future modeling. Then, optical flows are computed from the maintained feature points toward the consecutive frame. After that, a motion model is estimated based on the simplified affine motion model and the least squares method, with outliers belonging to moving objects present. Studentized residuals are used to eliminate such outliers. The model estimation and elimination processes repeat until no more outliers are identified. Finally, the entire algorithm repeats along the video sequence, with the points remaining from the previous iteration used as the maintained set. As a practical application, efficient video stabilization can be achieved by exploiting the computed motion models. Our study shows that the number of times corner detection needs to be performed is greatly reduced, significantly improving the computational cost. Moreover, optical flow vectors are computed only for the maintained feature points, not for outliers, further reducing the computational cost. In addition, the feature points remaining after reduction are sufficient for background object tracking, as demonstrated in the simple video stabilizer based on our proposed algorithm.

Keywords: background object tracking, feature point reduction, low cost tracking, video stabilization.
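
The sketch below illustrates the core model estimation and outlier elimination loop described above: a 2D affine motion model is fitted by least squares to matched feature points, and points with large studentized residuals (here approximated on residual magnitudes) are pruned iteratively. The matched points are synthetic and the threshold is an assumption, not the paper's setting.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of a 2D affine model dst ~ src @ M + t."""
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])              # design matrix on (x, y, 1)
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)   # 3 x 2 parameter matrix
    return X, params

def prune_outliers(src, dst, threshold=2.5, max_iter=10):
    """Iteratively drop points whose (approximate) studentized residual exceeds the threshold."""
    keep = np.ones(len(src), dtype=bool)
    for _ in range(max_iter):
        X, params = fit_affine(src[keep], dst[keep])
        resid = dst[keep] - X @ params                  # per-point (dx, dy) residuals
        e = np.linalg.norm(resid, axis=1)               # residual magnitudes
        hat = X @ np.linalg.pinv(X.T @ X) @ X.T         # hat matrix of the kept points
        lev = np.clip(np.diag(hat), 0, 0.999)           # leverages
        s = np.sqrt(np.sum(e ** 2) / max(len(e) - 3, 1))
        student = e / (s * np.sqrt(1 - lev))
        bad = student > threshold
        if not bad.any():
            break
        idx = np.flatnonzero(keep)
        keep[idx[bad]] = False
    return keep

# Hypothetical matched feature points between consecutive frames (last two belong to a moving object).
rng = np.random.default_rng(1)
src = rng.uniform(0, 640, size=(20, 2))
dst = src + np.array([3.0, -1.5]) + rng.normal(0, 0.2, size=src.shape)
dst[-2:] += np.array([25.0, 40.0])   # outliers

inliers = prune_outliers(src, dst)
print("kept", inliers.sum(), "of", len(src), "points")
```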

101 Replicating Brain’s Resting State Functional Connectivity Network Using a Multi-Factor Hub-Based Model

Authors: B. L. Ho, L. Shi, D. F. Wang, V. C. T. Mok

Abstract:

The brain's functional connectivity, while temporally non-stationary, does express consistency at a macro spatial level. The study of stable resting state connectivity patterns hence provides opportunities for identifying diseases in which such stability is severely perturbed. A mathematical model replicating the brain's spatial connections is useful for understanding the brain's representative geometry and complements the empirical model where it falls short. Empirical computations tend to involve large matrices and become infeasible with fine parcellation; the proposed analytical model has no such computational problems. To improve replicability, data from 92 subjects are obtained from two open sources. The proposed methodology, inspired by financial theory, uses multivariate regression to find relationships of every cortical region of interest (ROI) with some pre-identified hubs. These hubs act as representatives for the entire cortical surface. A variance-covariance framework of all ROIs is then built on these relationships to link up all the ROIs. The result is a high level of match between model and empirical correlations, in the range of 0.59 to 0.66 after adjusting for sample size, an increase of almost forty percent. More significantly, the model framework provides an intuitive way to delineate between systemic drivers and idiosyncratic noise while reducing dimensions by more than 30-fold, hence providing a way to conduct attribution analysis. Due to its analytical nature and simple structure, the model is useful as a standalone toolkit for network dependency analysis or as a module for other mathematical models.

Keywords: Functional magnetic resonance imaging, multivariate regression, network hubs, resting state functional connectivity.
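
A minimal NumPy sketch of the hub-based construction described above is given below: every ROI is regressed on a handful of hub time series, and a model variance-covariance of all ROIs is rebuilt from the hub covariance plus idiosyncratic residual variance. The number of ROIs, hub choice, and time series are synthetic assumptions, not the 92-subject data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical resting-state data: T time points x N cortical ROIs, with a few ROIs chosen as hubs.
T, N = 200, 60
hub_idx = [0, 10, 20, 30, 40]                            # pre-identified hub ROIs (illustrative)
hubs = rng.normal(size=(T, len(hub_idx)))
data = hubs @ rng.normal(scale=0.8, size=(len(hub_idx), N)) + rng.normal(scale=0.5, size=(T, N))

# Step 1: multivariate regression of every ROI on the hub time series (systemic drivers).
H = np.column_stack([np.ones(T), hubs])                  # add intercept
beta, *_ = np.linalg.lstsq(H, data, rcond=None)          # (n_hubs+1) x N
resid = data - H @ beta

# Step 2: rebuild a model variance-covariance of all ROIs from hub covariance + idiosyncratic noise.
B = beta[1:, :]                                          # drop intercept row
cov_hub = np.cov(hubs, rowvar=False)
model_cov = B.T @ cov_hub @ B + np.diag(resid.var(axis=0, ddof=1))

# Compare model correlations with empirical correlations over all ROI pairs.
def cov_to_corr(c):
    d = np.sqrt(np.diag(c))
    return c / np.outer(d, d)

iu = np.triu_indices(N, 1)
match = np.corrcoef(cov_to_corr(model_cov)[iu], np.corrcoef(data, rowvar=False)[iu])[0, 1]
print("model vs empirical correlation match: %.2f" % match)
```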

100 Reducing the Imbalance Penalty through Artificial Intelligence Methods in Geothermal Production Forecasting: A Case Study for Turkey

Authors: H. Anıl, G. Kar

Abstract:

In addition to being rich in renewable energy resources, Turkey is one of the countries that show promise in geothermal energy production, with high installed power, low cost, and sustainability. Increasing imbalance penalties become an economic burden for organizations, since geothermal generation plants cannot maintain the balance of supply and demand when the production forecasts submitted to the day-ahead market are inadequate. A better production forecast reduces the imbalance penalties of market participants and provides a better balance in the day-ahead market. In this study, machine learning, deep learning and time series methods were used to predict the total generation of the power plants belonging to Zorlu Doğal Electricity Generation, which has a high installed geothermal capacity, for the first week and the first two weeks of March; the imbalance penalties were then calculated from these forecasts and compared with the actual values. The modeling was carried out on two datasets: the basic dataset and a dataset created by extracting new features from it with feature engineering. According to the results, Support Vector Regression outperformed the other traditional machine learning models and exhibited the best performance. In addition, the estimates on the feature-engineered dataset showed lower error rates than those on the basic dataset. It was concluded that the imbalance penalty estimated for the selected organization is lower than the actual imbalance penalty, making the proposed forecasting approach both optimal and profitable.

Keywords: Machine learning, deep learning, time series models, feature engineering, geothermal energy production forecasting.
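
As an illustration of the forecasting setup, the sketch below trains a Support Vector Regression model on lagged values of a generation series and evaluates a held-out week with one-step-ahead predictions. The hourly series, lag choices, and hyperparameters are assumptions for demonstration, not the plant data or the study's tuned models.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical hourly generation series (MWh) standing in for the plants' historical output.
hours = np.arange(24 * 60)
gen = 45 + 5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1.0, hours.size)

# Basic feature engineering: lagged values and hour-of-day as inputs to the regressor.
def make_features(series, lags=(1, 2, 24)):
    rows, targets = [], []
    for t in range(max(lags), len(series)):
        rows.append([series[t - l] for l in lags] + [t % 24])
        targets.append(series[t])
    return np.array(rows), np.array(targets)

X, y = make_features(gen)
split = len(X) - 24 * 7                      # hold out the last week
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print("one-week hold-out MAPE: %.2f%%" % mape)
```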

99 The Impact of HIV/AIDS on Micro-enterprise Development in Kenya: A Study of Obunga Slum in Kisumu

Authors: C. A. Oloo, C. Ojwang

Abstract:

The performance of small and medium enterprises has stagnated in the last two decades, mainly due to the emergence of HIV/AIDS. The disease has had a detrimental effect on the general economy of the country, leading to morbidity and mortality of the Kenyan workforce in their prime age. The present study sought to establish the economic impact of HIV/AIDS on micro-enterprise development in Obunga slum, Kisumu, in terms of production loss and increasing labor-related costs, and to identify possible strategies to address this impact. The study was necessitated by the observation that most micro-enterprises in the slum face severe economic and social crises due to the impact of HIV/AIDS; they become depleted and close down within a short time due to the death of skilled and experienced workers. The study was carried out between June 2008 and June 2009 in Obunga slum. Data were subjected to computer-aided statistical analysis that included descriptive statistics, chi-squared and ANOVA techniques. Chi-squared analysis of micro-enterprise owners' opinions on the impact of HIV/AIDS on the depletion of micro-enterprises, compared to other diseases, indicated strong negative effects of the disease at a significance level of P<0.01. Analysis of variance on the impact of HIV/AIDS on the performance and productivity of micro-enterprises also indicated a negative effect on general performance at a significance level of P<0.01. Therefore, to reduce the negative impacts of HIV/AIDS on micro-enterprise development, there is a need to improve the socio-economic environment, mobilize donors and stakeholders in training and funding, and review the current strategies for addressing the disease. Further conclusive research should also be conducted on a larger scale.

Keywords: Entrepreneurship, HIV-AIDS, Micro-enterprise, Poverty.

98 Experimental Investigation of Heat Transfer and Flow of Nano Fluids in Horizontal Circular Tube

Authors: Abdulhassan Abd. K, Sattar Al-Jabair, Khalid Sultan

Abstract:

We measured the pressure drop and convective heat transfer coefficient of water-based Al (25 nm), Al2O3 (30 nm) and CuO (50 nm) nanofluids flowing through a uniformly heated circular tube in the fully developed laminar flow regime. The experimental friction factor data for the nanofluids show good agreement with the analytical prediction from Darcy's equation for single-phase flow. The experimental results were reduced to the form of Reynolds, Rayleigh and Nusselt numbers, and they show how the local Nusselt number and temperature are distributed along the non-dimensional axial distance from the tube entry. The study established that the nanofluids behave as Newtonian fluids, based on the linear relationship between shear stress and shear rate, for three series of nanofluids (Al, Al2O3 and CuO in water) with concentrations ranging from 0.25 to 2.5 vol%. In addition, the thermophysical properties of the nanofluids, including viscosity, specific heat and density, were measured to verify the property correlations developed by researchers in this area; the difference between the measured values and the correlations did not exceed 3.5%. The study also showed that the increases in heat transfer coefficient for the Al-, Al2O3- and CuO-water nanofluids are 45%, 32% and 25% respectively with insulation, and 36%, 23% and 19% without insulation, demonstrating that in every case using insulation gives a larger enhancement than not using it. Three types of nanoparticles, one metallic and two oxides, were used to determine which gives the best increase in heat transfer.

Keywords: Newtonian, NUR factor, Brownian motion

97 Forgeability Study of Medium Carbon Micro-Alloyed Forging Steel

Authors: M. I. Equbal, R.K. Ohdar, B. Singh, P. Talukdar

Abstract:

Micro-alloyed steel components are used in the automotive industry because they make manufacturing cycles shorter than conventional steel by eliminating heat treatment cycles, so considerable cost and energy savings can be achieved by reducing the number of operations. Micro-alloying elements such as vanadium, niobium or titanium have been added to medium carbon steels to achieve grain refinement, with or without precipitation strengthening, along with a uniform microstructure throughout the matrix. The present study reports the applicability of a medium carbon vanadium micro-alloyed steel in hot forging. Forgeability was determined with respect to different cooling rates after forging in a hydraulic press at 50% diameter reduction in the temperature range of 900-1100 °C. Final microstructures, hardness, tensile strength, and impact strength were evaluated. The friction coefficients of different lubricating conditions, viz. graphite in hydraulic oil, graphite in furnace oil, DF 150 (graphite, water-based) die lubricant, and dry (without any lubrication), were obtained from the ring compression test for the above micro-alloyed steel. Results of the ring compression tests indicate that the graphite in hydraulic oil lubricant is preferred for free forging and the dry condition is preferred for die forging. Exceptionally good forgeability and high resistance to fracture, especially at faster cooling rates, were observed for fine equiaxed ferrite-pearlite grains, some amount of bainite and fine precipitates of vanadium carbides and carbonitrides. The results indicate that the cooling rate has a remarkable effect on the microstructure and mechanical properties at room temperature.

Keywords: Cooling rate, Hot forging, Micro-alloyed, Ring compression.

96 CO2 Emission and Cost Optimization of Reinforced Concrete Frame Designed by Performance Based Design Approach

Authors: Jin Woo Hwang, Byung Kwan Oh, Yousok Kim, Hyo Seon Park

Abstract:

As the greenhouse effect has been recognized as a serious environmental problem worldwide, interest in carbon dioxide (CO2) emissions, which comprise a major part of greenhouse gas (GHG) emissions, has increased recently. Since the construction industry accounts for a relatively large portion of total worldwide CO2 emissions, extensive studies on reducing CO2 emissions in the construction and operation of buildings have been carried out since the 2000s. In parallel, the performance based design (PBD) methodology, based on nonlinear analysis, has been developed vigorously since the 1994 Northridge Earthquake to assess and assure the seismic performance of buildings more accurately, because structural engineers recognized that the prescriptive code-based design approach cannot address inelastic earthquake responses directly or guarantee building performance exactly. Although CO2 emissions and the PBD approach are both rising issues in the construction industry and structural engineering, little research has considered the two issues simultaneously. Thus, the objective of this study is to minimize the CO2 emissions and cost of a building designed by the PBD approach at the structural design stage, considering the structural materials. A 4-story, 4-span reinforced concrete building was optimally designed using the non-dominated sorting genetic algorithm-II (NSGA-II) to minimize CO2 emissions and cost while satisfying prescriptive code regulations and a specific seismic performance objective (collapse prevention under the maximum considered earthquake). The optimized design showed that minimized CO2 emissions and cost were obtained while satisfying the specified seismic performance. Therefore, the methodology proposed in this paper can be used to reduce both the CO2 emissions and the cost of buildings designed by the PBD approach.

Keywords: CO2 emissions, performance based design, optimization, sustainable design.
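
The sketch below is a brute-force stand-in for the bi-objective search described above: it evaluates illustrative CO2 and cost functions over random candidate designs, applies a placeholder performance constraint, and keeps the non-dominated (Pareto) designs. It is not NSGA-II itself, and its objective coefficients, design variables, and constraint are assumptions, not the paper's structural model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design variables for the RC frame: concrete strength (MPa) and steel ratio (%).
# Objective coefficients are placeholders; the study derives CO2 and cost from material quantities.
fc = rng.uniform(24, 40, 500)
rho = rng.uniform(1.0, 3.0, 500)

co2 = 120 * fc ** 0.8 + 900 * rho          # kg CO2 per frame (illustrative)
cost = 55 * fc + 1400 * rho                # cost per frame (illustrative)
perf_ok = (fc * rho) > 40                  # stand-in for the collapse-prevention performance check

def pareto_front(objs):
    """Return a mask of non-dominated points (both objectives minimized)."""
    n = len(objs)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        dominated = np.all(objs <= objs[i], axis=1) & np.any(objs < objs[i], axis=1)
        mask[i] = not dominated.any()
    return mask

feasible = np.column_stack([co2, cost])[perf_ok]
front = feasible[pareto_front(feasible)]
print("feasible designs:", feasible.shape[0], "| non-dominated designs:", front.shape[0])
```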

95 Influence of Internal Topologies on Components Produced by Selective Laser Melting: Numerical Analysis

Authors: C. Malça, P. Gonçalves, N. Alves, A. Mateus

Abstract:

Regardless of the manufacturing process used, subtractive or additive, and of the material, purpose and application, produced components are conventionally solid masses with a more or less complex shape depending on the production technology selected. Aspects such as reduced component weight, associated with the low volume of material required and the almost non-existent material waste, speed and flexibility of production and, primarily, high mechanical strength combined with high structural performance, are competitive advantages in any industrial sector, from automotive, molds, aviation, aerospace, construction, pharmaceuticals and medicine to, more recently, human tissue engineering. Such features, properties and functionalities are attained in metal components produced using the additive Rapid Prototyping technique based on metal powders commonly known as Selective Laser Melting (SLM), with optimized internal topologies and varying densities. In order to produce components with high strength and high structural and functional performance, regardless of the type of application, three different internal topologies were developed and analyzed using numerical computational tools. The developed topologies were numerically subjected to mechanical compression and four-point bending tests. Finite Element Analysis results demonstrate how different internal topologies can improve mechanical properties, even with a high degree of porosity relative to fully dense components. The results are very promising, not only in terms of mechanical resistance but especially because considerable variation in density is achieved without loss of structural and functional performance.

Keywords: Additive Manufacturing, Internal topologies, Porosity, Rapid Prototyping, Selective Laser Melting.

94 Olive Leaves Extract Restored the Antioxidant Perturbations in Red Blood Cells Hemolysate in Streptozotocin-Induced Diabetic Rats

Authors: Ismail I. Abo Ghanema, Kadry M. Sadek

Abstract:

Oxidative stress and the overwhelming free radicals associated with diabetes mellitus are likely linked to the development of complications such as retinopathy, nephropathy and neuropathy. Treatment of diabetic subjects with antioxidants may therefore help attenuate these complications. Olive leaf (Olea europaea) has been endowed with many beneficial and health-promoting properties, mostly linked to its antioxidant activity. This study aimed to evaluate the significance of supplementation with olive leaves extract (OLE) in reducing oxidative stress, hyperglycemia and hyperlipidemia in streptozotocin (STZ)-induced diabetic rats. After induction of diabetes, a significant rise in plasma glucose, lipid profiles except high-density lipoprotein cholesterol (HDL-c), and malondialdehyde (MDA), together with a significant decrease in plasma insulin, HDL-c and plasma reduced glutathione (GSH), as well as alterations in enzymatic antioxidants, was observed in all diabetic animals. During treatment of diabetic rats with 0.5 g/kg body weight of OLE, the levels of plasma MDA, GSH, insulin, lipid profiles, blood glucose and erythrocyte antioxidant enzymes were significantly restored to values not different from those of normal control rats. Untreated diabetic rats, on the other hand, showed persistent alterations in the oxidative stress marker (MDA), blood glucose, insulin, lipid profiles and the antioxidant parameters. These results demonstrate that OLE may help inhibit the hyperglycemia, hyperlipidemia and oxidative stress induced by diabetes and suggest that administration of OLE may be helpful in the prevention, or at least the reduction, of diabetic complications associated with oxidative stress.

Keywords: Diabetes mellitus, olive leaves, oxidative stress, red blood cells

93 Phytochemical Analysis and Antioxidant Activity of Colocasia esculenta (L.) Leaves

Authors: Amit Keshav, Alok Sharma, Bidyut Mazumdar

Abstract:

Colocasia esculenta leaves and roots are widely used as food and feed material in Asian countries such as India, Sri Lanka and Pakistan. The root is high in carbohydrates and rich in zinc. The leaves and stalks are often traditionally preserved to be eaten in the dry season. The leaf juice is a stimulant, expectorant, astringent and appetizer, and is used for otalgia. Given the medicinal uses of the plant leaves, phytochemicals were extracted from them and characterized using Fourier-transform infrared spectroscopy (FTIR) to identify the functional groups. Phytochemical analysis of Colocasia esculenta (L.) leaf was carried out using three solvents (methanol, chloroform, and ethanol) with a Soxhlet apparatus. Leaf powder was used to obtain the extracts, which were qualitatively and quantitatively analyzed for phytochemical content using standard methods. Phytochemical constituents were abundant in the leaf extract. The leaf was found to contain various phytochemicals, such as alkaloids, glycosides, flavonoids, terpenoids, saponins, oxalates and phenols, which could have many medicinal benefits, such as reducing headache, treating congestive heart failure and preventing oxidative cell damage. These phytochemicals were identified using a UV spectrophotometer and the results are presented. To determine the antioxidant activity of the extracts, the DPPH (2,2-diphenyl-1-picrylhydrazyl) method was employed with ascorbic acid as the standard. The DPPH scavenging activity of ascorbic acid was found to be 84%, whereas the ethanol, methanol and chloroform extracts showed 78.92%, 76.46% and 72.46% respectively. In view of this high antioxidant activity, Colocasia esculenta may be recommended for medicinal applications. Functional groups were characterized using FTIR spectroscopy.

Keywords: Antioxidant activity, Colocasia esculenta, leaves, characterization, FTIR.
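
For reference, DPPH scavenging activity is usually computed as the percent drop in absorbance of the radical solution. The snippet below applies that formula; the absorbance readings are hypothetical values chosen only to roughly reproduce the percentages quoted above.

```python
# DPPH radical-scavenging activity is commonly reported as percent inhibition:
#   inhibition (%) = (A_control - A_sample) / A_control * 100

def dpph_inhibition(a_control: float, a_sample: float) -> float:
    """Percent decrease in DPPH absorbance (typically read at 517 nm)."""
    return (a_control - a_sample) / a_control * 100.0

a_control = 0.950  # DPPH solution without extract (hypothetical reading)
samples = {
    "ascorbic acid": 0.152,
    "ethanol extract": 0.200,
    "methanol extract": 0.224,
    "chloroform extract": 0.262,
}

for name, a in samples.items():
    print(f"{name}: {dpph_inhibition(a_control, a):.2f}% scavenging")
```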

92 Game-Theory-Based Downlink Spectrum Allocation in Two-Tier Networks

Authors: Yu Zhang, Ye Tian, Fang Ye, Yixuan Kang

Abstract:

The capacity of conventional cellular networks has reached its upper bound, and this can be addressed by introducing low-cost, easy-to-deploy femtocells. The spectrum interference issue becomes more critical as value-added multimedia services grow rapidly in two-tier cellular networks. Spectrum allocation is one of the effective methods of interference mitigation. This paper proposes a game-theory-based OFDMA downlink spectrum allocation scheme aimed at reducing co-channel interference in two-tier femtocell networks. The framework is formulated as a non-cooperative game, wherein the femto base stations are the players and the available frequency channels are the strategies. The scheme takes full account of competitive behavior and fairness among stations. In addition, the utility function reflects interference essentially from the standpoint of channels. This work focuses on co-channel interference and puts forward a negative-logarithm interference function of the distance weight ratio, aimed at suppressing co-channel interference within the same network layer. This scenario is suitable for actual network deployment, and the system possesses high robustness. According to the proposed mechanism, interference exists only when players employ the same channel for data communication. This paper focuses on implementing spectrum allocation in a distributed fashion. Numerical results show that the signal-to-interference-and-noise ratio can be noticeably improved through the spectrum allocation scheme and that users' downlink quality of service can be satisfied. Besides, the average spectrum efficiency of the cellular network can be significantly improved, as the simulation results show.

Keywords: Femtocell networks, game theory, interference mitigation, spectrum allocation.
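
A toy sketch of the distributed, non-cooperative channel selection described above is shown below: each femto base station repeatedly best-responds by moving to the channel on which it sees the least co-channel interference from the others. For simplicity the interference term uses a plain inverse-distance power law instead of the paper's negative-logarithm distance-weight-ratio function, and the deployment, path-loss exponent, and channel count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical femtocell deployment: 8 femto base stations, 3 available channels.
n_bs, n_ch, alpha = 8, 3, 3.5
pos = rng.uniform(0, 100, size=(n_bs, 2))
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
np.fill_diagonal(dist, np.inf)                     # no self-interference

def interference(i, channel, choices):
    """Co-channel interference seen by player i: only co-channel neighbours contribute."""
    same = [j for j in range(n_bs) if j != i and choices[j] == channel]
    return sum(dist[i, j] ** (-alpha) for j in same)

# Best-response dynamics: each BS repeatedly switches to its least-interfered channel.
choices = rng.integers(0, n_ch, n_bs)
for _ in range(50):
    changed = False
    for i in range(n_bs):
        best = min(range(n_ch), key=lambda c: interference(i, c, choices))
        if best != choices[i]:
            choices[i], changed = best, True
    if not changed:                                # a stable assignment was reached
        break

print("channel assignment:", choices.tolist())
```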

91 Nonlinear Multivariable Analysis of CO2 Emissions in China

Authors: Hsiao-Tien Pao, Yi-Ying Li, Hsin-Chia Fu

Abstract:

This paper addresses the impacts of energy consumption, economic growth, financial development, and population size on environmental degradation using grey relational analysis (GRA) for China, where foreign direct investment (FDI) inflows are the proxy variable for financial development. The more recent historical data for the period 2004-2011 are used, because very old data may not be suitable for analyzing rapidly developing countries. The results of the GRA indicate that the linkage effects of energy consumption-emissions and GDP-emissions are ranked first and second, respectively. This reveals that energy consumption and economic growth are strongly correlated with emissions: higher economic growth requires more energy consumption and increases environmental pollution, and likewise, more efficient energy use needs a higher level of economic development. Therefore, policies to improve energy efficiency and create a low-carbon economy can reduce emissions without hurting economic growth. The FDI-emissions linkage is ranked third, indicating that China does not apply weak environmental regulations to attract inward FDI; furthermore, China's government should strengthen environmental policy when attracting inward FDI. The population-emissions linkage effect is ranked fourth, implying that population size does not directly affect CO2 emissions, even though China has the world's largest population and Chinese people use energy-related products very economically. Overall, energy conservation, efficiency improvement, demand management, and financial development, which aim at curtailing energy waste and reducing both energy consumption and emissions without loss of the country's competitiveness, can be adopted by developing economies. GRA is one of the best ways to build a dynamic analysis model from limited data.

Keywords: Grey relational analysis, foreign direct investment, CO2 emissions, China.
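
The sketch below shows the core grey relational computation (Deng's grey relational coefficient and grade) used to rank how closely each factor tracks CO2 emissions. The yearly series are placeholders, not the Chinese data, and the per-factor normalization is a slight simplification of the usual global min/max convention.

```python
import numpy as np

# Hypothetical yearly series (2004-2011): CO2 emissions as the reference sequence and four
# candidate factors as comparison sequences (values are placeholders, not the study's data).
co2    = np.array([5.2, 5.6, 6.1, 6.6, 7.0, 7.5, 8.0, 8.5])     # reference sequence x0
energy = np.array([2.1, 2.3, 2.5, 2.7, 2.9, 3.1, 3.3, 3.5])
gdp    = np.array([16., 18., 21., 25., 30., 34., 40., 47.])
fdi    = np.array([60., 72., 69., 83., 92., 90., 105., 116.])
pop    = np.array([1.30, 1.31, 1.31, 1.32, 1.33, 1.33, 1.34, 1.34])

def grey_relational_grade(reference, series, rho=0.5):
    """Deng's grey relational grade between a reference and a comparison sequence."""
    norm = lambda x: (x - x.min()) / (x.max() - x.min())          # range normalization
    delta = np.abs(norm(reference) - norm(series))
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coeff.mean()

factors = {"energy": energy, "GDP": gdp, "FDI": fdi, "population": pop}
grades = {k: grey_relational_grade(co2, v) for k, v in factors.items()}
for name, g in sorted(grades.items(), key=lambda kv: -kv[1]):
    print(f"{name}: grade = {g:.3f}")
```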

90 Formulation Development and Moisturising Effects of a Topical Cream of Aloe vera Extract

Authors: Akhtar N, Khan BA, Khan MS, Mahmood T, Khan HMS, Iqbal M, Bashir S

Abstract:

This study was designed to formulate and pharmaceutically evaluate a topical skin-care cream (w/o emulsion) of Aloe Vera versus its vehicle (Base) as control, and to determine their effects on stratum corneum (SC) water content and transepidermal water loss (TEWL). A Base containing no extract and a Formulation containing 3% concentrated Aloe Vera extract were developed by entrapping the extract in the inner aqueous phase of the w/o emulsion (cream). Lemon oil was incorporated to improve the odor. Both the Base and the Formulation were stored at 8 °C ± 0.1 °C (in a refrigerator), 25 °C ± 0.1 °C, 40 °C ± 0.1 °C, and 40 °C ± 0.1 °C with 75% RH (in an incubator) for a period of 4 weeks to predict their stability. The evaluation parameters consisted of color, smell, type of emulsion, phase separation, electrical conductivity, centrifugation, liquefaction and pH. Both the Base and the Formulation were then applied to the cheeks of 21 healthy human volunteers for a period of 8 weeks, and SC water content and TEWL were monitored every week to measure any effect produced by these topical creams. The expected organoleptic stability of the creams was achieved over the 4-week in vitro study period. The odor disappeared with the passage of time due to volatilization of the lemon oil. Both the Base and the Formulation produced significant (p≤0.05) changes in TEWL with respect to time. SC water content was significantly (p≤0.05) increased by the Formulation, while the Base had an insignificant (p>0.05) effect on SC water content. The newly formulated Aloe Vera cream is therefore suitable for improving and quantitatively monitoring skin hydration (SC water content / moisturizing effect) and for reducing TEWL in people with dry skin.

Keywords: Aloe Vera; Skin; Stratum corneum (SC) water content and Transepidermal water loss (TEWL).

89 Using Artificial Neural Network and Leudeking-Piret Model in the Kinetic Modeling of Microbial Production of Poly-β-Hydroxybutyrate

Authors: A.Qaderi, A. Heydarinasab, M. Ardjmand

Abstract:

Poly-β-hydroxybutyrate (PHB) is one of the best-known biopolymers and has various applications in the production of biodegradable carriers. The most important strategy for enhancing the efficiency of the production process and reducing the price of PHB is the accurate expression of the kinetic model of product formation and of the parameters that affect it, such as dry cell weight (DCW) and substrate consumption. Considering the high capability of artificial neural networks in modeling and simulating non-linear, multivariable systems such as biological and chemical processes, a three-layer perceptron neural network model was used in this study for the kinetic modeling of microbial PHB production, a complex and non-linear biological process. The artificial neural network trains itself and finds the hidden laws behind the data by mapping experimental data, with dry cell weight and substrate concentration as inputs and PHB concentration as output. For training the network, a series of experimental data for PHB production from Hydrogenophaga pseudoflava grown on a glucose carbon source was used. After training, two other experimental data sets that had not been used in the network's training, consisting of dry cell concentration and substrate concentration, were applied as inputs, and PHB concentration was predicted by the network. Comparison of the network predictions with the experimental data indicated high prediction accuracy for both fructose and whey carbon sources. In addition, to better understand the ability of neural networks to model biological processes, the microbial PHB production kinetics were also modeled with the Leudeking-Piret empirical equation. The results indicated that the artificial neural network predicted PHB concentration more accurately than the Leudeking-Piret model.

Keywords: Kinetic Modeling, Poly-β-Hydroxybutyrate (PHB), Hydrogenophaga Pseudoflava, Artificial Neural Network, Leudeking-Piret
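
The sketch below mirrors the described mapping with scikit-learn: a small multilayer perceptron is trained on (dry cell weight, substrate concentration) pairs to predict PHB concentration and is then queried on unseen points. The fermentation values, network size, and training settings are assumptions for illustration, not the study's data or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical fermentation samples: dry cell weight (g/L) and substrate concentration (g/L)
# as inputs, PHB concentration (g/L) as output. Values are placeholders, not the study's data.
dcw = np.linspace(0.5, 8.0, 40)
substrate = 30.0 - 3.2 * dcw + rng.normal(0, 0.3, dcw.size)
phb = 0.45 * dcw + 0.02 * (30.0 - substrate) + rng.normal(0, 0.05, dcw.size)

X = np.column_stack([dcw, substrate])

# A single-hidden-layer perceptron, loosely mirroring the three-layer network described above.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, phb)

new_points = np.array([[3.0, 20.0], [6.0, 11.0]])   # unseen (DCW, substrate) pairs
print("predicted PHB (g/L):", model.predict(new_points).round(3))
```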

88 Low Overhead Dynamic Channel Selection with Cluster-Based Spatial-Temporal Station Reporting in Wireless Networks

Authors: Zeyad Abdelmageid, Xianbin Wang

Abstract:

Choosing the operating channel for a WLAN access point (AP) has traditionally been a static channel assignment process initiated by the user during AP deployment, which fails to cope with the dynamic conditions of the assigned channel at the station side afterwards. However, the dramatically growing number of Wi-Fi APs and stations operating in the unlicensed band has led to dynamic, distributed and often severe interference. This highlights the urgent need for the AP to dynamically select the best overall channel of operation for the basic service set (BSS) by considering the distributed and changing channel conditions at all stations. Consequently, dynamic channel selection algorithms that consider feedback from the station side have been developed. Despite the significant performance improvement, existing channel selection algorithms suffer from very high feedback overhead. Feedback latency from the STAs, due to the high overhead, can cause the eventually selected channel to no longer be optimal for operation because of the dynamic sharing nature of the unlicensed band. This has inspired us to develop a dynamic channel selection algorithm with reduced overhead through the proposed low-overhead, cluster-based station reporting mechanism. The main idea behind cluster-based station reporting is the observation that STAs that are very close to each other tend to have very similar channel conditions. Instead of requesting each STA to report on every candidate channel, which causes high overhead, the AP divides STAs into clusters and then assigns each STA in each cluster one channel on which to report feedback. With proper design of the cluster-based reporting, the AP does not lose any information about the channel conditions at the station side while reducing feedback overhead. The simulation results show equal, and at times better, performance with a fraction of the overhead. We believe that this algorithm has great potential for the design of future dynamic channel selection algorithms with low overhead.

Keywords: Channel assignment, Wi-Fi networks, clustering, DBSCAN, overhead.
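
To make the reporting idea concrete, the sketch below clusters station positions with DBSCAN (one of the keywords above) and then spreads the candidate channels across the members of each cluster, so each channel is still covered by at least one nearby reporter. Station coordinates, DBSCAN parameters, and the channel list are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(7)

# Hypothetical STA positions (metres) inside one BSS; nearby STAs see similar channel conditions.
stations = np.vstack([
    rng.normal([10, 10], 1.5, size=(6, 2)),
    rng.normal([30, 12], 1.5, size=(5, 2)),
    rng.normal([18, 28], 1.5, size=(6, 2)),
])
candidate_channels = [1, 6, 11, 36, 40]

# Step 1: group STAs into spatial clusters (eps/min_samples are illustrative tuning values).
labels = DBSCAN(eps=5.0, min_samples=2).fit_predict(stations)

# Step 2: within each cluster, spread the candidate channels across members so every channel
# gets at least one reporter per cluster, instead of every STA probing every channel.
assignments = {}
for cluster in set(labels):
    members = np.flatnonzero(labels == cluster)
    for k, sta in enumerate(members):
        assignments[int(sta)] = candidate_channels[k % len(candidate_channels)]

print("cluster labels:", labels.tolist())
print("STA -> channel to report on:", assignments)
```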

87 An Overview of Project Management Application in Computational Fluid Dynamics

Authors: Sajith Sajeev

Abstract:

The application of Computational Fluid Dynamics (CFD) is widespread in engineering and industry, including aerospace, automotive, and energy. CFD simulations necessitate the use of intricate mathematical models and substantial computational power to accurately describe the behavior of fluids. The implementation of CFD projects can be difficult, and a well-structured approach to project management is required to ensure the timely and cost-effective delivery of high-quality results. The objective of this paper is to provide an overview of project management in CFD, including its problems, methodologies, and best practices. The study opens with a discussion of the difficulties connected with CFD project management, such as the complexity of the mathematical models, the need for extensive computational resources, and the difficulties associated with validating and verifying the results. The study then examines the project management methodologies typically employed in CFD, such as the Traditional/Waterfall model, Agile, and Scrum. The advantages and disadvantages of each technique are compared, and suggestions are made for their effective implementation in CFD projects. The study concludes with a discussion of the best practices for project management in CFD, including the use of a well-defined project scope, a clear project plan, and effective teamwork. It also highlights the significance of continuous process improvement and the use of metrics to monitor progress and identify improvement opportunities. This article serves as a resource for project managers, researchers, and practitioners who wish to implement efficient project management methods in CFD; it can aid in enhancing project outcomes, reducing risks, and improving the productivity of CFD projects.

Keywords: Project management, Computational Fluid Dynamics, Traditional/Waterfall methodology, agile methodology, scrum methodology.

86 A New Distribution Network Reconfiguration Approach using a Tree Model

Authors: E. Dolatdar, S. Soleymani, B. Mozafari

Abstract:

Power loss reduction is one of the main targets in the power industry, so in this paper the problem of finding the optimal configuration of a radial distribution system for loss reduction is considered. Optimal reconfiguration involves selecting the best set of branches to be opened, one from each loop, to reduce resistive line losses and relieve overloads on feeders by shifting load to adjacent feeders. However, since there are many candidate switching combinations in the system, feeder reconfiguration is a complicated problem. In this paper a new approach is proposed, based on a simple optimum loss calculation, by determining optimal trees of the given network. From graph theory, a distribution network can be represented by a graph consisting of a set of nodes and branches; in fact, the problem can be viewed as determining an optimal tree of the graph that simultaneously ensures the radial structure of each candidate topology. In this method a refined genetic algorithm is also set up, and some improvements are made to the chromosome coding. An implementation of the algorithm presented in [7], with modifications to the load flow program, is also applied and compared with the proposed method. In [7], an algorithm is proposed in which the choice of the switches to be opened is based on simple heuristic rules; it reduces the number of load flow runs, narrows the switching combinations to a smaller number, and gives the optimum solution. To demonstrate the validity of these methods, computer simulations with the PSAT and MATLAB programs are carried out on the 33-bus test system. The results show that the performance of the proposed method is better than that of the method in [7] and other methods.

Keywords: Distribution System, Reconfiguration, Loss Reduction , Graph Theory , Optimization , Genetic Algorithm

85 The Integration of Cleaner Production Innovation and Creativity for Supply Chain Sustainability of Bogor Batik SMEs

Authors: Sawarni Hasibuan, Juliza Hidayati

Abstract:

Competitiveness and sustainability issues put pressure not only on big companies but also on small and medium enterprises (SMEs). SMEs Batik Bogor is one of the local culture-based creative industries in Bogor city that is also dealing with the issue of sustainability. The purpose of this research is to develop a sustainability framework for Indonesian Batik SMEs, taking SMEs Batik Bogor as a case, by integrating cleaner production innovation into its supply chain. The approach used comprises desk study, field survey, in-depth interviews, and benchmarking of best practices in SME sustainability. In-depth interviews involved stakeholders to identify the needs and standards of sustainability for Batik SMEs. Data analysis was done using the benchmarking method, the Multidimensional Scaling (MDS) method, and Strength, Weakness, Opportunity, Threat (SWOT) analysis. The results recommend a sustainability framework for Batik SMEs in Indonesia. The sustainability status of SMEs Batik Bogor is classified as moderately sustainable. Factors supporting the sustainability of SMEs Batik Bogor include a strong commitment of top management to adopting the cleaner production innovation and creativity approach. Successful cleaner production innovations have been implemented primarily in substituting toxic dye materials with non-toxic ones, reducing the intensity of non-renewable energy use, and reusing and recycling solid waste. "Mosaic Batik" is one of the innovations in utilizing solid batik waste, produced by the company's R&D center, that benefits the three pillars of sustainability: financial, environmental, and social. The sustainability of SMEs Batik Bogor cannot be separated from the support of the Bogor City Government, which proactively facilitates the promotion of sustainable innovations produced by SMEs Batik Bogor.

Keywords: Cleaner production innovation, creativity, SMEs Batik, sustainability supply chain.

84 Comparative Efficacy of Pomegranate Juice, Peel and Seed Extract in the Stabilization of Corn Oil under Accelerated Conditions

Authors: Zoi Konsoula

Abstract:

Antioxidant-rich extracts were prepared from pomegranate peels, seeds and juice using methanol and ethanol, and their antioxidant activity was evaluated by the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging and Ferric Reducing Antioxidant Power (FRAP) methods. Both analytical methods indicated a higher antioxidant activity in the extracts prepared from peels, which was comparable to that of butylated hydroxytoluene (BHT). Furthermore, the antioxidant activity was correlated with the phenolic and flavonoid content of the various extracts. The antioxidant effectiveness of the extracts was also assessed using corn oil as the oxidation substrate. More specifically, preheated corn oil samples stabilized with extracts at a concentration of 250 ppm, 500 ppm or 1,000 ppm were subjected to accelerated aging (100 °C, 10 days), and the extent of oxidative alteration was followed by measuring the peroxide value, conjugated dienes and trienes, and p-anisidine value. BHT at its legal limit (200 ppm) served as a standard besides the control sample. Results from the different parameters were in agreement with each other, suggesting that pomegranate extracts can stabilize corn oil effectively under accelerated conditions at all concentrations tested. However, the magnitude of oil stabilization depended strongly on the amount of extract added, and this was positively correlated with the phenolic content. Pomegranate peel extracts, which exhibited not only the highest phenolic and flavonoid content but also the highest antioxidant activity, were more potent in inhibiting oxidative deterioration. Both methanolic and ethanolic peel extracts at a concentration of 500 ppm exerted a stabilizing effect comparable to that of BHT, while at a concentration of 1,000 ppm they exhibited higher stabilization efficiency than BHT. Finally, heating the oil samples resulted in a time-dependent decrease in their antioxidant capacity. Samples containing peel extracts retained their antioxidant capacity for a longer period, indicating that these extracts contained active compounds that offered superior antioxidant protection to corn oil.

Keywords: Antioxidant activity, corn oil, oxidative deterioration, pomegranate.

83 Reduction of Plutonium Production in Heavy Water Research Reactor: A Feasibility Study through Neutronic Analysis Using MCNPX2.6 and CINDER90 Codes

Authors: H. Shamoradifar, B. Teimuri, P. Parvaresh, S. Mohammadi

Abstract:

One of the main characteristics of heavy water moderated reactors is their high production of plutonium. This article demonstrates the possibility of reducing plutonium and other actinides in a Heavy Water Research Reactor. Among the many ways of reducing plutonium production in a heavy water reactor, this research focused on changing the fuel from natural uranium to mixed thorium-uranium fuel. The main fissile nucleus in thorium-uranium fuels is U-233, which is produced after neutron absorption by Th-232, so thorium-uranium fuels have some known advantages compared to uranium fuels. Accordingly, four thorium-uranium fuels with different composition ratios were chosen for our simulations: a) 10% UO2 - 90% ThO2 (enrichment 20%); b) 15% UO2 - 85% ThO2 (enrichment 10%); c) 30% UO2 - 70% ThO2 (enrichment 5%); d) 35% UO2 - 65% ThO2 (enrichment 3.7%). Natural uranium oxide (UO2) is considered the reference fuel; in other words, all of the calculated data are compared with the corresponding data for the uranium fuel. Neutronic parameters were calculated and used as comparison parameters. All calculations were performed with a Monte Carlo (MCNPX2.6) steady-state reaction rate calculation linked to a deterministic depletion calculation (CINDER90). The computational data showed that thorium-uranium fuels with the four different fissile composition ratios can satisfy the safety and operating requirements of a Heavy Water Research Reactor. Furthermore, thorium-uranium fuels have very good proliferation resistance and consume less fissile material than uranium fuels for the same reactor operation time. Using mixed thorium-uranium fuels reduced the long-lived α-emitting, highly radiotoxic wastes and the radiotoxicity level of the spent fuel.

Keywords: Burn-up, heavy water reactor, minor actinides, Monte Carlo, proliferation resistance.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 973
82 Improving Fake News Detection Using K-means and Support Vector Machine Approaches

Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy

Abstract:

Fake news and false information are major challenges for all types of media, especially social media. As large social networks such as Facebook and Twitter have admitted, there is a great deal of false information, fake likes and views, and duplicated accounts. Most information appearing on social media is doubtful and in some cases misleading, and it needs to be detected as soon as possible to avoid a negative impact on society. The dimensionality of fake news datasets is growing rapidly, so to detect false information with less computation time and complexity, the dimensionality needs to be reduced. One of the best techniques for reducing data size is feature selection. The aim of this technique is to choose a feature subset from the original set that improves classification performance. In this paper, a feature selection method is proposed that integrates K-means clustering and Support Vector Machine (SVM) approaches and works in four steps. First, the similarities between all features are calculated. Then, the features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several benchmark datasets, and the outcome showed better classification of false information for our method. Detection performance was improved in two respects: on the one hand, the detection runtime decreased, and on the other hand, the classification accuracy increased because of the elimination of redundant features and the reduction of dataset dimensionality.
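As an illustration of these four steps, the sketch below (Python with scikit-learn, assumed to be available) clusters the features with K-means, keeps one representative feature per cluster, and trains an SVM on the reduced feature set. The placeholder data, the number of clusters and the "closest-to-centroid" selection rule are assumptions for demonstration only, not the exact procedure of the paper.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def select_features_by_clustering(X, n_clusters=20):
    # Steps 1-2: treat each column (feature) as a point and group similar
    # features with K-means; similarity is captured by the Euclidean
    # distance between feature vectors.
    feature_vectors = X.T                      # shape: (n_features, n_samples)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(feature_vectors)

    # Step 3: from every cluster keep the single feature closest to the
    # cluster centroid and discard the redundant ones.
    selected = []
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        if members.size == 0:
            continue
        dists = np.linalg.norm(feature_vectors[members] - km.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(dists)])
    return np.array(selected)

# Step 4: classify fake vs. genuine news with an SVM on the reduced subset.
X = np.random.rand(500, 200)                   # placeholder feature matrix
y = np.random.randint(0, 2, size=500)          # placeholder labels (0 = genuine, 1 = fake)
idx = select_features_by_clustering(X, n_clusters=20)
X_tr, X_te, y_tr, y_te = train_test_split(X[:, idx], y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))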

Keywords: Fake news detection, feature selection, support vector machine, K-means clustering, machine learning, social media.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 4436
81 The Carbon Footprint Model as a Plea for Cities towards Energy Transition: The Case of Algiers Algeria

Authors: Hachaichi Mohamed Nour El-Islem, Baouni Tahar

Abstract:

Environmental sustainability, more than a trans-disciplinary or purely scientific issue, is the main problem characterizing all modern cities today. In developing countries, this concern is expressed in a plethora of critical urban ills: traffic congestion, air pollution, noise, urban decay, and rising energy consumption and CO2 emissions, which blemish the urban landscape and may threaten citizens’ health and welfare. As in other developing-world cities, the rapid growth of Algiers’ population and the increasing scale of the city eventually lead to more daily trips, higher energy consumption and higher CO2 emissions. In addition, the lack of proper and sustainable planning of the city’s infrastructure is one of the most pressing issues from which Algiers suffers. The aim of this contribution is to estimate the carbon deficit of the city of Algiers, Algeria, using the Ecological Footprint Model (carbon footprint). To achieve this goal, the amount of CO2 from fuel combustion was calculated and aggregated across five sectors (agriculture, industry, residential, tertiary and transportation); in addition, Algiers’ biocapacity (CO2-uptake land) was calculated to determine the ecological overshoot. This study shows that Algiers’ transport system is not sustainable, generating more than 50% of Algiers’ total carbon footprint, which cannot be sequestered by the local forest land. The research also shows that the Carbon Footprint Assessment can be a relevant indicator for designing sustainable strategies and policies that strive to reduce CO2 by acting on energy consumption in the transportation sector and reducing the use of fossil fuels as the main energy input.
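As a back-of-the-envelope illustration of the calculation chain described above (sectoral CO2 totals, conversion to CO2-uptake land, comparison with local biocapacity), the Python sketch below uses placeholder emission figures, an assumed ocean-uptake share, an assumed forest sequestration rate and an assumed local forest area; none of these numbers are taken from the study.

# Sectoral CO2 from fuel combustion (tonnes per year) - placeholder values.
SECTOR_CO2_TONNES = {
    "agriculture": 0.2e6,
    "industry": 1.5e6,
    "residential": 2.0e6,
    "tertiary": 1.0e6,
    "transportation": 5.0e6,   # transport dominates in the case described above
}

OCEAN_UPTAKE_FRACTION = 0.30   # assumed share of CO2 absorbed by the oceans
SEQUESTRATION_T_PER_HA = 2.6   # assumed tonnes of CO2 one forest hectare absorbs per year
LOCAL_FOREST_HA = 1.0e6        # assumed CO2-uptake land available to the city

total_co2 = sum(SECTOR_CO2_TONNES.values())
carbon_footprint_ha = total_co2 * (1 - OCEAN_UPTAKE_FRACTION) / SEQUESTRATION_T_PER_HA
overshoot_ha = carbon_footprint_ha - LOCAL_FOREST_HA

print(f"carbon footprint: {carbon_footprint_ha:,.0f} ha of forest land required")
print(f"biocapacity:      {LOCAL_FOREST_HA:,.0f} ha available")
print(f"ecological overshoot: {max(overshoot_ha, 0):,.0f} ha")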

Keywords: Biocapacity, carbon footprint, ecological footprint assessment, energy consumption.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 860
80 Destination Decision Model for Cruising Taxis Based on Embedding Model

Authors: Kazuki Kamada, Haruka Yamashita

Abstract:

In Japan, taxis are a popular mode of transportation and the taxi industry is a major business. In recent years, however, the declining number of taxi drivers has become a serious problem. In the taxi business, three main methods of finding passengers are used. The first is "cruising", in which drivers pick up passengers while driving around. The second is "waiting", in which drivers wait for passengers near places with high taxi demand, such as hospital entrances and train stations. The third is "dispatching", in which rides are allocated by the taxi company. Cruising, in particular, requires experience and intuition to find passengers, and it is difficult to decide on a cruising destination. A strong recommendation system for cruising taxis would help new drivers find passengers and could be a solution to the decreasing number of drivers in the taxi industry. In this research, we propose a method of recommending destinations to cruising taxi drivers. As a machine learning technique, embedding models, which map high-dimensional data to a low-dimensional space, are widely used in data analysis to represent semantic relationships between data points clearly. Taxi drivers have favorite courses based on their experience, and these courses differ from driver to driver. We assume that a cruising course carries meaning, such as a course for finding business passengers (circling the business districts of the city or going to main stations) or a course for finding tourist passengers (circling sightseeing spots or large hotels), and we extract the meaning of the destinations. We analyze the cruising history data of taxis based on the embedding model and propose a recommendation system for finding passengers. Finally, we demonstrate destination recommendations for cruising taxi drivers based on real-world data analysis using the proposed method.
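A minimal sketch of the embedding idea follows, assuming each driver's cruising history is available as a sequence of visited zones: the sequences are treated as "sentences" so that zones visited in similar contexts obtain nearby vectors, and destinations are then recommended by vector similarity. The gensim Word2Vec model, the zone names and the trajectories are illustrative assumptions, not the embedding model or data actually used in the study.

from gensim.models import Word2Vec

# Placeholder cruising histories: one sequence of visited zones per driver per shift.
cruising_histories = [
    ["station_A", "office_district", "office_district", "hotel_B"],
    ["sightseeing_C", "hotel_B", "station_A"],
    ["office_district", "station_A", "hospital_D"],
]

# Embed the zones in a low-dimensional space from their co-occurrence in courses.
model = Word2Vec(sentences=cruising_histories, vector_size=32,
                 window=2, min_count=1, sg=1, epochs=50)

# Recommend cruising destinations: the zones whose embeddings are closest
# to the driver's current zone.
current_zone = "station_A"
for zone, score in model.wv.most_similar(current_zone, topn=3):
    print(zone, round(score, 3))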

Keywords: Taxi industry, decision making, recommendation system, embedding model.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 391