Search results for: operation estimation

836 Improving Lane Detection for Autonomous Vehicles Using Deep Transfer Learning

Authors: Richard O’Riordan, Saritha Unnikrishnan

Abstract:

Autonomous Vehicles (AVs) are incorporating an increasing number of ADAS features, including automated lane-keeping systems. In recent years, many research papers on lane detection algorithms have been published, ranging from computer vision techniques to deep learning methods. The transition from the lower levels of autonomy defined in the SAE framework to higher autonomy levels requires increasingly complex models and algorithms that must be highly reliable in their operation and functionality. Furthermore, these algorithms have no room for error when operating at high levels of autonomy. Although current research details existing computer vision and deep learning algorithms, their methodologies and individual results, it also details the challenges the algorithms face, the resources they need to operate, and the shortcomings they experience when detecting lanes in certain weather and lighting conditions. This paper explores these shortcomings and attempts to implement a lane detection algorithm that could be used to achieve improvements in AV lane detection systems. It uses a pre-trained LaneNet model to detect lane and non-lane pixels through binary segmentation as the base detection method, applied first to the existing BDD100k dataset and then to a custom dataset generated locally. The first selected road network will consist of modern, well-laid roads with up-to-date infrastructure and lane markings, while the second will be an older network whose infrastructure and lane markings reflect its age. The performance of the proposed method will be evaluated on the custom dataset and compared to its performance on the BDD100k dataset. In summary, this paper uses transfer learning to provide a fast and robust lane detection algorithm that can handle various road conditions and provide accurate lane detection.
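As a rough illustration of the transfer-learning step described above, the sketch below fine-tunes a pre-trained segmentation network for binary lane/non-lane prediction. LaneNet weights are not assumed available here, so a torchvision FCN-ResNet backbone stands in for the pre-trained encoder; the data loader and tensor shapes are hypothetical.

```python
# Minimal transfer-learning sketch (PyTorch); a torchvision segmentation model
# stands in for the pre-trained LaneNet encoder. Shapes/paths are assumptions.
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50, FCN_ResNet50_Weights

model = fcn_resnet50(weights=FCN_ResNet50_Weights.DEFAULT)
model.classifier[4] = nn.Conv2d(512, 1, kernel_size=1)  # binary lane head

for p in model.backbone.parameters():   # freeze the pre-trained encoder,
    p.requires_grad = False             # fine-tune only the new head

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)

def train_step(images, masks):          # images: (N,3,H,W); masks: (N,1,H,W) in {0,1}
    optimizer.zero_grad()
    logits = model(images)["out"]       # torchvision returns a dict with "out"
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```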

Keywords: ADAS, autonomous vehicles, deep learning, LaneNet, lane detection

Procedia PDF Downloads 84
835 Closed Incision Negative Pressure Therapy Dressing as an Approach to Manage Closed Sternal Incisions in High-Risk Cardiac Patients: A Multi-Centre Study in the UK

Authors: Rona Lee Suelo-Calanao, Mahmoud Loubani

Abstract:

Objective: Sternal wound infection (SWI) following cardiac operations has a significant impact on patient morbidity and mortality. It also contributes to longer hospital stays and increased treatment costs. SWI management is mainly focused on treatment rather than prevention. This study looks at the effect of closed incision negative pressure therapy (ciNPT) dressing in helping to reduce the incidence of superficial SWI in high-risk patients after cardiac surgery. The ciNPT dressing was evaluated at three cardiac hospitals in the United Kingdom. Methods: All patients who had cardiac surgery from 2013 to 2021 were included in the study. Patients were classed as high risk if they had two or more of the recognised risk factors: obesity, age above 80 years, diabetes, and chronic obstructive pulmonary disease. Patients receiving standard dressing (SD) and patients receiving ciNPT were propensity matched, and Fisher's exact test (two-tailed) and the unpaired t-test were used to analyse categorical and continuous data, respectively. Results: There were 766 matched cases in each group. Total SWI incidence was lower in the ciNPT group than in the SD group (43 (5.6%) vs. 119 (15.5%), p=0.0001). There were fewer deep sternal wound infections (14 (1.8%) vs. 31 (4.04%), p=0.0149) and fewer superficial infections (29 (3.7%) vs. 88 (11.4%), p=0.0001) in the ciNPT group compared to the SD group. However, the ciNPT group showed a longer average length of stay (11.23 ± 13 days versus 9.66 ± 10 days; p=0.0083) and a higher mean logistic EuroSCORE (11.143 ± 13 versus 8.094 ± 11; p=0.0001). Conclusion: Utilization of ciNPT as an approach to help reduce the incidence of superficial and deep SWI may be effective in high-risk patients requiring cardiac surgery.
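The headline comparison can be reproduced directly from the 2x2 counts quoted in the abstract (43/766 ciNPT vs. 119/766 SD); a minimal check with SciPy, assuming those totals, is sketched below.

```python
# Two-tailed Fisher's exact test on the reported total SWI counts.
from scipy.stats import fisher_exact

table = [[43, 766 - 43],      # ciNPT: infected, not infected
         [119, 766 - 119]]    # standard dressing: infected, not infected
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.2e}")  # p well below 0.001
```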

Keywords: closed incision negative pressure therapy, surgical wound infection, cardiac surgery complication, high risk cardiac patients

Procedia PDF Downloads 75
834 Effect of Cement Amount on California Bearing Ratio Values of Different Soils

Authors: Ayse Pekrioglu Balkis, Sawash Mecid

Abstract:

Due to the continued growth and rapid development of road construction worldwide, and because road sub-layers consist of soil layers, the identification and recognition of soil type and of soil behavior under different conditions help us select soil according to specifications and engineering characteristics and, where necessary, stabilize the soil and treat undesirable properties by adding materials such as bitumen, lime, or cement. If the soil beneath the road does not meet the standards, construction needs more time: a large part of the soil must be removed, transported and sometimes deposited, after which purchased sand and gravel are transported to the site, placed at full depth and compacted. Stabilization with cement or other treatments gives an opportunity to use the existing soil as a base material instead of removing it and purchasing and transporting better fill materials. Classification of soil according to the AASHTO and USCS systems helps engineers anticipate soil behavior and select the best treatment method. In this study, soil classification and the relation between soil classification and stabilization method are discussed, and cement stabilization at different percentages was selected for soil treatment based on NCHRP. There are different parameters to define the strength of soil; in this study, the CBR is used. Cement was added to the soil at 0%, 3%, 7% and 10% to evaluate the effect of the added cement on the CBR of the treated soil. Carrying out the stabilization process with different cement contents helps engineers select an economical cement amount according to the project specification and characteristics. Stabilization at optimum moisture content (OMC), and the effect of the mixing rate on soil strength, were examined in the laboratory and in field construction operations to observe the improvement in strength and plasticity. Cement stabilization is quicker than a universal method such as removing and replacing field soils. Cement addition increases the CBR values of different soil types by 22-69%.

Keywords: California Bearing Ratio, cement stabilization, clayey soil, mechanical properties

Procedia PDF Downloads 380
833 Human Rabies Survivors in India: Epidemiological, Immunological and Virological Studies

Authors: Madhusudana S. N., Reeta Mani, Ashwini S., Satishchandra P., Netravati, Udhani V., Fiaz A., Karande S.

Abstract:

Rabies is an acute encephalitis that is considered 100% fatal despite occasional reports of survivors. In recent times, however, more cases of human rabies survivors have been reported. In the last five years, there have been six laboratory-confirmed human rabies survivors in India alone. All cases were children below 15 years, and all contracted the disease through dog bites. All of them had received a full or partial course of rabies vaccination, and 4 out of 6 had also received rabies immunoglobulin. All cases were treated in intensive care units in hospitals at Bangalore, Mumbai, Chandigarh, Lucknow and Goa. We report here the results of immunological and virological studies conducted at our laboratory on these patients. The clinical samples obtained from these patients were serum, CSF, nuchal skin biopsy and saliva. Serum and CSF samples were subjected to the standard RFFIT for estimation of rabies neutralizing antibodies. Skin biopsy, CSF and saliva were processed by TaqMan real-time PCR for detection of viral RNA. CSF, saliva and skin homogenates were also processed for virus isolation by inoculation of suckling mice. The PBMCs isolated from fresh blood were subjected to an ELISPOT assay to determine the type of immune response (Th1/Th2). Both CSF and serum were also investigated for selected cytokines by Luminex assay. The levels of antibodies to the viral G protein and N protein were determined by ELISA. All survivors had very high titers of RVNA in serum and CSF, 100-fold higher than in non-survivors and vaccine controls. A five-fold rise in titer could be demonstrated in 4 out of 6 patients. All survivors had a significant increase in antibodies to G protein in both CSF and serum when compared to non-survivors. There was a profound and robust Th1 response in all survivors, indicating that interferon gamma could be an important factor in virus clearance. We could isolate viral RNA in only one patient, four years after he had developed symptoms. Partial N gene sequencing revealed 99% homology to the species I strain prevalent in India. Levels of selected cytokines in CSF and serum did not reveal any difference between survivors and non-survivors. To conclude, survival from rabies is mediated by virus-specific immune responses of the host, and clearance of rabies virus from the CNS may involve the participation of both Th2 and Th1 immune responses.

Keywords: rabies, rabies treatment, rabies survivors, immune response in rabies encephalitis

Procedia PDF Downloads 317
832 Towards a Doughnut Economy: The Role of Institutional Failure

Authors: Ghada El-Husseiny, Dina Yousri, Christian Richter

Abstract:

Social services are often characterized by market failures, which justifies government intervention in the provision of these services. It is widely acknowledged that government intervention breeds corruption, since resources are transferred from one party to another. However, the magnitude of the negative impact of corruption on publicly provided services and development outcomes is still being extensively studied. Corruption has the power to hinder development and cripple the march towards the Sustainable Development Goals. Corruption diminishes the efficiency and effectiveness of public health and education spending and directly impacts the outcomes of these sectors. This paper empirically examines the impact of institutional failure on public sector service provision, with the sole purpose of studying the impact of corruption on SDG 3 and SDG 4: good health and wellbeing and quality education, respectively. The paper explores the effect of corruption on these goals from various perspectives and extends the analysis by examining whether the impact of corruption differs once a country's current state of corruption is accounted for, using pooled OLS (ordinary least squares) and fixed-effects panel estimation on 22 corrupt and 22 clean countries between 2000 and 2017. Results show that corruption in both corrupt and clean countries has a more severe impact on the health sector than on the education sector. In almost all specifications, corruption has an insignificant effect on school enrollment rates but a significant effect on infant mortality rates. Results further indicate that, on average, a 1-point increase in the CPI (Corruption Perceptions Index) can increase health expenditures by 0.116% in corrupt and clean countries. However, the fixed-effects model indicates that the way health and education expenditures are determined in clean and corrupt countries is largely country-specific, with corruption playing a minimal role. Moreover, the findings show that school enrollment rates and infant mortality rates depend, to a large extent, on public spending. The most striking result is that corrupt countries, on average, have more effective and efficient healthcare expenditures. While some insights are provided as to why these results prevail, they should be further researched. All in all, corruption impedes development outcomes, and any anti-corruption policies adopted will bring forth immense improvements and speed up the march towards sustainability.
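The two estimators named in the abstract can be set up in a few lines; the sketch below uses statsmodels, with column names and the CSV layout as hypothetical stand-ins for the 2000-2017 country panel.

```python
# Pooled OLS vs. country fixed effects (statsmodels); columns are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel_2000_2017.csv")   # columns: country, year, cpi, health_exp

pooled = smf.ols("health_exp ~ cpi", data=df).fit()

# Fixed effects via country dummies: identifies the within-country effect only.
fe = smf.ols("health_exp ~ cpi + C(country)", data=df).fit()

print("pooled CPI coefficient:", pooled.params["cpi"])
print("FE CPI coefficient:    ", fe.params["cpi"])
```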

Keywords: corruption, education, health, public spending, sustainable development

Procedia PDF Downloads 155
831 Grid and Market Integration of Large-Scale Wind Farms Using Advanced Predictive Data Mining Techniques

Authors: Umit Cali

Abstract:

The integration of intermittent energy sources like wind farms into the electricity grid has become an important challenge for the utilization and control of electric power systems because of the fluctuating behaviour of wind power generation. Wind power predictions improve the economic and technical integration of large amounts of wind energy into the existing electricity grid. Trading, balancing, grid operation, controllability and safety issues increase the importance of predicting power output for wind power operators. Therefore, wind power forecasting systems have to be integrated into the monitoring and control systems of the transmission system operator (TSO) and of wind farm operators/traders. Wind forecasts are relatively precise only for a horizon of a few hours and are therefore relevant to the spot and intraday markets. In this work, predictive data mining techniques are applied to identify statistical and neural network models, or sets of models, that can be used to predict the power output of large onshore and offshore wind farms. These advanced data analytic methods help amalgamate the information in very large meteorological, oceanographic and SCADA data sets into useful information and manageable systems. Accurate wind power forecasts benefit wind plant operators, utility operators, and utility customers: an accurate forecast allows grid operators to schedule economically efficient generation to meet the demand of electrical customers. This study is also dedicated to an in-depth consideration of issues such as the comparison of day-ahead and short-term wind power forecasting results, determination of the accuracy of the wind power prediction, and evaluation of the energy-economic and technical benefits of wind power forecasting.
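A minimal sketch of the kind of data-driven forecast described above follows, assuming scikit-learn; the feature names and CSV layout are hypothetical placeholders for numerical-weather-prediction and SCADA inputs.

```python
# Day-ahead wind power forecast sketch (scikit-learn); inputs are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("wind_farm_history.csv")                    # hourly records
X = df[["wind_speed_fc", "wind_dir_fc", "air_density_fc"]]   # forecast features
y = df["power_output"]                                       # measured output (MW)

# shuffle=False keeps the chronological order: train on the past, test on the future
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000).fit(X_tr, y_tr)
print("MAE (MW):", mean_absolute_error(y_te, model.predict(X_te)))
```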

Keywords: renewable energy sources, wind power, forecasting, data mining, big data, artificial intelligence, energy economics, power trading, power grids

Procedia PDF Downloads 500
830 Transient Phenomena in a 100 W Hall Thruster: Experimental Measurements of Discharge Current and Plasma Parameter Evolution

Authors: Clémence Royer, Stéphane Mazouffre

Abstract:

Nowadays, electric propulsion systems play a crucial role in space exploration missions due to their high specific impulse and long operational life. The Hall thruster is one of the most mature EP technologies: a gridless ion thruster that has proved reliable and high-performing for decades in various space missions. Operation of a Hall thruster relies on electron emission from a cathode placed outside a hollow dielectric channel that includes an anode at the back. The negatively charged particles are trapped in a magnetic field and efficiently slowed down, and through collisions the electron cloud ionizes xenon atoms. A large electric field is generated in the axial direction due to the low transverse electron mobility in the region of strong magnetic field. Positive particles are pulled out of the chamber at high velocity and are neutralized directly in the exhaust area, which accelerates the spacecraft at a high specific impulse. While the Hall thruster's architecture and operating principle are relatively simple, the physics behind thrust is complex and still partly unknown. Current and voltage oscillations, as well as electron properties, have been captured over a 30 min period after ignition. The observed low-frequency oscillations exhibited specific frequency ranges, amplitudes, and stability patterns, and correlations between the oscillations and plasma characteristics were analyzed. The impact of these instabilities on thruster performance, including thrust efficiency, has been evaluated as well. Moreover, strategies for mitigating and controlling these instabilities, such as filtering, have been developed. In this contribution, in addition to presenting a summary of the results obtained in the transient regime, we present and discuss recent advances in Hall thruster plasma discharge filtering and control.

Keywords: electric propulsion, Hall Thruster, plasma diagnostics, low-frequency oscillations

Procedia PDF Downloads 74
829 Dielectric Response Analysis Measurement for Diagnosing the Oil-Paper Insulation System of an Aged 3x10 MVA Inter Bus Transformer

Authors: Eki Farlen, Akas

Abstract:

Condition assessment of oil-paper-insulated power transformers, particularly of their water content, is becoming increasingly important for aged transformers. As insulation ages, it can produce water, which reduces its dielectric strength, accelerates the cellulose ageing process, and causes gas bubbles to form at high temperatures. This paper assesses the condition of the oil-paper insulation system of an Inter Bus Transformer (IBT) 30 MVA, 150/30 kV at the PT PLN Jelok substation that has been operating for 41 years, since 1974. Valuable information about the condition of high-voltage insulation may be obtained by measuring its dielectric response. This paper describes in detail the interpretation of Dielectric Response Analysis (DIRANA) measurements, and the test results are compared to other insulation tests, such as the tan delta test, oil characteristic tests and the Dissolved Gas Analysis (DGA) test, to obtain deeper diagnostic information. The paper mainly discusses the relationships between moisture content, water content, acidity, oil conductivity and dissipation factor. The results and analysis show that phases U and W of the IBT 30 MVA Jelok have aged due to a high acidity level (>0.2 mg KOH/g), which causes high moisture in the cellulose/paper, in the wet category at about 4.7% and 5%, and water content in oil of about 3.13 ppm and 3.33 ppm at a temperature of 20°C. A high acidity level promotes oxidation, producing water and particles in the paper, which can decrease the interfacial tension (IFT) to below 22 mN/m (poor category) for both phases U and W. Although the paper insulation of the transformer is in a wet condition, the dissipation factor and capacitance at power frequency (50 Hz) obtained from the DIRANA and tangent delta measurements give almost the same results, 0.69% and 0.71% (<1%), which may be acceptable and need not be investigated further. The DGA results show that the TDCG is at condition level one and that no key gases were found, meaning the transformer suffered no failures during operation such as arcing, partial discharge or thermal faults in the oil or cellulose.

Keywords: diagnostic, inter-bus transformer, oil-paper insulation, moisture, dissipation factor

Procedia PDF Downloads 265
828 Application of Integrated Marketing Communications: Multiple Case Studies

Authors: Yichen Lin, Hsiao-Han Chen, Chi-Chen Jan

Abstract:

Since 1990, the research area of Integrated Marketing Communications (IMC) has been examined from different perspectives. With advances in information technology and the rise of consumer consciousness, businesses operate in a competitive environment and urgently need to adopt more profitable and effective integrated marketing strategies to increase core competitiveness. The goal of a company's sustainable management is to increase consumers' willingness to purchase and to maximize profits. This research uses six aspects of IMC, namely awareness integration, unified image, database integration, customer-based integration, stakeholder-based integration, and evaluation integration, to examine the role of marketing strategies in the strengths and weaknesses of the six components of integrated marketing communications, their effectiveness, the most important components, and the components most in need of improvement. At the same time, social media such as Facebook, Instagram, YouTube, Line, and even TikTok have become marketing tools that firms adopt more and more frequently in their marketing strategies. At the end of 2019, the outbreak of COVID-19 severely affected global industries. Lockdown policies accelerated the closure of brick-and-mortar stores worldwide, and online purchases rose dramatically; hence, the effectiveness of online marketing became essential to maintaining business. This study uses multiple case studies to extend the findings on social media and IMC, and also explores the differences in social media and IMC during COVID-19. Through a literature review and multiple case studies, it is found that using social media combined with IMC really did help companies expand their business and build good connections with stakeholders. A previous study also used system theory to explore the interrelationship among integrated marketing communication, collaborative marketing, and global brand building. Even during the pandemic, firms could still maintain operations and connect with their customers more tightly.

Keywords: integrated marketing communications, multiple-case studies, social media, system theory

Procedia PDF Downloads 207
827 Optimization of Mechanical Cacao Shelling Parameters Using Unroasted Cocoa Beans

Authors: Jeffrey A. Lavarias, Jessie C. Elauria, Arnold R. Elepano, Engelbert K. Peralta, Delfin C. Suministrado

Abstract:

The shelling process is one of the primary processes and a critical step in the processing of chocolate or any product derived from cocoa beans: it affects the quality of the cocoa nibs in terms of flavor and purity. In the Philippines, small-scale food processors cannot really compete with large-scale confectionery manufacturers because of the lack of postharvest facilities appropriate to their level of operation. The impact of this study is to provide the intervention needed to let cacao farmers take advantage of value-adding and so maximize the economic potential of cacao; the provision and availability of needed postharvest machines like a mechanical cacao sheller would revolutionize the current state of the cacao industry in the Philippines. A mechanical cacao sheller was developed, fabricated, and evaluated to establish the optimum shelling conditions, namely the moisture content of the cocoa beans, the clearance through which the cocoa beans pass in the breaker section, and the speed of the breaking mechanism, with respect to shelling recovery, shelling efficiency, shelling rate, energy utilization and large nib recovery, and thereby to establish the optimum levels of the shelling parameters of the mechanical sheller. These factors were statistically analyzed using a Box-Behnken design of experiment and Response Surface Methodology (RSM). By maximizing shelling recovery, shelling efficiency, shelling rate and large nib recovery and minimizing energy utilization, the optimum shelling conditions were established at a moisture content, clearance and breaker speed of 6.5%, 3 millimeters and 1300 rpm, respectively. The corresponding optimum values of shelling recovery, shelling efficiency, shelling rate, large nib recovery and energy utilization were 86.51%, 99.19%, 21.85 kg/hr, 89.75%, and 542.84 W, respectively. Experimental values obtained under the optimum conditions were compared with values predicted by the models and were found to be in good agreement.
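The RSM step above amounts to fitting a second-order model to the Box-Behnken runs and searching it within the factor bounds; a minimal sketch follows, with the data file and the factor ranges as assumptions.

```python
# RSM sketch: fit a quadratic response surface to Box-Behnken runs, then
# locate the optimum inside assumed factor bounds (data file hypothetical).
import numpy as np
import pandas as pd
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

runs = pd.read_csv("bbd_runs.csv")  # columns: moisture, clearance, speed, recovery
X = runs[["moisture", "clearance", "speed"]].to_numpy()
y = runs["recovery"].to_numpy()

poly = PolynomialFeatures(degree=2)                 # full second-order surface
model = LinearRegression().fit(poly.fit_transform(X), y)

def neg_recovery(x):                                # minimize the negative response
    return -model.predict(poly.transform(x.reshape(1, -1)))[0]

bounds = [(5.0, 8.0), (2.0, 4.0), (1100, 1500)]     # assumed factor ranges
res = minimize(neg_recovery, x0=np.array([6.5, 3.0, 1300.0]), bounds=bounds)
print("optimum (moisture %, clearance mm, rpm):", res.x)
```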

Keywords: cocoa beans, optimization, RSM, shelling parameters

Procedia PDF Downloads 336
826 Argumentation Frameworks and Theories of Judging

Authors: Sonia Anand Knowlton

Abstract:

With the rise of artificial intelligence, computer science is becoming increasingly integrated into virtually every area of life. Of course, the law is no exception. Through argumentation frameworks (AFs), computer scientists have used abstract algebra to structure the legal reasoning process in a way that allows conclusions to be drawn from a formalized system of arguments. In AFs, arguments compete against each other for logical success and are related to one another through the binary relation of attack. The prevailing arguments make up the preferred extension of the given argumentation framework, telling us what set of arguments must be accepted from a logical standpoint. There have been several developments of AFs since their original conception in the early 1990s, in efforts to make them more aligned with the human reasoning process. Generally, these developments have sought to add nuance to the factors that influence the logical success of competing arguments (e.g., giving an argument more logical strength based on the underlying value it promotes). The most cogent development was the Extended Argumentation Framework (EAF), in which attacks can themselves be attacked by other arguments, and the promotion of different competing values can be formalized within the system. This article applies the logical structure of EAFs to current theoretical understandings of judicial reasoning, contributing simultaneously to theories of judging and to the evolution of AFs. The argument is that the main limitation of EAFs, when applied to judicial reasoning, is that they require judges to themselves assign values to different arguments and then lexically order these values to determine the given framework's preferred extension. Drawing on John Rawls' Theory of Justice, the examination that follows asks whether values are lexical and commensurable to this extent. The analysis then suggests a potential extension of the EAF system with an approach that formalizes different 'planes of attack' for competing arguments that promote lexically ordered values. This article concludes with a summary of how these insights contribute to theories of judging and of legal reasoning more broadly, specifically in indeterminate cases where judges must turn to value-based approaches.
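To make the notion of a preferred extension concrete, the sketch below enumerates conflict-free and admissible sets for a tiny Dung-style framework and keeps the maximal admissible sets; the three-argument framework is a toy example, not one from the article.

```python
# Brute-force Dung semantics on a toy framework: a attacks b, b attacks c.
from itertools import combinations

args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}

def conflict_free(S):
    return not any((x, y) in attacks for x in S for y in S)

def defends(S, a):
    # every attacker of a must itself be attacked by some member of S
    return all(any((c, b) in attacks for c in S)
               for (b, target) in attacks if target == a)

def admissible(S):
    return conflict_free(S) and all(defends(S, a) for a in S)

subsets = [set(c) for r in range(len(args) + 1) for c in combinations(args, r)]
adm = [S for S in subsets if admissible(S)]
preferred = [S for S in adm if not any(S < T for T in adm)]
print(preferred)   # [{'a', 'c'}]: a is unattacked and defends c against b
```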

Keywords: computer science, mathematics, law, legal theory, judging

Procedia PDF Downloads 45
825 A Fluid-Walled Microfluidic Device for Cell Migration Studies

Authors: Cyril Deroy, Agata Rumianek, David R. Greaves, Peter R. Cook, Edmond J. Walsh

Abstract:

Various microfluidic platforms have been developed in the past couple of decades, offering experimental methods for the study of cell migration; however, their implementation in the laboratory has remained limited. Reasons cited for the lack of uptake include the technical complexity of the devices, the high failure rate associated with gas bubbles, biocompatibility concerns with the use of polydimethylsiloxane (PDMS), and the equipment, time and expertise required for operation and manufacture. As sample handling remains challenging due to the closed format of microfluidic devices, open microfluidic systems have been developed, offering versatility and simplicity of use: rather than confining fluids by solid walls, samples can be accessed directly over the open platform by removing at least one of the solid boundaries, such as the cover. In this paper, a method for the fabrication of open fluid-walled microfluidic circuits for cell migration studies is introduced, requiring only materials commonly used by the life-science community: tissue culture dishes and cell media. The simplicity of the method and the ability to retrieve cells of interest are two of its key features, and both passive- and active-flow devices can be created in this way. To demonstrate the versatility of the method, a cell migration assay is performed, which requires fabricating circuits, loading cells and incubating, creating chemical gradients, real-time imaging of cell migration and, finally, retrieval of cells. The open architecture has high fidelity, as it eliminates air-bubble-related failures and enables precise control of gradients. The ability to fabricate custom microfluidic designs in minutes should make this method suitable for use in a wide range of cell migration studies.

Keywords: chemotaxis, fluid walls, gradient generation, open microfluidics

Procedia PDF Downloads 132
824 Development of Market Penetration for High Energy Efficiency Technologies in Alberta’s Residential Sector

Authors: Saeidreza Radpour, Md. Alam Mondal, Amit Kumar

Abstract:

The market penetration of high-energy-efficiency technologies has a key impact on energy consumption and GHG mitigation, and understanding it helps public and private organizations manage the policies they formulate to achieve energy or environmental targets. Energy intensity in the residential sector of Alberta was 148.8 GJ per household in 2012, 39% more than the Canadian average of 106.6 GJ and the highest per-household energy consumption among the provinces. Appliance energy intensity in Alberta was 15.3 GJ per household in 2012, 14% higher than the average of the other Canadian provinces and territories. In this research, a framework has been developed to analyze the market penetration and market share of high-efficiency technologies in the residential sector. The overall methodology was based on the development of data-intensive models estimating the market penetration of appliances in the residential sector over a time period. The developed models were functions of a number of macroeconomic and technical parameters, and the mathematical equations were developed from twenty-two years of historical data (1990-2011) and analyzed through a series of statistical tests. The market shares of high-efficiency appliances were estimated from the relevant variables, such as capital and operating costs, discount rate, appliance lifetime, annual interest rate, incentives and maximum achievable efficiency, for the period 2015 to 2050. Results show that the market penetration of refrigerators is higher than that of the other appliances: the stock of refrigerators per household is anticipated to increase from 1.28 in 2012 to 1.314 in 2030 and 1.328 in 2050. Modelling results show that the market penetration rate of stand-alone freezers will decrease between 2012 and 2050, with freezer stock per household declining from 0.634 in 2012 to 0.556 in 2030 and 0.515 in 2050. The stock of dishwashers per household is expected to increase from 0.761 in 2012 to 0.865 in 2030 and 0.960 in 2050. The increases in the market penetration rates of clothes washers and clothes dryers are nearly parallel: the stocks of clothes washers and clothes dryers per household are expected to rise from 0.893 and 0.979 in 2012 to 0.960 and 1.0 in 2050, respectively. The presentation will include a detailed discussion of the modelling methodology and results.

Keywords: appliances efficiency improvement, energy star, market penetration, residential sector

Procedia PDF Downloads 268
823 Financial Performance Model of Local Economic Enterprises in Matalam, Cotabato

Authors: Kristel Faye Tandog

Abstract:

The State-Owned Enterprise (SOE), also called the Public Enterprise (PE), has been playing a vital role in a country's social and economic development. Following this idea, this study focused on the factor structures of the financial performance of the Local Economic Enterprises (LEEs), namely the food court, market, slaughterhouse, and terminal in Matalam, Cotabato. It aimed to determine the profile of the LEEs in terms of organizational structure, manner of creation, years in operation, source of initial operating requirements, annual operating budget, geographical location, and size or description of the facility. The study also covered the LEEs' financial ratios over a five-year period, from calendar year 2009 to 2013. Primary data were gathered through a survey questionnaire administered to 468 respondents, and secondary data were sourced from government archives and the financial documents of the LGU concerned. Twelve dominant factors were identified, namely: 'management', 'enforcement of laws', 'strategic location', 'existence of non-formal competitors', 'proper maintenance', 'pricing', 'customer service', 'collection process', 'rentals and services', 'efficient use of resources', 'staffing', and 'timeliness and accuracy'. On the other hand, the financial performance of the LEEs of Matalam, Cotabato, as measured by financial ratios, needs improvement; refinement of the following ratios is necessary: cash flow indicator, activity, profitability and growth. The cash flow indicator ratios showed difficulty in covering debts in successive years. Likewise, the activity ratios showed that the LEEs had not been effective in putting their investments to work. Moreover, the profitability ratios revealed that they operated at minimum capacity and incurred net losses, and thus had a weak profit performance. Furthermore, the growth ratios showed that the LEEs had a declining growth trend, particularly in net income.

Keywords: factor structures, financial performance, financial ratios, state owned enterprises

Procedia PDF Downloads 241
822 Time Estimation of Return to Sports Based on Classification of Health Levels of Anterior Cruciate Ligament Using a Convolutional Neural Network after Reconstruction Surgery

Authors: Zeinab Jafari A., Ali Sharifnezhad B., Mohammad Razi C., Mohammad Haghpanahi D., Arash Maghsoudi

Abstract:

Background and Objective: Sports-related rupture of the anterior cruciate ligament (ACL) and the injuries that follow it have been associated with various disorders, such as long-lasting changes in muscle activation patterns in athletes, which might persist after ACL reconstruction (ACLR). Rupture of the ACL might result in abnormal patterns of movement execution, extending the treatment period and delaying athletes' return to sports (RTS). As ACL injury is especially prevalent among athletes, the lengthy treatment process and athletes' absence from sports are of great concern to athletes and coaches, and estimating a safe time for RTS is therefore of crucial importance. Using a deep neural network to classify the health levels of the ACL in injured athletes, this study aimed to estimate a safe time for athletes to return to competition. Methods: Ten athletes with ACLR and fourteen healthy controls participated in this study. Three ACL health levels were defined: healthy, six months post-ACLR surgery and nine months post-ACLR surgery; athletes with ACLR were tested six and nine months after surgery. Surface electromyography (sEMG) signals were recorded from five knee muscles, namely the rectus femoris (RF), vastus lateralis (VL), vastus medialis (VM), biceps femoris (BF) and semitendinosus (ST), during single-leg drop landing (SLDL) and single-leg forward hopping (SLFH) tasks. The pseudo-Wigner-Ville distribution (PWVD) was used to produce three-dimensional (3-D) images of the energy distribution patterns of the sEMG signals. These 3-D images were then converted to two-dimensional (2-D) images using a heat-mapping technique and fed to a deep convolutional neural network (DCNN). Results: We estimated the safe time of RTS by designing a DCNN classifier with an accuracy of 90%, which could classify the ACL into the three health levels. Discussion: The findings of this study demonstrate the potential of DCNN classification of sEMG signals for estimating RTS time, which will assist in evaluating the recovery process after ACLR in athletes.
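A minimal sketch of the classification stage is given below, assuming PyTorch: 2-D time-frequency heat maps go in, one of the three health levels comes out. The layer sizes and input resolution are assumptions, not the architecture reported in the paper.

```python
# Toy 3-class CNN over 2-D PWVD heat maps (PyTorch); sizes are assumptions.
import torch
import torch.nn as nn

class EMGNet(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((8, 8)),       # fixed-size features for any input
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):                       # x: (N, 1, H, W) heat-map batch
        return self.classifier(self.features(x).flatten(1))

model = EMGNet()
logits = model(torch.randn(4, 1, 128, 128))     # dummy batch of 4 images
print(logits.argmax(dim=1))                     # predicted health level per sample
```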

Keywords: anterior cruciate ligament reconstruction, return to sports, surface electromyography, deep convolutional neural network

Procedia PDF Downloads 60
821 Numerical Analysis of Gas-Particle Mixtures through Pipelines

Authors: G. Judakova, M. Bause

Abstract:

The ability to numerically model and simulate natural gas flow in pipelines has become highly important for the design of pipeline systems. The formation of hydrate particles and their dynamical behavior are of particular interest, since these processes govern the operating properties of the systems and are responsible for system failures by clogging of the pipelines under certain conditions. Mathematically, natural gas flow can be described by multiphase flow models. Using the two-fluid modeling approach, the gas phase is modeled by the compressible Euler equations and the particle phase by the pressureless Euler equations. The numerical simulation of compressible multiphase flows is an important research topic: it is well known that for nonlinear fluxes, even for smooth initial data, discontinuities in the solution, called shock waves or contact discontinuities, are likely to occur in finite time. For hyperbolic and singularly perturbed parabolic equations, the standard Galerkin finite element method (FEM) leads to spurious oscillations (e.g., the Gibbs phenomenon). In our approach, we use a stabilized FEM, the streamline upwind Petrov-Galerkin (SUPG) method, in which artificial diffusion acting only in the direction of the streamlines is added, together with a special treatment of the boundary conditions in the inviscid convective terms. Numerical experiments show that the SUPG-stabilized numerical solution captures discontinuities or steep gradients of the exact solution in layers; however, within these layers the approximate solution may still exhibit overshoots or undershoots. To suitably reduce these artifacts, we add a discontinuity-capturing or shock-capturing term. The performance of our numerical scheme is illustrated for a two-phase flow problem.
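For concreteness, a compact statement of the SUPG stabilization is sketched below for a steady scalar advection model problem; the notation is assumed, not taken from the paper, and the full two-fluid system adds terms beyond this sketch.

```latex
% SUPG-stabilized weak form for a steady scalar advection problem:
% find u_h such that, for all test functions v_h,
\[
\underbrace{(\mathbf{a}\cdot\nabla u_h,\; v_h)_{\Omega}}_{\text{Galerkin term}}
\;+\;
\underbrace{\sum_{K\in\mathcal{T}_h} \tau_K\,
\big(\mathbf{a}\cdot\nabla u_h - f,\;
\mathbf{a}\cdot\nabla v_h\big)_{K}}_{\text{streamline diffusion}}
\;=\; (f,\; v_h)_{\Omega},
\]
% where \mathbf{a} is the advection velocity, \mathcal{T}_h the mesh,
% \tau_K an element-wise stabilization parameter, and the added term is
% residual-based and acts only along the streamline direction a . grad(v_h).
```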

Keywords: two-phase flow, gas-particle mixture, inviscid two-fluid model, Euler equations, finite element method, streamline upwind Petrov-Galerkin, shock capturing

Procedia PDF Downloads 298
820 Infestation in Omani Date Palm Orchards by Dubas Bug Is Related to Tree Density

Authors: Lalit Kumar, Rashid Al Shidi

Abstract:

Phoenix dactylifera (date palm) is a major crop in many Middle Eastern countries, including Oman. The Dubas bug Ommatissus lybicus is the main pest affecting date palm crops, yet not all plantations are infested, and it is still uncertain why some plantations become infested while others do not. This research investigated whether tree density and the planting system (random versus systematic) had any relationship with infestation and infestation levels. Remote sensing and geographic information systems were used to determine tree density (number of trees per unit area), while infestation levels were determined by manually counting insects on 40 leaflets from two fronds on each tree, with a total of 20-60 trees in each village; infestation was recorded as the average number of insects per leaflet. For tree density estimation, WorldView-3 scenes with eight bands and 2 m spatial resolution were used. The local maxima method, which depends on locating the pixel of highest brightness inside a certain exploration window, was used to identify and delineate individual trees in the image, and this information was then used to determine whether a plantation was random or systematic. Ordinary least squares (OLS) regression was used to test the global correlation between tree density and infestation level, and geographically weighted regression (GWR) was used to find the local spatial relationship. The accuracy of tree detection varied from 83-99% in agricultural lands with systematic planting patterns to 50-70% in natural forest areas. Results revealed that tree density in most of the villages was higher than the recommended planting density (120-125 trees/hectare). For the infestation correlations, the GWR model showed a strong positive significant relationship between infestation and tree density in the spring season, with R² = 0.60, and a moderate positive significant relationship in the autumn season, with R² = 0.30. In contrast, the OLS model showed a weaker positive significant relationship in the spring season (R² = 0.02, p < 0.05) and an insignificant relationship in the autumn season (R² = 0.01, p > 0.05). The results showed a positive correlation between infestation and tree density, suggesting that infestation severity increased as date palm density increased, with density alone responsible for about 60% of the increase in infestation. This information can be used by the relevant authorities to better control infestations and to manage their pesticide spraying programs.
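The local maxima step described above reduces to a simple test: a pixel is a candidate crown top if it equals the maximum of its exploration window. A minimal sketch with SciPy follows; the band array, window size, and brightness floor are assumptions.

```python
# Local-maxima tree detection sketch; the NIR array and thresholds are hypothetical.
import numpy as np
from scipy import ndimage

nir = np.load("worldview3_nir.npy")          # 2 m NIR band as a 2-D array
window = 5                                   # ~10 m exploration window at 2 m pixels
brightness_floor = 0.2                       # assumed: suppress dark background

is_peak = (nir == ndimage.maximum_filter(nir, size=window)) & (nir > brightness_floor)
rows, cols = np.nonzero(is_peak)             # candidate crown-top coordinates
print(f"{rows.size} candidate palm crowns detected")
```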

Keywords: dubas bug, date palm, tree density, infestation levels

Procedia PDF Downloads 172
819 Investigation of User Position Accuracy for Stand-Alone and Hybrid Modes of the Indian Navigation with Indian Constellation Satellite System

Authors: Naveen Kumar Perumalla, Devadas Kuna, Mohammed Akhter Ali

Abstract:

Satellite navigation systems such as the United States Global Positioning System (GPS) play a significant role in determining user position. Similar to GPS, the Indian Regional Navigation Satellite System (IRNSS) is a satellite navigation system indigenously developed by the Indian Space Research Organization (ISRO) to meet the country's navigation applications; it is also known as Navigation with Indian Constellation (NavIC). The NavIC system's main objective is to offer Positioning, Navigation and Timing (PNT) services to users in its two service areas, covering the Indian landmass and the Indian Ocean. Six NavIC satellites are already deployed in space, and their receivers are in the performance evaluation stage. Four NavIC dual-frequency receivers are installed in the Advanced GNSS Research Laboratory (AGRL) in the Department of Electronics and Communication Engineering, University College of Engineering, Osmania University, India. The NavIC receivers can be operated in two positioning modes: stand-alone IRNSS and hybrid (IRNSS+GPS). In this paper, various parameters, such as Dilution of Precision (DoP), three-dimensional (3D) root mean square (RMS) position error and horizontal position error, are analyzed with respect to satellite visibility using real-time IRNSS data obtained by operating the receiver in both positioning modes. Data from two typical days (6 and 7 July 2017) at the Hyderabad station (latitude 17°24'28.07"N, longitude 78°31'4.26"E) are analyzed. It is found that, with respect to the considered parameters, hybrid-mode operation of the NavIC receiver gives better results than stand-alone positioning mode. This work finds application in the development of NavIC receivers for civilian navigation applications.
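DoP follows directly from the receiver-satellite geometry: unit line-of-sight vectors build the design matrix, and the DoPs come from the diagonal of its normal-matrix inverse. The sketch below assumes a local ENU frame and toy satellite positions.

```python
# DoP computation sketch from satellite geometry (toy positions, ENU frame).
import numpy as np

def dops(sat_enu):
    """sat_enu: (N, 3) satellite positions relative to the receiver, metres."""
    u = sat_enu / np.linalg.norm(sat_enu, axis=1, keepdims=True)
    G = np.hstack([-u, np.ones((u.shape[0], 1))])   # rows: [-e, -n, -u, 1]
    Q = np.linalg.inv(G.T @ G)                       # cofactor matrix
    hdop = np.sqrt(Q[0, 0] + Q[1, 1])
    pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])
    gdop = np.sqrt(np.trace(Q))
    return hdop, pdop, gdop

# four visible satellites; adding satellites (hybrid mode) generally lowers DoP
sats = np.array([[20e6, 0, 20e6], [0, 20e6, 20e6],
                 [-20e6, 0, 20e6], [0, -15e6, 25e6]])
print(dops(sats))
```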

Keywords: DoP, GPS, IRNSS, GNSS, position error, satellite visibility

Procedia PDF Downloads 192
818 High Altitude Glacier Surface Mapping in Dhauliganga Basin of Himalayan Environment Using Remote Sensing Technique

Authors: Aayushi Pandey, Manoj Kumar Pandey, Ashutosh Tiwari, Kireet Kumar

Abstract:

Glaciers play an important role in the climate system and are sensitive indicators of global climate change. Glaciers in the Himalayas are unique, as they are predominantly of the valley type and are located in tropical, high-altitude regions. These glaciers are often covered with debris, which greatly affects their ablation rate and works as a sensitive indicator of glacier health. The aim of this study is to map a high-altitude glacier surface, with a focus on glacial lake and debris estimation, using different techniques on the Nagling glacier of the Dhauliganga basin in the Himalayan region. Different image classification techniques, i.e., thresholding on different band ratios and supervised classification using the maximum likelihood classifier (MLC), were applied to high-resolution Sentinel-2A Level-1C satellite imagery of 14 October 2017. The Near Infrared (NIR)/Shortwave Infrared (SWIR) ratio image was used to extract the glaciated classes (snow, ice, and ice-mixed debris, IMD) from the non-glaciated terrain classes, the SWIR/Blue ratio image was used to map valley rock and debris, and the Green/NIR ratio image was found most suitable for mapping the glacial lake. Accuracy assessment was performed against high-resolution (3 m) PlanetScope imagery using 60 stratified random points. The overall accuracy of the MLC was 85%, while that of the band ratios was 96.66%. According to the band ratio technique, the total areal extent of the glaciated classes (snow, ice, IMD) on the Nagling glacier was 10.70 km², nearly 38.07% of the study area, comprising 30.87% snow-covered area, 3.93% ice and 3.27% IMD. The non-glaciated classes (vegetation, glacial lake, debris and valley rock) covered 61.93% of the total area, of which valley rock was dominant with 33.83% coverage, followed by debris covering 27.7%. The glacial lake and debris were accurately mapped using the band ratio technique; hence, the band ratio approach appears to be useful for mapping debris-covered glaciers in the Himalayan region.
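The band-ratio classification above amounts to per-pixel thresholding of ratio images; a minimal sketch follows. The band arrays and the threshold values are assumptions, since the paper does not quote its thresholds.

```python
# Band-ratio classification sketch; arrays and thresholds are hypothetical.
import numpy as np

def ratio(a, b, eps=1e-6):
    return a / (b + eps)                     # guard against division by zero

# Sentinel-2 bands resampled to a common grid
green, nir, swir, blue = (np.load(f"band_{n}.npy")
                          for n in ("green", "nir", "swir", "blue"))

glaciated = ratio(nir, swir) > 2.0           # snow / ice / ice-mixed debris
lake = (ratio(green, nir) > 1.5) & ~glaciated
valley_rock = (ratio(swir, blue) > 1.2) & ~glaciated & ~lake

print("glaciated fraction of scene:", glaciated.mean())
```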

Keywords: band ratio, Dhauliganga basin, glacier mapping, Himalayan region, maximum likelihood classifier (MLC), Sentinel-2 satellite image

Procedia PDF Downloads 212
817 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information: for example, many users do not specify how many rooms they would like or what price they would be willing to pay. Economic analyses often use only complete records, yet the proportion of complete records is usually rather small, so most of the information is neglected; the complete records may also be strongly distorted, and the reason data are missing might itself carry information, which is ignored by that approach. An interesting question is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with imputed missing values, compared to using the usually small percentage of complete data (the baseline), and how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning; of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural network techniques, are applied. By training the model iteratively on the imputed data, thereby incorporating the information of all the data into the model, the distortion of the first training set (the complete data) vanishes. In a next step, the performance of the algorithms is measured by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms under several parameter combinations, and comparing the estimates to the actual data. After the optimal parameter set for each algorithm is found, the missing values are imputed. Using the resulting data sets, the willingness to pay for real estate is estimated by fitting price distributions for properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on the imputed data sets do not differ significantly from each other, whereas the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample; demand estimates derived from the whole data set are also much more accurate than the baseline estimate. Thus, to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
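The two stages, imputation and survival-function estimation, can be prototyped in a few lines; the sketch below uses scikit-learn's iterative imputer as one possible stand-in for the paper's unsupervised methods, with a hypothetical subscription file and an empirical survival function of willingness to pay.

```python
# Impute missing subscription fields, then derive an empirical survival
# function of willingness to pay (data file and columns are hypothetical).
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

subs = pd.read_csv("search_subscriptions.csv")   # columns: rooms, area_m2, max_price
X = subs[["rooms", "area_m2", "max_price"]].to_numpy()

imputed = IterativeImputer(max_iter=20, random_state=0).fit_transform(X)
prices = np.sort(imputed[:, 2])

def survival(p):
    """Share of searchers willing to pay at least price p (CHF)."""
    return (prices >= p).mean()

print(survival(500_000), survival(1_000_000))
```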

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 269
816 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization

Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman

Abstract:

In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches to uncertainty representation and propagation exist in the literature, and different representation approaches produce different outputs; some approaches may estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) posed a set of uncertainty quantification challenges, and Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which have inherent aleatory and epistemic uncertainties, from the responses (outputs) of a given computational model. We approach the problem with two methodologies: the first uses sampling-based uncertainty propagation with first-order error analysis, while the second places emphasis on Percentile-Based Optimization (PBO). Subproblem A is constructed so that both aleatory and epistemic uncertainties must be managed; the challenge problem classifies each uncertain parameter as belonging to one of the following three types. (i) An aleatory uncertainty modeled as a random variable: it has a fixed functional form and known coefficients, and this uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval: this uncertainty is reducible. (iii) A parameter that may be aleatory but for which sufficient data are not available to model it adequately as a single random variable: for example, the parameters of a normal variable, such as the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. This results in a distributional p-box, where the physical parameter carries an aleatory uncertainty but the parameters prescribing its mathematical model are subject to epistemic uncertainties; each parameter of the random variable is an unknown element of a known interval, and this uncertainty is reducible. The study observes that, due to practical limitations and computational expense, the sampling in the sampling-based methodology is not exhaustive, so this methodology has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary, and this is achieved in this study by using PBO.
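The sampling-based propagation of such mixed uncertainty is commonly organized as a double loop, and the sketch below illustrates it on a toy model: the outer loop samples the epistemic interval parameters of a distributional p-box, the inner loop propagates the aleatory variable, and the spread of inner-loop percentiles bounds the output. The model, intervals, and sample sizes are all assumptions.

```python
# Double-loop Monte Carlo sketch for a distributional p-box (toy model).
import numpy as np

rng = np.random.default_rng(0)

def model(x, theta):                 # toy response; the challenge model differs
    return theta * x**2 + x

mu_interval, sigma_interval = (0.0, 1.0), (0.5, 1.5)    # epistemic: poorly known
p95 = []
for _ in range(200):                                     # outer: epistemic samples
    mu = rng.uniform(*mu_interval)
    sigma = rng.uniform(*sigma_interval)
    x = rng.normal(mu, sigma, size=5000)                 # inner: aleatory samples
    p95.append(np.percentile(model(x, theta=2.0), 95))

print("bounds on the 95th percentile:", min(p95), max(p95))
```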

Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization

Procedia PDF Downloads 224
815 Application of HOMER Optimization to Investigate the Prospects of Hybrid Renewable Energy System in Rural Area: Case of Rwanda

Authors: Emile Niringiyimana, LI Ji Qing, Giovanni Dushimimana, Virginie Umwere

Abstract:

The development and utilization of renewable energy (RE) can not only effectively reduce carbon dioxide (CO2) emissions but also mitigate electricity shortages in rural areas, and hybrid RE systems are a promising way to provide consistent and continuous power to isolated areas. This work investigated the prospects and cost-effectiveness of a complementary hybrid system combining a 100 kW solar PV system and a small-scale 200 kW hydropower station in the south of Rwanda. To establish the optimal size of the RE system with adequate sizing of its components, electricity demand, solar radiation, hydrology and climate data were used as system inputs. The average daily solar radiation in Rukarara is 5.6 kWh/m² and the average wind speed is 3.5 m/s. The ideal integrated RE system, according to HOMER optimization, consists of 91.21 kW of PV, 146 kW of hydropower, and 12 x 24 V li-ion batteries with a 20 kW converter. The control, sizing and component selection of such hybrid systems are refined so as to reduce the net present cost (NPC) of the system, the unmet load, the cost of energy and CO2 emissions. Power delivery varies according to the dominant energy source in the system, with the energy compensation controlled according to the generation capacity of each power source. The initial investment in the RE system is $977,689.25, and its operation and maintenance expenses are $142,769.39 over a 25-year period. Although the investment is very high, the expected future benefits are substantial, taking into consideration the high investment required for rural electrification infrastructure, rising electricity costs, and the five-year payback period. The study outcomes suggest that the stand-alone hybrid PV-hydropower system is feasible, with zero pollution, in the Rukarara community.
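A worked NPC calculation using the abstract's own figures is sketched below; the 8% discount rate and the even spread of the quoted O&M total over 25 years are assumptions, not values from the study.

```python
# NPC sketch from the quoted figures; discount rate and O&M spread are assumed.
capex = 977_689.25                 # initial investment (USD)
om_total = 142_769.39              # O&M over the 25-year project life (USD)
years, rate = 25, 0.08             # assumed discount rate

annual_om = om_total / years
npc = capex + sum(annual_om / (1 + rate) ** t for t in range(1, years + 1))
print(f"NPC = ${npc:,.0f}")        # capex dominates; discounting shrinks O&M share
```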

Keywords: HOMER optimization, hybrid power system, renewable energy, NPC, solar PV systems

Procedia PDF Downloads 43
814 Lexical Semantic Analysis to Support Ontology Modeling of Maintenance Activities: Case Study of Offshore Riser Integrity

Authors: Vahid Ebrahimipour

Abstract:

Word representation and the contextual meaning of text-based documents play an essential role in knowledge modeling. Business procedures written in natural language are meant to store technical and engineering information, management decisions and operational experience over the production system life cycle. Context meaning representation is highly dependent upon word sense, lexical relativity, and the semantic features of the argument. This paper proposes a method for lexical semantic analysis and context meaning representation of maintenance activities in a mass production system. Our approach constructs a straightforward lexical semantic analysis of the semantic and syntactic features of the context structure of maintenance reports, to facilitate the translation, interpretation, and conversion of human-readable text into a computer-readable representation with less heterogeneity and ambiguity. The methodology enables users to obtain a representation format that maximizes shareability and accessibility for multi-purpose usage, providing a contextualized structure and a generic context model that can be utilized throughout the system life cycle. First, it employs a co-occurrence-based clustering framework to recognize groups of highly frequent contextual features that correspond to the text of a maintenance report; then the keywords are identified for syntactic and semantic extraction analysis. The analysis applies causality-driven logic to the keywords' senses to uncover the structural and semantic dependency relationships between the words in a context. The output is a word-contextualized representation of maintenance activity that accommodates computer-based representation and inference using OWL/RDF.
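The co-occurrence-based clustering stage can be prototyped in a few lines: build a term-term co-occurrence matrix over the reports and cluster its rows so that terms appearing in the same contexts group together. The toy corpus and cluster count below are assumptions.

```python
# Co-occurrence clustering sketch over toy maintenance reports (scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

reports = [
    "riser clamp corrosion inspection offshore",
    "replace corroded clamp on riser section",
    "pump seal leak repair during shutdown",
]
vec = CountVectorizer()
X = vec.fit_transform(reports)            # documents x terms
cooc = (X.T @ X).toarray()                # terms x terms co-occurrence counts

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cooc)
for term, lab in zip(vec.get_feature_names_out(), labels):
    print(lab, term)                      # terms sharing contexts cluster together
```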

Keywords: lexical semantic analysis, metadata modeling, contextual meaning extraction, ontology modeling, knowledge representation

Procedia PDF Downloads 94
813 The Role of Building Services in Energy Conservation into Residential Buildings

Authors: Osama Ahmed Ibrahim Masoud, Mohamed Ibrahim Mohamed Abdelhadi, Ahmed Mohamed Seddik Hassan

Abstract:

The problem addressed by this study is that realizing thermal comfort in a residential building during hot, dry climate periods consumes a major share of electrical energy through air-conditioning operation. Realizing thermal comfort in a residential building during such a climate becomes even more difficult given the phenomenon of climate change, the use of building and construction materials that readily conduct heat (bricks, reinforced concrete), and the global energy crisis. This study therefore aims at realizing internal thermal comfort by making the best use of building services (temporarily used service spaces) to reduce electrical energy transfer and provide self-shading, and at exploring the possibility of reducing the conventional (fossil fuel) energy consumed in cooling through the placement of these building services, as well as the relationship between the two. The study is based on measuring the electrical energy consumed in cooling (using the DesignBuilder program) for a residential building located in Suez City, Suez Canal region, Egypt; the design model includes many alternative placements of the building services (center of the building, the eastern front, the southeastern front, the southern front, the southwestern front, and the western front). The building services are placed on the fronts at different rates to determine the placement that realizes thermal comfort with the lowest cooling energy consumption. The findings indicate that the best position for the building services is on the western front, followed by the southwestern front, and that as the share of building services increases, the energy consumed in cooling the residential building decreases. The recommendations point to the need to study building-service positions as new projects progress, in order to select the best alternatives for realizing 'energy conservation' in cooling or heating buildings in general, and residential buildings in particular.

Keywords: residential buildings, energy conservation, thermal comfort, building services, temporarily used service spaces, DesignBuilder

Procedia PDF Downloads 266
812 Structural Design of a Relief Valve Considering Strength

Authors: Nam-Hee Kim, Jang-Hoon Ko, Kwon-Hee Lee

Abstract:

A relief valve is a mechanical element that maintains safety by controlling high pressure. Typically, excess pressure is relieved by using the spring force to let the fluid flow out of the system through an alternative path; once normal pressure is restored, the relief valve returns to its initial state. The relief valve in this study is applied to pressure vessels, evaporators, piping lines, etc. It should be designed for smooth operation and should satisfy the structural safety requirements under operating conditions. In general, the structural analysis follows a fluid flow analysis; in this process, FSI (fluid-structure interaction) is required to transfer the force obtained from the flow analysis to the structural model. First, this study predicts the velocity profile and the pressure distribution in the given system. The assumptions for the flow analysis are as follows: • The flow is steady-state and three-dimensional. • The fluid is Newtonian and incompressible. • The walls of the pipe and valve are smooth. The flow characteristics in this relief valve do not present any problem; the commercial software ANSYS/CFX is utilized for the flow analysis. Very high pressure, on the other hand, may cause structural problems due to severe stress. The relief valve consists of a body, bonnet, guide, piston, and nozzle, and its material is stainless steel. To investigate its structural safety, the worst-case loading is taken as a pressure of 700 bar applied to the inside of the valve, which is greater than the load obtained from the FSI. Finite element analysis gives a maximum stress of 378 MPa; however, this value exceeds the allowable stress. Thus, an alternative design is developed through a case study to improve the structural performance. We found that the design variable most sensitive to strength is the shape of the nozzle, so the case study varies the size of the nozzle. The suggested design is finally shown to satisfy the structural design requirement. The FE analysis is performed using the commercial software ANSYS/Workbench.
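A hedged sketch of the safety-factor check and the nozzle case study follows. The paper obtains stresses from ANSYS, so the yield strength, required safety factor, and the stress-scaling model below are illustrative assumptions standing in for repeated FE runs:

```python
# Sketch of a safety-factor screen over nozzle sizes. The baseline
# stress (378 MPa at 700 bar) is from the abstract; everything else
# is an assumed placeholder for the FE results.
YIELD_STRENGTH_MPA = 520.0     # assumed value for the stainless steel
REQUIRED_SF = 1.5              # assumed design safety factor
BASELINE_STRESS_MPA = 378.0    # max stress reported at 700 bar

def max_stress(nozzle_diameter_mm: float, baseline_d_mm: float = 20.0) -> float:
    """Toy model: peak stress scales with nozzle bore area (assumption)."""
    return BASELINE_STRESS_MPA * (nozzle_diameter_mm / baseline_d_mm) ** 2

for d in (20.0, 18.0, 16.0, 14.0):
    stress = max_stress(d)
    sf = YIELD_STRENGTH_MPA / stress
    verdict = "OK" if sf >= REQUIRED_SF else "redesign"
    print(f"nozzle {d:4.1f} mm: stress {stress:6.1f} MPa, SF {sf:4.2f} -> {verdict}")
```

With these assumed numbers the baseline geometry fails the check and the smaller nozzles pass, mirroring the shape of the case study the abstract describes.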

Keywords: relief valve, structural analysis, structural design, strength, safety factor

Procedia PDF Downloads 288
811 The Use of Additives to Prevent Fouling in Polyethylene and Polypropylene Gas and Slurry Phase Processes

Authors: L. Shafiq, A. Rigby

Abstract:

All polyethylene processes are highly exothermic, and the safe removal of the heat of reaction is a fundamental issue in process design. In slurry and gas-phase processes, the velocity of the polymer particles in the reactor and external coolers can be very high, and under certain conditions this can lead to static charging of the particles. Statically charged polymer particles may build up on the reactor wall, limiting heat transfer and ultimately leading to severe reactor fouling and forced reactor shutdown. Statsafe™ is an FDA-approved anti-fouling additive used around the world in polyolefin production. Its polymer chemistry aids static discharge, preventing the build-up of charged polyolefin particles that could lead to fouling. Statsafe™ is being used and trialled in gas, slurry, and combined technologies around the world. We will share data demonstrating how Statsafe™ allows more stable operation at higher solids levels by eliminating static, which would otherwise prevent closer packing of particles in the hydrocarbon slurry. Because static charge generation also depends on the concentration of polymer particles in the slurry, the maximum slurry concentration can be higher when using Statsafe™, leading to higher production rates; the elimination of fouling also reduces downtime. Special focus will be placed on the impact anti-static additives have on catalyst performance within the polymerization process and on how this has been measured. Lab-scale studies have investigated the effect on the activity of Ziegler-Natta catalysts when anti-static additives are used at various concentrations in gas-phase and slurry polyethylene and polypropylene processes. An in-depth gas-phase study investigated the effect of additives on final polyethylene properties such as particle size, morphology, fines, bulk density, melt flow index, gradient density, and melting point.

Keywords: anti-static additives, catalyst performance, FDA approved anti-fouling additive, polymerisation

Procedia PDF Downloads 184
810 A Fast Optimizer for Large-Scale Fulfillment Planning Based on a Genetic Algorithm

Authors: Choonoh Lee, Seyeon Park, Dongyun Kang, Jaehyeong Choi, Soojee Kim, Younggeun Kim

Abstract:

Market Kurly is the first South Korean online grocery retailer to guarantee same-day and overnight shipping. More than 1.6 million customers place an average of 4.7 million orders per month, adding 3 to 14 products to a cart. The company has sold almost 30,000 different products in the past 6 months, including food items, cosmetics, kitchenware, toys for kids and pets, and even flowers, and it operates and is expanding multiple dry, cold, and frozen fulfillment centers to store and ship these products. Due to the scale and complexity of fulfillment, pick-pack-ship processes are planned and operated in batches, so the planning that assigns customers' orders to batches is a critical factor in overall productivity. This paper introduces a metaheuristic optimization method that reduces the complexity of batch processing in a fulfillment center. The method is an iterative genetic algorithm with heuristic creation and evolution strategies; it groups similar orders into pick-pack-ship batches to minimize the total number of distinct products. With a well-designed approach to creating the initial genes, the method produces streamlined plans that are up to 13.5% less complex than the actual plans carried out in the company's fulfillment centers in the previous months. Furthermore, our digital-twin simulations show that the optimized plans can reduce the operation time of packing, the most complex and time-consuming task in the process, by 3%. The optimization method is implemented with a multithreaded design on the Spring framework to support the company's warehouse management systems in near real time, finding a solution for 4,000 orders within 5 to 7 seconds on an AWS c5.2xlarge instance.
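The core encoding is simple enough to sketch. The following is a minimal genetic algorithm in the spirit of the abstract, not Kurly's production optimizer (which runs multithreaded on Spring); the order generator, population size, and operators are all assumptions:

```python
# Toy GA for order batching: genes assign each order to a batch;
# fitness is the total count of distinct products across batches
# (lower is better). Synthetic orders stand in for real data.
import random

random.seed(0)
ORDERS = [set(random.sample(range(50), random.randint(3, 14))) for _ in range(40)]
N_BATCHES, POP, GENERATIONS = 4, 60, 200

def cost(assign):
    batches = [set() for _ in range(N_BATCHES)]
    for order, b in zip(ORDERS, assign):
        batches[b] |= order           # union of product IDs per batch
    return sum(len(b) for b in batches)

def mutate(assign):
    child = list(assign)
    child[random.randrange(len(child))] = random.randrange(N_BATCHES)
    return child

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

pop = [[random.randrange(N_BATCHES) for _ in ORDERS] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=cost)
    elite = pop[: POP // 4]           # keep the best quarter as parents
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]
print("distinct-product count of best plan:", cost(min(pop, key=cost)))
```

The paper's "heuristic creation" step would replace the random initial population with seeded, similarity-aware genes, which is where much of its reported gain presumably comes from.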

Keywords: fulfillment planning, genetic algorithm, online grocery retail, optimization

Procedia PDF Downloads 64
809 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers

Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran

Abstract:

With the exponential growth of social networks and video streaming and ever-increasing demands on data rates, the number of newly built data centers rises proportionately, and these centers must adjust to the rapidly growing amount of data to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed: connections in data centers are typically realized over short distances, and the application of MM fibers and components considerably reduces costs. On the other hand, the use of MM components brings specific requirements on installation and service conditions, and it has to be taken into account that MM fiber components have higher production tolerances for parameters like core and cladding diameters, eccentricity, etc. Because of the high reliability demands on data center components, determining a properly excited optical field inside the MM fiber core is one of the key parameters when designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, decreases insertion losses (IL), and achieves the effective modal bandwidth (EMB). The main parameter in this case is the encircled flux (EF), which should be properly defined for the various optical sources and the consequent different mode-field distributions. In this paper, we present a detailed investigation and measurements of the mode-field distribution for short MM links intended in particular for data centers, with emphasis on reliability and safety; these measurements are essential for large MM network design. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges. Furthermore, we focused on particular defects and errors that can realistically occur, such as eccentricity, connector shift, or dust: these were simulated and measured, and their influence on EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at the two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
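For readers unfamiliar with the metric, EF(r) is the fraction of the total near-field power contained within radius r of the core centroid. A sketch of that computation (the Gaussian test image is a stand-in assumption for a measured near-field camera frame, and the printed radii are illustrative, not the control points of the encircled-flux standard):

```python
# Encircled flux from a near-field intensity image:
# EF(r) = cumulative power within radius r / total power.
import numpy as np

px_um = 0.5                                        # assumed pixel pitch, microns
y, x = np.mgrid[-64:64, -64:64] * px_um
intensity = np.exp(-(x**2 + y**2) / (2 * 12.0**2))  # fake near-field image

r = np.hypot(x, y).ravel()                         # radius of each pixel
p = intensity.ravel()
order = np.argsort(r)
ef = np.cumsum(p[order]) / p.sum()                 # cumulative power vs radius

# Evaluate EF at a few radii (illustrative values only).
for radius in (4.5, 11.0, 15.0, 22.0):
    idx = np.searchsorted(r[order], radius)
    print(f"EF({radius:5.1f} um) = {ef[idx - 1]:.3f}")
```

In practice the centroid would be estimated from the image itself, and the EF curve compared against the template limits mandated for the chosen launch condition.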

Keywords: optical fiber, multi-mode, data centers, encircled flux

Procedia PDF Downloads 363
808 Low Pricing Strategy of Forest Products in Community Forestry Program: Subsidy to the Forest Users or Loss of Economy?

Authors: Laxuman Thakuri

Abstract:

Community-based forest management is often glorified as one of the best forest management alternatives in developing countries like Nepal, and the transfer of forest management authority to local communities is believed to be decisive for taking efficient decisions, maximizing forest benefits, and improving people's livelihoods. The community forestry of Nepal likewise aims to maximize forest benefits, share them among the user households, and improve their livelihoods. However, how local communities fix the prices of forest products, and how the pricing set by forest user groups affects equitable benefit-sharing among user households and the livelihood-improvement objectives, remain largely unexamined by researchers and policy-makers alike. This study examines the local pricing system for forest products in lowland community forestry and its effects on equitable benefit-sharing and livelihood-improvement objectives. The study found that forest user groups fix the prices of forest products on the basis of three criteria: i) costs incurred in harvesting, ii) office operation costs, and iii) livelihood-improvement costs for community development and income-generating activities. Since user households have heterogeneous socio-economic conditions, the forest user groups have applied a low pricing strategy even for high-value forest products so that access for socio-economically worse-off households can be increased. However, the results of forest product distribution show that, as a consequence of the low pricing strategy, access for socio-economically better-off households has been increasing at a higher rate than for worse-off households, creating inequality. The low pricing strategy is likewise found to undermine the livelihood-improvement objectives. The study suggests revising the pricing system for forest products in community forest management and reforming community forestry policy as well.

Keywords: community forestry, forest products pricing, equitable benefit-sharing, livelihood improvement, Nepal

Procedia PDF Downloads 284
807 Intrusion Detection in SCADA Systems

Authors: Leandros A. Maglaras, Jianmin Jiang

Abstract:

The protection of national infrastructures from cyberattacks is one of the main issues for national and international security. The EU-funded Framework 7 (FP7) research project CockpitCI introduces intelligent intrusion detection, analysis, and protection techniques for Critical Infrastructures (CI). The paradox is that CIs massively rely on the newest interconnected, and vulnerable, Information and Communication Technology (ICT), whilst the control equipment, with its legacy software and hardware, is typically old. Such a combination of factors can lead to very dangerous situations, exposing systems to a wide variety of attacks. To counter such threats, the CockpitCI project combines machine learning techniques with ICT technologies to produce advanced intrusion detection, analysis, and reaction tools that provide intelligence to field equipment, allowing it to make local decisions in order to self-identify and self-react to abnormal situations caused by cyberattacks. In this paper, an intrusion detection module capable of detecting malicious network traffic in a Supervisory Control and Data Acquisition (SCADA) system is presented. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. OCSVM (one-class support vector machine) is an intrusion detection mechanism that needs neither labeled training data nor prior information about the kind of anomaly to expect, which makes it well suited to processing SCADA environment data and automating SCADA performance monitoring. The OCSVM module developed here is trained offline on network traces and detects anomalies in the system in real time. The module is part of an IDS (intrusion detection system) developed under the CockpitCI project and communicates with the other parts of the system by exchanging IDMEF messages that carry information about the source of the incident, its time, and a classification of the alarm.
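The train-on-normal, flag-outliers pattern is easy to illustrate. Below is a hedged sketch of the OCSVM idea using scikit-learn; the flow features, the synthetic data, and the nu/gamma values are illustrative assumptions, not the CockpitCI module's actual configuration:

```python
# One-class SVM anomaly detection: fit only on normal SCADA traffic
# features, then flag run-time outliers. All data here is synthetic.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Toy features per flow: [packets/s, mean packet size, distinct function codes]
normal = rng.normal([20, 128, 3], [3, 10, 0.5], size=(500, 3))
attack = rng.normal([90, 40, 9], [10, 5, 1.0], size=(20, 3))

scaler = StandardScaler().fit(normal)          # offline training phase
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
model.fit(scaler.transform(normal))

# Run-time detection: -1 marks an anomaly (which, in the described IDS,
# would be reported onward as an IDMEF alert).
pred = model.predict(scaler.transform(np.vstack([normal[:5], attack[:5]])))
print(pred)   # expect mostly +1 for normal flows, -1 for attack flows
```

The appeal of the one-class formulation for SCADA is exactly what the abstract states: legitimate traffic in control networks is highly regular, so a model of "normal" can be learned without ever labeling attacks.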

Keywords: cyber-security, SCADA systems, OCSVM, intrusion detection

Procedia PDF Downloads 528