Search results for: response time

604 The Effects of Logistical Centers Realization on Society and Economy

Authors: Anna Dolinayova, Juraj Camaj, Martin Loch

Abstract:

Presently, it is necessary to ensure the sustainable development of passenger and freight transport. The increasing performance of road freight has had a negative impact on the environment and society. It is therefore necessary to increase the competitiveness of intermodal transport, which is more environmentally friendly. The study describes the effectiveness of logistical centers realization for companies and society, and investigates how the partial internalization of external costs is reflected in the efficient use of these centers and increases the competitiveness of intermodal transport relative to road freight. In our research, we use the method of comparative analysis and market research to describe the advantages of logistical centers for their users as well as for society as a whole. The normal costing method is used to calculate infrastructure and total costs, and the conversion costing method to determine the external costs. We modelled total society costs for road freight transport and for an intermodal transport chain (assuming that most of the traffic is carried by rail) with different loading schemes for conditions in the Slovak Republic. Our research has shown that higher utilization of the intermodal transport chain benefits not only society but also the companies providing freight services. Increased use of the intermodal transport chain can bring many benefits to society that do not yield a direct, immediate financial return; they often bring multiplier effects, such as greater use of environmentally friendly transport modes and a reduction in total society costs.
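
As a toy illustration of the cost comparison behind this argument, the sketch below totals internal (operator) and external costs for a road-only haul versus an intermodal chain with a rail main leg. All cost rates, distances and the transshipment charge are hypothetical placeholders, not the study's calibrated figures.

```python
# Illustrative total-society-cost comparison for one door-to-door shipment.
# Every rate below is a hypothetical placeholder, not the study's data.

def total_society_cost(distance_km, internal_cost_per_km, external_cost_per_km):
    """Internal (operator) costs plus external costs; internalization shifts
    who pays a share of the external costs but leaves the society total intact."""
    return distance_km * (internal_cost_per_km + external_cost_per_km)

def intermodal_chain_cost(road_legs_km, rail_leg_km, transshipment_cost):
    """Road pre-/post-haulage plus a rail main leg plus terminal handling."""
    road = total_society_cost(road_legs_km, 1.10, 0.35)   # EUR/km, hypothetical
    rail = total_society_cost(rail_leg_km, 0.80, 0.10)    # EUR/km, hypothetical
    return road + rail + transshipment_cost

door_to_door_km = 400
road_only = total_society_cost(door_to_door_km, 1.10, 0.35)
intermodal = intermodal_chain_cost(road_legs_km=60, rail_leg_km=340,
                                   transshipment_cost=120.0)
print(f"road only:  {road_only:8.2f} EUR")
print(f"intermodal: {intermodal:8.2f} EUR")
```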

Keywords: Delivery time, economy effectiveness, logistical centers, ecological efficiency, optimization, society.

603 Multi-Line Flexible Alternating Current Transmission System (FACTS) Controller for Transient Stability Analysis of a Multi-Machine Power System Network

Authors: A.V.Naresh Babu, S.Sivanagaraju

Abstract:

Considerable progress has been achieved in transient stability analysis (TSA) with various FACTS controllers. However, all these controllers are associated with a single transmission line. This paper discusses a new approach, i.e., a multi-line FACTS controller, the interline power flow controller (IPFC), for TSA of a multi-machine power system network. A mathematical model of the IPFC, termed the power injection model (PIM), is presented and incorporated in the Newton-Raphson (NR) power flow algorithm. Then, the reduced admittance matrix of a multi-machine power system network for a three-phase fault, without and with the IPFC, is obtained, which is required to draw the machine swing curves. A general approach based on the L-index has also been discussed to find the best location of the IPFC to reduce the proximity to instability of a power system. Numerical results are carried out on two test systems, namely 6-bus and 11-bus systems. A program in MATLAB has been written to plot the variation of generator rotor angle and speed difference curves without and with the IPFC for TSA, and a simple approach has also been presented to evaluate the critical clearing time for the test systems. The results obtained without and with the IPFC are compared and discussed.
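
A minimal sketch of the swing-curve computation for a single machine (classical model) is given below: the swing equation is integrated through a three-phase fault that is applied and then cleared. All per-unit values, fault timings and transfer limits are hypothetical, and the multi-machine reduced-admittance and IPFC injection terms of the paper are not modeled.

```python
import numpy as np

# Single-machine swing-curve sketch through a three-phase fault (classical model).
H, f0 = 5.0, 50.0            # inertia constant (s), system frequency (Hz)
M = H / (np.pi * f0)         # per-unit inertia coefficient
Pm = 0.9                     # mechanical input power (pu)
Pmax_pre, Pmax_fault, Pmax_post = 2.0, 0.4, 1.8   # transfer limits (pu), assumed
t_fault, t_clear = 0.1, 0.25                      # fault on / clearing times (s)

dt, t_end = 1e-3, 2.0
delta = np.arcsin(Pm / Pmax_pre)   # pre-fault rotor angle (rad)
omega = 0.0                        # speed deviation (rad/s)
history = []
for step in range(int(t_end / dt)):
    t = step * dt
    Pmax = Pmax_pre if t < t_fault else (Pmax_fault if t < t_clear else Pmax_post)
    Pe = Pmax * np.sin(delta)
    # Swing equation: M * d2(delta)/dt2 = Pm - Pe (damping neglected)
    omega += dt * (Pm - Pe) / M
    delta += dt * omega
    history.append((t, np.degrees(delta)))

print(f"max rotor angle: {max(d for _, d in history):.1f} deg")
```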

Keywords: Flexible alternating current transmission system (FACTS), first swing stability, interline power flow controller (IPFC), power injection model (PIM).

602 Optimal Selling Prices for Small Sized Poultry Farmers

Authors: Hidefumi Kawakatsu, Dong Li, Kosuke Kato

Abstract:

In Japan, meat-type chickens are mainly classified into three categories: (1) Broilers, (2) Branded chickens, and (3) Jidori (free-range local traditional pedigree chickens). The Jidori chickens are certified by the Japanese Ministry of Agriculture, whilst for the Branded chickens there is no regulation with respect to their breed (genotype) or rearing methods. It is, therefore, easier for poultry farmers to introduce Branded chickens than Jidori chickens. The Branded chickens are normally fed a low-calorie diet with ingredients such as herbs, which lengthens their breeding period (compared with that of the Broilers) and increases their market value. In the field of inventory management, fast-growing animals such as broilers are categorised as ameliorating items. To the best of our knowledge, no previous studies have explicitly considered smaller sized poultry farmers with limited breeding areas. This study develops an inventory model for a small sized poultry farmer that produces both the Broilers (Product 1) and the Branded chickens (Product 2) with different amelioration rates. The poultry farmer’s total profit per unit of time is formulated as a function of selling prices by using a price-dependent demand function. The existence of a unique optimal selling price for each product, which maximises the total profit, is established. It has also been confirmed through numerical examples that, when the breeding area is fixed, the total profit could increase if the poultry farmer reduced the product quantity of Product 1 to introduce Product 2.
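
The pricing idea can be sketched with a linear price-dependent demand function, for which the profit-maximizing price has a closed form; the demand coefficients and unit costs below are hypothetical, not the paper's calibrated values.

```python
import numpy as np

# Sketch: profit (p - cost) * D(p) with linear demand D(p) = a - b*p has the
# unique interior maximizer p* = (a + b*cost) / (2*b). All numbers hypothetical.
def profit(p, a, b, unit_cost):
    demand = max(a - b * p, 0.0)
    return (p - unit_cost) * demand

products = {
    "Product 1 (Broilers)": (1000.0, 1.2, 250.0),   # a, b, unit cost (assumed)
    "Product 2 (Branded)":  (400.0, 0.3, 600.0),
}
for name, (a, b, c) in products.items():
    p_star = (a + b * c) / (2 * b)              # closed-form optimal price
    grid = np.linspace(c, a / b, 10001)         # numeric cross-check
    p_num = grid[np.argmax([profit(p, a, b, c) for p in grid])]
    print(f"{name}: p* = {p_star:.2f} (numeric {p_num:.2f}), "
          f"profit = {profit(p_star, a, b, c):.0f}")
```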

Keywords: Amelioration, deterioration, small sized poultry farmers, optimal price.

601 Bone Mineral Density and Trabecular Bone Score in Ukrainian Men with Obesity

Authors: Vladyslav Povoroznyuk, Anna Musiienko, Nataliia Dzerovych, Roksolana Povoroznyuk

Abstract:

Osteoporosis and obesity are widespread diseases in people over 50 years of age, associated with changes in structure and body composition. Higher body mass index (BMI) values are associated with greater bone mineral density (BMD). However, the trabecular bone score (TBS) indirectly explores bone quality, independently of BMD. The aim of our study was to evaluate the relationship between BMD and TBS parameters in Ukrainian men suffering from obesity. We examined 396 men aged 40-89 years. Depending on their BMI, all the subjects were divided into two groups: Group I – patients with obesity, whose BMI was ≥ 30 kg/m2 (n=129), and Group II – patients without obesity, with BMI < 30 kg/m2 (n=267). The BMD of the total body, lumbar spine L1-L4, femoral neck and forearm was measured by DXA (Prodigy, GEHC Lunar, Madison, WI, USA). The TBS of L1-L4 was assessed by means of the TBS iNsight® software installed on the DXA machine (Med-Imaps, Pessac, France). In general, obese men had a significantly higher BMD of the lumbar spine L1-L4, femoral neck, total body and ultradistal forearm (p < 0.001) in comparison with men without obesity. The TBS of L1-L4 was significantly lower in obese men compared to non-obese ones (p < 0.001). BMD of the lumbar spine L1-L4, femoral neck and total body differed significantly in men aged 40-49, 50-59, 60-69, and 80-89 years (p < 0.05). At the same time, in men aged 70-79 years, BMD of the lumbar spine L1-L4 (p=0.46), femoral neck (p=0.18), total body (p=0.21), ultra-distal forearm (p=0.13), and TBS (p=0.07) did not differ significantly. A significant positive correlation between fat mass and BMD at different sites was observed. However, the correlation between fat mass and the TBS of L1-L4 was also significant, though negative.
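
The group comparison and correlation analyses reported here can be sketched as follows on synthetic data shaped like the study's two groups; the means, spreads and the synthetic fat-mass/TBS relation are illustrative assumptions only.

```python
import numpy as np
from scipy import stats

# Sketch of the reported statistics: group difference in TBS between obese and
# non-obese men, and fat mass vs. TBS correlation. All data below are synthetic.
rng = np.random.default_rng(0)
tbs_obese = rng.normal(1.20, 0.10, 129)       # Group I (BMI >= 30), synthetic
tbs_nonobese = rng.normal(1.28, 0.10, 267)    # Group II (BMI < 30), synthetic
t, p = stats.ttest_ind(tbs_obese, tbs_nonobese, equal_var=False)
print(f"TBS obese vs non-obese: t = {t:.2f}, p = {p:.4g}")

fat_mass = rng.normal(25.0, 6.0, 396)                      # kg, synthetic
tbs = 1.45 - 0.006 * fat_mass + rng.normal(0, 0.05, 396)   # negative association
r, p_r = stats.pearsonr(fat_mass, tbs)
print(f"fat mass vs TBS: r = {r:.2f}, p = {p_r:.4g}")
```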

Keywords: Bone mineral density, trabecular bone score, obesity, men.

600 Hash Based Block Matching for Digital Evidence Image Files from Forensic Software Tools

Authors: M. Kaya, M. Eris

Abstract:

Internet use, intelligent communication tools, and social media have all become an integral part of our daily life as a result of rapid developments in information technology. However, this widespread use increases crimes committed in the digital environment. Therefore, digital forensics, dealing with various crimes committed in the digital environment, has become an important research topic. It is in the scope of digital forensics to investigate digital evidence such as computers, cell phones, hard disks, DVDs, etc., and to report whether it contains any crime-related elements. Many software and hardware tools have been developed for use in the digital evidence acquisition process. Today, the most widely used digital evidence investigation tools are based on the principle of finding all the data in the digital evidence that match specified criteria and presenting it to the investigator (e.g., text files, files starting with the letter A, etc.). Digital forensics experts then carry out data analysis to figure out whether these data are related to a potential crime. Examination of a 1 TB hard disk may take hours or even days, depending on the expertise and experience of the examiner. In addition, because the outcome depends on the examiner's experience, relevant evidence may be overlooked and the overall result may vary from case to case. In this study, a hash-based matching and digital evidence evaluation method is proposed, which aims to automatically classify evidence containing criminal elements, thereby shortening the digital evidence examination process and preventing human errors.
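
A minimal sketch of the proposed hash-based block matching idea, under the assumption of fixed-size blocks and a SHA-256 known-content hash list (both illustrative choices, not necessarily the authors' exact design):

```python
import hashlib

# Split an evidence image into fixed-size blocks, hash each block, and flag
# blocks whose hashes appear in a known-content hash list.
BLOCK_SIZE = 4096  # bytes, assumed block size

def block_hashes(image_path, block_size=BLOCK_SIZE):
    """Yield (offset, sha256-hex) for each block of the evidence image file."""
    with open(image_path, "rb") as f:
        offset = 0
        while True:
            block = f.read(block_size)
            if not block:
                break
            yield offset, hashlib.sha256(block).hexdigest()
            offset += len(block)

def match_blocks(image_path, known_hashes):
    """Return offsets of blocks matching the known (e.g. crime-related) hashes."""
    return [off for off, h in block_hashes(image_path) if h in known_hashes]

# Usage sketch: the hash list would come from a reference database.
# hits = match_blocks("evidence.dd",
#                     known_hashes=set(open("hashlist.txt").read().split()))
```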

Keywords: Block matching, digital evidence, hash list.

599 Comparison of Microwave-Assisted and Conventional Leaching for Extraction of Copper from Chalcopyrite Concentrate

Authors: Ayfer Kilicarslan, Kubra Onol, Sercan Basit, Muhlis Nezihi Saridede

Abstract:

Chalcopyrite (CuFeS2) is the most common primary mineral used for the commercial production of copper. The low dissolution efficiency of chalcopyrite in sulfate media has prevented efficient industrial leaching of this mineral in such media. Ferric ions, bacteria, oxygen and other oxidants have been used as oxidizing agents in the leaching of chalcopyrite in sulfate and chloride media under atmospheric or pressure leaching conditions. Two leaching methods were studied to evaluate chalcopyrite (CuFeS2) dissolution in acid media. First, the conventional oxidative acid leaching method was carried out using sulfuric acid (H2SO4) and potassium dichromate (K2Cr2O7) as oxidant at atmospheric pressure. Second, microwave-assisted acid leaching was performed using a microwave accelerated reaction system (MARS) for the same reaction media. Parameters affecting the copper extraction, such as leaching time, leaching temperature, concentration of H2SO4 and concentration of K2Cr2O7, were investigated, and the results of the conventional acid leaching experiments were compared to those of the microwave leaching method. It was found that the copper extraction obtained under high temperature and high oxidant concentration with microwave leaching is higher than that obtained conventionally. Copper extraction of 81% was obtained by the conventional oxidative acid leaching method in 180 min, with a concentration of 0.3 mol/L K2Cr2O7 in 0.5 M H2SO4 at 50 ºC, while 93.5% extraction was obtained in 60 min with the microwave leaching method under the same conditions.

Keywords: Extraction, copper, microwave-assisted leaching, chalcopyrite, potassium dichromate.

598 Non-Burn Treatment of Health Care Risk Waste

Authors: Jefrey Pilusa, Tumisang Seodigeng

Abstract:

This research discusses a South African case study on the potential of utilizing refuse-derived fuel (RDF), obtained from non-burn treatment of health care risk waste (HCRW), as feedstock for green energy production. This specific waste stream can be destroyed via non-burn treatment technology involving high-speed mechanical shredding followed by steam or chemical injection to disinfect the final product. The RDF obtained from this process is characterised by low moisture, low ash content and high calorific value, which means it can potentially be used as a high-value solid fuel. Although the raw feed of this RDF is classified as hazardous, the final RDF has been reported to be non-infectious and can be blended with other combustible wastes, such as rubber and plastic, for waste-to-energy applications. This study evaluated non-burn treatment technology as a possible solution for on-site destruction of HCRW in South African private and public health care centres. Waste generation quantities were estimated based on the number of registered patient beds and theoretical bed occupancy, and a time and motion study was conducted to evaluate the logistical viability of on-site treatment. Non-burn treatment technology for HCRW is a promising option for South Africa. Successful implementation of this method depends upon the initial capital investment, operational cost and environmental permitting of such technology; other influencing factors include the size of the waste stream, the product off-take price and product demand.
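
The waste-quantity estimate described above reduces to simple arithmetic; in the sketch below the generation rate per occupied bed is a placeholder assumption.

```python
# Back-of-envelope HCRW generation estimate from registered beds and occupancy,
# as used for sizing treatment units; the kg/bed/day rate is a placeholder.
def hcrw_per_day(registered_beds, occupancy_rate, kg_per_occupied_bed_day=0.5):
    return registered_beds * occupancy_rate * kg_per_occupied_bed_day

print(f"{hcrw_per_day(450, 0.75):.0f} kg/day")  # hypothetical 450-bed hospital
```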

Keywords: Autoclave, disposal, fuel, incineration, medical waste.

597 Modal Analysis of Machine Tool Column Using Finite Element Method

Authors: Migbar Assefa

Abstract:

The performance of a machine tool is ultimately assessed by its ability to produce a component of the required geometry in minimum time and at low operating cost. It is customary to base the structural design of any machine tool primarily upon the requirements of static rigidity and minimum natural frequency of vibration. The operating properties of machines, such as cutting speed, feed and depth of cut, as well as the size of the workpiece, also have to be kept in mind by a machine tool structural designer. This paper presents a novel approach to the design of a machine tool column for static and dynamic rigidity requirements. Model evaluation is carried out using the general-purpose finite element analysis software ANSYS. Studies on the machine tool column are used to illustrate the finite element based concept evaluation technique. The paper presents results obtained from computations on thin-walled box-type columns subjected to torsional and bending loads in static analysis, as well as results from modal analysis. The columns analyzed are square- and rectangle-based tapered open columns, columns with cover plates, and columns with horizontal partitions and apertures. In total, 70 columns were analyzed for bending, torsion and modal behaviour. The study shows that the orientation and aspect ratio of apertures have no significant effect on the static and dynamic rigidity of the machine tool structure.
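
The modal-analysis step can be illustrated on a toy lumped-parameter stand-in for a column: natural frequencies follow from the generalized eigenproblem K x = ω² M x. The 3-DOF stiffness and mass values below are illustrative and unrelated to the ANSYS column models of the paper.

```python
import numpy as np
from scipy.linalg import eigh

# Toy 3-DOF lumped model: solve K x = w^2 M x for the natural frequencies.
k = 2.0e8   # N/m, hypothetical stiffness per segment
m = 1.5e3   # kg, hypothetical lumped mass
K = k * np.array([[ 2, -1,  0],
                  [-1,  2, -1],
                  [ 0, -1,  1]], dtype=float)
M = m * np.eye(3)

eigvals, eigvecs = eigh(K, M)            # generalized symmetric eigenproblem
freqs_hz = np.sqrt(eigvals) / (2 * np.pi)
print("natural frequencies [Hz]:", np.round(freqs_hz, 2))
```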

Keywords: Finite Element Modeling, Modal Analysis, Machine tool structure, Static Analysis.

596 Thermal Method for Testing Small Chemisorbents Samples on the Base of Potassium Superoxide

Authors: Pavel V. Balabanov, Daria A. Liubimova, Aleksandr P. Savenkov

Abstract:

The increase in technogenic and natural accidents accompanied by air pollution, for example by combustion products, leads to the necessity of respiratory protection. This work is devoted to the development of a calorimetric method and a device which allow quick investigation of the kinetics of carbon dioxide sorption by chemisorbents based on potassium superoxide, in order to assess the protective properties of closed-circuit respiratory protective apparatus. The features of the traditional approach for determining the sorption properties in a thin layer of chemisorbent are described, as well as methods and devices which can be used to study sorption kinetics. In contrast to the traditional approach, the authors developed an approach based on measuring the power of internal heat sources in the chemisorbent layer. The heat sources arise as a result of the exothermic reaction of carbon dioxide sorption. This approach eliminates the need for chemical analysis of samples and can significantly reduce the time and material expenses of chemisorbent testing. The error in determining the volume fraction of adsorbed carbon dioxide by the developed method does not exceed 12%. Taking into account the efficiency of the method, we consider it a good alternative to traditional methods of chemical analysis for assessing the quality of protective sorbents.
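
A sketch of the measurement idea, assuming the power of the internal heat sources has been recorded: integrating power gives the released heat, which the reaction enthalpy converts into moles of sorbed CO2 and hence a volume fraction. The enthalpy, gas flow and power trace below are all assumed values for illustration.

```python
import numpy as np

# Convert a measured power trace of the internal heat sources into an estimate
# of the volume fraction of CO2 sorbed. All numerical values are assumptions.
dt = 1.0                                              # s, sampling interval
power_w = 2.0 * np.exp(-np.arange(600) / 150.0)       # W, synthetic power trace

q_joule = float(np.sum(power_w) * dt)                 # total heat released, J
dh_per_mol = 100e3                                    # J per mol CO2 (assumed)
n_co2 = q_joule / dh_per_mol                          # mol CO2 chemisorbed

v_co2 = n_co2 * 8.314 * 298.15 / 101325.0             # m^3 at 25 C, 1 atm
gas_sampled_m3 = 1.0e-5 * 600                         # m^3 through layer (assumed)
print(f"volume fraction of CO2 sorbed: {v_co2 / gas_sampled_m3:.2%}")
```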

Keywords: Carbon dioxide chemisorption, exothermic reaction, internal heat sources, respiratory protective apparatus.

595 Identifying the Barriers behind the Lack of Six Sigma Use in Libyan Manufacturing Companies

Authors: Osama Elgadi, Martin Birkett, Wai Ming Cheung

Abstract:

This paper investigates the barriers behind the underutilisation of six sigma in Libyan manufacturing companies (LMCs). A mixed-method methodology is proposed, starting with interviews to collect qualitative data, followed by the development of a questionnaire to obtain quantitative data. The focus of this paper is on discussing the findings of the interview stage and how these can be used to further develop the questionnaire stage. The interview results showed that only four key barriers were highlighted as being encountered by LMCs. Differing in significance, these factors were identified and placed in descending order of importance, namely: “Lack of top management commitment”, “Lack of training”, “Lack of knowledge about six sigma”, and “Culture effect”. The findings also showed that some barriers which were found in previous studies of six sigma implementation were not considered barriers by LMCs but can, in fact, be considered success factors or enablers for six sigma adoption. These factors were identified as: “sufficiency of time and financial resources”; “unsatisfied customers”; “good communication between all departments in the company”; and “we are certain about its results and benefits to our company and unhappy with the current quality system”. These results suggest that LMCs face fewer barriers to adopting six sigma than many well-established global companies operating in other countries, and could take advantage of these success factors by developing and implementing a six sigma framework to improve their product quality and competitiveness.

Keywords: Six sigma, barriers, Libyan manufacturing companies, interview.

594 Design and Performance Improvement of Three-Dimensional Optical Code Division Multiple Access Networks with NAND Detection Technique

Authors: Satyasen Panda, Urmila Bhanja

Abstract:

In this paper, we present and analyze three-dimensional (3-D) wavelength/time/space code matrices for optical code division multiple access (OCDMA) networks with the NAND subtraction detection technique. The 3-D codes are constructed by integrating a two-dimensional modified quadratic congruence (MQC) code with a one-dimensional modified prime (MP) code. The respective encoders and decoders were designed using fiber Bragg gratings and optical delay lines to minimize the bit error rate (BER). The performance analysis of the 3-D OCDMA system is based on measurement of the signal to noise ratio (SNR), BER and eye diagram for different numbers of simultaneous users. Various types of noise and multiple access interference (MAI) effects were also considered in the analysis. The results obtained with the NAND detection technique were compared with those obtained with the OR and AND subtraction techniques. The comparison proved that the NAND detection technique with the 3-D MQC/MP code can accommodate a larger number of simultaneous users over longer fiber distances with minimum BER compared to the OR and AND subtraction techniques. The received optical power is also measured at various levels of BER to analyze the effect of attenuation.
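
The SNR-to-BER step of such performance analyses is often evaluated with the Gaussian approximation BER = ½·erfc(√(SNR/8)); the sketch below applies it to illustrative SNR values per user count, which are not the paper's measurements.

```python
import numpy as np
from scipy.special import erfc

# Gaussian-approximation BER from SNR, as commonly used in OCDMA analyses.
def ber_from_snr(snr_linear):
    return 0.5 * erfc(np.sqrt(snr_linear / 8.0))

for users, snr_db in [(10, 22.0), (30, 18.0), (60, 14.0)]:   # hypothetical SNRs
    snr = 10 ** (snr_db / 10)
    print(f"{users:3d} users: SNR = {snr_db} dB -> BER = {ber_from_snr(snr):.2e}")
```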

Keywords: Cross correlation, three-dimensional optical code division multiple access, spectral amplitude coding optical code division multiple access, multiple access interference, phase induced intensity noise, three-dimensional modified quadratic congruence/modified prime code.

593 Thin Bed Reservoir Delineation Using Spectral Decomposition and Instantaneous Seismic Attributes, Pohokura Field, Taranaki Basin, New Zealand

Authors: P. Sophon, M. Kruachanta, S. Chaisri, G. Leaungvongpaisan, P. Wongpornchai

Abstract:

Thick-bed hydrocarbon reservoirs are of primary interest because of their more prolific production. When the amount of petroleum in a thick bed starts decreasing, thin-bed reservoirs become the alternative targets to maintain the reserves. Conventional interpretation of seismic data cannot delineate a thin bed whose thickness is less than the vertical seismic resolution. Therefore, spectral decomposition and instantaneous seismic attributes were used to delineate the thin bed in this study. Short Window Discrete Fourier Transform (SWDFT) spectral decomposition and instantaneous frequency attributes were used to reveal the thin-bed reservoir, while Continuous Wavelet Transform (CWT) spectral decomposition and the envelope (instantaneous amplitude) attribute were used to indicate the hydrocarbon-bearing zone. The study area is located in the Pohokura Field, Taranaki Basin, New Zealand. The thin-bed target is the uppermost part of the Mangahewa Formation, the most productive interval for gas-condensate production in the Pohokura Field. According to the time-frequency analysis, SWDFT spectral decomposition can reveal the thin bed using a 72 Hz SWDFT isofrequency section and map, and this is confirmed by the instantaneous frequency attribute. The high anomaly shown by the envelope attribute indicates the hydrocarbon accumulation area at the thin-bed target. Moreover, the low-frequency shadow zone shown by the CWT spectral decomposition and the abnormal seismic attenuation in the higher isofrequencies below the thin bed confirm that the thin bed can be a prospective hydrocarbon zone.
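
The isofrequency extraction behind SWDFT spectral decomposition can be sketched with a short-time Fourier transform on a synthetic trace carrying a 72 Hz thin-bed tuning burst; the window length and synthetic signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

# Short-window spectral decomposition on a synthetic trace: extract one
# isofrequency amplitude slice (the paper uses 72 Hz).
fs = 500.0
t = np.arange(0, 2.0, 1 / fs)                          # s
trace = np.sin(2 * np.pi * 30 * t)                     # background reflectivity
trace[400:460] += np.sin(2 * np.pi * 72 * t[400:460])  # thin-bed tuning energy

f, times, Z = stft(trace, fs=fs, nperseg=64, noverlap=48)
i72 = np.argmin(np.abs(f - 72.0))                      # nearest bin to 72 Hz
iso72 = np.abs(Z[i72, :])                              # isofrequency amplitude
print(f"bin used: {f[i72]:.1f} Hz, peak at t = {times[np.argmax(iso72)]:.2f} s")
```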

Keywords: Hydrocarbon indication, instantaneous seismic attribute, spectral decomposition, thin bed delineation.

592 Hydraulic Conductivity Prediction of Cement Stabilized Pavement Base Incorporating Recycled Plastics and Recycled Aggregates

Authors: Md. Shams Razi Shopnil, Tanvir Imtiaz, Sabrina Mahjabin, Md. Sahadat Hossain

Abstract:

Saturated hydraulic conductivity is one of the most significant attributes of a pavement base course. Determination of hydraulic conductivity is a routine procedure for regular aggregate base courses. However, in many cases a cement-stabilized base course is used, with compromised drainage ability. The traditional hydraulic conductivity testing procedure is a readily available option, but it has two consequential drawbacks: the time required for the specimen to become saturated and the difficulty of extruding the sample after completion of the laboratory test. To overcome these complications, this study aims at formulating an empirical approach to predicting hydraulic conductivity based on Unconfined Compressive Strength test results. To do so, the study comprises two separate experiments (the Constant Head Permeability test and the Unconfined Compressive Strength test) conducted concurrently on specimens with the same physical characteristics. Data obtained from the two experiments were then used to devise a correlation between hydraulic conductivity and unconfined compressive strength. This correlation, in the form of a polynomial equation, helps to predict the hydraulic conductivity of a cement-treated pavement base course, bypassing the cumbersome traditional permeability test and the less commonly used horizontal permeability test. The correlation was further corroborated by a different set of data, and the derived polynomial equation was found to be a viable tool for predicting hydraulic conductivity.
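
The correlation step reduces to fitting a polynomial to paired test results; the sketch below fits in log space so the strictly positive, decaying trend is well behaved. The UCS/conductivity pairs are synthetic stand-ins, not the study's data.

```python
import numpy as np

# Fit a polynomial correlation between UCS and hydraulic conductivity, then
# use it for prediction. The data pairs below are synthetic placeholders.
ucs_mpa = np.array([1.5, 2.0, 2.8, 3.5, 4.2, 5.0])           # UCS, MPa
k_cm_s = np.array([8e-4, 5e-4, 2.5e-4, 1.2e-4, 6e-5, 3e-5])  # k, cm/s

coeffs = np.polyfit(ucs_mpa, np.log10(k_cm_s), deg=2)        # fit in log space

def predict_k(ucs):
    return 10 ** np.polyval(coeffs, ucs)

print(f"predicted k at UCS = 3.0 MPa: {predict_k(3.0):.2e} cm/s")
```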

Keywords: Hydraulic conductivity, unconfined compressive strength, recycled plastics, recycled concrete aggregates.

591 Optimization by Means of Genetic Algorithm of the Equivalent Electrical Circuit Model of Different Order for Li-ion Battery Pack

Authors: V. Pizarro-Carmona, S. Castano-Solis, M. Cortés-Carmona, J. Fraile-Ardanuy, D. Jimenez-Bermejo

Abstract:

The purpose of this article is to optimize the Equivalent Electrical Circuit Model (EECM) of different orders to obtain greater precision in the modeling of Li-ion battery packs. The optimization considers circuits based on 1RC, 2RC and 3RC networks, with a dependent voltage source and a series resistor. The parameters are obtained experimentally using tests in the time domain and in the frequency domain. Due to the highly non-linear behavior of the battery pack, a Genetic Algorithm (GA) was used to solve for and optimize the parameters of each EECM considered (1RC, 2RC and 3RC). The objective of the estimation is to minimize the mean square error between the impedance measured on the real battery pack and that generated by simulation of the different proposed circuit models. The results have been verified by comparing the Nyquist plots of the estimated complex impedance of the pack. As a result of the optimization, the 2RC and 3RC circuit alternatives are considered viable for representing the battery behavior. These battery pack models are experimentally validated using a hardware-in-the-loop (HIL) simulation platform that reproduces the well-known New York City Cycle (NYCC) and Federal Test Procedure (FTP) driving cycles for electric vehicles. The results show that GA optimization allows obtaining EECMs with 2RC or 3RC networks that represent the dynamic behavior of a battery pack in vehicular applications with high precision.
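
A compact sketch of the GA estimation for the 2RC case is given below: the fitness is the mean square error between a "measured" complex impedance (synthesized from known parameters so the fit is checkable) and the model Z(ω) = R0 + R1/(1+jωR1C1) + R2/(1+jωR2C2). The GA operators and bounds are simple illustrative choices, not the authors' exact configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
w = 2 * np.pi * np.logspace(-2, 3, 60)                 # rad/s

def z_2rc(p, w):
    # 2RC equivalent circuit impedance: series R plus two parallel RC branches
    r0, r1, c1, r2, c2 = p
    return r0 + r1 / (1 + 1j * w * r1 * c1) + r2 / (1 + 1j * w * r2 * c2)

true_p = np.array([0.05, 0.02, 2.0, 0.03, 200.0])      # hypothetical pack values
z_meas = z_2rc(true_p, w)                              # synthetic "measurement"

def fitness(p):
    return np.mean(np.abs(z_2rc(p, w) - z_meas) ** 2)  # MSE on complex impedance

lo = np.array([1e-3, 1e-3, 0.1, 1e-3, 10.0])           # parameter bounds
hi = np.array([0.2, 0.1, 10.0, 0.1, 1000.0])
pop = rng.uniform(lo, hi, size=(80, 5))

for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[:20]]               # truncation selection
    parents = elite[rng.integers(0, 20, size=(80, 2))]
    children = 0.5 * (parents[:, 0] + parents[:, 1])   # blend crossover
    children += rng.normal(0, 0.05, children.shape) * (hi - lo)  # mutation
    pop = np.clip(children, lo, hi)
    pop[0] = elite[0]                                  # elitism

print("best MSE:", fitness(pop[0]))
print("estimated [R0, R1, C1, R2, C2]:", np.round(pop[0], 4))
```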

Keywords: Li-ion battery packs modeling optimized, EECM, GA, electric vehicle applications.

590 A Preliminary Literature Review of Digital Transformation Case Studies

Authors: Vesna Bosilj Vukšić, Lucija Ivančić, Dalia Suša Vugec

Abstract:

While struggling to succeed in today’s complex market environment and provide better customer experience and services, enterprises embrace digital transformation as a means of reaching competitiveness and fostering value creation. A digital transformation process consists of information technology implementation projects, as well as organizational factors such as top management support, digital transformation strategy, and organizational changes. However, to the best of our knowledge, there is little evidence about digital transformation endeavors in organizations and how they perceive it – is it only about the adoption of digital technologies, or is a true organizational shift needed? In order to address this issue, and as the first step in our research project, a literature review was conducted. The analysis included case study papers from the Scopus and Web of Science databases. The following attributes were considered for the classification and analysis of papers: time component; country of case origin; case industry; and digital transformation concept comprehension, i.e., focus. The research showed that organizations – public as well as private – are aware of the necessity of change and undertake digital transformation projects. Also, the changes concerning digital transformation affect both manufacturing and service-based industries. Furthermore, we discovered that organizations understand that, besides technology implementation, organizational changes must also be adopted. However, with only 29 relevant papers identified, the research positions digital transformation as an unexplored and emerging phenomenon in information systems research. The scarcity of evidence-based papers calls for further examination of this topic on cases from practice.

Keywords: Digital strategy, digital technologies, digital transformation, literature review.

589 Multiparametric Optimization of Water Treatment Process for Thermal Power Plants

Authors: B. Mukanova, N. Glazyrina, S. Glazyrin

Abstract:

This article considers the problem of optimizing the technological process of water treatment for thermal power plants. The problem is of a multiparametric nature. To optimize the process, namely to reduce the amount of waste water, a new technology was developed to reuse such water, and a mathematical model of this wastewater reuse technology was constructed. Optimization parameters were determined. The model consists of a material balance equation, an equation describing the kinetics of ion exchange for the non-equilibrium case, and an equation for the ion exchange isotherm. The material balance equation includes a nonlinear term that depends on the kinetics of ion exchange. The direct problem of calculating the impurity concentration at the outlet of the water treatment plant was solved numerically, approximated by an implicit point-to-point computation difference scheme. The inverse problem was formulated as the determination of the parameters of the mathematical model of the water treatment plant operating under non-equilibrium conditions, and was solved. From the calculation results, the start time of the filter regeneration process was determined, as well as the duration of the regeneration process and the amount of regeneration and wash water. Multi-parameter optimization of the water treatment process for thermal power plants allowed the amount of wastewater to be decreased by 15%.
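
A sketch of an implicit point-to-point marching scheme of this kind, under the simplifying assumptions of a linear isotherm a* = K·c and kinetics ∂a/∂t = β(K·c − a); the paper's actual isotherm and parameters differ.

```python
import numpy as np

# Implicit (backward Euler + upwind) scheme for c_t + v*c_x = -a_t with
# a_t = beta*(K*c - a). The implicit update at each grid point depends only on
# the already-updated upstream point, so it is solved marching point-to-point.
v, beta, K = 0.01, 5e-3, 50.0          # m/s, 1/s, -, all illustrative values
L, T = 1.0, 4000.0                     # column length (m), run time (s)
nx, dt = 100, 2.0
dx = L / nx
lam = v * dt / dx
gam = dt * beta / (1 + dt * beta)

c = np.zeros(nx + 1)                   # impurity concentration in solution
a = np.zeros(nx + 1)                   # concentration in the ion exchanger
c_in = 1.0                             # normalized inlet concentration

for n in range(int(T / dt)):
    c_prev = c.copy()
    c[0] = c_in
    for i in range(1, nx + 1):         # march point-to-point from the inlet
        c[i] = (c_prev[i] + lam * c[i - 1] + gam * a[i]) / (1 + lam + gam * K)
        a[i] = (a[i] + dt * beta * K * c[i]) / (1 + dt * beta)

print(f"outlet concentration after {T:.0f} s: {c[-1]:.3f}")
```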

Keywords: Direct problem, multiparametric optimization, optimization parameters, water treatment.

588 Current Status and Future Trends of Mechanized Fruit Thinning Devices and Sensor Technology

Authors: Marco Lopes, Pedro D. Gaspar, Maria P. Simões

Abstract:

This paper reviews the different concepts that have been investigated concerning the mechanization of fruit thinning, as well as the multiple working principles and solutions that have been developed for feature extraction of horticultural products, both in the field and in industrial environments. Research should be committed towards selective methods, which inevitably need to incorporate some kind of sensor technology. Computer vision often comes out as an obvious solution for unstructured detection problems, although fruits are frequently occluded by leaves regardless of the chosen point of view. Hence, further research on non-traditional sensors that are capable of object differentiation is needed. Ultrasonic and Near Infrared (NIR) technologies have been investigated for applications related to horticultural produce and show the potential to satisfy this need while simultaneously providing spatial information as time-of-flight sensors. Light Detection and Ranging (LIDAR) technology also shows huge potential, but it implies much greater costs and the related equipment is usually much larger, making it less suitable for portable devices, which may serve a purpose on smaller unstructured orchards. Concerning sensor-based methods, the major challenge for on-tree fruit detection remains overcoming the occlusion of fruits by leaves and branches.

Keywords: Fruit thinning, horticultural field, portable devices, sensor technologies.

587 Exploring Students’ Self-Evaluation on Their Learning Outcomes through an Integrated Cumulative Grade Point Average Reporting Mechanism

Authors: Suriyani Ariffin, Nor Aziah Alias, Khairil Iskandar Othman, Haslinda Yusoff

Abstract:

An Integrated Cumulative Grade Point Average (iCGPA) is a mechanism and strategy to ensure that the curriculum of an academic programme is constructively aligned to the expected learning outcomes, and that student performance based on the attainment of those learning outcomes is reported objectively in a spider web diagram. Much effort and time have been spent to develop a viable mechanism and to train academics to utilize the platform for reporting. The question is: how well do learners conceive the idea of their achievement via iCGPA, and have quality learner attributes been nurtured through the iCGPA mechanism? This paper presents the architecture of an integrated CGPA mechanism purported to address holistic evaluation, from the evaluation of course learning outcomes to the attainment of aligned programme learning outcomes. The paper then discusses the students’ understanding of the mechanism and their evaluation of their achievement from the generated spider web. A set of questionnaires was distributed to a group of students with iCGPA reporting, and frequency analysis was used to compare the perspectives of students on their performance. In addition, the questionnaire explored how they conceive the idea of integrated, holistic reporting and how it generates their motivation to improve. The iCGPA group was found to be receptive to what they had achieved throughout their study period. They agreed that the achievement level generated from their spider web allows them to develop interventions and enhance the programme learning outcomes before they graduate.

Keywords: Learning outcomes attainment, iCGPA, programme learning outcomes, spider web, iCGPA reporting skills.

586 A New High Speed Neural Model for Fast Character Recognition Using Cross Correlation and Matrix Decomposition

Authors: Hazem M. El-Bakry

Abstract:

Neural processors have shown good results for detecting a certain character in a given input matrix. In this paper, a new idea to speed up the operation of neural processors for character detection is presented. Such processors are designed based on cross correlation in the frequency domain between the input matrix and the weights of the neural networks. This approach is developed to reduce the computation steps required by these faster neural networks for the searching process. The principle of the divide and conquer strategy is applied through image decomposition: each image is divided into small sub-images, and then each one is tested separately by a single faster neural processor. Furthermore, faster character detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time, using the same number of faster neural networks. In contrast to using only faster neural processors, the speed-up ratio increases with the size of the input image when using faster neural processors and image decomposition. Moreover, the problem of local sub-image normalization in the frequency domain is solved, and the effect of image normalization on the speed-up ratio of character detection is discussed. Simulation results show that local sub-image normalization through weight normalization is faster than sub-image normalization in the spatial domain. The overall speed-up ratio of the detection process is increased, as the normalization of weights is done off-line.
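
The core idea, cross correlation computed in the frequency domain, can be sketched with the FFT: multiplying the image spectrum by the conjugate template spectrum is the fast counterpart of sliding the weights over every sub-image position. The image and template below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((256, 256))
template = rng.random((16, 16))
image[40:56, 100:116] = template          # plant the character to be detected

def xcorr_fft(img, tpl):
    """Valid-mode cross correlation via FFT (the conjugate turns the
    convolution theorem into a correlation)."""
    H, W = img.shape
    F_img = np.fft.rfft2(img)
    F_tpl = np.fft.rfft2(tpl, s=(H, W))
    full = np.fft.irfft2(F_img * np.conj(F_tpl), s=(H, W))
    return full[: H - tpl.shape[0] + 1, : W - tpl.shape[1] + 1]

scores = xcorr_fft(image, template)
print("detected at:", np.unravel_index(np.argmax(scores), scores.shape))
```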

Keywords: Fast Character Detection, Neural Processors, Cross Correlation, Image Normalization, Parallel Processing.

585 Convective Hot Air Drying of Different Varieties of Blanched Sweet Potato Slices

Authors: M. O. Oke, T. S. Workneh

Abstract:

The drying behavior of blanched sweet potato in a cabinet dryer was investigated using five air temperatures (40-80°C) and ten sweet potato varieties sliced to 5 mm thickness. Drying time decreased considerably with increasing hot air temperature. The drying data were fitted to eight models. The Modified Henderson and Pabis model gave the best fit to the experimental moisture ratio data obtained during the drying of all the varieties, while the Newton (Lewis) and Wang and Singh models gave the poorest fit. The values of Deff obtained for the Bophelo variety (1.27 x 10-9 to 1.77 x 10-9 m2/s) were the lowest, while those of S191 (1.93 x 10-9 to 2.47 x 10-9 m2/s) were the highest, which indicates that moisture diffusivity in sweet potato is affected by genetic factors. Activation energy values ranged from 0.27 to 6.54 kJ/mol. The low activation energy indicates that drying of sweet potato slices requires little energy and is hence a cost- and energy-saving method.
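
Fitting the Modified Henderson and Pabis model, MR = a·exp(−kt) + b·exp(−gt) + c·exp(−ht), can be sketched with a standard nonlinear least-squares call; the moisture-ratio data below are synthetic stand-ins for the cabinet-dryer records.

```python
import numpy as np
from scipy.optimize import curve_fit

def mhp(t, a, k, b, g, c, h):
    # Modified Henderson and Pabis thin-layer drying model
    return a * np.exp(-k * t) + b * np.exp(-g * t) + c * np.exp(-h * t)

t = np.linspace(0, 600, 40)                               # drying time, min
true = (0.5, 0.02, 0.3, 0.005, 0.2, 0.0008)               # synthetic "truth"
mr = mhp(t, *true) + np.random.default_rng(0).normal(0, 0.003, t.size)

p0 = [0.4, 0.01, 0.4, 0.003, 0.1, 0.001]                  # initial guesses
popt, _ = curve_fit(mhp, t, mr, p0=p0, maxfev=20000)
rmse = np.sqrt(np.mean((mhp(t, *popt) - mr) ** 2))
print("fitted [a, k, b, g, c, h]:", np.round(popt, 4))
print(f"RMSE = {rmse:.4f}")
```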

Keywords: Sweet Potato Slice, Drying Models, Moisture Ratio, Moisture Diffusivity, Activation Energy.

584 Feasibility Study for a Castor oil Extraction Plant in South Africa

Authors: Mohamed Belaid, Edison Muzenda, Getrude Mitilene, Mansoor Mollagee

Abstract:

A feasibility study for the design and construction of a pilot plant for the extraction of castor oil in South Africa was conducted. The study emphasized the four critical aspects of project feasibility analysis, namely the technical, financial, market and managerial aspects. The technical aspect involved research on existing oil extraction technologies, namely mechanical pressing and solvent extraction, as well as assessment of the proposed production site for both the short- and long-term viability of the project. The site is on the outskirts of Nkomazi village in the Mpumalanga province, where connections for water and electricity are currently underway; the potential raw material supply promises to be reliable, since the province is known for its commercial farming. The managerial aspect was evaluated based on the fact that the current producer of castor oil will be fully involved in the project while receiving training and technical assistance from Sasol Technology, the TSC and SEDA. The market and financial aspects were evaluated, and the project was considered financially viable, with a Net Present Value (NPV) of R2 731 687 and an Internal Rate of Return (IRR) of 18% at an annual interest rate of 10.5%. The payback time is 6 years for analysis over the first 10 years, with a net income of R1 971 000 in the first year. The project was thus found to be feasible, with a high chance of success, while contributing to socio-economic development. It was recommended that laboratory tests be conducted to establish the process kinetics to be used in the initial design of the plant.
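
The viability arithmetic can be sketched as follows; the year-0 outlay is a hypothetical placeholder chosen only so the cash-flow series resembles the study's 10-year analysis with a constant net income of R1 971 000.

```python
import numpy as np

# NPV at a given discount rate, IRR by bisection, and simple payback.
def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-6):
    while hi - lo > tol:                      # bisection on NPV(rate) = 0
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-9_500_000] + [1_971_000] * 10       # R: assumed outlay, then income
print(f"NPV @ 10.5%: R{npv(0.105, flows):,.0f}")
print(f"IRR: {irr(flows):.1%}")
print("payback year:", int(np.argmax(np.cumsum(flows) >= 0)))
```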

Keywords: Mechanical pressing, Net Present Value, Oil extraction, Project feasibility, Solvent extraction.

583 Performance Analysis of Reconstruction Algorithms in Diffuse Optical Tomography

Authors: K. Uma Maheswari, S. Sathiyamoorthy, G. Lakshmi

Abstract:

Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for earlier detection of carcinoma cells in brain tissue. It is a form of optical tomography which produces a reconstructed image of human soft tissue using near-infrared light. It comprises two steps, called the forward model and the inverse model. The forward model describes light propagation in a biological medium. The inverse model uses the scattered light to recover the optical parameters of human tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so accurate analysis with this modality is very complicated. To overcome this problem, optical properties of the soft tissue such as the absorption coefficient, scattering coefficient and optical flux are processed by the standard regularization technique called Levenberg-Marquardt regularization. The reconstruction algorithms Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) are used to reconstruct the image of human soft tissue for tumour detection. Among these algorithms, the Split Bregman method provides better performance than the GPSR algorithm. Parameters such as the signal to noise ratio (SNR), contrast to noise ratio (CNR), relative error (RE) and CPU time for reconstructing images are analyzed to assess performance.
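
The regularized inverse step can be sketched with the basic Levenberg-Marquardt update x ← x + (JᵀJ + λI)⁻¹Jᵀr, shown here on a toy exponential fit standing in for the optical-parameter recovery:

```python
import numpy as np

def residuals(x, t, y):
    return y - x[0] * np.exp(-x[1] * t)

def jacobian(x, t):
    J = np.empty((t.size, 2))
    J[:, 0] = np.exp(-x[1] * t)               # d(model)/d(amplitude)
    J[:, 1] = -x[0] * t * np.exp(-x[1] * t)   # d(model)/d(decay rate)
    return J

t = np.linspace(0, 5, 50)
y = 2.0 * np.exp(-1.3 * t) + np.random.default_rng(0).normal(0, 0.01, t.size)

x, lam = np.array([1.0, 0.5]), 1e-2           # initial guess and damping
for _ in range(50):
    r = residuals(x, t, y)
    J = jacobian(x, t)
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    if np.sum(residuals(x + step, t, y) ** 2) < np.sum(r ** 2):
        x, lam = x + step, lam * 0.5           # accept step, trust model more
    else:
        lam *= 2.0                             # reject step, damp harder
print("recovered [amplitude, decay]:", np.round(x, 3))
```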

Keywords: Diffuse optical tomography, ill-posedness, Levenberg-Marquardt method, Split Bregman, gradient projection for sparse reconstruction.

582 Estimation of Asphalt Pavement Surfaces Using Image Analysis Technique

Authors: Mohammad A. Khasawneh

Abstract:

Asphalt concrete pavements gradually lose their skid resistance, causing safety problems, especially under wet conditions and at high driving speeds. In order to replicate the actual field polishing and wearing process of asphalt pavement surfaces in a laboratory setting, several laboratory-scale accelerated polishing devices have been developed by different agencies. To mimic the actual process, friction and texture measuring devices are needed to quantify surface deterioration at different polishing intervals that reflect different stages of the pavement life. The test could still be considered lengthy and, to some extent, labor-intensive. Therefore, there is a need for another method that can assist in investigating bituminous pavement surface characteristics in a practical and time-efficient test procedure.

The purpose of this paper is to utilize a well-developed image analysis technique to characterize asphalt pavement surfaces without the need to use conventional friction and texture measuring devices in an attempt to shorten and simplify the polishing procedure in the lab.

Promising findings showed the possibility of using image analysis in lieu of the labor-intensive and inherently variable friction and texture measurements. It was found that the exposed aggregate surface area of asphalt specimens made from limestone and gravel aggregates provided solid evidence of the validity of this method for describing asphalt pavement surfaces. Image analysis results correlated well with the British Pendulum Number (BPN), Polish Value (PV) and Mean Texture Depth (MTD) values.
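
A sketch of the kind of measurement involved, assuming a simple global threshold separates bright exposed aggregate from the darker binder matrix (the study's actual image-analysis pipeline is not specified here):

```python
import numpy as np

# Threshold a grayscale surface image so bright pixels count as exposed
# aggregate, then report the exposed area fraction. The image is synthetic.
rng = np.random.default_rng(0)
surface = rng.normal(90, 15, (512, 512))            # darker binder matrix
yy, xx = np.mgrid[0:512, 0:512]
for cy, cx, rad in rng.integers(20, 490, (40, 3)):  # bright aggregate patches
    surface[(yy - cy) ** 2 + (xx - cx) ** 2 < (rad % 25 + 5) ** 2] = 170

threshold = surface.mean() + surface.std()          # simple global threshold
area_fraction = (surface > threshold).mean()
print(f"exposed aggregate area fraction: {area_fraction:.1%}")
```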

Keywords: Friction, Image Analysis, Polishing, Statistical Analysis, Texture.

581 Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark

Authors: B. Elshafei, X. Mao

Abstract:

The demand for renewable energy is increasing significantly, and major investments are flowing into the wind power generation industry as a leading source of clean energy. The wind energy sector is driven by the prediction of wind speed, which by the nature of wind is highly stochastic and widely random. This study employs deep multi-fidelity Gaussian process regression to predict wind speeds over medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark were provided by the Technical University of Denmark and represent the wind speed across the study area for the period between December 2015 and March 2016. The study investigates the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT) and of engaging the vector components of wind speed to increase the number of input data layers for data fusion using deep multi-fidelity Gaussian process regression (GPR). The outcomes were compared using the root mean square error (RMSE), and the results demonstrated a significant increase in prediction accuracy: strategies that use the vector components of wind speed as additional predictors produce more accurate predictions than strategies that ignore them, reflecting the importance of including all sub-data and of pre-processing signals in wind speed forecasting models.
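
The regression step can be sketched with a single-fidelity Gaussian process standing in for the deep multi-fidelity GPR, with the vector components (u, v) supplied as extra inputs; the wind data below are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)                       # days
speed = 8 + 2 * np.sin(2 * np.pi * t / 5) + rng.normal(0, 0.3, t.size)
u = speed * np.cos(0.3 * t)                       # synthetic vector components
v = speed * np.sin(0.3 * t)

X = np.column_stack([t, u, v])                    # time plus vector components
train = t < 8                                     # extrapolate the last 2 days

kernel = 1.0 * RBF(length_scale=[1.0, 5.0, 5.0]) + WhiteKernel(0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(X[train], speed[train])

pred, std = gpr.predict(X[~train], return_std=True)
rmse = np.sqrt(np.mean((pred - speed[~train]) ** 2))
print(f"extrapolation RMSE: {rmse:.2f} m/s, mean predictive std: {std.mean():.2f}")
```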

Keywords: Data fusion, Gaussian process regression, signal denoise, temporal extrapolation.

580 Conceptualizing Thoughtful Intelligence for Sustainable Decision Making

Authors: Musarrat Jabeen

Abstract:

Thoughtful intelligence offers a sustainable basis for enhancing the influence of decision-makers. Thoughtful intelligence implies the understanding to realize the impact of one’s thoughts, words and actions on the survival, dignity and development of individuals, groups and nations. Thoughtful intelligence has received minimal consideration in the area of decision support systems, which have instead aimed to evaluate the quantity of knowledge and its viability. This pattern has obscured the contribution of thoughtful intelligence required for sustainable decision making. Given this concern, the paper concentrates on the question: how can a Thoughtful Decision Support System (TDSS) be modeled? The aim of this paper is to elaborate the concept of thoughtful intelligence and to propose a decision support system based on it. Thoughtful intelligence includes three dynamic competencies: i) realization of the long-term impacts of decisions that are made in a specific time and space, ii) a strong sense of taking action, and iii) intense interconnectivity with people and nature; and seven associated competencies: righteousness, purposefulness, understanding, contemplation, sincerity, mindfulness, and nurturing. The study utilizes two methods. Focus group discussions were used to survey prevailing decision support systems; 70% of the results identified six decision support systems and confirmed the absence of thoughtful intelligence among them with regard to sustainable decision making. A Delphi study focused on defining thoughtful intelligence for the TDSS model; 65% of the results helped to conceptualize (define and describe) thoughtful intelligence. The TDSS is offered here as an addition to the decision-making literature, with top leaders as its intended clients.

Keywords: Thoughtful intelligence, Sustainable decision making, Thoughtful decision support system.

579 An Analysis of Collapse Mechanism of Thin- Walled Circular Tubes Subjected to Bending

Authors: Somya Poonaya, Chawalit Thinvongpituk, Umphisak Teeboonma

Abstract:

Circular tubes have been widely used as structural members in engineering applications; therefore, their collapse behavior has been studied for many decades, focusing on their energy absorption characteristics. In order to predict the collapse behavior of members, one could rely on finite element codes or experiments. These tools are helpful and highly accurate, but costly and time-consuming to run. Therefore, an approximate model of the tube collapse mechanism is an alternative for the early design stage. This paper aims to develop a closed-form solution for a thin-walled circular tube subjected to bending. It extends the model of Elchalakani et al. (Int. J. Mech. Sci. 2002; 44:1117-1143) to include the rate of energy dissipation of the rolling hinge in the circumferential direction. The 3-D geometrical collapse mechanism was analyzed by adding oblique hinge lines along the longitudinal tube within the length of the plastically deforming zone. The model is based on the principle of energy rate conservation; therefore, the rates of internal energy dissipation were calculated for each hinge line, defined in terms of the velocity field. Inextensional deformation and perfectly plastic material behavior were assumed in the derivation of the deformation energy rate. The analytical results were compared with experimental results obtained from tests on a number of tubes with various D/t ratios. Good agreement between analysis and experiment was achieved.

Keywords: Bending, Circular tube, Energy, Mechanism.

578 Modified Energy and Link Failure Recovery Routing Algorithm for Wireless Sensor Network

Authors: M. Jayekumar, V. Nagarajan

Abstract:

Wireless sensor networks find roles in environmental monitoring, industrial applications, surveillance, health monitoring and other supervisory applications. Sensing devices form the basic operational units of the network and are self-battery-powered with a limited lifetime. A sensor node spends its limited energy on transmission, reception, routing and sensing of information, and frequent energy utilization for these processes leads to degradation of the network lifetime. To enhance energy efficiency and network lifetime, we propose a modified energy optimization and post-failure node recovery method, the Energy-Link Failure Recovery Routing (E-LFRR) algorithm. In our E-LFRR algorithm, two phases, namely the Monitored Transmission phase and the Replaced Transmission phase, are devised to combat worst-case link failure conditions. In the Monitored Transmission phase, the actuator node monitors and identifies suitable nodes for shortest-path transmission. The Replaced Transmission phase dispatches the energy-draining node from the active link at an early stage and replaces it with a new node that has sufficient energy. Simulation results illustrate that this combined methodology reduces overhead, energy consumption and delay, and maintains a considerable number of alive nodes, thereby enhancing network performance.
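
An illustrative sketch of the two E-LFRR phases, under a heavily simplified topology and an assumed energy threshold (the paper's actual protocol details differ):

```python
# Monitored Transmission: the actuator watches node energies on the active path.
# Replaced Transmission: a draining node is swapped for a high-energy spare
# before it fails. Threshold and topology model are simplifying assumptions.
ENERGY_THRESHOLD = 0.2   # fraction of initial energy triggering replacement

class Node:
    def __init__(self, nid, energy=1.0):
        self.nid, self.energy = nid, energy

def monitored_transmission(path):
    """Identify nodes on the active path draining below the threshold."""
    return [n for n in path if n.energy < ENERGY_THRESHOLD]

def replaced_transmission(path, spares):
    """Dispatch draining nodes early and splice in the best-energy spares."""
    for i, node in enumerate(path):
        if node.energy < ENERGY_THRESHOLD and spares:
            spares.sort(key=lambda n: n.energy, reverse=True)
            path[i] = spares.pop(0)
    return path

path = [Node("A", 0.9), Node("B", 0.15), Node("C", 0.6)]
spares = [Node("S1", 0.8), Node("S2", 0.95)]
print("draining:", [n.nid for n in monitored_transmission(path)])
path = replaced_transmission(path, spares)
print("active path:", [(n.nid, n.energy) for n in path])
```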

Keywords: Actuator node, energy efficient routing, energy hole, link failure recovery, link utilization, wireless sensor network.

577 Evaluating Emission Reduction Due to a Proposed Light Rail Service: A Micro-Level Analysis

Authors: Saeid Eshghi, Neeraj Saxena, Abdulmajeed Alsultan

Abstract:

Carbon dioxide (CO2), alongside other gases emitted into the atmosphere, causes a greenhouse effect, resulting in an increase in the average temperature of the planet. Transportation vehicles are among the main contributors to CO2 emissions, and stationary vehicles with idling engines produce more emissions than moving ones. Intersections with traffic lights that force vehicles to remain stationary for a period of time produce more CO2 pollution than other parts of the road. This paper focuses on analyzing the CO2 produced by the traffic flow at the Anzac Parade Road - Barker Street intersection in Sydney, Australia, before and after the implementation of light rail transport (LRT). The data were gathered during the construction phase of the LRT by collecting the number of vehicles on each path of the intersection for 15 minutes during the evening rush hour over one week (6-7 pm, July 04-31, 2018), then multiplying by 4 to obtain the hourly flow of vehicles. For analyzing the data, the microscopic simulation software “VISSIM” was used. Through the analysis, the traffic flow was processed in three stages: before implementation of the light rail, during the construction phase, and after implementation. Finally, the traffic results were input into another software package, “EnViVer”, to calculate the amount of CO2 produced per hour. The results showed that after the implementation of the light rail, CO2 will drop by a minimum of 13%. This finding provides evidence that light rail is a sustainable mode of transport.

Keywords: Carbon dioxide, emission modeling, light rail, microscopic model, traffic flow.

576 Dynamic Threshold Adjustment Approach For Neural Networks

Authors: Hamza A. Ali, Waleed A. J. Rasheed

Abstract:

The use of neural networks for recognition applications is generally constrained by the inflexibility of their parameters after the training phase: no adaptation is accommodated for input variations that influence the network parameters. In this work, attempts were made to design a neural network that includes an additional mechanism to adjust the threshold values according to input pattern variations. The new approach is based on splitting the whole network into two subnets: a main traditional net and a supportive net. The first deals with the required output for trained patterns with predefined settings, while the second generates output dynamically, with the capability to tune the threshold values for any newly applied input. Two levels of supportive net were studied. One implements an extended additional layer with an adjustable neuronal threshold-setting mechanism, while the second implements an auxiliary net with a traditional architecture that performs dynamic adjustment of the threshold values of the main net, which is constructed in a dual-layer architecture. Experimental results and analysis of the proposed designs were quite satisfactory. The supportive-layer approach achieved over 90% recognition rate, while the multiple-network technique shows a more effective and acceptable level of recognition. However, this is achieved at the price of network complexity and computation time. Recognition generalization may be further improved by combining all the innate structures with intelligent capabilities through further advanced learning phases.

Keywords: Classification, Recognition, Neural Networks, Pattern Recognition, Generalization.

575 Authentication Protocol for Wireless Sensor Networks

Authors: Sunil Gupta, Harsh Kumar Verma, AL Sangal

Abstract:

Wireless sensor networks can be used to measure and monitor many challenging problems and typically involve monitoring, tracking and controlling in areas such as battlefield monitoring, object tracking, habitat monitoring and home sentry systems. However, wireless sensor networks pose unique security challenges, including forgery of sensor data, eavesdropping, denial of service attacks, and the physical compromise of sensor nodes. A node in a sensor network may vanish due to power exhaustion or malicious attacks. To extend the life span of the sensor network, new node deployment is needed. In military scenarios, an intruder may directly deploy malicious nodes or manipulate existing nodes to set up malicious new nodes through many kinds of attacks. To prevent malicious nodes from joining the sensor network, security is required in the design of sensor network protocols. In this paper, we propose a security framework that provides a complete security solution against the known attacks in wireless sensor networks. Our framework accomplishes authentication of new nodes together with recognition of malicious nodes. When deployed as a framework, a high degree of security is achievable compared with conventional sensor network security solutions. The proposed framework can protect against most of the notorious attacks in sensor networks and attain better computation and communication performance. It differs from conventional authentication methods based on node identity alone in that it includes both the identity of nodes and a node security time stamp in the authentication procedure. Hence, the security protocols not only check the identity of each node but also distinguish between new nodes and old nodes.
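
The idea of binding node identity to a security time stamp with ECC can be sketched as a generic ECDSA sign/verify exchange (an illustration, not the paper's exact protocol): a new node signs (id || timestamp), and the verifier checks both freshness and the signature.

```python
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

MAX_AGE_S = 30  # freshness window for the time stamp (assumed)

node_key = ec.generate_private_key(ec.SECP256R1())   # provisioned pre-deployment
trusted_pub = node_key.public_key()                  # known to the verifier

def join_request(node_id):
    """New node signs its identity bound to the current time stamp."""
    msg = f"{node_id}|{int(time.time())}".encode()
    return msg, node_key.sign(msg, ec.ECDSA(hashes.SHA256()))

def verify_join(msg, signature):
    """Verifier checks time stamp freshness, then the ECDSA signature."""
    node_id, ts = msg.decode().rsplit("|", 1)
    if abs(time.time() - int(ts)) > MAX_AGE_S:       # reject stale/replayed stamps
        return False
    try:
        trusted_pub.verify(signature, msg, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

msg, sig = join_request("node-17")
print("new node accepted:", verify_join(msg, sig))
```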

Keywords: Authentication, Key management, Wireless Sensor Network, Elliptic Curve Cryptography (ECC).
