Search results for: time lag
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17790

14550 Optimization and Automation of Functional Testing with White-Box Testing Method

Authors: Reyhaneh Soltanshah, Hamid R. Zarandi

Abstract:

Software testing is necessary for industries that rely on computer systems, even though it consumes time and money. In embedded-system software testing, complete knowledge of the embedded system architecture is necessary to avoid significant costs and damage, yet software tests increase the price of the final product. The aim of this article is to provide a method that reduces the time and cost of tests based on program structure. First, a complete review of eleven white-box test methods based on the 2015 and 2021 versions of ISO/IEC/IEEE 29119 was carried out. The proposed algorithm is designed using both versions of the 29119 standard, and white-box testing methods that are expensive or provide little coverage were removed. White-box test methods were applied to each function according to the 29119 standard, and the proposed algorithm was then implemented on the same functions. To speed up the implementation of the proposed method, the Unity framework was used with some modifications; Unity is suitable for embedded software testing because it is open source and able to implement white-box test methods. The test items obtained from the two approaches were evaluated using a mathematical ratio, which for the software examined reduced the test cost by between 50% and 80% while reaching the desired result with the minimum number of test items.

Keywords: embedded software, cost reduction, software testing, white-box testing

Procedia PDF Downloads 27
14549 PET Image Resolution Enhancement

Authors: Krzysztof Malczewski

Abstract:

PET is a widely applied scanning procedure in medical imaging research. It delivers measurements of functioning in distinct areas of the human brain while the patient is comfortable, conscious and alert. This article presents a new compressed-sensing-based super-resolution algorithm for improving image resolution in clinical Positron Emission Tomography (PET) scanners. Motion artifacts are a well-known side effect in PET studies: the images are acquired over a limited period of time, and since patients cannot hold their breath during data gathering, spatial blurring and motion artefacts are the usual result and may lead to wrong diagnoses. It is shown that the presented approach improves PET spatial resolution in cases when Compressed Sensing (CS) sequences are used. CS aims at reconstructing signals and images from significantly fewer measurements than were traditionally thought necessary. The application of CS to PET has the potential for significant scan-time reductions, with visible benefits for patients and health care economics. The goal of this study is to combine a super-resolution image enhancement algorithm with the CS framework to achieve high-resolution PET output. Both methods emphasize maximizing image sparsity in a known sparse transform domain while minimizing the data-fidelity error.
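
As a rough illustration of the reconstruction principle described above (not the authors' algorithm), the sketch below applies the iterative soft-thresholding algorithm (ISTA) to recover a sparse signal from undersampled linear measurements; the measurement matrix, sparsity level and step size are all assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: sparse signal x, undersampled measurements y = A @ x
n, m, k = 256, 96, 8                      # signal length, measurements, non-zeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA: minimise ||A x - y||^2 + lam * ||x||_1  (data fidelity + sparsity)
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2    # step size from the spectral norm of A
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)              # gradient of the fidelity term
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))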

Keywords: PET, super-resolution, image reconstruction, pattern recognition

Procedia PDF Downloads 354
14548 UV-Cured Coatings Based on Acrylated Epoxidized Soybean Oil and Epoxy Carboxylate

Authors: Alaaddin Cerit, Suheyla Kocaman, Ulku Soydal

Abstract:

During the past two decades, photoinitiated polymerization has attracted great scientific and industrial interest. The wide recognition of UV treatment in the polymer industry results not only from its many practical applications but also from its suitability for low-cost processes. Unlike most thermal curing systems, radiation-curable systems can polymerize at room temperature without additional heat, and curing is completed in a very short time. The advantage of cationic UV technology is that post-cure can continue in the ‘dark’ after irradiation. In this study, bio-based acrylated epoxidized soybean oil (AESO) was cured with UV radiation using the radical photoinitiator Irgacure 184. Triarylsulphonium hexafluoroantimonate was used as the cationic photoinitiator for curing of 3,4-epoxycyclohexylmethyl-3,4-epoxycyclohexanecarboxylate. The effects of curing time and the amount of initiator on the degree of curing and the thermal properties were investigated. The thermal properties of the coating were analyzed after crosslinking by UV irradiation, and the level of crosslinking in the coating was evaluated by FTIR analysis. Cationic UV-cured coatings demonstrated excellent adhesion and corrosion-resistance properties. Therefore, our study holds great potential for simple, low-cost applications.

Keywords: acrylated epoxidized soybean oil, epoxy carboxylate, thermal properties, UV-curing

Procedia PDF Downloads 246
14547 Water-in-Diesel Fuel Nanoemulsions Prepared by Modified Low Energy: Emulsion Drop Size and Stability, Physical Properties, and Emission Characteristics

Authors: M. R. Noor El-Din, Marwa R. Mishrif, R. E. Morsi, E. A. El-Sharaky, M. E. Haseeb, Rania T. M. Ghanem

Abstract:

This paper studies the physical and rheological behaviour of water-in-diesel fuel nanoemulsions prepared by a modified low-energy method. Twenty water-in-diesel fuel nanoemulsions were prepared using mixed nonionic surfactants of sorbitan monooleate and polyoxyethylene sorbitan trioleate (MTS) at a Hydrophilic-Lipophilic Balance (HLB) value of 10 and a working temperature of 20°C. The physical properties of the prepared nanoemulsions, such as kinematic viscosity, density, and calorific value, were studied, and the nanoemulsion systems were subjected to rheological evaluation. The effect of water loading (5, 6, 7, 8, 9 and 10 wt.%) on rheology was assessed at temperatures from 20 to 60°C, in 10°C intervals, at ageing times of 0, 1, 2 and 3 months. Results show that all of the nanoemulsions exhibited Newtonian flow behaviour at low shear rates, in the range of 132 to 191 1/s, followed by a shear-thinning region with a yield value (non-Newtonian behaviour) at high shear rates, for all water ratios (5 to 10 wt.%) and at all test temperatures (20 to 60°C) for ageing times of up to 3 months. Also, the viscosity-temperature relationship of all nanoemulsions fitted the Arrhenius equation well, with high correlation coefficients that confirm their Newtonian behaviour.
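
To illustrate the Arrhenius-type viscosity-temperature fit mentioned above, the sketch below regresses ln(viscosity) against 1/T for hypothetical data; the viscosity values are placeholders, not the paper's measurements.

import numpy as np

# Hypothetical viscosity data (mPa.s) at 20-60 °C for one nanoemulsion
T_C = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
mu = np.array([18.0, 13.5, 10.4, 8.3, 6.8])          # placeholder values

T_K = T_C + 273.15
R = 8.314                                            # J/(mol.K)

# Arrhenius form: mu = A * exp(Ea / (R * T))  ->  ln(mu) = ln(A) + (Ea/R) * (1/T)
slope, intercept = np.polyfit(1.0 / T_K, np.log(mu), 1)
Ea = slope * R                                       # flow activation energy, J/mol
A = np.exp(intercept)

pred = A * np.exp(Ea / (R * T_K))
r = np.corrcoef(np.log(mu), np.log(pred))[0, 1]
print(f"Ea = {Ea/1000:.1f} kJ/mol, A = {A:.3g} mPa.s, R^2 = {r**2:.4f}")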

Keywords: alternative fuel, nanoemulsion, surfactant, diesel fuel

Procedia PDF Downloads 298
14546 Optimization of Coefficients of Fractional Order Proportional-Integrator-Derivative Controller on Permanent Magnet Synchronous Motors Using Particle Swarm Optimization

Authors: Ali Motalebi Saraji, Reza Zarei Lamuki

Abstract:

Speed control and behavior improvement of permanent magnet synchronous motors (PMSM), which offer reliable performance, low loss, and high power density, especially in industrial drives, are of great importance for researchers. This paper therefore presents the optimization of the coefficients of a fractional-order proportional-integral-derivative (PID) controller using the Particle Swarm Optimization (PSO) algorithm, in order to improve the behavior of the PMSM speed control loop. The improvement is simulated in MATLAB, and the proposed PSO-optimized fractional-order controller is compared with a fractional-order controller tuned by a genetic algorithm and with a full-order controller tuned by a classical optimization method. Simulation results show that the proposed controller outperforms the two other controllers in terms of rise time, overshoot, and settling time.
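
A minimal particle swarm optimisation loop of the kind used for controller tuning is sketched below. The cost function here is a simple analytic surrogate; in the paper's setting it would be a time-domain index (rise time, overshoot, settling time) computed from a PMSM speed-loop simulation, and all PSO hyper-parameters shown are assumptions.

import numpy as np

rng = np.random.default_rng(1)

def cost(params):
    # Surrogate cost; in practice this would simulate the speed loop with the
    # fractional-order PID gains/orders in `params` and return e.g. an ITAE value.
    return np.sum((params - np.array([1.2, 0.8, 0.05, 0.9, 1.1])) ** 2)

n_particles, dim, iters = 30, 5, 100          # dims stand for [Kp, Ki, Kd, lambda, mu]
lb, ub = 0.0, 2.0
w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social weights

x = rng.uniform(lb, ub, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lb, ub)
    c = np.array([cost(p) for p in x])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], c[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best parameters:", np.round(gbest, 3), "cost:", cost(gbest))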

Keywords: speed control loop of permanent magnet synchronous motor, fractional and full order proportional-integrator-derivative controller, coefficients optimization, particle swarm optimization, improvement of behavior

Procedia PDF Downloads 126
14545 Mathematical Modeling of the Effect of Pretreatment on the Drying Kinetics, Energy Requirement and Physico-Functional Properties of Yam (Dioscorea Rotundata) and Cocoyam (Colocasia Esculenta)

Authors: Felix U. Asoiro, Kingsley O. Anyichie, Meshack I. Simeon, Chinenye E. Azuka

Abstract:

This work studied the effects of microwave drying (450 W) and hot-air oven drying on the drying kinetics and physico-functional properties of yam and cocoyam species. The yams and cocoyams were cut into chips of 3 mm, 5 mm, 7 mm, 9 mm, and 11 mm thickness. The drying characteristics of the yam and cocoyam chips were investigated under microwave drying and hot-air oven temperatures (50°C – 90°C). Drying method, temperature, and thickness had significant effects on the drying characteristics and physico-functional properties of yam and cocoyam. The results showed that an increase in temperature increased the drying time, and that the microwave drying method took less time to dry the samples than the hot-air oven drying method. The iodine affinity of starch was higher for yam than for cocoyam, and higher for microwave-dried samples than for hot-air oven-dried samples. The results of the analysis would be useful in modeling the drying behavior of yams and cocoyams under different drying methods. They could also be useful in improving the shelf life of yams and cocoyams, as well as in the design of efficient systems for drying, handling, storage, packaging, processing, and transportation of yams and cocoyams.
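
The abstract does not state which drying model was fitted; as one common choice for thin-layer drying data, the sketch below fits the Page model MR = exp(-k t^n) to hypothetical moisture-ratio values (placeholders, not the study's data).

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical moisture-ratio data for one chip thickness (placeholder values)
t = np.array([0, 10, 20, 40, 60, 90, 120], dtype=float)   # drying time, min
MR = np.array([1.00, 0.78, 0.60, 0.37, 0.23, 0.12, 0.06])

def page(t, k, n):
    # Page thin-layer drying model: MR = exp(-k * t**n)
    return np.exp(-k * t ** n)

(k, n), _ = curve_fit(page, t, MR, p0=[0.01, 1.0])
ss_res = np.sum((MR - page(t, k, n)) ** 2)
ss_tot = np.sum((MR - MR.mean()) ** 2)
print(f"k = {k:.4f}, n = {n:.3f}, R^2 = {1 - ss_res/ss_tot:.4f}")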

Keywords: cocoyam, drying, microwave, modeling, energy consumption, iodine affinity, drying rate

Procedia PDF Downloads 90
14544 Towards Consensus: Mapping Humanitarian-Development Integration Concepts and Their Interrelationship over Time

Authors: Matthew J. B. Wilson

Abstract:

Disaster Risk Reduction relies heavily on the effective cooperation of humanitarian and development actors, particularly in the wake of a disaster, to implement lasting recovery measures that better protect communities from disasters to come. This fits within a broader discussion around integrating humanitarian and development work stretching back to the 1980s. Over time, a number of key concepts have been put forward to define these goals and this relationship, including Linking Relief, Rehabilitation, and Development (LRRD), Early Recovery (ER), ‘Build Back Better’ (BBB), and the most recent ‘Humanitarian-Development-Peace Nexus’ or ‘Triple Nexus’ (HDPN). While the discussion has evolved greatly over time, from a continuum to a more integrative, synergistic relationship, there remains a lack of consensus around how to describe it, and as such, the gap has yet to be effectively closed. The objective of this research was twofold: first, to map the four identified concepts (LRRD, ER, BBB and HDPN) as used in the literature since 1995, to understand overall trends in how this relationship is discussed; second, to map articles that reference a combination of these concepts, to understand their interrelationship. A scoping review was conducted for each identified concept. Results were gathered from Google Scholar by first inputting specific Boolean search phrases for each concept, as it related specifically to disasters, for each year since 1995, to identify the total number of articles discussing each concept over time. A second search was then done by pairing concepts within a Boolean search phrase and entering the results into a matrix to establish how many articles referred to more than one of the concepts; this latter search was limited to articles published after 2017 to account for the more recent emergence of HDPN. It was found that ER and particularly BBB are referred to much more widely than LRRD and HDPN. ER increased particularly in the mid-2000s, coinciding with the formation of the ER cluster, while BBB, after emerging gradually in the mid-2000s owing to its usage in the wake of the Boxing Day Tsunami, increased significantly from about 2015 after its prominent inclusion in the Sendai Framework. HDPN has only started to increase in the last 4-5 years. Regarding the relationship between concepts, it was found that the vast majority of articles referred to the concepts in isolation from each other. The strongest relationship was between LRRD and HDPN (8% of articles referring to both), whilst ER-BBB and ER-HDPN were both about 3%, LRRD-ER 2%, and BBB-HDPN and BBB-LRRD 1% each. This research identified a fundamental issue around the lack of consensus on, and even awareness of, the different approaches referred to within the academic literature on integrating humanitarian and development work. More research into synthesizing and learning from a range of approaches could help to close this gap.

Keywords: build back better, disaster risk reduction, early recovery, linking relief rehabilitation and development, humanitarian development integration, humanitarian-development (peace) nexus, recovery, triple nexus

Procedia PDF Downloads 67
14543 In vitro Effects of Amygdalin on the Functional Competence of Rabbit Spermatozoa

Authors: Marek Halenár, Eva Tvrdá, Tomáš Slanina, Ľubomír Ondruška, Eduard Kolesár, Peter Massányi, Adriana Kolesárová

Abstract:

The present in vitro study was designed to reveal whether amygdalin (AMG) is able to cause changes to the motility, viability and mitochondrial activity of rabbit spermatozoa. New Zealand White rabbits (n = 10) aged four months were used in the study. Semen samples were collected from each animal and used for in vitro incubation. The samples were divided into five equal parts and diluted with saline supplemented with 0, 0.5, 1, 2.5 and 5 mg/mL AMG. At 0 h, 3 h and 5 h, spermatozoa motion parameters were assessed using the SpermVision™ computer-aided sperm analysis (CASA) system, mitochondrial activity was examined with the metabolic activity (MTT) assay, and the eosin-nigrosin staining technique was used to evaluate the viability of the rabbit spermatozoa. All AMG concentrations exhibited stimulating effects on spermatozoa activity, as shown by a significant preservation of motility (P<0.05 with respect to 0.5 mg/mL and 1 mg/mL AMG; time 5 h) and mitochondrial activity (P<0.05 in the case of 0.5 mg/mL AMG; P<0.01 in the case of 1 mg/mL AMG; P<0.001 with respect to 2.5 mg/mL and 5 mg/mL AMG; time 5 h). None of the supplemented AMG doses had any significant impact on spermatozoa viability. In conclusion, the data revealed that short-term co-incubation of spermatozoa with AMG may result in a higher preservation of sperm structural integrity and functional activity.

Keywords: amygdalin, CASA, mitochondrial activity, motility, rabbits, spermatozoa, viability

Procedia PDF Downloads 317
14542 The Effect of Artificial Intelligence on Urbanism, Architecture and Environmental Conditions

Authors: Abanoub Rady Shaker Saleb

Abstract:

Nowadays, design and architecture are being affected and changed by rapid advancements in technology, economics, politics, society and culture. Architecture has been transforming with the latest developments since the inclusion of computers in design, and the integration of design into the computational environment has revolutionized architecture and opened new perspectives. The history of architecture shows the various technological developments and changes through which architecture has transformed over time. Therefore, analysing the integration between technology and the history of the architectural process makes it possible to build a consensus on how architecture is to proceed. In this study, each period arising from the integration of technology into architecture is addressed within its historical process. At the same time, changes in architecture brought about by technology are identified as important milestones, and predictions regarding the future of architecture are made. Developments and changes in technology, and the use of technology in architecture over the years, are analyzed comparatively in charts and graphs. The historical process of architecture and its transformation via technology are supported with a detailed literature review and consolidated by examining the focal points of 20th-century architecture under the headings of parametric design, genetic architecture, simulation, and biomimicry. It is concluded from this historical research that developments in architecture cannot keep up with advancements in technology, that recent developments in technology overshadow architecture, and that technology even decides the direction of architecture. As a result, a scenario is presented regarding the reach of technology in the future of architecture and the role of the architect.

Keywords: design and development the information technology architecture, enterprise architecture, enterprise architecture design result, TOGAF architecture development method (ADM)

Procedia PDF Downloads 48
14541 Speed Breaker/Pothole Detection Using Hidden Markov Models: A Deep Learning Approach

Authors: Surajit Chakrabarty, Piyush Chauhan, Subhasis Panda, Sujoy Bhattacharya

Abstract:

A large proportion of roads in India are not maintained as per laid-down public safety guidelines, leading to loss of directional control and fatal accidents. We propose a technique to detect speed breakers and potholes using mobile sensor data captured from multiple vehicles and to provide a profile of the road. This would, in turn, help in monitoring roads and revolutionize digital maps. Incorporating randomness in the model formulation for the detection of speed breakers and potholes is crucial due to the substantial heterogeneity observed in data obtained through a mobile application from multiple vehicles driven by different drivers. This is accomplished with Hidden Markov Models, whose hidden state sequence is found for each time step given the observation sequence and then fed as input to an LSTM network with peephole connections. Precision scores of 0.96 and 0.63 are obtained for classifying bumps and potholes, respectively, a significant improvement over machine-learning-based models. Bumps and potholes are further visualized by converting the time series to images using Markov Transition Fields, where a significant demarcation between bumps and potholes is observed.
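
A minimal sketch of the hidden-state decoding step described above is shown below using the hmmlearn library; the three-state assumption, the synthetic accelerometer stream and the injected events are illustrative only, and the decoded state sequence is what would subsequently be passed to the LSTM classifier.

import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(2)

# Hypothetical smartphone accelerometer stream (vertical acceleration, m/s^2):
# smooth road with two injected bump/pothole-like bursts.
z = rng.normal(0.0, 0.2, 600)
z[200:210] += 3.0
z[420:432] -= 2.5
X = z.reshape(-1, 1)                      # hmmlearn expects (n_samples, n_features)

# Three hidden states assumed: smooth road, bump, pothole.
model = GaussianHMM(n_components=3, covariance_type="diag",
                    n_iter=100, random_state=0)
model.fit(X)
states = model.predict(X)                 # Viterbi state sequence per time step

print("state means:", model.means_.ravel())
print("state counts:", np.bincount(states, minlength=3))
# `states` (one label per time step) is the sequence that would feed the LSTM.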

Keywords: deep learning, hidden Markov model, pothole, speed breaker

Procedia PDF Downloads 130
14540 Increased Energy Efficiency and Improved Product Quality in Processing of Lithium Bearing Ores by Applying Fluidized-Bed Calcination Systems

Authors: Edgar Gasafi, Robert Pardemann, Linus Perander

Abstract:

For the production of lithium carbonate or hydroxide from lithium-bearing ores, a thermal activation (calcination/decrepitation) is required for the phase transition in the mineral, to enable acid or soda leaching in the downstream hydrometallurgical section. In this paper, traditional processing in the lithium industry is reviewed, and opportunities to reduce energy consumption and improve product quality and recovery rate are discussed. The conventional process approach is still based on rotary kiln calcination, a technology in use since the early days of lithium ore processing, albeit not significantly developed further since. A newer technology, at least for the lithium industry, is fluidized-bed calcination. Decrepitation of lithium ore was investigated at Outotec’s Frankfurt Research Centre. Focusing on fluidized-bed technology, a study of the major process parameters (temperature and residence time) was performed at laboratory and larger bench scale, aiming for optimal product quality for subsequent processing. The technical feasibility was confirmed for optimal process conditions at pilot scale (400 kg/h feed input), providing the basis for industrial process design. Based on the experimental results, a comprehensive Aspen Plus flowsheet simulation was developed to quantify the mass and energy flows for the rotary kiln and fluidized-bed systems. Results show a significant reduction in energy consumption and improved process performance in terms of temperature profile, product quality and plant footprint. The major conclusion is that a substantial reduction of energy consumption can be achieved in processing lithium-bearing ores by using fluidized-bed-based systems. At the same time, and unlike the rotary kiln process, accurate temperature and residence-time control is ensured in fluidized-bed systems, leading to a homogeneous temperature profile in the reactor, which prevents overheating and sintering of the solids and results in uniform product quality.

Keywords: calcination, decrepitation, fluidized bed, lithium, spodumene

Procedia PDF Downloads 215
14539 The Effect of Acute Rejection and Delayed Graft Function on Renal Transplant Fibrosis in Live Donor Renal Transplantation

Authors: Wisam Ismail, Sarah Hosgood, Michael Nicholson

Abstract:

The research hypothesis is that early post-transplant allograft fibrosis will be linked to donor factors and that acute rejection and/or delayed graft function in the recipient will be independent risk factors for the development of fibrosis. The aim is to explore whether acute rejection and/or delayed graft function affect renal transplant fibrosis within the first year after live-donor kidney transplantation performed between 1998 and 2009. Methods: The study was designed around five time points for renal transplant biopsies [0 (pre-transplant), 1 month, 3 months, 6 months and 12 months] for 300 live-donor renal transplant patients over a 12-year period between March 1997 and August 2009. Paraffin-fixed slides were collected from Leicester General Hospital and Leicester Royal Infirmary and routinely sectioned at a thickness of 4 µm for standardization. Conclusions: Fibrosis at 1 month after transplant was found to be significantly associated with baseline fibrosis (p<0.001) and hypertension (HTN) in the transplant recipient (p<0.001). Dialysis after the transplant showed a weak association with fibrosis at 1 month (p=0.07). The negative coefficient for HTN (-0.05) suggests a reduction in fibrosis in the absence of HTN. Fibrosis at 1 month was significantly associated with fibrosis at baseline (p=0.01, 95% CI 0.11 to 0.67), whereas fibrosis at 3, 6 or 12 months was not associated with baseline fibrosis (p=0.70, 0.65 and 0.50, respectively). The amount of fibrosis at 1 month was significantly associated with graft survival (p=0.01, 95% CI 0.02 to 0.14). Rejection and severity of rejection were not found to be associated with fibrosis at 1 month. The amount of fibrosis at 1 month remained significantly associated with graft survival (p=0.02) after adjusting for baseline fibrosis (p=0.01); both baseline fibrosis and graft survival were significant predictive factors. The amount of fibrosis at 1 month was not significantly associated with rejection (p=0.64) after adjusting for baseline fibrosis (p=0.01), nor with rejection severity (p=0.29) after adjusting for baseline fibrosis (p=0.04). Fibrosis at baseline and HTN in the recipient were found to be predictive factors of fibrosis at 1 month (p=0.02 and p<0.001, respectively). Donor age, relation to the patient, pre-operative creatinine, artery, kidney weight and warm time were not significantly associated with fibrosis at 1 month. In a more complex model, baseline fibrosis, HTN in the recipient and cold time were found to be predictive factors of fibrosis at 1 month (p=0.01, <0.001 and 0.03, respectively). The above analysis was repeated for 3, 6 and 12 months; no associations were detected between fibrosis and any of the explanatory variables, with the exception of donor age, which was found to be a predictive factor of fibrosis at 6 months.

Keywords: fibrosis, transplant, renal, rejection

Procedia PDF Downloads 217
14538 Comparison of Power Generation Status of Photovoltaic Systems under Different Weather Conditions

Authors: Zhaojun Wang, Zongdi Sun, Qinqin Cui, Xingwan Ren

Abstract:

Based on multivariate statistical analysis theory, this paper uses principal component analysis, Mahalanobis distance analysis and curve fitting to establish a photovoltaic health model for evaluating the health of photovoltaic panels. First, according to weather conditions, the photovoltaic panel data are classified into five categories: sunny, cloudy, rainy, foggy and overcast, and the health of photovoltaic panels under these five types of weather is studied. Secondly, scatterplots of the relationship between the amount of electricity produced under each kind of weather and the other variables were plotted. It was found that the amount of electricity generated by the photovoltaic panels has a significant nonlinear relationship with time; the fitting method was used to fit the relationship between the amount of electricity generated and time, and a nonlinear equation was obtained. Then, principal component analysis was applied to the independent variables under the five weather conditions. According to the Kaiser-Meyer-Olkin test, three types of weather (overcast, foggy and sunny) meet the conditions for factor analysis, while cloudy and rainy weather do not. Through principal component analysis, the main components of overcast weather are temperature, AQI and PM2.5; the main component of foggy weather is temperature; and the main components of sunny weather are temperature, AQI and PM2.5. Cloudy and rainy weather require analysis of all of their variables, namely temperature, AQI, PM2.5, solar radiation intensity and time. Finally, taking the variable values in sunny weather as the observed values and the main components of cloudy, foggy, overcast and rainy weather as sample data, the Mahalanobis distances between the observed values and these sample values were obtained. A comparative analysis was carried out on the degree of deviation of the Mahalanobis distance to determine the health of the photovoltaic panels under different weather conditions. It was found that the weather conditions under which the Mahalanobis distance fluctuations ranged from small to large were: foggy, cloudy, overcast and rainy.
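
The sketch below illustrates, loosely and with entirely hypothetical data, the kind of PCA-plus-Mahalanobis-distance comparison described above: a reference weather class defines a principal-component space, and the distance of records from another class to the reference cloud is measured.

import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(3)

# Hypothetical hourly records: temperature, AQI, PM2.5, irradiance, hour index
sunny = rng.normal([28, 60, 35, 650, 12], [3, 10, 8, 80, 3], size=(200, 5))
overcast = rng.normal([22, 90, 70, 280, 12], [3, 15, 15, 60, 3], size=(200, 5))

# Principal components of the reference (sunny) data
pca = PCA(n_components=3).fit(sunny)
ref = pca.transform(sunny)
obs = pca.transform(overcast)

# Mahalanobis distance of each overcast observation to the reference cloud
cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))
centre = ref.mean(axis=0)
d = np.array([mahalanobis(o, centre, cov_inv) for o in obs])

print(f"mean distance: {d.mean():.2f}, fluctuation (std): {d.std():.2f}")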

Keywords: fitting, principal component analysis, Mahalanobis distance, SPSS, MATLAB

Procedia PDF Downloads 127
14537 Development of 3D Laser Scanner for Robot Navigation

Authors: Ali Emre Öztürk, Ergun Ercelebi

Abstract:

Autonomous robotic systems need equipment analogous to a human eye for their movement. Robotic camera systems, distance sensors and 3D laser scanners have been used in the literature. In this study, a 3D laser scanner has been produced for such autonomous robotic systems. In general, 3D laser scanners use two-dimensional laser range finders that move on one axis (1D) to generate the model. In this study, the model is obtained with a one-dimensional laser range finder that moves on two axes (2D), which makes the laser scanner cheaper to produce. Furthermore, a motor driver and an embedded system control board have been used for the laser scanner, together with a user interface card that handles the communication between these cards and the computer. With this laser scanner, the density of objects, the distances between objects and the necessary pathways for the robot can be calculated. The data collected by the laser scanner system are converted into Cartesian coordinates to be modelled in AutoCAD. This study also demonstrates the synchronization between the computer user interface, AutoCAD and the embedded systems. As a result, it makes the solution cheaper for such systems. The scanning results are sufficient for an autonomous robot, but the scan cycle time should be improved. This study also contributes to further studies on hardware and software requirements, since the system offers strong performance at low cost.
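
A minimal sketch of the range-to-Cartesian conversion mentioned above is shown below; the pan/tilt angle conventions are assumptions, since the abstract does not specify the scanner geometry.

import math

def to_cartesian(r, pan_deg, tilt_deg):
    """Convert one range reading (m) taken at pan/tilt angles (deg) to x, y, z.
    Assumed conventions: pan about the vertical axis, tilt above the horizontal
    plane, sensor at the origin."""
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = r * math.cos(tilt) * math.cos(pan)
    y = r * math.cos(tilt) * math.sin(pan)
    z = r * math.sin(tilt)
    return x, y, z

# Example: a 2.5 m reading at pan 30 deg, tilt 10 deg
print(to_cartesian(2.5, 30.0, 10.0))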

Keywords: 3D laser scanner, embedded system, 1D laser range finder, 3D model

Procedia PDF Downloads 264
14536 Improvement of Ground Water Quality Index Using Citrus limetta

Authors: Rupas Kumar M., Saravana Kumar M., Amarendra Kumar S., Likhita Komal V., Sree Deepthi M.

Abstract:

The demand for water is increasing at an alarming rate due to rapid urbanization and population growth. Because of freshwater scarcity, groundwater has become the main source of potable water in major parts of the world. This problem of freshwater scarcity and groundwater dependency is particularly severe in developing countries and overpopulated regions such as India. The present study aimed at evaluating the Ground Water Quality Index (GWQI), which represents the overall quality of water at a certain location and time based on water quality parameters. To evaluate the GWQI, sixteen water quality parameters were considered, viz. colour, pH, electrical conductivity, total dissolved solids, turbidity, total hardness, alkalinity, calcium, magnesium, sodium, chloride, nitrate, sulphate, iron, manganese and fluoride. The groundwater samples were collected from Kadapa City in Andhra Pradesh, India, and subjected to comprehensive physicochemical analysis. The high GWQI values were found to arise mainly from high values of total dissolved solids, electrical conductivity, turbidity, alkalinity, hardness and fluoride. In the present study, Citrus limetta (sweet lemon) peel powder was used as a coagulant, and GWQI values were recorded at different coagulant concentrations to improve the GWQI. A sensitivity analysis was also carried out to determine the effect of coagulant dosage, mixing speed and stirring time on the GWQI. The research found that the maximum percentage improvement in GWQI values was obtained at a coagulant dosage of 100 ppm, a mixing speed of 100 rpm and a stirring time of 10 min. Alum was also used as a coagulant aid, and the optimal ratio of Citrus limetta to alum was identified as 3:2, which resulted in the best GWQI value. The present study proposes Citrus limetta peel powder as a potential natural coagulant to treat groundwater and improve the GWQI.
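
The abstract does not give the exact index formulation; as an illustration, the sketch below computes a common weighted-arithmetic water quality index, with parameter standards, ideal values and measured values chosen purely as placeholders.

# A minimal sketch of a weighted-arithmetic water quality index of the kind
# described above. The parameter set, standards (Si) and ideal values (Vo) are
# illustrative assumptions, not the study's values.
standards = {"TDS": 500.0, "hardness": 300.0, "chloride": 250.0,
             "nitrate": 45.0, "fluoride": 1.0, "pH": 8.5}
ideal = {"pH": 7.0}                    # Vo is taken as 0 for the other parameters

def wqi(measured):
    k = 1.0 / sum(1.0 / s for s in standards.values())   # proportionality constant
    num = den = 0.0
    for p, si in standards.items():
        vo = ideal.get(p, 0.0)
        qi = 100.0 * (measured[p] - vo) / (si - vo)       # quality rating
        wi = k / si                                       # unit weight
        num += qi * wi
        den += wi
    return num / den

raw = {"TDS": 820, "hardness": 410, "chloride": 300, "nitrate": 52,
       "fluoride": 1.4, "pH": 7.9}
print(f"GWQI before treatment: {wqi(raw):.1f}")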

Keywords: alum, Citrus limetta, ground water quality index, physicochemical analysis

Procedia PDF Downloads 212
14535 Big Data Analysis Approach for Comparison New York Taxi Drivers' Operation Patterns between Workdays and Weekends Focusing on the Revenue Aspect

Authors: Yongqi Dong, Zuo Zhang, Rui Fu, Li Li

Abstract:

The records generated by GPS-equipped taxicabs are of vital importance for studying human mobility behavior; here, however, we focus on taxi drivers' operation strategies on workdays and weekends, both temporally and spatially. We identify a group of valuable characteristics from large-scale driver behavior in a complex metropolitan environment. Based on the daily operations of 31,000 taxi drivers in New York City, we classify drivers into top, ordinary and low-income groups according to their monthly workload, daily income, daily ranking and the variance of the daily rank. We then apply big data analysis and visualization methods to compare the characteristics of top, ordinary and low-income drivers in their selection of working time and working area, as well as their strategies on workdays versus weekends. The results verify that top drivers do have special operating tactics that help them serve more passengers and travel faster, and thus make more money per unit time. This research provides new possibilities for fully utilizing the information obtained from urban taxicab data to estimate human behavior, which is useful not only for individual taxicab drivers but also for policy-makers in city authorities.
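
A small pandas sketch of the kind of grouping and workday-versus-weekend comparison described above is shown below; the column names and the tiny data frame are assumptions made purely for illustration.

import pandas as pd

# Hypothetical per-driver, per-day summaries derived from trip records
df = pd.DataFrame({
    "driver_id": [1, 1, 2, 2, 3, 3],
    "date": pd.to_datetime(["2013-03-04", "2013-03-09"] * 3),
    "daily_income": [310, 280, 240, 260, 150, 170],
    "hours_worked": [10, 9, 9, 10, 7, 8],
})
df["is_weekend"] = df["date"].dt.dayofweek >= 5
df["income_per_hour"] = df["daily_income"] / df["hours_worked"]

# Rank drivers by mean daily income and cut into top / ordinary / low groups
mean_income = df.groupby("driver_id")["daily_income"].mean()
groups = pd.qcut(mean_income, 3, labels=["low", "ordinary", "top"])
df["group"] = df["driver_id"].map(groups)

# Compare workday vs weekend revenue efficiency per group
print(df.pivot_table(values="income_per_hour", index="group",
                     columns="is_weekend", aggfunc="mean"))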

Keywords: big data, operation strategies, comparison, revenue, temporal, spatial

Procedia PDF Downloads 215
14534 An Enhanced Hybrid Backoff Technique for Minimizing the Occurrence of Collision in Mobile Ad Hoc Networks

Authors: N. Sabiyath Fatima, R. K. Shanmugasundaram

Abstract:

In Mobile Ad hoc Networks (MANETs), every node acts as both transmitter and receiver. The existing backoff models do not accurately forecast the performance of the wireless network, and they experience elevated packet collisions. Every time a collision happens, the station’s contention window (CW) is doubled until it reaches the maximum value. The main objective of this paper is to diminish collisions by means of a contention-window Multiplicative Increase Decrease Backoff (CWMIDB) scheme. The intention of increasing the CW is to reduce the collision probability by distributing the traffic over a larger span of time. Within wireless ad hoc networks, the CWMIDB algorithm dynamically controls the contention window of the nodes experiencing collisions. During packet communication, the backoff counter is uniformly selected from the range [0, CW-1]. Here, CW denotes the contention window, and its value depends on the number of unsuccessful transmissions that have occurred for the packet. On the initial transmission attempt, CW is set to its minimum value (CWmin); if the transmission attempt fails, the value is doubled, and it is set back to the minimum on a successful transmission. CWMIDB is simulated in the NS2 environment, and its performance is compared with the Binary Exponential Backoff algorithm. The simulation results show an improvement in transmission probability compared to that of the existing backoff algorithm.
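
The sketch below contrasts the Binary Exponential Backoff behaviour described above with a multiplicative increase/decrease adaptation of the contention window; the increase and decrease factors used for the CWMIDB-style variant are assumptions, since the abstract does not state them.

import random

CW_MIN, CW_MAX = 32, 1024

def next_backoff(cw):
    """Draw a backoff counter uniformly from [0, CW-1]."""
    return random.randint(0, cw - 1)

def on_collision_beb(cw):
    # Binary Exponential Backoff: double the window up to the maximum.
    return min(cw * 2, CW_MAX)

def on_success_beb(cw):
    # BEB resets straight back to the minimum window.
    return CW_MIN

def on_collision_midb(cw, up=2.0):
    # Multiplicative increase (factor of 2 assumed here).
    return min(int(cw * up), CW_MAX)

def on_success_midb(cw, down=0.5):
    # Multiplicative *decrease* instead of a hard reset (factor 0.5 assumed);
    # this keeps some memory of recent contention, which is the idea behind
    # CWMIDB-style schemes.
    return max(int(cw * down), CW_MIN)

cw = CW_MIN
for outcome in ["fail", "fail", "ok", "fail", "ok", "ok"]:
    cw = on_collision_midb(cw) if outcome == "fail" else on_success_midb(cw)
    print(f"{outcome}: CW={cw}, backoff counter={next_backoff(cw)}")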

Keywords: backoff, contention window, CWMIDB, MANET

Procedia PDF Downloads 260
14533 Study of Rehydration Process of Dried Squash (Cucurbita pepo) at Different Temperatures and Dry Matter-Water Ratios

Authors: Sima Cheraghi Dehdezi, Nasser Hamdami

Abstract:

Air-drying is the most widely employed method for preserving fruits and vegetables. Most dried products must be rehydrated by immersion in water prior to use, so the study of rehydration kinetics, with a view to optimizing the rehydration process, is of great importance. Rehydration typically comprises three simultaneous processes: the imbibition of water into the dried material, the swelling of the rehydrated product, and the leaching of soluble solids into the rehydration medium. In this research, squash (Cucurbita pepo) fruits were cut into slices 0.4 cm thick and 4 cm in diameter. The squash slices were then blanched in a steam chamber for 4 min. After cooling to room temperature, the slices were dehydrated in a hot-air dryer, under an air flow of 1.5 m/s and an air temperature of 60°C, down to a moisture content of 0.1065 kg H2O per kg d.m. The dehydrated samples were kept in polyethylene bags and stored at 4°C. Squash slices of specified weight were rehydrated by immersion in distilled water at different temperatures (25, 50, and 75°C) and various dry matter-water ratios (1:25, 1:50, and 1:100), with agitation at 100 rpm. At specified time intervals, up to 300 min, the squash samples were removed from the water, and their weight, moisture content and rehydration indices were determined. The texture characteristics were examined over a 180 min period. The results showed that rehydration time and temperature had significant effects on the moisture content, water absorption capacity (WAC), dry matter holding capacity (DHC), rehydration ability (RA), maximum force and stress of the dried squash slices. The dry matter-water ratio had a significant effect (p<0.01) on all properties of the squash slices except DHC. Moisture content, WAC and RA of the squash slices increased, whereas DHC and texture firmness (maximum force and stress) decreased, with rehydration time. The maximum moisture content, WAC and RA and the minimum DHC, force and stress were observed in squash slices rehydrated in 75°C water. The lowest moisture content, WAC and RA and the highest DHC, force and stress were observed in squash slices immersed in water at a 1:100 dry matter-water ratio. In general, for all rehydration conditions, the highest water absorption rate occurred during the first minutes of the process, after which the rate decreased. The highest rehydration rate and amount of water absorption occurred at 75°C.

Keywords: dry matter-water ratio, squash, maximum force, rehydration ability

Procedia PDF Downloads 302
14532 Different Sampling Schemes for Semi-Parametric Frailty Model

Authors: Nursel Koyuncu, Nihal Ata Tutkun

Abstract:

The frailty model is a survival model that takes unobserved heterogeneity into account when exploring the relationship between the survival of an individual and several covariates. In recent years, the survival models proposed have become more complex, and this causes convergence problems, especially in large data sets. Therefore, the selection of a sample from these big data sets is very important for parameter estimation. In the sampling literature, some authors have defined new sampling schemes to estimate parameters correctly. With this aim, we examine the effect of sampling design on the semi-parametric frailty model. We conducted a simulation study in R to estimate the parameters of the semi-parametric frailty model for different sample sizes and censoring rates under classical simple random sampling and ranked set sampling schemes. In the simulation study, we used as the population a data set recording 17,260 male civil servants aged 40–64 years with complete 10-year follow-up. Time to death from coronary heart disease is treated as the survival time, and age and systolic blood pressure are used as covariates. We selected 1000 samples from the population using the different sampling schemes and estimated the parameters. From the simulation study, we concluded that the ranked set sampling design performs better than simple random sampling in every scenario.
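
Although the study itself was carried out in R, a minimal language-agnostic sketch of the ranked set sampling scheme it compares against simple random sampling is given below; the population, set size and number of cycles are assumptions, and perfect ranking is assumed for simplicity.

import numpy as np

rng = np.random.default_rng(4)

def ranked_set_sample(population, m, cycles):
    """Draw a ranked set sample of size m*cycles: in each cycle, m sets of m
    units are drawn, and the i-th ranked unit of the i-th set is retained
    (perfect ranking assumed here)."""
    sample = []
    for _ in range(cycles):
        for i in range(m):
            candidates = rng.choice(population, size=m, replace=False)
            sample.append(np.sort(candidates)[i])
    return np.array(sample)

# Hypothetical skewed "survival time" population of the same size as the study's
population = rng.exponential(scale=10.0, size=17260)

srs = rng.choice(population, size=30, replace=False)       # simple random sample
rss = ranked_set_sample(population, m=5, cycles=6)         # same total size

print(f"population mean: {population.mean():.2f}")
print(f"SRS mean: {srs.mean():.2f}, RSS mean: {rss.mean():.2f}")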

Keywords: frailty model, ranked set sampling, efficiency, simple random sampling

Procedia PDF Downloads 196
14531 Distributional and Developmental Analysis of PM2.5 in Beijing, China

Authors: Alexander K. Guo

Abstract:

PM2.5 poses a large threat to people’s health and the environment and is an issue of great concern in Beijing, brought to the attention of the government by the media. In addition, both the United States Embassy in Beijing and the government of China have increased monitoring of PM2.5 in recent years and have made real-time data available to the public. This report utilizes hourly historical data (2008-2016) from the U.S. Embassy in Beijing for the first time. The first objective was to fit probability distributions to the data to better predict the number of days exceeding the standard, and the second was to uncover any yearly, seasonal, monthly, daily, and hourly patterns and trends that may arise, to better inform air control policy. In these data, 66,650 hours and 2687 days provided valid records. Lognormal, gamma, and Weibull distributions were fitted to the data through parameter estimation, and the Chi-squared test was employed to compare the actual data with the fitted distributions. The data were used to uncover trends, patterns, and improvements in PM2.5 concentration over the period with valid data, in addition to specific periods that received large amounts of media attention, which were analyzed to gain a better understanding of the causes of air pollution. The data clearly indicate that Beijing’s air quality is unhealthy, with an average of 94.07 µg/m³ across all 66,650 hours with valid data. It was found that no distribution fit the entire dataset of all 2687 days well, but each of the three distribution types above was optimal in at least one of the yearly data sets, with the lognormal distribution found to fit recent years better. An improvement in air quality beginning in 2014 was discovered, with the first five months of 2016 reporting an average PM2.5 concentration 23.8% lower than the average for the same period across all years, perhaps the result of various new pollution-control policies. It was also found that the winter and fall months contained more days in both the good and the extremely polluted categories, leading to a higher average but a comparable median in these months. Additionally, the evening hours, especially in winter, showed much higher PM2.5 concentrations than the afternoon hours, possibly due to the daytime prohibition of trucks in the city and the increased use of coal for heating in the colder months when residents are home in the evening. Lastly, analysis of special intervals that attracted media attention for either unusually good or bad air quality shows that the government’s temporary pollution control measures, such as more intensive road-space rationing and factory closures, are effective. In summary, air quality in Beijing is improving steadily, and PM2.5 concentrations follow standard probability distributions to an extent, but air quality still needs improvement. The analysis will be updated when new data become available.
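
The sketch below illustrates the distribution-fitting-and-comparison step described above using scipy; the synthetic PM2.5 series, the binning choice and the fixed zero location are assumptions for the example, not the study's data or exact procedure.

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical hourly PM2.5 concentrations (ug/m3); placeholder, not the data set
pm25 = rng.lognormal(mean=4.2, sigma=0.8, size=5000)

candidates = {
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

# Bin the data once, then compare observed and expected counts per distribution
edges = np.percentile(pm25, np.linspace(0, 100, 11))
obs, _ = np.histogram(pm25, bins=edges)

for name, dist in candidates.items():
    params = dist.fit(pm25, floc=0)                      # fix location at zero
    cdf = dist.cdf(edges, *params)
    exp = np.diff(cdf) * len(pm25)
    exp *= obs.sum() / exp.sum()                         # normalise totals
    chi2, p = stats.chisquare(obs, exp, ddof=len(params) - 1)
    print(f"{name:9s} chi2 = {chi2:8.1f}  p = {p:.3f}")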

Keywords: Beijing, distribution, patterns, PM2.5, trends

Procedia PDF Downloads 229
14530 Influence of Processing Parameters on the Reliability of Sieving as a Particle Size Distribution Measurements

Authors: Eseldin Keleb

Abstract:

In the pharmaceutical industry, particle size distribution is an important parameter for the characterization of pharmaceutical powders. Powder flowability, reactivity and compatibility, which have a decisive impact on the final product, are determined by particle size and size distribution. Therefore, the aim of this study was to evaluate the influence of processing parameters on particle size distribution measurements. Different size fractions of α-lactose monohydrate with 5% polyvinylpyrrolidone were prepared by wet granulation and used for the preparation of samples. The influence of sieve load (50, 100, 150, 200, 250, 300, and 350 g), processing time (5, 10, and 15 min), sample composition (high percentages of small or large particles), type of disturbance (vibration and shaking) and process reproducibility were investigated. The results showed that a sieve load of 50 g produced the best separation; a further increase in sample weight resulted in incomplete separation even after extending the processing time to 15 min. Sieving using vibration was faster and more efficient than shaking. Between-day reproducibility showed that the particle size distribution measurements are reproducible. However, for samples containing 70% fines or 70% large particles processed at the optimized parameters, incomplete separation was always observed. These results indicate that sieving reliability is highly influenced by the particle size distribution of the sample, and care must be taken with samples whose particle size distribution is skewed.

Keywords: sieving, reliability, particle size distribution, processing parameters

Procedia PDF Downloads 598
14529 Estimating Knowledge Flow Patterns of Business Method Patents with a Hidden Markov Model

Authors: Yoonjung An, Yongtae Park

Abstract:

Knowledge flows are a critical source of faster technological progress and stronger economic growth. Knowledge flows have accelerated dramatically with the establishment of the patent system, in which each patent is required by law to disclose sufficient technical information for the invention to be recreated. Patent analysis has thus been widely used to help investigate technological knowledge flows. However, the existing research is limited in terms of both subject and approach. In particular, most previous studies have not covered business method (BM) patents, although they are as important a driver of knowledge flows as other patents. In addition, these studies usually focus on static analysis of knowledge flows; some use approaches that incorporate the time dimension, yet they still fail to trace the truly dynamic process of knowledge flows. Therefore, we investigate dynamic patterns of knowledge flows driven by BM patents using a Hidden Markov Model (HMM). An HMM is a popular statistical tool for modeling a wide range of time series data, with no general theoretical limit in regard to statistical pattern classification. Accordingly, it enables the characterization of knowledge patterns that may differ by patent, sector, country and so on. We run the model on sets of backward citations and forward citations to compare the patterns of knowledge utilization and knowledge dissemination.

Keywords: business method patents, dynamic pattern, Hidden-Markov Model, knowledge flow

Procedia PDF Downloads 318
14528 Hybrid Genetic Approach for Solving Economic Dispatch Problems with Valve-Point Effect

Authors: Mohamed I. Mahrous, Mohamed G. Ashmawy

Abstract:

A hybrid genetic algorithm (HGA) is proposed in this paper to determine the economic scheduling of electric power generation over a fixed time period under various system and operational constraints. The proposed technique can outperform conventional genetic algorithms (CGAs) in the sense that the HGA makes it possible both to improve the quality of the solution and to reduce the computing expense. In contrast, even a carefully designed GA can only balance the exploration and exploitation of the search effort, which means that an increase in the accuracy of a solution can only occur at the sacrifice of convergence speed, and vice versa; it is unlikely that both can be improved simultaneously. The proposed hybrid scheme is developed in such a way that a simple GA acts as the base-level search, quickly directing the search towards the optimal region, and a local search method (a pattern search technique) is then employed to do the fine tuning. The aim of the strategy is to achieve the cost reduction within a reasonable computing time. The effectiveness of the proposed hybrid technique is verified on two real public electricity supply systems with 13 and 40 generator units, respectively. The simulation results obtained with the HGA for the two real systems are very encouraging with regard to the computational expense and the cost reduction of power generation.
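
A minimal sketch of the hybrid idea, a simple GA followed by a pattern (compass) search for fine tuning, is shown below on a toy three-unit dispatch problem with valve-point loading; the cost coefficients, penalty weight and GA settings are all assumptions, not the paper's test systems.

import numpy as np

rng = np.random.default_rng(6)

# Toy 3-unit system with valve-point loading (all coefficients are assumptions)
a = np.array([500.0, 400.0, 200.0])
b = np.array([5.3, 5.5, 5.8])
c = np.array([0.004, 0.006, 0.009])
e = np.array([300.0, 200.0, 150.0])
f = np.array([0.035, 0.042, 0.063])
p_min = np.array([100.0, 50.0, 50.0])
p_max = np.array([450.0, 350.0, 225.0])
demand = 850.0

def total_cost(P):
    fuel = a + b * P + c * P**2 + np.abs(e * np.sin(f * (p_min - P)))
    penalty = 1e4 * abs(P.sum() - demand)       # power-balance penalty
    return fuel.sum() + penalty

# --- base-level search: a simple real-coded GA --------------------------------
pop = rng.uniform(p_min, p_max, size=(60, 3))
for _ in range(150):
    fit = np.array([total_cost(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:30]]         # truncation selection
    alpha = rng.random((30, 3))
    children = alpha * parents + (1 - alpha) * parents[::-1]   # blend crossover
    children += rng.normal(0, 5.0, children.shape)             # mutation
    pop = np.clip(np.vstack([parents, children]), p_min, p_max)

best = pop[np.argmin([total_cost(ind) for ind in pop])]

# --- fine tuning: a compass (pattern) search around the GA solution -----------
step = 5.0
while step > 1e-3:
    improved = False
    for i in range(3):
        for d in (+step, -step):
            trial = best.copy()
            trial[i] = np.clip(trial[i] + d, p_min[i], p_max[i])
            if total_cost(trial) < total_cost(best):
                best, improved = trial, True
    if not improved:
        step *= 0.5

print("dispatch (MW):", np.round(best, 2), "cost:", round(total_cost(best), 2))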

Keywords: genetic algorithms, economic dispatch, pattern search

Procedia PDF Downloads 424
14527 Model Predictive Control with Unscented Kalman Filter for Nonlinear Implicit Systems

Authors: Takashi Shimizu, Tomoaki Hashimoto

Abstract:

Implicit systems are known to be a more general class of systems than explicit systems. To establish a control method for such a generalized class of systems, we adopt the model predictive control method, which is a kind of optimal feedback control with a performance index defined over a moving initial time and terminal time. However, model predictive control is inapplicable to systems whose state variables are not all exactly known; in other words, it is inapplicable to systems with limited measurable states. In fact, the state variables of systems are usually measured through outputs, and hence only limited parts of them can be used directly. It is also usual that output signals are disturbed by process and sensor noise. Hence, it is important to establish a state estimation method for nonlinear implicit systems that takes process noise and sensor noise into consideration. For this purpose, we apply the model predictive control method and the unscented Kalman filter to solve the optimization and estimation problems of nonlinear implicit systems, respectively. The objective of this study is to establish a model predictive control method with an unscented Kalman filter for nonlinear implicit systems.
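
A minimal sketch of the receding-horizon optimization at the heart of model predictive control is given below for a simple explicit scalar system; in the paper's setting the dynamics are implicit and the state fed to the optimizer would come from the unscented Kalman filter estimate rather than being known exactly, and the dynamics, horizon and weights here are assumptions.

import numpy as np
from scipy.optimize import minimize

dt, horizon = 0.1, 15

def dynamics(x, u):
    # Simple nonlinear (explicit) test system used as a stand-in.
    return x + dt * (-x**3 + u)

def horizon_cost(u_seq, x0):
    x, cost = x0, 0.0
    for u in u_seq:
        x = dynamics(x, u)
        cost += x**2 + 0.1 * u**2           # stage cost over the moving horizon
    return cost

x = 2.0                                      # taken as known here; in practice
u_guess = np.zeros(horizon)                  # this would be the UKF estimate
for k in range(40):
    res = minimize(horizon_cost, u_guess, args=(x,), method="SLSQP")
    u_now = res.x[0]                         # apply only the first input
    x = dynamics(x, u_now)                   # (plus noise in a realistic setting)
    u_guess = np.roll(res.x, -1)             # warm-start the next optimization
    if k % 10 == 0:
        print(f"step {k:2d}: x = {x:+.3f}, u = {u_now:+.3f}")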

Keywords: optimal control, nonlinear systems, state estimation, Kalman filter

Procedia PDF Downloads 182
14526 Forensic Necropsy-Importance in Wildlife Conservation

Authors: G. V. Sai Soumya, Kalpesh Solanki, Sumit K. Choudhary

Abstract:

Necropsy is the term used for an autopsy, or death examination, in the case of animals. It is a complete, standardized procedure involving dissection, observation, interpretation, and documentation. Government bodies such as the National Tiger Conservation Authority (NTCA) have issued standard operating procedures for conducting necropsies. Necropsies are rarely performed compared to autopsies on human bodies; there are no databases that maintain a count of wildlife necropsies, but research in this area points to a very small number. Wildlife forensics came into existence long ago but is coming to the fore nowadays as wildlife crime cases increase, including the smuggling of trophies, poaching, and many more. Physical examination of animals is not sufficient to yield fruitful information, and thus postmortem examination plays an important role. Postmortem examination helps in the determination of time since death, cause of death, manner of death, and factors affecting the case under investigation, and thus decreases the time required to solve cases. Increasing the rate of necropsies will help forensic veterinary pathologists build standardized practice and confidence, which will ultimately yield a higher success rate in solving wildlife crime cases.

Keywords: necropsy, wildlife crime, postmortem examination, forensic application

Procedia PDF Downloads 126
14525 Entropy Generation Minimization in a Porous Pipe Heat Exchanger under Magnetohydrodynamics Using Cattaneo-Christov Heat Flux

Authors: Saima Ijaz, Muhammad Mushtaq, Sufian Munawar

Abstract:

This article is devoted to a second-law analysis of the Cattaneo-Christov heat flux for a non-Newtonian fluid in a moving porous pipe under the influence of a magnetic field and a heat source/sink. The non-Newtonian fluid is considered to have Maxwell-fluid characteristics. The Cattaneo-Christov model takes into account a finite relaxation time for heat transfer. The main causes of entropy generation are viscous dissipation, heat transfer, and Joule heating. An analytical method, the Homotopy Analysis Method (HAM), is utilized to solve the nonlinear governing equations of the underlying model. Mathematical results are presented in graphs and tables. This work identifies the parameters responsible for an increase or decrease in entropy generation: porosity, magnetic field effects, and the heat source/sink rate fall into the former category, while the Cattaneo-Christov relaxation time falls into the latter. These results are new contributions for internal flow in a pipe and would be helpful for devising strategies to reduce entropy generation.
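
For reference, the Cattaneo-Christov heat flux model mentioned above is commonly written in the following standard form (given here only as an illustrative reference in LaTeX notation, not necessarily the paper's exact formulation):

\mathbf{q} + \lambda_2\left(\frac{\partial \mathbf{q}}{\partial t} + \mathbf{V}\cdot\nabla\mathbf{q} - \mathbf{q}\cdot\nabla\mathbf{V} + (\nabla\cdot\mathbf{V})\,\mathbf{q}\right) = -k\,\nabla T

where \mathbf{q} is the heat flux, \lambda_2 the thermal relaxation time, \mathbf{V} the velocity field and k the thermal conductivity; setting \lambda_2 = 0 recovers the classical Fourier law.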

Keywords: Cattaneo-Christov heat flux, entropy generation analysis, heat source/sink, Joule heating, non-Newtonian fluid, porous pipe

Procedia PDF Downloads 16
14524 Measuring Emotion Dynamics on Facebook: Associations between Variability in Expressed Emotion and Psychological Functioning

Authors: Elizabeth M. Seabrook, Nikki S. Rickard

Abstract:

Examining time-dependent measures of emotion, such as variability, instability, and inertia, provides critical and complementary insights into mental health status. Observing changes in the pattern of emotional expression over time could act as a tool to identify meaningful shifts between psychological well- and ill-being. From a practical standpoint, however, examining emotion dynamics day-to-day is likely to be burdensome and invasive. Utilizing social media data as a facet of lived experience can provide real-world, temporally specific access to emotional expression. Emotional language on social media may provide accurate and sensitive insights into individual and community mental health and well-being, particularly when focus is placed on the within-person dynamics of online emotion expression. The objective of the current study was to examine the dynamics of emotional expression on the social network platform Facebook for active users and their relationship with psychological well- and ill-being. It was expected that greater positive and negative emotion variability, instability, and inertia would be associated with poorer psychological well-being and greater depression symptoms. Data were collected using a smartphone app, MoodPrism, which delivered demographic questionnaires and psychological inventories assessing depression symptoms and psychological well-being, and collected the status updates of consenting participants. MoodPrism also delivered an experience sampling methodology in which participants completed items assessing positive affect, negative affect, and arousal daily for a 30-day period. The numbers of positive and negative words in posts were extracted and automatically collated by MoodPrism, and the relative proportions of positive and negative words out of the total words written in posts were then calculated. Preliminary analyses have been conducted with the data of 9 participants. While these analyses are underpowered due to the sample size, they reveal trends whereby greater variability in the emotion valence expressed in posts is positively associated with greater depression symptoms (r(9) = .56, p = .12), as is greater instability in emotion valence (r(9) = .58, p = .099). A full data analysis utilizing time-series techniques to explore the Facebook data set will be presented at the conference. Identifying the features of emotion dynamics (variability, instability, inertia) that are relevant to mental health in social media emotional expression is a fundamental step in creating automated mental health screening tools that are temporally sensitive, unobtrusive, and accurate. The current findings show how monitoring basic social network characteristics over time can provide greater depth in predicting risk and changes in depression and positive well-being.
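
The sketch below computes the three emotion-dynamics measures discussed above from a valence time series, using common operationalisations that are assumed here because the abstract does not define them explicitly: variability as the within-person standard deviation, instability as the mean squared successive difference (MSSD), and inertia as the lag-1 autocorrelation.

import numpy as np

def emotion_dynamics(valence):
    """Return (variability, instability, inertia) for a valence time series."""
    v = np.asarray(valence, dtype=float)
    variability = v.std(ddof=1)                    # within-person SD
    instability = np.mean(np.diff(v) ** 2)         # mean squared successive diff
    inertia = np.corrcoef(v[:-1], v[1:])[0, 1]     # lag-1 autocorrelation
    return variability, instability, inertia

# Hypothetical daily valence scores (proportion positive minus negative words)
valence = [0.2, 0.1, -0.3, 0.4, 0.0, -0.2, 0.3, 0.1, -0.1, 0.2]
var_, inst, inert = emotion_dynamics(valence)
print(f"variability={var_:.3f}, instability={inst:.3f}, inertia={inert:.3f}")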

Keywords: emotion, experience sampling methods, mental health, social media

Procedia PDF Downloads 231
14523 Analysis of the Production Time in a Pharmaceutical Company

Authors: Hanen Khanchel, Karim Ben Kahla

Abstract:

Pharmaceutical companies are facing competition. Indeed, the price differences between competing products can be such that it becomes difficult to compensate for them through differences in value added. The conditions of competition are no longer homogeneous for the players involved. The price of a product is a given that puts a company and its customer face to face. However, price setting obliges the company to consider internal factors relating to production costs and external factors such as customer attitudes, the existence of regulations and the structure of the market in which the firm operates. In setting the selling price, the company must first take into account internal factors relating to its costs: production costs fall into two categories, fixed costs and variable costs that depend on the quantities produced. The company cannot consider selling below what the product costs it. It therefore calculates the unit cost of production, to which it adds the unit cost of distribution, enabling it to know the full unit cost of the product. The company then adds its margin and thus determines its selling price. The margin is used to remunerate the capital providers and to finance the activity of the company and its investments. Production costs are related to the quantities produced: large-scale production generally reduces the unit cost of production, which is an asset for companies serving mass-production markets. This shows that small and medium-sized companies with limited market segments need to make greater efforts to secure their profit margins. As a result, faced with fluctuating market prices for raw materials and increasing staff costs, the company must seek to optimize its production time in order to reduce expenses and eliminate waste; the customer then pays only for value added. Based on this principle, we decided to create a project that deals with the problem of waste in our company, with the objectives of reducing production costs and improving performance indicators. This paper presents the implementation of the Value Stream Mapping (VSM) project in a pharmaceutical company. It is structured as follows: 1) determination of the product family, 2) mapping of the current state, 3) mapping of the future state, 4) action plan and implementation.

Keywords: VSM, waste, production time, kaizen, cartography, improvement

Procedia PDF Downloads 134
14522 Experiences and Views of Foundation Phase Teachers When Teaching English First Additional Language in Rural Schools

Authors: Rendani Mercy Makhwathana

Abstract:

This paper explores the experiences and views of Foundation Phase teachers when teaching English First Additional Language in rural public schools. Teachers all over the world are pillars of any education system; consequently, any education transformation should start with teachers as critical role players in that system. As a result, teachers’ experiences and views are worth considering, for they impact learners' learning and the well-being of education in general. An exploratory qualitative approach with a phenomenological research design was used in this paper. The population comprised all Foundation Phase teachers in the district, and a purposive sampling technique was used to select a sample of 15 Foundation Phase teachers from five rural schools. Data were collected through classroom observation and individual face-to-face interviews, and were categorised, analysed and interpreted. The findings revealed that teachers experience a range of challenging situations from time to time, from learners’ low participation in the classroom to a lack of resources. This paper recommends that teachers be provided with relevant resources and support to teach English First Additional Language effectively.

Keywords: the education system, first additional language, foundation phase, intermediate phase, language of learning and teaching, medium of instruction, teacher professional development

Procedia PDF Downloads 76
14521 Application of GPR for Prospection in Two Archaeological Sites at Aswan Area, Egypt

Authors: Abbas Mohamed Abbas, Raafat El-Shafie Fat-Helbary, Karrar Omar El Fergawy, Ahmed Hamed Sayed

Abstract:

Exploration in archaeological areas requires non-invasive methods, and hence the Ground Penetrating Radar (GPR) technique is a suitable candidate for this task. GPR investigation is widely applied in the search for hidden ancient targets, and in this paper the GPR technique has been used for archaeological investigation. The aim of this study was to obtain information about the subsurface and the associated structures beneath two selected sites on the western bank of the River Nile at Aswan city. These sites contain archaeological structures of different ages, from the 6th and 12th Dynasties to the Greco-Roman period. The first site is called Nag’ El Gulab; its study area was 30 x 16 m, with a separation of 2 m between profiles. The second site is Nag’ El Qoba, where the survey was carried out not on a grid but along lines of different lengths. Both sites were surveyed with a SIR-3000 GPR system and a 200 MHz antenna. Besides processing each profile individually, time-slice maps were produced for the Nag’ El Gulab site to view the amplitude changes in a series of horizontal time slices within the ground. The obtained results show anomalies that may be interpreted as the presence of associated tomb structures. The probable tomb structures are similar in depth to the opened tombs in the studied areas.

Keywords: ground penetrating radar, archeology, Nag’ El Gulab, Nag’ El Qoba

Procedia PDF Downloads 375