Search results for: performance and quality
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21135

12945 A Development of English Pronunciation Using Principles of Phonetics for English Major Students at Loei Rajabhat University

Authors: Pongthep Bunrueng

Abstract:

This action research reports the outcomes of developing English pronunciation using principles of phonetics for English major students at Loei Rajabhat University. The research is split into 5 separate modules: 1) Organs of Speech and How to Produce Sounds, 2) Monophthongs, 3) Diphthongs, 4) Consonant Sounds, and 5) Suprasegmental Features. Each module followed a 4-step action research process: 1) Planning, 2) Acting, 3) Observing, and 4) Reflecting. The research targeted 2nd year students majoring in English Education at Loei Rajabhat University during the 2011 academic year. A mixed methodology employing both quantitative and qualitative research was used, which put theory into action, progressing from segmental features to suprasegmental features. Multiple tools were employed, including the following documents: pre-test and post-test papers, evaluation and assessment papers, group work assessment forms, a presentation grading form, an observation of participants form and a participant self-reflection form. All 5 modules showed that post-test results for the target group were higher than pre-test results, with statistical significance at the 0.01 level. All target groups attained results ranging from low to moderate and from moderate to high performance. The participants who attained low to moderate results had to re-sit the second round. During the first development stage, participants attended classes with group participation, in which they addressed planning through mutual co-operation and sharing of responsibility. Analytic induction of strong points for this stage illustrated that learner cognition, comprehension, application, and group practices were all present, whereas weak results could be attributed to biological differences, differences in life and learning experience, or individual differences in responsiveness and self-discipline. Participants who required re-treatment in Spiral 2 received the same treatment again. After the 2nd treatment, participants attained higher scores on the tests for all 5 modules than on the pre-tests. Their assessment and development stages also showed improved results. They showed greater confidence in participating in activities, produced higher quality work, and correctly followed instructions for each activity. Analytic induction of strong and weak points for this stage remained the same as for Spiral 1, though there were improvements to problems that existed prior to the second treatment.

Keywords: action research, English pronunciation, phonetics, segmental features, suprasegmental features

Procedia PDF Downloads 302
12944 Security Architecture for Cloud Networking: A Survey

Authors: Vishnu Pratap Singh Kirar

Abstract:

In the cloud computing hierarchy, IaaS is the lowest layer; all other layers are built on top of it. It is therefore the most important layer of the cloud and requires particular attention. Along with its advantages, IaaS faces some serious security-related issues. Security is mainly concerned with integrity, confidentiality and availability. Cloud computing facilitates sharing of resources both inside and outside the cloud. On the other hand, the cloud is still not in a position to guarantee 100% data security. The cloud provider must ensure that the end user/client receives an adequate Quality of Service. In this survey we describe the main aspects of cloud-related security.

Keywords: cloud computing, cloud networking, IaaS, PaaS, SaaS, cloud security

Procedia PDF Downloads 534
12943 Optimization of Tundish Geometry for Minimizing Dead Volume Using OpenFOAM

Authors: Prateek Singh, Dilshad Ahmad

Abstract:

Growing demand for high-quality steel products has inspired researchers to investigate the unit operations involved in the manufacturing of these products (slabs, rods, sheets, etc.). One such operation is the tundish operation, in which a vessel (tundish) acts as a buffer of molten steel for the solidification operation in the mold. It is observed that the tundish also plays a crucial role in the quality and cleanliness of the steel produced, besides merely acting as a reservoir for the mold. It facilitates removal of dissolved oxygen (inclusions) from the molten steel, thus improving its cleanliness. Inclusion removal can be enhanced by increasing the residence time of molten steel in the tundish through the incorporation of flow modifiers such as dams, weirs, turbo-pads, etc. These flow modifiers also help in reducing the dead or short-circuit zones within the tundish, which is significant for maintaining thermal and chemical homogeneity of the molten steel. Thus, it becomes important to analyze the flow of molten steel in the tundish for different configurations of flow modifiers. In the present work, the effect of varying positions and heights/depths of the dam and weir on the dead volume in the tundish is studied. Steady-state thermal and flow profiles of molten steel within the tundish are obtained using OpenFOAM. Subsequently, Residence Time Distribution (RTD) analysis is performed to obtain the percentage of dead volume in the tundish. The Design of Experiments method is then used to configure different tundish geometries for varying positions and heights/depths of the dam and weir, and the dead volume for each tundish design is obtained. A second-degree polynomial with two-way interactions of the independent variables (positions and heights/depths of the dam and weir) is then fitted using a Multiple Linear Regression model to predict the dead volume in the tundish. This polynomial is finally used in an optimization framework to obtain the tundish geometry that minimizes dead volume, using Sequential Quadratic Programming.
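
A minimal sketch of the regression-and-optimization step described in this abstract is given below. The variable layout, coefficient data, bounds and sample values are illustrative assumptions, not the study's actual design points: a second-degree polynomial surrogate is fitted to dead-volume responses and then minimized with SciPy's SLSQP, a sequential quadratic programming method.

```python
# Hypothetical sketch: fit a quadratic surrogate for dead volume and minimize it with SQP.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

# Design-of-experiments matrix: [dam position, dam height, weir position, weir depth]
# and the dead-volume fraction from each RTD analysis (invented numbers; a real DoE
# would contain more runs than unknown polynomial coefficients).
X = np.array([[0.3, 0.10, 0.6, 0.12],
              [0.4, 0.12, 0.7, 0.10],
              [0.5, 0.08, 0.8, 0.14],
              [0.3, 0.14, 0.8, 0.10],
              [0.5, 0.12, 0.6, 0.14],
              [0.4, 0.10, 0.7, 0.12]])
y = np.array([0.21, 0.18, 0.16, 0.19, 0.17, 0.15])

# Second-degree polynomial with squared and two-way interaction terms.
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

def dead_volume(v):
    """Surrogate prediction of the dead-volume fraction for a candidate geometry."""
    return float(model.predict(poly.transform(v.reshape(1, -1)))[0])

bounds = [(0.3, 0.5), (0.08, 0.14), (0.6, 0.8), (0.10, 0.14)]  # assumed geometric limits
result = minimize(dead_volume, x0=np.array([0.4, 0.11, 0.7, 0.12]),
                  method="SLSQP", bounds=bounds)
print("optimal geometry:", result.x, "predicted dead volume:", result.fun)
```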

Keywords: design of experiments, multiple linear regression, OpenFOAM, residence time distribution, sequential quadratic programming optimization, steel, tundish

Procedia PDF Downloads 210
12942 A Spatial Perspective on the Metallized Combustion Aspect of Rockets

Authors: Chitresh Prasad, Arvind Ramesh, Aditya Virkar, Karan Dholkaria, Vinayak Malhotra

Abstract:

A Solid Propellant Rocket is a rocket that utilises a combination of a solid Oxidizer and a solid Fuel. Success in Solid Rocket Motor design and development depends significantly on knowledge of the burning rate behaviour of the selected solid propellant under all motor operating conditions and design limit conditions. Most Solid Rocket Motors consist of the Main Engine, along with multiple Boosters that provide additional thrust to the space-bound vehicle. Though widely used, they have been eclipsed by Liquid Propellant Rockets because of the latter's better performance characteristics. The addition of a catalyst such as Iron Oxide, on the other hand, can drastically enhance the performance of a Solid Rocket. This scientific investigation tries to emulate the working of a Solid Rocket using Sparklers and Energized Candles, with a central Energized Candle acting as the Main Engine and surrounding Sparklers acting as the Boosters. The Energized Candle is made of Paraffin Wax, with Magnesium filings embedded in its wick. The Sparkler is made up of 45% Barium Nitrate, 35% Iron, 9% Aluminium, 10% Dextrin, and the remaining composition consists of Boric Acid. The Magnesium in the Energized Candle, and the combination of Iron and Aluminium in the Sparkler, act as catalysts and enhance the burn rates of both materials. This combustion of Metallized Propellants has an influence over the regression rate of the subject candle. The experimental parameters explored here are Separation Distance, systematically varied Configuration, and Layout Symmetry. The major performance parameter under observation is the Regression Rate of the Energized Candle. The rate of regression is significantly affected by the orientation and configuration of the sparklers, which usually act as heat sources for the energized candle. The Overall Efficiency of any engine is the product of its thermal and propulsive efficiencies. Numerous efforts have been made to improve one or the other. This investigation focuses on the orientation aspects of Rocket Motor Design to maximize Overall Efficiency. The primary objective is to analyse the Flame Spread Rate variations of the energized candle, which resembles the solid rocket propellant used in the first stage of rocket operation, thereby affecting the Specific Impulse values of a Rocket, which in turn have a deciding impact on its Time of Flight. Another objective of this research venture is to determine the effectiveness of the key controlling parameters explored. This investigation also emulates the exhaust gas interactions of the Solid Rocket through concurrent ignition of the Energized Candle and Sparklers, and their behaviour is analysed. Modern space programmes intend to explore the universe outside our solar system. To accomplish these goals, it is necessary to design a launch vehicle capable of providing continuous propulsion with better efficiency for long durations. The main motivation of this study is to enhance Rocket performance and Overall Efficiency through better design and optimization techniques, which will play a crucial role in this human quest for knowledge.

Keywords: design modifications, improving overall efficiency, metallized combustion, regression rate variations

Procedia PDF Downloads 181
12941 Performance Evaluation of Parallel Surface Modeling and Generation on Actual and Virtual Multicore Systems

Authors: Nyeng P. Gyang

Abstract:

Even though past, current and future trends suggest that multicore and cloud computing systems are increasingly prevalent, this class of parallel systems is nonetheless underutilized in general, and barely used for research on employing parallel Delaunay triangulation for parallel surface modeling and generation in particular. The performances of actual (physical) and virtual (cloud) multicore machines at executing various algorithms, which implement various parallelization strategies of the incremental insertion technique of the Delaunay triangulation algorithm, were evaluated. T-tests were run on the data collected in order to determine whether differences in various performance metrics (including execution time, speedup and efficiency) were statistically significant. Results show that the actual machine is approximately twice as fast as the virtual machine at executing the same programs for the various parallelization strategies. Results, which furnish the scalability behaviors of the various parallelization strategies, also show that some of the differences between the performances of these systems, during different runs of the algorithms, were statistically significant. A few pseudo-superlinear speedup results, which were computed from the raw data collected, are not true superlinear speedup values. These pseudo-superlinear speedup values, which arise from one way of computing speedups, disappear and give way to asymmetric speedups, which are the accurate kind of speedups that occur in the experiments performed.
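
The metrics and significance test mentioned in this abstract can be expressed compactly. The sketch below, with made-up timing data, computes speedup as serial time over parallel time, efficiency as speedup per core, and applies a two-sample t-test to the per-run execution times of the physical and virtual machines.

```python
# Illustrative sketch of the performance metrics and t-test used in such an evaluation.
import numpy as np
from scipy import stats

cores = 4
# Execution times (seconds) over repeated runs; values are invented for illustration.
t_serial          = np.array([41.2, 40.8, 41.5, 41.0, 40.9])
t_parallel_actual = np.array([11.3, 11.1, 11.6, 11.2, 11.4])   # physical multicore machine
t_parallel_cloud  = np.array([22.9, 23.4, 22.7, 23.1, 23.6])   # virtual/cloud machine

speedup_actual = t_serial.mean() / t_parallel_actual.mean()
speedup_cloud  = t_serial.mean() / t_parallel_cloud.mean()
efficiency_actual = speedup_actual / cores
efficiency_cloud  = speedup_cloud / cores

# Two-sample t-test: is the difference between the two machines' run times significant?
t_stat, p_value = stats.ttest_ind(t_parallel_actual, t_parallel_cloud, equal_var=False)

print(f"speedup  actual={speedup_actual:.2f}  cloud={speedup_cloud:.2f}")
print(f"efficiency  actual={efficiency_actual:.2f}  cloud={efficiency_cloud:.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```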

Keywords: cloud computing systems, multicore systems, parallel Delaunay triangulation, parallel surface modeling and generation

Procedia PDF Downloads 209
12940 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
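
A schematic outline of the iterative selection loop described in this abstract is sketched below. The novelty-scoring network is treated as a black box, and `model`, `label_fn` and `score` are illustrative placeholder names rather than the authors' actual interfaces.

```python
# Hypothetical outline of the algorithmic image-selection loop for labeling.
def novelty_scores(model, images):
    """Placeholder: a U-shaped network scores how much unseen data each image contains."""
    return [model.score(img) for img in images]

def active_labeling(model, unlabeled, label_fn, batch_size=50, rounds=5):
    labeled = []
    # Seed the process with an initial manually labeled batch.
    seed, unlabeled = unlabeled[:batch_size], unlabeled[batch_size:]
    labeled.extend(label_fn(seed))
    for _ in range(rounds):
        model.train(labeled)                       # retrain detector on what is labeled so far
        scores = novelty_scores(model, unlabeled)  # quantify unseen content per image
        ranked = sorted(zip(scores, unlabeled), key=lambda s: s[0], reverse=True)
        batch = [img for _, img in ranked[:batch_size]]    # most novel images first
        unlabeled = [img for _, img in ranked[batch_size:]]
        labeled.extend(label_fn(batch))            # the human labels only the curated subset
    return labeled
```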

Keywords: computer vision, deep learning, object detection, semiconductor

Procedia PDF Downloads 142
12939 Critical Appraisal, Smart City Initiative: China vs. India

Authors: Suneet Jagdev, Siddharth Singhal, Dhrubajyoti Bordoloi, Peesari Vamshidhar Reddy

Abstract:

There is no universally accepted definition of what constitutes a Smart City; it means different things to different people. The definition varies from place to place depending on the level of development and the willingness of people to change and reform. The concept aims to improve the quality of resource management and service provision for the people living in cities. A Smart City is an urban development vision to integrate multiple information and communication technology (ICT) solutions in a secure fashion to manage the assets of a city, but most of these projects are misinterpreted as being technology projects only. Due to urbanization, many informal as well as government-funded settlements have come up during the last few decades, thus increasing the consumption of the limited resources available. The people of each city have their own definition of a Smart City; in the imagination of any city dweller in India, the picture of a Smart City contains a wish list of infrastructure and services that describes his or her level of aspiration. The research involved a comparative study of the Smart City models in India and in China. Behavioral changes experienced by the people living in the pilot (first-ever) smart cities have been identified and compared. This paper discusses the target quality of life for the people in India and in China and how well it could be realized with the facilities included in these Smart City projects. Logical and comparative analyses of important data have been carried out, collected from government sources, government papers and research papers by various experts on the topic. Existing cities with historically grown infrastructure and administration systems will require a more moderate, step-by-step approach to modernization. The models were compared using many different motivators, with data collected from past journals, interactions with the people involved, videos and past submissions. In conclusion, we identify how these projects could be combined with the ongoing small-scale initiatives by local people or small groups of individuals, and what the outcome might be if these existing practices were implemented on a bigger scale.

Keywords: behavior change, mission monitoring, pilot smart cities, social capital

Procedia PDF Downloads 293
12938 Interventions to Improve the Performance of Community Based Health Insurance in Low- and Lower Middle-Income-Countries: a Systematic Review

Authors: Scarlet Tabot Enanga Longsti

Abstract:

Community-Based Health Insurance (CBHI) schemes have been proposed as a possible means to achieve affordable health care in low- and lower-middle-income countries. The existing evidence provides mixed results on the impact of CBHI schemes on healthcare utilisation and out-of-pocket payments (OOPP) for healthcare. Over 900 CBHI schemes have been implemented in underdeveloped countries, and these schemes have undergone different modifications over the years. Prior reviews have suggested that different designs of CBHI schemes may result in different outcomes. Objectives: This review sought to determine the interventions that affect the impact of CBHI schemes on OOPP and health service utilisation. Interventions in this study referred to any action or modification in the design of a CBHI scheme that affected the impact of the scheme on OOPP and/or healthcare utilization. Methods: Any CBHI study that was done in a lower-middle-income country, that used an experimental design, that included OOPP or health care utilisation as outcome variables, and that was published in either English or French was included in this study. Studies were searched for in MEDLINE, Embase, CINAHL, EconLit, IBSS, Web of Science, Cochrane Library, and Global Index Medicus from July to August 2023. Bias was assessed using Joanna Briggs Institute tools for quality assessment of randomized controlled trials and quasi-experimental studies. A narrative synthesis was done. Results: Twelve studies were included in the review, with a total of 69 villages, 13,653 households, and 62,786 participants. Average premium collection was 4.8 USD/year. Most CBHI schemes had flat rates. The study revealed that a range of interventions impact OOPP and health care utilisation. Five categories of interventions were identified. The intervention with the highest impact on OOPP and utilisation was "Audit visits". Next in line came external funds, training scheme workers, and engaging community leaders and village heads to advertise the scheme. Free healthcare led to a significant increase in utilisation of health services and a significant reduction in catastrophic health expenditure, but an insignificant effect on OOPP among the insured compared with the uninsured. Conclusions: Community-Based Health Insurance could pave the way for Universal Health Care in low- and middle-income countries. However, this can only be possible if careful thought is given to how schemes are designed. Due to the heterogeneity of studies and results on CBHI schemes, there is a need for further research so that more effective designs can be developed.

Keywords: community based health insurance, developing countries, health service utilisation, out of pocket payment

Procedia PDF Downloads 70
12937 A Parallel Cellular Automaton Model of Tumor Growth for Multicore and GPU Programming

Authors: Manuel I. Capel, Antonio Tomeu, Alberto Salguero

Abstract:

Tumor growth from a transformed cancer cell up to a clinically apparent mass spans a range of spatial and temporal magnitudes. Through computer simulations, Cellular Automata (CA) can accurately describe the complexity of the development of tumors. Tumor development prognosis can now be made (without making patients undergo annoying medical examinations or painful invasive procedures) if we develop appropriate CA-based software tools. In silico testing mainly refers to Computational Biology research studies applied to clinical actions in Medicine. Establishing sound computer-based models of cellular behavior certainly reduces costs and saves precious time with respect to carrying out experiments in vitro at labs or in vivo with living cells and organisms. These models aim to produce scientifically relevant results compared to traditional in vitro testing, which is slow, expensive, and does not generally have acceptable reproducibility under the same conditions. For speeding up computer simulations of cellular models, the literature shows recent proposals based on the CA approach that include advanced techniques, such as the clever use of efficient supporting data structures when modeling with deterministic and stochastic cellular automata. Multiparadigm and multiscale simulation of tumor dynamics is just beginning to be developed by the research community concerned. The use of stochastic cellular automata (SCA), whose parallel programming implementations can yield high computational performance, is of much interest to be explored up to its computational limits. There have been some optimization-based approaches to advance multiparadigm models of tumor growth, which mainly pursue improved performance through guaranteeing efficient memory accesses, or by considering the dynamic evolution of the memory space (grids, trees, …) that holds crucial data in simulations. In our opinion, the different optimizations mentioned above are not decisive enough to achieve the high-performance computing power that cell-behavior simulation programs actually need. The possibility of using multicore and GPU parallelism as a promising multiplatform framework to develop new programming techniques that speed up simulations has only started to be explored in the last few years. This paper presents a model that incorporates parallel processing, identifying the synchronization necessary for speeding up tumor growth simulations implemented in Java and C++ programming environments. The speed-up provided by specific parallel syntactic constructs, such as executors (thread pools) in Java, is studied. The new parallel tumor growth model is tested using implementations in the Java and C++ languages on two different platforms: an Intel Core i-X chipset and an HPC cluster of processors at our university. The parallelization of the Poleszczuk and Enderling model (commonly used by researchers in mathematical oncology) proposed here is analyzed with respect to performance gain. We intend to apply the model and the overall parallelization technique presented here to solid tumors of specific affiliation such as prostate, breast, or colon. Our final objective is to set up a multiparadigm model capable of modelling angiogenesis, the growth inhibition induced by chemotaxis, and the effect of therapies based on the presence of cytotoxic/cytostatic drugs.
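
The Java/C++ implementations themselves are not reproduced here. As a language-neutral illustration of the synchronization pattern involved (partition the lattice, update partitions concurrently against a read-only copy, then swap buffers), the sketch below uses a Python thread pool analogous to a Java executor; the grid size, transition rule and step count are invented for illustration and do not reproduce the Poleszczuk and Enderling model.

```python
# Simplified synchronous cellular-automaton step parallelized over row blocks.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def update_block(args):
    grid, rows = args
    new_rows = np.zeros((len(rows), grid.shape[1]), dtype=grid.dtype)
    for i, r in enumerate(rows):
        for c in range(grid.shape[1]):
            # Toy rule standing in for the tumor-growth transition function:
            # an occupied site stays occupied; an empty site becomes occupied
            # if it has at least one occupied von Neumann neighbour.
            occupied_nb = (
                grid[(r - 1) % grid.shape[0], c] + grid[(r + 1) % grid.shape[0], c] +
                grid[r, (c - 1) % grid.shape[1]] + grid[r, (c + 1) % grid.shape[1]]
            )
            new_rows[i, c] = 1 if (grid[r, c] == 1 or occupied_nb >= 1) else 0
    return rows, new_rows

def step(grid, workers=4):
    blocks = np.array_split(np.arange(grid.shape[0]), workers)
    new_grid = np.empty_like(grid)
    with ThreadPoolExecutor(max_workers=workers) as pool:    # analogue of a Java executor
        for rows, new_rows in pool.map(update_block, [(grid, b) for b in blocks]):
            new_grid[rows] = new_rows                         # write-back per finished block
    return new_grid                                           # buffer swap = synchronization point

grid = np.zeros((64, 64), dtype=np.int8)
grid[32, 32] = 1                      # single transformed cell
for _ in range(10):
    grid = step(grid)
print("occupied sites after 10 steps:", int(grid.sum()))
```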

Keywords: cellular automaton, tumor growth model, simulation, multicore and manycore programming, parallel programming, high performance computing, speed up

Procedia PDF Downloads 245
12936 Impinging Acoustics Induced Combustion: An Alternative Technique to Prevent Thermoacoustic Instabilities

Authors: Sayantan Saha, Sambit Supriya Dash, Vinayak Malhotra

Abstract:

Efficient propulsive systems development is an area of major interest and concern in the aerospace industry. Combustion forms the most reliable and basic form of propulsion for ground and space applications. The generation of a large amount of energy from a small volume relates mostly to flaming combustion. This study deals with instabilities associated with flaming combustion. Combustion is always accompanied by acoustics, be it external or internal. Chemical propulsion oriented rockets and space systems are well known to encounter acoustic instabilities. Acoustics brings changes in inter-energy conversion and alters the reaction rates. Modified heat fluxes, owing to wall temperature, reaction rates, and non-linear heat transfer, are observed. Thermoacoustic instabilities result in significantly reduced combustion efficiency, leading to uncontrolled liquid rocket engine performance, serious hazards to systems and associated testing facilities, and enormous loss of resources; every year a substantial amount of money is spent to prevent them. The present work attempts to fundamentally understand the mechanisms governing thermoacoustic combustion in liquid rocket engines using a simplified experimental setup comprising a butane cylinder and an impinging acoustic source. A rocket engine produces sound pressure levels in excess of 153 dB; the RL-10 engine generates noise of 180 dB at its base. Systematic studies are carried out for varying fuel flow rates and acoustic levels, and observations are made of the flames. The work is expected to yield good physical insight into the development of acoustic devices that, when coupled with present propulsive devices, could effectively enhance combustion efficiency, leading to better and safer missions. The results would be utilized to develop impinging acoustic devices that impinge sound on the combustion chambers, leading to stable combustion and thus improving specific fuel consumption and specific impulse, reducing emissions, and enhancing performance and fire safety. The results can be effectively applied to terrestrial and space applications.

Keywords: combustion instability, fire safety, improved performance, liquid rocket engines, thermoacoustics

Procedia PDF Downloads 147
12935 Wear Performance of SLM Fabricated 1.2709 Steel Nanocomposite Reinforced by TiC-WC for Mould and Tooling Applications

Authors: Daniel Ferreira, José M. Marques Oliveira, Filipe Oliveira

Abstract:

Wear phenomena are critical in injection moulding processes, causing failure of components and making parts more expensive through additional wasted time. When very abrasive materials, such as polymers reinforced with abrasive fibres, are injected into the steel mould's cavities, the consequences of wear are more evident. Maraging steel (1.2709) is commonly employed in moulding components to resist very aggressive injection conditions. In this work, the wear performance of SLM-produced 1.2709 maraging steel reinforced by ultrafine titanium and tungsten carbide (TiC-WC) was investigated using a pin-on-disk testing apparatus. A polypropylene disk reinforced with 40 wt.% fibreglass (PP40) was used as the counterpart material. The wear tests were performed at 40 N constant load and 0.4 m/s sliding speed at room temperature and humidity conditions. The experimental results demonstrated that the wear rate of the 18Ni300-TiC-WC composite is lower than that of the unreinforced 18Ni300 matrix. The morphology and chemical composition of the worn surfaces were observed by 3D optical profilometry and scanning electron microscopy (SEM), respectively. The debris resulting from friction was also analysed by SEM and energy dispersive X-ray spectroscopy (EDS). Its morphology showed distinct shapes and sizes, which indicated that the wear mechanisms may be different in maraging steel produced by casting and by SLM. The coefficient of friction (COF) was recorded during the tests, which helped to elucidate the wear mechanisms involved.
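
For reference, the specific wear rate reported in pin-on-disk studies of this kind is commonly obtained from the volume loss, the normal load and the sliding distance. The short sketch below illustrates that calculation with assumed numbers, not the measured values of this work.

```python
# Illustrative specific wear rate calculation for a pin-on-disk test (Archard-type form).
mass_loss_g       = 0.0123      # assumed mass loss of the pin after the test
density_g_cm3     = 8.1         # approximate density of 18Ni300 maraging steel
load_N            = 40.0        # normal load used in the test
sliding_speed_m_s = 0.4         # sliding speed used in the test
duration_s        = 3600.0      # assumed test duration

volume_loss_mm3 = mass_loss_g / density_g_cm3 * 1000.0           # cm^3 -> mm^3
sliding_dist_m  = sliding_speed_m_s * duration_s
wear_rate       = volume_loss_mm3 / (load_N * sliding_dist_m)    # mm^3 / (N*m)
print(f"specific wear rate = {wear_rate:.3e} mm^3/(N*m)")
```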

Keywords: selective laser melting, nanocomposites, injection moulding, polypropylene with fibreglass

Procedia PDF Downloads 160
12934 Optimizing Emergency Rescue Center Layouts: A Backpropagation Neural Networks-Genetic Algorithms Method

Authors: Xiyang Li, Qi Yu, Lun Zhang

Abstract:

In the face of natural disasters and other emergency situations, determining the optimal location of rescue centers is crucial for improving rescue efficiency and minimizing impact on affected populations. This paper proposes a method that integrates genetic algorithms (GA) and backpropagation neural networks (BPNN) to address the site selection optimization problem for emergency rescue centers. We utilize BPNN to accurately estimate the cost of delivering supplies from rescue centers to each temporary camp. Moreover, a genetic algorithm with a special partially matched crossover (PMX) strategy is employed to ensure that the number of temporary camps assigned to each rescue center adheres to predetermined limits. Using the population distribution data during the 2022 epidemic in Jiading District, Shanghai, as an experimental case, this paper verifies the effectiveness of the proposed method. The experimental results demonstrate that the BPNN-GA method proposed in this study outperforms existing algorithms in terms of computational efficiency and optimization performance. Especially considering the requirements for computational resources and response time in emergency situations, the proposed method shows its ability to achieve rapid convergence and optimal performance in the early and mid-stages. Future research could explore incorporating more real-world conditions and variables into the model to further improve its accuracy and applicability.
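
A condensed sketch of the optimization loop described in this abstract is given below. The BPNN cost-prediction step is abstracted into a pre-computed cost matrix, and all problem sizes, capacities and GA settings are illustrative assumptions rather than the study's actual configuration; only the partially matched crossover (PMX) and the capacity limit per rescue center follow the description above.

```python
# Hypothetical sketch: GA with PMX crossover assigning temporary camps to rescue centers.
import random
import numpy as np

rng = random.Random(0)
n_camps, n_centers, capacity = 12, 3, 4             # each center may serve at most 4 camps
cost = np.random.default_rng(0).uniform(1, 10, (n_centers, n_camps))  # stand-in for BPNN-predicted costs

def decode(perm):
    """Assign camps (in permutation order) to the cheapest center that still has capacity."""
    load, total = [0] * n_centers, 0.0
    for camp in perm:
        feasible = [c for c in range(n_centers) if load[c] < capacity]
        best = min(feasible, key=lambda c: cost[c, camp])
        load[best] += 1
        total += cost[best, camp]
    return total

def pmx(p1, p2):
    """Partially matched crossover on camp permutations."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    for i in list(range(a)) + list(range(b, n)):
        gene = p2[i]
        while gene in child[a:b]:
            gene = p2[p1.index(gene)]    # follow the mapping until no conflict remains
        child[i] = gene
    return child

population = [rng.sample(range(n_camps), n_camps) for _ in range(30)]
for _ in range(100):                                    # generations
    population.sort(key=decode)
    parents = population[:10]                           # truncation selection
    children = [pmx(rng.choice(parents), rng.choice(parents)) for _ in range(20)]
    for ch in children:                                 # swap mutation
        if rng.random() < 0.2:
            i, j = rng.sample(range(n_camps), 2)
            ch[i], ch[j] = ch[j], ch[i]
    population = parents + children
print("best total delivery cost:", round(decode(min(population, key=decode)), 2))
```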

Keywords: emergency rescue centers, genetic algorithms, back-propagation neural networks, site selection optimization

Procedia PDF Downloads 93
12933 The Fidget Widget Toolkit: A Positive Intervention Designed and Evaluated to Enhance Wellbeing for People in the Later Stage of Dementia

Authors: Jane E. Souyave, Judith Bower

Abstract:

This study is an ongoing collaborative project between the University of Central Lancashire and the Alzheimer’s Society to design and test the idea of using interactive tools for a person living with dementia and their carers. It is hoped that the tools will fulfill the possible needs of engagement and interaction as dementia progresses, therefore enhancing wellbeing and improving quality of life for the person with dementia and their carers. The project was informed by Kitwood’s five psychological needs for producing wellbeing and explored evidence that fidgeting is often seen as a form of agitation and a negative symptom of dementia. Although therapy for agitation may be well established, there is a lack of appropriate items aimed at people in the later stage of dementia, that are not childlike or medical in their aesthetic. Individuals may fidget in a particular way and the tools in the Fidget Widget Toolkit have been designed to encourage repetitive movements of the hand, specifically to address the abilities of people with relatively advanced dementia. As an intervention, these tools provided a new approach that had not been tested in dementia care. Prototypes were created through an iterative design process and tested with a number of people with dementia and their carers, using quantitative and qualitative methods. Dementia Care Mapping was used to evaluate the impact of the intervention in group settings. Cohen Mansfield’s Agitation Inventory was used to record the daily use and interest of the intervention for people in their usual place of residence. The results informed the design of a new set of devices to promote safe, stigma free fidgeting as a positive experience, meaningful activity and enhance wellbeing for people in the later stage of dementia. The outcomes addressed the needs of individuals by reducing agitation and restlessness through helping them to connect, engage and act independently, providing the means of doing something for themselves that they were able to do. The next stage will be to explore the commercial feasibility of the Fidget Widget Toolkit so that it can be introduced as good practice and innovation in dementia care. It could be used by care homes, with carers and their families to support wellbeing and lead the way in providing some positive experiences and person-centred approaches that are lacking in the later stage of dementia.

Keywords: dementia, design, fidgeting, healthcare, positive moments, quality of life, wellbeing

Procedia PDF Downloads 275
12932 Physico-Chemical and Microbial Changes of Organic Fertilizers after Compositing Processes under Arid Conditions

Authors: Oustani Mabrouka, Halilat Med Tahar

Abstract:

The physico-chemical properties of poultry droppings indicate that this waste can be an excellent way to enrich soils with low fertility, as is the case for arid soils (low organic matter content), but its concentrations of some microbial and chemical components make it a potentially dangerous and toxic contaminant if used directly in the fresh state. On the other hand, the accumulation of plant residues in crop areas can become a source of plant disease and affects the quality of the environment. The biotechnological process that we have identified appears to alleviate these problems: it leads to the stabilization and processing of wastes into a product of good hygienic quality and high fertilizer value through a composting test. In this context, a composting trial was conducted in the region of Ouargla, located in southern Algeria. The composting test was conducted in a completely randomized design experiment. Three mixtures were prepared, in pits of 1 m3 volume, one for each mixture. Each pit contained a mixture of poultry droppings and crushed plant residues in amounts of 40% and 60%, respectively: C1: Poultry Droppings + Straw (P.D + S), C2: Poultry Droppings + Olive Wastes (P.D + O.W), C3: Poultry Droppings + Date Palm residues (P.D + D.P). Before and after the composting process, physico-chemical parameters (temperature, moisture, pH, electrical conductivity, total carbon and total nitrogen) were studied. The stability of the biological system was reached after 90 days. At the end of the composting process, the three composts obtained from the mixtures C1 (P.D + S), C2 (P.D + O.W) and C3 (P.D + D.P) were characterized by their high agronomic and environmental interest, with good physico-chemical characteristics, in particular a low C/N ratio of 15.15, 10.01 and 15.36 for (P.D + S), (P.D + O.W) and (P.D + D.P), respectively, reflecting stabilization and maturity of the composts. On the other hand, a significant increase in temperature was recorded in the first days of composting for all treatments, which is correlated with a strong reduction of the pathogenic microflora contained in the poultry droppings.

Keywords: Arid environment, Composting, Date palm residues, Olive wastes, pH, Pathogenic microorganisms, Poultry Droppings, Straw

Procedia PDF Downloads 239
12931 Effects of Plasma Technology in Biodegradable Films for Food Packaging

Authors: Viviane P. Romani, Bradley D. Olsen, Vilásia G. Martins

Abstract:

Biodegradable films for food packaging have gained growing attention due to the environmental pollution caused by synthetic films and the interest in better use of resources from nature. Important research advances have been made in the development of materials from proteins, polysaccharides, and lipids. However, the commercial use of this new generation of sustainable materials for food packaging is still limited due to their low mechanical and barrier properties, which could compromise food quality and safety. Thus, strategies to improve the performance of these materials have been tested, such as chemical modifications, incorporation of reinforcing structures and others. Cold plasma is a versatile, fast and environmentally friendly technology. It consists of a partially ionized gas containing free electrons, ions, radicals and neutral particles, able to react with polymers and initiate different reactions, leading to polymer degradation, functionalization, etching and/or cross-linking. In the present study, biodegradable films from fish protein prepared through the casting technique were plasma treated using AC glow discharge equipment. The reactor was first evacuated to ~7 Pa and the films were exposed to air plasma for 2, 5 and 8 min. The films were evaluated for their mechanical and water vapor permeability (WVP) properties, and changes in the protein structure were observed using Scanning Electron Microscopy (SEM) and X-ray diffraction (XRD). Potential cross-links and the elimination of surface defects by etching might be the reason for the increase in tensile strength and the decrease in elongation at break observed. Among the plasma exposure times tested, no further differences were observed when longer times were used. The X-ray pattern showed a broad peak at 2θ = 19.51º that corresponds to a distance of 4.6 Å by applying Bragg's law. This distance corresponds to the average backbone distance within the α-helix. Thus, the changes observed in the films might indicate that the helical configuration of the fish protein was disturbed by the plasma treatment. SEM images showed surface damage in the films with 5 and 8 min of plasma treatment, indicating that 2 min was the most adequate treatment time. It was verified that plasma removes water from the films, since a weight loss of 4.45% was registered for films treated for 2 min. However, after 24 h at 50% relative humidity, the water lost was recovered. WVP increased from 0.53 to 0.65 g.mm/h.m².kPa after plasma treatment for 2 min, which is desirable for some food applications that require water passage through the packaging. In general, the plasma technology affects the properties and structure of fish protein films. Since this technology changes the surface of polymers, these films might be used to develop multilayer materials, as well as to incorporate active substances on the surface to obtain active packaging.
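
The d-spacing quoted above follows directly from Bragg's law. The short check below assumes Cu Kα radiation (λ ≈ 1.5406 Å), which is not stated in the abstract but is the most common laboratory source.

```python
# Bragg's law check for the broad peak at 2θ = 19.51° (Cu K-alpha wavelength assumed).
import math

wavelength_A = 1.5406           # Cu K-alpha, an assumption for this illustration
two_theta_deg = 19.51
theta_rad = math.radians(two_theta_deg / 2)
d_A = wavelength_A / (2 * math.sin(theta_rad))   # first-order reflection, n = 1
print(f"d = {d_A:.2f} A")       # about 4.55 A, consistent with the ~4.6 A spacing reported
```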

Keywords: fish protein films, food packaging, improvement of properties, plasma treatment

Procedia PDF Downloads 166
12930 Mathematical Model to Simulate Liquid Metal and Slag Accumulation, Drainage and Heat Transfer in Blast Furnace Hearth

Authors: Hemant Upadhyay, Tarun Kumar Kundu

Abstract:

It is of utmost importance for a blast furnace operator to understand the mechanisms governing liquid flow, accumulation, drainage and heat transfer between the various phases in the blast furnace hearth for a stable and efficient blast furnace operation. Abnormal drainage behavior may lead to a high liquid build-up in the hearth. Operational problems such as pressurization, low wind intake, and lower material descent rates are normally encountered if the liquid levels in the hearth exceed a critical limit, when the hearth coke and deadman start to float. Similarly, hot metal temperature is an important parameter to be controlled in BF operation; it should be kept at an optimal level to obtain the desired product quality and a stable BF performance. It is not possible to carry out any direct measurement of the above due to the hostile conditions in the hearth, with chemically aggressive hot liquids. The objective here is to develop a mathematical model to simulate the variation in hot metal/slag accumulation and temperature during the tapping of the blast furnace, based on the computed drainage rate, production rate, mass balance, and heat transfer between metal and slag, metal and solids, slag and solids, as well as among the various zones of metal and slag themselves. For modeling purposes, the BF hearth is considered as a pressurized vessel filled with solid coke particles. Liquids trickle down into the hearth from the top and accumulate in the voids between the coke particles, which are assumed to be thermally saturated. A set of generic mass balance equations gives the amount of metal and slag intake in the hearth. A small drainage opening (tap hole) is situated at the bottom of the hearth, and the flow rate of liquids from the tap hole is computed taking into account the amount of both phases accumulated, their levels in the hearth, the pressure of the gases in the furnace, and the erosion behavior of the tap hole itself. Heat transfer equations provide the exchange of heat between the various layers of liquid metal and slag, and the heat loss to the cooling system through the refractories. Based on all this information, a dynamic simulation is carried out which provides real-time information on liquid accumulation in the hearth before and during tapping and on the drainage rate and its variation, predicts critical event timings during tapping, and gives the expected tapping temperature of metal and slag at preset time intervals. The model is in use at JSPL, India BF-II, and its output is regularly cross-checked with actual tapping data, with which it is in good agreement.
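
A highly simplified sketch of the hearth mass balance described in this abstract is shown below: the liquid level rises with the trickle-in (production) rate and falls with a tap-hole drainage rate, here modeled with an orifice-type relation driven by furnace overpressure and liquid head. All numerical values (hearth free area, tap-hole size, rates, pressure) are illustrative assumptions, not plant data, and the heat-transfer part of the model is omitted.

```python
# Toy dynamic mass balance for liquid accumulation and drainage in a blast furnace hearth.
import math

hearth_free_area_m2 = 30.0      # assumed free (void) area between coke particles
production_m3_s     = 0.008     # assumed hot metal trickle-in rate
taphole_area_m2     = 0.004     # assumed effective tap-hole cross-section
discharge_coeff     = 0.6
p_gas_Pa            = 1.2e5     # assumed furnace gas overpressure above the liquid
rho_kg_m3           = 7000.0    # hot metal density
g = 9.81

level_m, dt, t = 1.0, 1.0, 0.0
for _ in range(7200):                               # simulate two hours with 1 s steps
    t += dt
    taphole_open = (t % 3600) > 1800                # assume the tap hole is open half of each hour
    if taphole_open:
        velocity = math.sqrt(2 * (p_gas_Pa / rho_kg_m3 + g * max(level_m, 0.0)))
        drainage = discharge_coeff * taphole_area_m2 * velocity
    else:
        drainage = 0.0
    level_m += (production_m3_s - drainage) / hearth_free_area_m2 * dt
print(f"liquid level after 2 h: {level_m:.2f} m")
```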

Keywords: blast furnace, hearth, deadman, hotmetal

Procedia PDF Downloads 188
12929 Family and Community Care for the Elderly: An Implementation Research in Local Community, Thailand

Authors: Sumattana Glangkarn, Vorapoj Promasatayaprot

Abstract:

Background: The proportion of the ageing population in Thailand has increased rapidly in the past decades as people live longer and fertility rates have decreased. The most important challenge related to this situation is to consider how to improve the quality and years of healthy life. This study aimed to implement the older persons' long-term care (LTC) system for elderly care by family and community. Method: The Consolidated Framework for Implementation Research (CFIR) was employed for guiding and evaluating the implementation process in ageing care. The CFIR comprises five major domains: intervention characteristics, outer setting, inner setting, characteristics of the individuals involved, and the process of implementation. Results: Most elderly participants were couples, educated to primary school level and living with children and grandchildren. More than half of them had chronic diseases such as diabetes mellitus and hypertension. Factor analysis revealed factors related to the health care of older participants, which consisted of exercise, diet, accident prevention, relaxation, self-care capacity, joyfulness, family relationships, and personal hygiene. A pre-implementation phase showed that the intervention characteristics included facilities and services of the LTC policy from the Ministry of Public Health. The complexities of the LTC and its relative advantages were explained. Community leaders, public health volunteers, caregivers and health professionals participated in the LTC activities. The outer and inner settings consisted of the context of the community, culture, and readiness. Characteristics of the individuals related to knowledge, self-efficacy, perceptions, and beliefs. The process consisted of planning, acting, observing, and reflecting. The implementation outcomes and service outcomes were evaluated during the implementation phase. Conclusion: The participation of caregivers, community leaders, public health volunteers, and health professionals supported the LTC services. Thus, family and community care could improve the quality of life of the ageing population.

Keywords: ageing, CFIR, long term care, implementation

Procedia PDF Downloads 180
12928 Creating Smart and Healthy Cities by Exploring the Potentials of Emerging Technologies and Social Innovation for Urban Efficiency: Lessons from the Innovative City of Boston

Authors: Mohammed Agbali, Claudia Trillo, Yusuf Arayici, Terrence Fernando

Abstract:

The wide-spread adoption of the Smart City concept has introduced a new era of computing paradigm with opportunities for city administrators and stakeholders in various sectors to re-think the concept of urbanization and development of healthy cities. With the world population rapidly becoming urban-centric especially amongst the emerging economies, social innovation will assist greatly in deploying emerging technologies to address the development challenges in core sectors of the future cities. In this context, sustainable health-care delivery and improved quality of life of the people is considered at the heart of the healthy city agenda. This paper examines the Boston innovation landscape from the perspective of smart services and innovation ecosystem for sustainable development, especially in transportation and healthcare. It investigates the policy implementation process of the Healthy City agenda and eHealth economy innovation based on the experience of Massachusetts’s City of Boston initiatives. For this purpose, three emerging areas are emphasized, namely the eHealth concept, the innovation hubs, and the emerging technologies that drive innovation. This was carried out through empirical analysis on results of public sector and industry-wide interviews/survey about Boston’s current initiatives and the enabling environment. The paper highlights few potential research directions for service integration and social innovation for deploying emerging technologies in the healthy city agenda. The study therefore suggests the need to prioritize social innovation as an overarching strategy to build sustainable Smart Cities in order to avoid technology lock-in. Finally, it concludes that the Boston example of innovation economy is unique in view of the existing platforms for innovation and proper understanding of its dynamics, which is imperative in building smart and healthy cities where quality of life of the citizenry can be improved.

Keywords: computing paradigm, emerging technologies, equitable healthcare, healthy cities, open data, smart city, social innovation

Procedia PDF Downloads 338
12927 Analyzing Brand Related Information Disclosure and Brand Value: Further Empirical Evidence

Authors: Yves Alain Ach, Sandra Rmadi Said

Abstract:

An extensive review of the literature on brands has shown that little research has focused on the nature and determinants of the information disclosed by companies with respect to the brands they own and use. The objective of this paper is to address this issue. More specifically, the aim is to characterize the nature of the information disclosed by companies in terms of estimating the value of brands and to identify the determinants of that information according to the company characteristics most frequently tested by previous studies on the disclosure of information on intangible capital, by studying the practices of a sample of 37 French companies. Our findings suggest that companies prefer to communicate accounting, economic and strategic information in relation to their brands instead of providing financial information. The analysis of the determinants of the information disclosed on brands leads to the conclusion that groups which operate internationally and have chosen a category 1 auditing firm communicate more information to investors in their annual reports. Our study points out that the sector is not an explanatory variable for voluntary brand disclosure, unlike previous studies on intangible capital. Our study is distinguished by examining an element that has been little studied in the financial literature, namely the determinants of brand-related information. With regard to the effect of size on brand-related information disclosure, our research does not confirm this link. Many authors point out that large companies tend to publish more voluntary information in order to respond to stakeholder pressure. Our study also establishes that the relationship between the supply of brand information and performance is insignificant. This relationship was already controversial in previous research, which shows that higher profitability motivates managers to provide more information, as this strengthens investor confidence and may increase managers' compensation. Our main contribution focuses on the nature of the inherent characteristics of the companies that disclose the most information about brands. Our results show the absence of a link between size and industry on the one hand and the supply of brand information on the other, contrary to previous research. Our analysis highlights three types of information disclosed about brands: accounting, economic and strategic. We therefore question the reasons that may lead companies to voluntarily communicate mainly accounting, economic and strategic information about their brands from one year to the next, and not to communicate detailed information that would allow the financial value of their brands to be reconstituted. Our results can be useful for companies and investors. Our results highlight, to our surprise, the lack of financial information that would allow investors to arrive at a better valuation of brands. We believe that additional information is needed to improve the quality of accounting and financial information related to brands. The additional information provided in the special report that we recommend could be called a "report on intangible assets".

Keywords: brand related information, brand value, information disclosure, determinants

Procedia PDF Downloads 88
12926 Artificial Neural Network Approach for Modeling Very Short-Term Wind Speed Prediction

Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Juan C. Seck-Tuoh-Mora, Norberto Hernandez-Romero, Irving Barragán-Vite

Abstract:

Wind speed forecasting is an important issue for planning wind power generation facilities. Accuracy in wind speed prediction allows good performance of wind turbines for electricity generation. A model based on artificial neural networks is presented in this work. A dataset with atmospheric information about air temperature, atmospheric pressure, wind direction, and wind speed in Pachuca, Hidalgo, México, was used to train the artificial neural network. The data were downloaded from the web page of the National Meteorological Service of the Mexican government. The records were gathered over three months, at time intervals of ten minutes. This dataset was used in an iterative algorithm to create 1,110 ANNs with different configurations, from one to three hidden layers and with the number of neurons in each hidden layer ranging from 1 to 10. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which is used to learn the relationship between input and output values. The model with the best performance contains three hidden layers with 9, 6, and 5 neurons, respectively; the coefficient of determination obtained was r² = 0.9414, and the Root Mean Squared Error was 1.0559. In summary, the ANN approach is suitable for predicting the wind speed in Pachuca City because the r² value denotes a good fit to the gathered records, and the obtained ANN model can be used in the planning of wind power generation grids.
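
A reduced sketch of the configuration search described in this abstract follows. Scikit-learn's MLPRegressor does not offer Levenberg-Marquardt training (that choice is typically associated with other toolboxes), so the sketch uses its default solver purely to illustrate the 1 to 3 hidden layers by 1 to 10 neurons enumeration (10 + 100 + 1000 = 1,110 networks) and the r²/RMSE evaluation; the data here are a random stand-in, not the Pachuca records.

```python
# Illustrative enumeration of ANN architectures (1-3 hidden layers, 1-10 neurons each).
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Placeholder data: temperature, pressure, wind direction -> wind speed (synthetic stand-in).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
y = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.3, size=2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

best = None
for depth in (1, 2, 3):
    for layout in itertools.product(range(1, 11), repeat=depth):   # 1,110 configurations in total
        net = MLPRegressor(hidden_layer_sizes=layout, max_iter=500, random_state=0)
        net.fit(X_tr, y_tr)
        pred = net.predict(X_te)
        r2 = r2_score(y_te, pred)
        rmse = mean_squared_error(y_te, pred) ** 0.5
        if best is None or r2 > best[0]:
            best = (r2, rmse, layout)
print(f"best layout {best[2]}: r2 = {best[0]:.4f}, RMSE = {best[1]:.4f}")
```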

Keywords: wind power generation, artificial neural networks, wind speed, coefficient of determination

Procedia PDF Downloads 129
12925 Structural Insulated Panels

Authors: R. Padmini, G. V. Manoj Kumar

Abstract:

Structural insulated panels (SIPs) are a high-performance building system for residential and light commercial construction. The panels consist of an insulating foam core sandwiched between two structural facings, typically oriented strand board (OSB). SIPs are manufactured under factory controlled conditions and can be fabricated to fit nearly any building design. The result is a building system that is extremely strong, energy efficient and cost effective. Building with SIPs will save you time, money and labor. Building with SIPs generally costs about the same as building with wood frame construction when you factor in the labor savings resulting from shorter construction time and less job-site waste. Other savings are realized because smaller heating and cooling systems are required with SIP construction. Structural insulated panels (SIPs) are one of the most airtight and well-insulated building systems available, making them an inherently green product. An airtight SIP building will use less energy to heat and cool, allow for better control over indoor environmental conditions, and reduce construction waste. Green buildings use less energy, reducing carbon dioxide emissions and playing an important role in combating global climate change. Buildings also use a tremendous amount of natural resources to construct and operate. Constructing green buildings that use these resources more efficiently, while minimizing pollution that can harm renewable natural resources, is crucial to a sustainable future.

Keywords: high performance, under factory controlled, wood frame, carbon dioxide emissions, natural resources

Procedia PDF Downloads 440
12924 Data Analysis Tool for Predicting Water Scarcity in Industry

Authors: Tassadit Issaadi Hamitouche, Nicolas Gillard, Jean Petit, Valerie Lavaste, Celine Mayousse

Abstract:

Water is a fundamental resource for industry. It is taken from the environment, either from municipal distribution networks or from various natural water sources such as the sea, ocean, rivers, aquifers, etc. Once used, water is discharged into the environment, or reprocessed at the plant or at treatment plants. These withdrawals and discharges have a direct impact on natural water resources. The impacts can concern the quantity of water available, the quality of the water used, or effects that are more complex to measure and less direct, such as the health of the population downstream of the watercourse, for example. Based on the analysis of data (meteorological data, river characteristics, physicochemical substances), we wish to predict water stress episodes and anticipate prefectoral decrees, which can impact the performance of plants, and to propose improvement solutions, help industrialists in their choice of location for a new plant, visualize possible interactions between companies to optimize exchanges and encourage the pooling of water treatment solutions, and set up circular economies around the issue of water. The development of a system for the collection, processing, and use of data related to water resources requires the functional constraints specific to such data to be made explicit. The system will have to be able to store a large amount of data from sensors (the main type of data in plants and their environment). In addition, manufacturers need 'near-real-time' processing of information in order to be able to make the best decisions (to be rapidly notified of an event that would have a significant impact on water resources). Finally, the visualization of data must be adapted to its temporal and geographical dimensions. In this study, we set up an infrastructure centered on the TICK application stack (Telegraf, InfluxDB, Chronograf, and Kapacitor), a set of loosely coupled but tightly integrated open source projects designed to manage huge amounts of time-stamped information. The software architecture is coupled with the Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology. The robust architecture and the methodology used have demonstrated their effectiveness on the case study of predicting the level of a river with a 7-day horizon. The management of water and the activities within the plants (which depend on this resource) should be considerably improved thanks, on the one hand, to the learning that allows the anticipation of periods of water stress and, on the other hand, to the information system, which is able to warn decision-makers with alerts created from the formalization of prefectoral decrees.
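
The "7-day horizon" learning task mentioned above amounts to building supervised examples from time-stamped records. The sketch below is a hypothetical illustration of that windowing step; the column names, window lengths, learner and synthetic series are assumptions, and in practice the records would be pulled from the TICK stack's InfluxDB store rather than generated.

```python
# Illustrative construction of a 7-day-ahead river-level prediction dataset
# from time-stamped sensor records.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Placeholder daily series standing in for sensor data stored in InfluxDB.
dates = pd.date_range("2022-01-01", periods=400, freq="D")
df = pd.DataFrame({"rain_mm": np.random.default_rng(1).gamma(2.0, 3.0, 400),
                   "level_m": np.random.default_rng(2).normal(2.0, 0.3, 400)}, index=dates)

history, horizon = 14, 7                       # use the last 14 days to predict 7 days ahead
X, y = [], []
for i in range(history, len(df) - horizon):
    past = df.iloc[i - history:i]
    X.append(np.concatenate([past["rain_mm"].values, past["level_m"].values]))
    y.append(df["level_m"].iloc[i + horizon])  # target: river level 7 days later
X, y = np.array(X), np.array(y)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:-60], y[:-60])
print("held-out R2:", round(model.score(X[-60:], y[-60:]), 3))
```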

Keywords: data mining, industry, machine Learning, shortage, water resources

Procedia PDF Downloads 126
12923 Development of Risk Index and Corporate Governance Index: An Application on Indian PSUs

Authors: M. V. Shivaani, P. K. Jain, Surendra S. Yadav

Abstract:

Public Sector Undertakings (PSUs), being government-owned organizations, have commitments to the economic and social wellbeing of society; this commitment needs to be reflected in their risk-taking, decision-making and governance structures. Therefore, the primary objective of the study is to suggest measures that may lead to improvement in the performance of PSUs. To achieve this objective, two normative frameworks (one relating to risk levels and the other relating to governance structure) are put forth. The risk index is based on nine risks, such as solvency risk, liquidity risk, accounting risk, etc., and each of the risks has been scored on a scale of 1 to 5. The governance index is based on eleven variables, such as board independence, diversity, risk management committee, etc. Each of them is scored on a scale of 1 to 5. The sample consists of 39 PSUs that featured in the Nifty 500 index, and the study covers a 10-year period from April 1, 2005 to March 31, 2015. Return on assets (ROA) and return on equity (ROE) have been used as proxies of firm performance. The control variables used in the model include age of firm, growth rate of firm and size of firm. A dummy variable has also been used to factor in the effects of recession. Given the panel nature of the data and the possibility of endogeneity, dynamic panel data generalized method of moments (Diff-GMM) regression has been used. It is worth noting that the corporate governance index is positively related to both ROA and ROE, indicating that with improvement in the governance structure, PSUs tend to perform better. Considering the components of the CGI, it may be suggested that PSUs (i) ensure adequate representation of women on the Board, (ii) appoint a Chief Risk Officer, and (iii) constitute a risk management committee. The results also indicate that there is a negative association between the risk index and returns. These results not only validate the framework used to develop the risk index but also provide a yardstick for PSUs to benchmark their risk-taking if they want to maximize their ROA and ROE. While constructing the CGI, certain non-compliances were observed, even in terms of mandatory requirements, such as the proportion of independent directors. Such infringements call for stringent penal provisions and better monitoring of PSUs. Further, if the Securities and Exchange Board of India (SEBI) and the Ministry of Corporate Affairs (MCA) bring about such reforms in the PSUs and make adherence to the normative frameworks put forth in the study mandatory, PSUs may have more effective and efficient decision-making, lower risks and hassle-free management, all of which ultimately lead to better ROA and ROE.
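
Both indices are composites of items scored on a 1 to 5 scale. A small sketch of how such composite scores might be assembled is shown below; the item names, scores and the equal-weighting choice are illustrative assumptions and do not reproduce the study's exact scoring rules.

```python
# Illustrative construction of equally weighted risk and governance indices (items scored 1-5).
risk_items = {"solvency": 2, "liquidity": 3, "accounting": 4, "market": 3, "operational": 2,
              "credit": 3, "forex": 4, "interest_rate": 3, "business": 2}            # nine risks
governance_items = {"board_independence": 4, "diversity": 2, "risk_committee": 5,
                    "audit_quality": 4, "board_size": 3, "ceo_duality": 2, "meetings": 4,
                    "disclosure": 3, "ownership": 3, "chief_risk_officer": 1, "tenure": 3}  # eleven variables

def composite(scores):
    """Equal-weight composite on a 1-5 scale, normalized to 0-1 for comparability."""
    raw = sum(scores.values()) / len(scores)
    return (raw - 1) / 4

print(f"risk index = {composite(risk_items):.2f}, governance index = {composite(governance_items):.2f}")
```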

Keywords: PSU, risk governance, Diff-GMM, firm performance, risk index

Procedia PDF Downloads 160
12922 An Analytical Systematic Design Approach to Evaluate Ballistic Performance of Armour Grade AA7075 Aluminium Alloy Using Friction Stir Processing

Authors: Lahari Ramya P., Sudhakar I., Madhu V., Madhusudhan Reddy G., Srinivasa Rao E.

Abstract:

Selection of suitable armour materials for defence applications is crucial with respect to increasing the mobility of the systems as well as maintaining safety. Therefore, armour design studies require determining the material with the lowest possible areal density that successfully resists the predefined threat. A number of light metals and alloys have come to the forefront, especially as substitutes for armour grade steels. AA5083 aluminium alloy, which meets the military standards imposed by the US Army, is the foremost nonferrous alloy considered as a possible replacement for steel to increase the mobility of armoured vehicles and enhance fuel economy. The growing need for AA5083 aluminium alloy paves the way to develop supplementary aluminium alloys that maintain the military standards. AA2xxx, AA6xxx, and AA7xxx aluminium alloys have been identified as potential materials to supplement AA5083 aluminium alloy. Among these series, AA7xxx aluminium alloy (heat treatable) possesses high strength and can compete with armour grade steels. Earlier investigations revealed that layering of AA7xxx aluminium alloy can prevent spalling of the rear portion of the armour during ballistic impacts. Hence, the present investigation deals with the fabrication of a hard boron carbide layer on AA7075 aluminium alloy using friction stir processing, with the intention of blunting the projectile on initial impact while the tough backing portion (AA7xxx aluminium alloy) dissipates the residual kinetic energy. An analytical approach has been adopted to unfold the ballistic performance against the projectile. Penetration of the projectile inside the armour has been resolved by strain energy model analysis. The perforation shearing area, i.e., the interface of projectile and armour, is taken into account in evaluating penetration inside the armour. The fabricated surface composites (targets) were tested as per the military standard (JIS.0108.01) in a ballistic testing tunnel at the Defence Metallurgical Research Laboratory (DMRL), Hyderabad, under standardized testing conditions. The analytical results were well validated against the experimentally obtained ones.

Keywords: AA7075 aluminium alloy, friction stir processing, boron carbide, ballistic performance, target

Procedia PDF Downloads 334
12921 A Neurofeedback Learning Model Using Time-Frequency Analysis for Volleyball Performance Enhancement

Authors: Hamed Yousefi, Farnaz Mohammadi, Niloufar Mirian, Navid Amini

Abstract:

Investigating the capacities of visual functions whose adapted mechanisms can enhance the capability of sports trainees is a promising area of research, not only from the cognitive viewpoint but also in terms of its wide range of applications in sports training. In this paper, the visual evoked potential (VEP) and event-related potential (ERP) signals of amateur and trained volleyball players were processed in a pilot study. Two groups of amateur and trained subjects were asked to imagine themselves in the state of receiving a ball while they were shown a simulated volleyball field. The proposed method is based on a set of time-frequency features extracted from the VEP signals, using algorithms such as the Gabor filter, the continuous wavelet transform, and a multi-stage wavelet decomposition, that can be indicative of being amateur or trained. The linear discriminant classifier achieves an accuracy, sensitivity, and specificity of 100% when the average of the repetitions of the signal corresponding to the task is used. The main purpose of this study is to investigate the feasibility of a fast, robust, and reliable feature/model determination as a neurofeedback parameter to be utilized for improving volleyball players' performance. The proposed measure has potential applications in brain-computer interface technology, where a real-time biomarker is needed.
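The abstract does not publish its exact feature set, so the following is only a minimal sketch of the general approach, assuming trial-averaged VEP epochs stored in NumPy files, a continuous wavelet transform (PyWavelets) for the time-frequency features, and scikit-learn's LDA for classification; the file names, wavelet choice, scales, and cross-validation setup are all illustrative assumptions.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Assumed inputs: one trial-averaged VEP epoch per subject, sampled at fs Hz,
# plus a label per subject (0 = amateur, 1 = trained). Shapes are illustrative.
fs = 250
epochs = np.load("vep_epochs.npy")   # hypothetical file, shape (n_subjects, n_samples)
labels = np.load("labels.npy")       # hypothetical file, shape (n_subjects,)

def cwt_band_energy(signal, scales=np.arange(1, 64)):
    # Continuous wavelet transform (complex Morlet), then mean power per scale.
    coeffs, _ = pywt.cwt(signal, scales, "cmor1.5-1.0", sampling_period=1.0 / fs)
    return np.mean(np.abs(coeffs) ** 2, axis=1)

X = np.array([cwt_band_energy(e) for e in epochs])  # time-frequency feature vectors
clf = LinearDiscriminantAnalysis()
acc = cross_val_score(clf, X, labels, cv=5, scoring="accuracy")
print("LDA cross-validated accuracy:", acc.mean())
```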

Keywords: visual evoked potential, time-frequency feature extraction, short-time Fourier transform, event-related spectrum potential classification, linear discriminant analysis

Procedia PDF Downloads 143
12920 Project-Based Learning Application: Applying Systems Thinking Concepts to Assure Continuous Improvement

Authors: Kimberley Kennedy

Abstract:

The major findings of this study concern the importance of understanding and applying systems thinking concepts to ensure an effective Project-Based Learning environment. A pilot study of a major pedagogical change was conducted over a five-year period with the goal of giving students real-world, hands-on learning experiences and the opportunity to apply what they had learned over the previous two years of their business program. The first two weeks of the fifteen-week semester used lectures, guest speakers, and design thinking workshops to prepare students for the project work. For the remaining thirteen weeks of the semester, the students worked with actual business owners and clients on projects and challenges. The first three years of the five-year study focused on student feedback, to ensure a quality learning experience and to develop a continuous improvement process. In the final two years of the study, the conceptual understanding and perception of learning and teaching held by faculty using Project-Based Learning pedagogy were examined and compared with those associated with lectures and more traditional teaching methods. Relevant literature was reviewed, and data were collected from program faculty participants who completed pre- and post-semester interviews and surveys over a two-year period. Systems thinking concepts were applied to better understand the challenges for faculty using Project-Based Learning pedagogy as compared to more traditional teaching methods. Factors such as instructor and student fatigue, motivation, quality of work, and enthusiasm were explored to better understand how to provide faculty with effective support and resources when using Project-Based Learning pedagogy as the main teaching method. This study provides value by presenting generalizable, foundational knowledge and by offering suggestions for practical solutions to assure student and teacher engagement in Project-Based Learning courses.

Keywords: continuous improvement, project-based learning, systems thinking, teacher engagement

Procedia PDF Downloads 126
12919 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array

Authors: Yanping Liao, Zenan Wu, Ruigang Zhao

Abstract:

Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution, and a low sidelobe level in order to form point-to-point interference at the intended location. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset, improving the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of this array can form a dot-shape beam with more concentrated energy, and its resolution and sidelobe level performance are improved. However, in the traditional adaptive beamforming algorithm, the covariance matrix of the signal is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm suffers from an underestimation problem: the estimation error of the covariance matrix causes beam distortion, so that the output pattern cannot form a dot-shape beam, and main lobe deviation and high sidelobe levels also appear in the limited-snapshot case. To address these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamformer of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then, the eigenvalue decomposition of the covariance matrix is performed to obtain the eigenvectors spanning the interference and noise subspaces together with the corresponding eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their divergence and improving the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm can make the multi-carrier FDA form a dot-shape beam with limited snapshots, reduce the sidelobe level, improve the robustness of beamforming, and achieve better overall performance.
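A minimal numerical sketch of the covariance-correction idea follows, assuming a plain uniform linear array and a single distortionless constraint (MVDR, the single-constraint case of LCMV) rather than the paper's multi-carrier FDA model; the steering vector, the eigenvalue threshold, and the exponential correction rule applied to the small eigenvalues are all illustrative assumptions.

```python
import numpy as np

# Sketch: correct a poorly estimated sample covariance before computing
# distortionless-response weights, under a limited number of snapshots.
rng = np.random.default_rng(0)
N, K = 16, 20                      # sensors, snapshots (deliberately few)
a = np.exp(1j * np.pi * np.arange(N) * np.sin(np.deg2rad(10)))  # assumed steering vector

X = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
R = X @ X.conj().T / K             # sample covariance (noisy at small K)

# Eigendecomposition, then exponential correction of the small (noise) eigenvalues.
eigvals, V = np.linalg.eigh(R)
p = 0.5                            # assumed correction index
floor = eigvals.max() * 1e-3       # assumed threshold for "small" eigenvalues
corrected = np.where(eigvals < floor, floor * (eigvals / floor) ** p, eigvals)
R_corr = (V * corrected) @ V.conj().T

# MVDR weights (single-constraint LCMV) from the corrected covariance.
Ri_a = np.linalg.solve(R_corr, a)
w = Ri_a / (a.conj() @ Ri_a)
print("weight vector norm:", np.linalg.norm(w))
```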

Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust

Procedia PDF Downloads 133
12918 Support Vector Regression for Retrieval of Soil Moisture Using Bistatic Scatterometer Data at X-Band

Authors: Dileep Kumar Gupta, Rajendra Prasad, Pradeep Kumar, Varun Narayan Mishra, Ajeet Kumar Vishwakarma, Prashant K. Srivastava

Abstract:

An approach was evaluated for the retrieval of the soil moisture of a bare soil surface using bistatic scatterometer data in the angular range of 20° to 70° at VV- and HH-polarization. The microwave data were acquired by a specially designed X-band (10 GHz) bistatic scatterometer. A linear regression analysis was done between the scattering coefficients and the soil moisture content to select the most suitable incidence angle for the retrieval of soil moisture content; the 25° incidence angle was found to be the most suitable. Support vector regression analysis was used to approximate the function described by the input-output relationship between the scattering coefficient and the corresponding measured values of the soil moisture content. The performance of the support vector regression algorithm was evaluated by comparing the observed and the estimated soil moisture content using the statistical performance indices %Bias, root mean squared error (RMSE), and Nash-Sutcliffe efficiency (NSE). At HH-polarization, the values of %Bias, RMSE, and NSE were found to be 2.9451, 1.0986, and 0.9214, respectively. At VV-polarization, the values of %Bias, RMSE, and NSE were found to be 3.6186, 0.9373, and 0.9428, respectively.
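A minimal sketch of this retrieval step is given below, assuming a simple two-column data file of scattering coefficients and measured soil moisture at the 25° incidence angle; the file name, train/test split, RBF kernel, hyperparameters, and the %Bias sign convention are assumptions, not the study's exact configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

# Hypothetical data file: scattering coefficient (dB) and measured soil moisture.
data = np.loadtxt("scatterometer_hh.csv", delimiter=",", skiprows=1)
sigma0, sm = data[:, 0].reshape(-1, 1), data[:, 1]

X_tr, X_te, y_tr, y_te = train_test_split(sigma0, sm, test_size=0.3, random_state=0)
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Performance indices used in the abstract (one common convention for each).
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
pbias = 100.0 * np.sum(pred - y_te) / np.sum(y_te)
nse = 1.0 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print(f"RMSE={rmse:.4f}  %Bias={pbias:.4f}  NSE={nse:.4f}")
```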

Keywords: bistatic scatterometer, soil moisture, support vector regression, RMSE, %Bias, NSE

Procedia PDF Downloads 430
12917 Numerical Simulation of the Dynamic Behavior of a LaNi5 Water Pumping System

Authors: Miled Amel, Ben Maad Hatem, Askri Faouzi, Ben Nasrallah Sassi

Abstract:

A metal hydride water pumping system uses hydrogen as the working fluid to pump water at low head and high discharge. The principal operation of this pump is based on the desorption of hydrogen at high pressure and its absorption at low pressure by a metal hydride. This work is devoted to studying the dynamic behavior of a metal hydride pump using an unsteady model with LaNi5 as the hydriding alloy. The study shows that with the MHP it is possible to pump 340 l/kg-cycle of water in 15,000 s using 1 kg of LaNi5 at a desorption temperature of 360 K, a pumping head of 5 m, and a desorption gear ratio of 33. The study also reveals that the error given by the steady model using LaNi5 is about 2%. A dimensional mathematical model and the governing equations of the pump were presented to predict the coupled heat and mass transfer within the MHP. A numerical simulation was then carried out to present the time evolution of the specific water discharge and to test the effect of different parameters (desorption temperature, absorption temperature, desorption gear ratio) on the performance of the water pumping system (specific water discharge, pumping efficiency, and pumping time). In addition, a comparison between the results obtained with the steady and unsteady models was performed for different hydride masses. Finally, a geometric configuration of the reactor was simulated to optimize the pumping time.
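The paper's coupled heat and mass transfer model is not reproduced in the abstract, so the following is only a lumped, isothermal sketch of the desorption stage, assuming a first-order pressure-driven kinetic law; the kinetic form and every numerical value below are illustrative assumptions (in practice the rate is limited by heat transfer, which is precisely why the full coupled model is needed).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lumped (zero-dimensional), isothermal sketch of hydrogen desorption from LaNi5.
R = 8.314          # gas constant, J/(mol K)
E_d = 16_400.0     # assumed desorption activation energy, J/mol
C_d = 9.6          # assumed kinetic rate constant, 1/s
T = 360.0          # desorption temperature, K (value quoted in the abstract)
P = 2.0e5          # assumed tank pressure, Pa
P_eq = 5.0e5       # assumed equilibrium pressure at 360 K, Pa

def desorption(t, y):
    x = y[0]  # remaining hydrogen fraction in the hydride bed
    # Assumed first-order, pressure-driven law; negative when P < P_eq (desorption).
    rate = C_d * np.exp(-E_d / (R * T)) * (P - P_eq) / P_eq * x
    return [rate]

sol = solve_ivp(desorption, (0.0, 15_000.0), [1.0], max_step=10.0)
print("hydrogen fraction remaining after 15000 s:", sol.y[0, -1])
```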

Keywords: dynamic behavior, LaNi5, performance of water pumping system, unsteady model

Procedia PDF Downloads 208
12916 The Evaluation of a Novel Cardiac Index Derived from Anthropometric and Biochemical Parameters in Pediatric Morbid Obesity and Metabolic Syndrome

Authors: Mustafa Metin Donma

Abstract:

Metabolic syndrome (MetS) components are noteworthy among children with obesity and morbid obesity because they point out the cases with MetS, which have a strong tendency toward severe health problems such as cardiovascular diseases, both in childhood and in adulthood. In clinical practice, considerable effort is devoted to bringing into the open the striking differences between morbidly obese cases and those with MetS findings, the most important aspect being cardiometabolic features. The aim of this study was to derive an index that behaves differently, from the cardiac point of view, in children with and without MetS. For this purpose, aspartate transaminase (AST), a cardiac enzyme still used independently to predict cardiac-related problems, was employed. One hundred and twenty-four children were recruited from the outpatient clinic of the Department of Pediatrics at Tekirdag Namik Kemal University, Faculty of Medicine. Forty-three children with a normal body mass index, and forty-one and forty morbidly obese (MO) children with and without the characteristic features of MetS, respectively, were included in the study. Weight, height, waist circumference (WC), hip circumference (HC), head circumference (HdC), neck circumference (NC), and systolic and diastolic blood pressure values were measured and recorded. Body mass index and anthropometric ratios were calculated. Fasting blood glucose (FBG), insulin (INS), triglycerides (TRG), and high-density lipoprotein cholesterol (HDL-C) analyses were performed. The values for AST, alanine transaminase (ALT), and AST/ALT were obtained. Advanced Donma cardiac index (ADCI) values were calculated. The formula for the index was [(TRG/HDL-C) * (INS/FBG)] * [(WC+HC)/Height] * [(HdC+NC)/Height]. Statistical evaluations, including correlation analysis, were done with a statistical package program. The degree of statistical significance was accepted as p<0.05. The index, ADCI, was developed from both anthropometric and biochemical parameters: all anthropometric measurements except weight were included in the equation, together with all biochemical parameters concerning the MetS components. The index was tested in each of the three groups, and its performance was compared with that of the cardiometabolic index (CMI). It was also checked whether it was compatible with AST activity. The performance of ADCI was better than that of CMI: instead of a two-fold increase, a three-fold increase was observed in children with MetS compared to MO children. The index was correlated with AST in the MO group and with AST/ALT in the MetS group. In conclusion, this index was superior in discovering cardiac problems in MO children and in diagnosing MetS in the MetS group. It also served to point out cardiovascular and MetS aspects among the groups.
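Since the ADCI formula is stated explicitly, a small helper that computes it can be written directly; the example call below uses made-up values, not data from the study.

```python
def advanced_donma_cardiac_index(trg, hdl_c, ins, fbg, wc, hc, hdc, nc, height):
    """Advanced Donma cardiac index (ADCI), as defined in the abstract:
    [(TRG/HDL-C) * (INS/FBG)] * [(WC+HC)/Height] * [(HdC+NC)/Height].
    Circumferences and height must share the same unit (e.g., cm)."""
    return ((trg / hdl_c) * (ins / fbg)
            * ((wc + hc) / height)
            * ((hdc + nc) / height))

# Illustrative call with hypothetical values.
print(advanced_donma_cardiac_index(trg=110, hdl_c=45, ins=14, fbg=90,
                                   wc=75, hc=88, hdc=53, nc=31, height=140))
```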

Keywords: aspartate transaminase, cardiac, children, index, obesity

Procedia PDF Downloads 69