Search results for: Global performance evaluation service (GPES)
Microstructure and Mechanical Characterization of Heat Treated Stir Cast Silica (Sea Sand) Reinforced 7XXX Al Alloy MMCs
Authors: S. S. Sharma, Jagannath K, P. R. Prabhu
Abstract:
Metal matrix composites consist of a metallic matrix combined with a dispersed particulate phase as reinforcement. Aluminum alloys have been the primary material of choice for structural components of aircraft since about 1930. Well known performance characteristics, known fabrication costs, design experience, and established manufacturing methods and facilities are just a few of the reasons for the continued confidence in 7XXX Al alloys that will ensure their use in significant quantities for some time to come. Particulate MMCs are of special interest owing to the low cost of their raw materials (primarily natural river sand here) and their ease of fabrication, making them suitable for applications requiring relatively high volume production. 7XXX Al alloys are precipitation hardenable and therefore amenable to thermomechanical treatment. Al–Zn alloys reinforced with particulate materials are used in aerospace industries in spite of their drawbacks of susceptibility to stress corrosion, poor wettability, poor weldability and poor fatigue resistance. The resistance offered by these particulates to moving dislocations imparts secondary hardening, which in turn contributes to strain hardening. Cold deformation increases lattice defects, which in turn improves the properties of the solution treated alloy. In view of this, six different Al–Zn–Mg alloy composites reinforced with silica (3 wt. % and 5 wt. %) are prepared by a conventional semisolid synthesizing process. The cast alloys are solution treated and aged. The solution treated alloys are further severely cold rolled to enhance the properties. The hardness and strength values are analyzed and compared with those of silica-free Al–Zn–Mg alloys. The precipitation hardening phenomenon is accelerated due to the increased number of potential sites for precipitation. Higher peak hardness and shorter aging time are the characteristics of the thermomechanically treated samples. For obtaining maximum hardness, an optimum number and volume of precipitate particles are required. The Al-5Zn-1Mg alloy composite with 5% SiO2 shows the best results.
Keywords: Dislocation, hardness, matrix, thermomechanical, precipitation hardening, reinforcement.
Achieving Design-Stage Elemental Cost Planning Accuracy: Case Study of New Zealand
Authors: Johnson Adafin, James O. B. Rotimi, Suzanne Wilkinson, Abimbola O. Windapo
Abstract:
An aspect of client expenditure management that requires attention is the level of accuracy achievable in design-stage elemental cost planning. This has been a major concern for construction clients and practitioners in New Zealand (NZ). Pre-tender estimating inaccuracies are significantly influenced by the level of risk information available to estimators. Proper cost planning activities should ensure the production of a project’s likely construction costs (initial and final), and subsequent cost control activities should prevent the unpleasant consequences of cost overruns, disputes and project abandonment. If risks were properly identified and priced at the design stage, the observed variance between design-stage elemental cost plans (ECPs) and final tender sums (FTS) (initial contract sums) could be reduced. This study investigates the variations between design-stage ECPs and FTS of construction projects, with a view to identifying the risk factors responsible for the observed variance. Data were sourced through interviews, and risk factors were identified using thematic analysis. Access was obtained to project files from the records of the study participants (consultant quantity surveyors), and document analysis was employed to complement the responses from the interviews. Study findings revealed discrepancies between ECPs and FTS in the region of -14% to +16%. It is opined in this study that the identified risk factors were responsible for the variability observed. The values obtained from the analysis would enable greater accuracy in the forecast of FTS by quantity surveyors. Further, whilst inherent risks in construction project developments are observed globally, these findings have important ramifications for construction projects by expanding existing knowledge on what is needed for reasonable budgetary performance and successful delivery of construction projects. The findings contribute significantly by providing quantitative confirmation to justify the theoretical conclusions generated in the literature from around the world. This therefore adds to and consolidates existing knowledge.
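For illustration, the variance figure reported above can be expressed as a simple percentage formula. The sketch below is an assumption about the computation (the study does not specify one), and the project figures are hypothetical, not data from the participants' files.

```python
# Minimal sketch: percentage variance between a design-stage elemental cost plan
# (ECP) and the final tender sum (FTS). Figures below are hypothetical.
def ecp_variance_pct(ecp: float, fts: float) -> float:
    """Positive = tender exceeded the cost plan; negative = came in under it."""
    return (fts - ecp) / ecp * 100.0

# Hypothetical project: ECP of 4.2m, FTS of 4.65m -> about +10.7%,
# i.e. within the -14% to +16% band reported in the study.
print(round(ecp_variance_pct(4_200_000, 4_650_000), 1))
```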
Keywords: Accuracy, design-stage, elemental cost plan, final tender sum, New Zealand.
C-LNRD: A Cross-Layered Neighbor Route Discovery for Effective Packet Communication in Wireless Sensor Network
Authors: K. Kalaikumar, E. Baburaj
Abstract:
One of the problems to be addressed in wireless sensor networks is the set of issues related to cross-layer communication. A cross-layer architecture shares information across the layers, ensuring Quality of Service (QoS). With this shared information, the MAC protocol adapts its functionality, such as route selection, to a changeable sensor network environment. However, time slot assignment and the neighbour route selection time duration for the cross layer have not been addressed. The time-varying physical layer communication over the cross layer causes a high traffic load in the sensor network. Although the traffic load was reduced using a cross-layer optimization procedure, the computational cost is high. To improve communication efficacy in the sensor network, a self-determined time slot based Cross-Layered Neighbour Route Discovery (C-LNRD) method is presented in this paper. In the presented work, the initial process is to discover the route in the sensor network using Dynamic Source Routing based Medium Access Control (MAC) sub layers. This process considers MAC layer operation with dynamic route neighbour table discovery. Then, the discovered route path for packet communication employs the Broad Route Distributed Time Slot Assignment method on the Cross-Layered Sensor Network system. Broad Route means time slotting on varying lengths of the route paths. During packet communication in this sensor network, the transmission of packets is adjusted over different times with varying ranges for controlling the traffic rate. Finally, a Rayleigh fading model is developed in C-LNRD to identify the performance of the sensor network communication structure. The main task of Rayleigh fading is to measure the power level of each communication under the MAC sub layer. The minimized power level helps to easily reduce the computational cost of packet communication in the sensor network. Experiments are conducted on factors such as power level during packet communication, neighbour route discovery time, and information (i.e., packet) propagation speed.
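To illustrate the Rayleigh fading element mentioned above, the sketch below estimates the mean received power of a link whose small-scale fading is Rayleigh distributed. It is a generic model under assumed parameters (transmit power, scale, sample count), not the C-LNRD implementation.

```python
# Minimal sketch (assumed model): average received power under Rayleigh fading,
# where the channel amplitude h is Rayleigh distributed and the instantaneous
# power scales with |h|^2. Parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def mean_received_power(tx_power_mw: float, sigma: float = 1.0, n: int = 10_000) -> float:
    h = rng.rayleigh(scale=sigma, size=n)       # fading amplitudes
    return float(np.mean(tx_power_mw * h**2))   # average instantaneous power

print(mean_received_power(tx_power_mw=1.0))     # ~2*sigma^2 mW on average
```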
Keywords: Medium access control, neighbour route discovery, wireless sensor network, Rayleigh fading, distributed time slot assignment
Motor Coordination and Body Mass Index in Primary School Children
Authors: Ingrid Ruzbarska, Martin Zvonar, Piotr Oleśniewicz, Julita Markiewicz-Patkowska, Krzysztof Widawski, Daniel Puciato
Abstract:
Obese children will probably become obese adults, consequently exposed to an increased risk of comorbidity and premature mortality. Body weight may be indirectly determined by the continuous development of coordination and motor skills. The level of motor skills and abilities is an important factor that promotes physical activity from early childhood. The aim of the study is to thoroughly understand the internal relations between motor coordination abilities and the somatic development of prepubertal children and to determine the effect of excess body weight on motor coordination by comparing the motor ability levels of children with different body mass index (BMI) values. The data were collected from 436 children aged 7–10 years, without health limitations, fully participating in school physical education classes. Body height was measured with portable stadiometers (Harpenden, Holtain Ltd.), and body mass with a digital scale (HN-286, Omron). Motor coordination was evaluated with the Kiphard-Schilling body coordination test, Körperkoordinationstest für Kinder. The Shapiro-Wilk normality test was used to verify the data distribution. The correlation analysis revealed a statistically significant negative association between dynamic balance and BMI, as well as between the motor quotient and BMI (p<0.01), for both boys and girls. The results showed no effect of gender on the difference in the observed trends. The analysis of variance proved statistically significant differences between normal weight children and their overweight or obese counterparts. Coordination abilities probably play an important role in preventing or moderating the negative trajectory leading to childhood overweight and obesity. At this age, the development of coordination abilities should become a key strategy, targeted at long-term prevention of obesity and the promotion of an active lifestyle in adulthood. Motor performance is essential for implementing a healthy lifestyle already in childhood. Physical inactivity apparently results in motor deficits and a sedentary lifestyle in children, which may be accompanied by excess energy intake and overweight.
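As a simple illustration of the analysis described above, the sketch below computes BMI from height and mass and correlates it with a motor quotient. The arrays are illustrative placeholders, not the study's measurements.

```python
# Minimal sketch: BMI = mass / height^2 and its Pearson correlation with a KTK
# motor quotient. Values below are made up for illustration.
import numpy as np
from scipy.stats import pearsonr

height_m = np.array([1.25, 1.30, 1.34, 1.28, 1.40])
mass_kg = np.array([24.0, 33.0, 29.5, 36.0, 31.0])
motor_quotient = np.array([102, 84, 95, 78, 99])

bmi = mass_kg / height_m**2
r, p = pearsonr(bmi, motor_quotient)   # the study reports a negative association
print(bmi.round(1), round(r, 2), round(p, 3))
```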
Keywords: Childhood, KTK test, Physical education, Psychomotor competence.
Review of the Model-Based Supply Chain Management Research in the Construction Industry
Authors: Aspasia Koutsokosta, Stefanos Katsavounis
Abstract:
This paper reviews the model-based qualitative and quantitative Operations Management research in the context of Construction Supply Chain Management (CSCM). The construction industry has traditionally been blamed for low productivity, cost and time overruns, waste, high fragmentation and adversarial relationships. The construction industry has been slower than other industries to employ the Supply Chain Management (SCM) concept and to develop models that support decision-making and planning. However, over the last decade there has been a distinct shift from a project-based to a supply-based approach to construction management. CSCM has emerged as a new, promising management tool for construction operations that improves the performance of construction projects in terms of cost, time and quality. Modeling the Construction Supply Chain (CSC) offers the means to reap the benefits of SCM, make informed decisions and gain competitive advantage. Different modeling approaches and methodologies have been applied in the multi-disciplinary and heterogeneous research field of CSCM. The literature review reveals that a considerable percentage of the CSC modeling research accommodates conceptual or process models which present general management frameworks and do not relate to acknowledged soft Operations Research methods. We particularly focus on the model-based quantitative research and categorize the CSCM models depending on their scope, objectives, modeling approach, solution methods and software used. Although over the last few years there has clearly been an increase in research papers on quantitative CSC models, we identify that the relevant literature is very fragmented, with limited applications of simulation, mathematical programming and simulation-based optimization. Most applications are project-specific or study only parts of the supply system. Thus, some complex interdependencies within construction are neglected and the implementation of integrated supply chain management is hindered. We conclude this paper by giving future research directions and emphasizing the need to develop optimization models for integrated CSCM. We stress that CSC modeling needs a multi-dimensional, system-wide and long-term perspective. Finally, prior applications of SCM to other industries have to be taken into account in order to model CSCs, but not without translating the generic concepts to the context of the construction industry.
Keywords: Construction supply chain management, modeling, operations research, optimization and simulation.
Production, Characterisation and Assessment of Biomixture Fuels for Compression Ignition Engine Application
Authors: K. Masera, A. K. Hossain
Abstract:
Hardly any neat biodiesel satisfies the European EN 14214 standard for compression ignition engine application. To satisfy the EN 14214 standard, various additives are doped into biodiesel; however, biodiesel additives might cause other problems, such as an increase in particulate emissions and increased specific fuel consumption. In addition, the additives can be expensive. Considering the increasing level of greenhouse gas (GHG) emissions and fossil fuel depletion, it is forecast that the use of biodiesel will be higher in the near future. Hence, the negative aspects of biodiesel additives are likely to gain much more importance and need to be replaced with better solutions. This study aims to satisfy the European standard EN 14214 by blending biodiesels derived from sustainable feedstocks. Waste Cooking Oil (WCO) and Animal Fat Oil (AFO) are two sustainable feedstocks in the EU (including the UK) for producing biodiesels. In the first stage of the study, these oils were transesterified separately and neat biodiesels (W100 & A100) were produced. Secondly, the biodiesels were blended together in various ratios: 80% WCO biodiesel and 20% AFO biodiesel (W80A20), 60% WCO biodiesel and 40% AFO biodiesel (W60A40), 50% WCO biodiesel and 50% AFO biodiesel (W50A50), 30% WCO biodiesel and 70% AFO biodiesel (W30A70), and 10% WCO biodiesel and 90% AFO biodiesel (W10A90). The prepared samples were analysed using a Thermo Scientific Trace 1300 Gas Chromatograph and ISQ LT Mass Spectrometer (GC-MS). The GC-MS analysis gave the Fatty Acid Methyl Ester (FAME) breakdowns of the fuel samples. It was found that the total saturation degree of the samples increased linearly (from 15% for W100 to 54% for A100) as the percentage of AFO biodiesel was increased. Furthermore, it was found that WCO biodiesel was mainly (82%) composed of polyunsaturated FAMEs. Cetane numbers, iodine numbers, calorific values, lower heating values and the densities (at 15 °C) of the samples were estimated by using the mass percentage data of the FAMEs. Besides, kinematic viscosities (at 40 °C and 20 °C), densities (at 15 °C), heating values and flash point temperatures of the biomixture samples were measured in the lab. It was found that the estimated and measured characterisation results were comparable. The current study concluded that the biomixture fuel samples W60A40 and W50A50 fully satisfied the European EN 14214 norms without any need for additives. Investigation of engine performance, exhaust emission and combustion characteristics will be conducted to assess the full feasibility of the proposed biomixture fuels.
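As an illustration of how blend properties can be estimated from the composition of the neat fuels, the sketch below assumes a simple mass-weighted mixing rule; the property values are hypothetical, and the authors' FAME-based estimation may be more detailed.

```python
# Minimal sketch, assuming a mass-weighted mixing rule: estimate a blend property
# (e.g., cetane number) of a WCO/AFO biodiesel mixture from the neat fuels' values.
def blend_property(value_wco: float, value_afo: float, frac_wco: float) -> float:
    """frac_wco is the WCO biodiesel mass fraction, e.g. 0.6 for W60A40."""
    return frac_wco * value_wco + (1.0 - frac_wco) * value_afo

# Hypothetical cetane numbers for neat W100 and A100:
print(blend_property(52.0, 60.0, frac_wco=0.6))   # estimate for W60A40
```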
Keywords: Biodiesel, blending, characterisation, CI Engine.
Advanced Compound Coating for Delaying Corrosion of Fast-Dissolving Alloy in High Temperature and Corrosive Environment
Authors: Lei Zhao, Yi Song, Tim Dunne, Jiaxiang (Jason) Ren, Wenhan Yue, Lei Yang, Li Wen, Yu Liu
Abstract:
Fast-dissolving magnesium (DM) alloy technology has contributed significantly to the “Shale Revolution” in the oil and gas industry. This application requires DM downhole tools to dissolve initially at a slow rate and then rapidly accelerate to a high rate after a certain period of operation time (typically 8 h to 2 days), a contradictory requirement that can hardly be addressed by traditional Mg alloying or processing itself. Premature disintegration has been broadly reported in downhole DM tools from field trials. To address this issue, “temporary” thin polymers of various formulations are currently coated onto the DM surface to delay its initial dissolution. Due to conveying parts, harsh downhole conditions, and the high dissolving rate of the base material, the current delay coatings relying on pure polymers are found to perform well only at low temperature (typically < 100 ℃) and on parts without sharp edges or corners, as severe geometries prevent high quality thin film coatings from forming effectively. In this study, a coating technology combining Plasma Electrolytic Oxide (PEO) coatings with advanced thin film deposition has been developed, which can delay the dissolution of complex DM parts (with sharp corners) in corrosive fluid at 150 ℃ for over 2 days. Synergistic effects between the porous hard PEO coating and the chemically inert elastic-polymer sealing lead to the improved dissolution delay, and strong chemical/physical bonding between these two layers has been found to play an essential role. The microstructure of this advanced coating and the compatibility between PEO and various polymer selections have been thoroughly investigated, and a model is also proposed to explain its delaying performance. This study could not only benefit the oil and gas industry in unplugging High Temperature High Pressure (HTHP) unconventional resources inaccessible before, but also potentially provides a technical route for other industries (e.g., bio-medical, automobile, aerospace) where primer anti-corrosive protection on light Mg alloys is highly demanded.
Keywords: Dissolvable magnesium, coating, plasma electrolytic oxide, sealer.
Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated high ability in discriminating various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA sequences, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time consuming and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training Deep Neural Networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings that create a meaningful and numerical representation of DNA sequences, while extracting features and reducing the dimensionality of the data. In this paper we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence on the prediction of each genome. Using two public real-life data-sets as well as a simulated one, we demonstrate that this original approach reaches performance comparable with the state-of-the-art methods applied directly on data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, the DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: Metagenomics, phenotype prediction, deep learning, embeddings, multiple instance learning.
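To make steps (i) and (iv) more concrete, the sketch below shows k-mer tokenization of a read and a toy attention pooling that combines per-read embedding vectors into one patient-level vector. It illustrates the general ideas only; it is not the authors' metagenome2vec code, and all names and sizes are assumptions.

```python
# Minimal sketch: (i) split a read into overlapping k-mers for a vocabulary, and
# a toy attention pooling that weights per-read embeddings into a single
# patient-level vector (multiple instance learning). Illustrative only.
import numpy as np

def kmers(read: str, k: int = 4) -> list:
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def attention_pool(read_embeddings: np.ndarray, w: np.ndarray) -> np.ndarray:
    scores = read_embeddings @ w                   # one attention score per read
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax attention weights
    return alpha @ read_embeddings                 # weighted patient embedding

print(kmers("ACGTACGT"))                           # ['ACGT', 'CGTA', ...]
emb = np.random.default_rng(1).normal(size=(5, 8)) # 5 reads, 8-dim embeddings
print(attention_pool(emb, w=np.ones(8)).shape)     # (8,)
```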
Obese and Overweight Women and Public Health Issues in Hillah City, Iraq
Authors: Amean A. Yasir, Zainab Kh. A. Al-Mahdi Al-Amean
Abstract:
In both developed and developing countries, obesity among women is increasing, but in different patterns and at very different speeds. It may have a negative effect on health, leading to reduced life expectancy and/or increased health problems. This research studied the age distribution among obese women, the types of overweight and obesity, the extent of the problem of overweight/obesity, and the obesity etiological factors among women in Hillah city in central Iraq. A total of 322 overweight and obese women were included in the study; these women were randomly selected. The Body Mass Index (BMI) was used as the indicator for overweight/obesity. The incidence of overweight/obesity among age groups was estimated, and the etiological factors included genetic, environmental, combined genetic/environmental and endocrine disease. The overweight and obese women were screened for the incidence of infection and/or disease. The study found that the prevalence of overweight and obesity among the 322 women in Hillah city in central Iraq was 19.25% and 80.78%, respectively. The obese women were classified based on BMI and the WHO classification as class-1 obesity (29.81%), class-2 obesity (24.22%) and class-3 obesity (26.70%), the discrepancy being non-significant (P value < 0.05). The incidence of overweight was high among women aged 20-29 years (90.32%), with 6.45% aged 30-39 years and 3.22% aged ≥ 60 years, while the incidence of obesity was 20.38% in the 20-29 years age group, 17.30% in the 30-39 years group, 23.84% in the 40-49 years group, 16.92% in the 50-59 years group and 21.53% in the ≥ 60 years age group. These results confirm that age can be considered a significant factor for obesity types (P value < 0.0001). The results also showed that both genetic and environmental factors were responsible for incidents of overweight or obesity (84.78%, p value < 0.0001). The results also recorded cases of different repeated infections (skin infection, recurrent UTI and influenza), cancer, gallstones, high blood pressure, type 2 diabetes, and infertility. Weight stigma and bias generally refer to negative attitudes towards obese individuals. Obesity can affect quality of life, and the results of this study recorded depression among overweight or obese women. This can lead to sexual problems, shame and guilt, social isolation and reduced work performance. Overweight and obesity are real problems among women of all age groups, are associated with the risk of disease and infection, and negatively affect quality of life. These results warrant further studies into the prevalence of obesity among women in Hillah city in central Iraq and the immune response of obese women.
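For reference, the sketch below applies the standard WHO BMI cut-offs behind the overweight and obesity classes reported above; the input values are illustrative.

```python
# Minimal sketch: WHO BMI categories used to group the participants
# (overweight and obesity classes 1-3). Cutoffs are the standard WHO values.
def bmi_category(mass_kg: float, height_m: float) -> str:
    bmi = mass_kg / height_m**2
    if bmi < 25:
        return "not overweight"
    if bmi < 30:
        return "overweight"
    if bmi < 35:
        return "class-1 obesity"
    if bmi < 40:
        return "class-2 obesity"
    return "class-3 obesity"

print(bmi_category(95.0, 1.60))   # illustrative values -> class-2 obesity
```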
Keywords: Obesity, overweight, Iraq, body mass index.
Multi-Agent Searching Adaptation Using Levy Flight and Inferential Reasoning
Authors: Sagir M. Yusuf, Chris Baber
Abstract:
In this paper, we describe how to achieve knowledge understanding and prediction (Situation Awareness, SA) for multiple agents conducting a searching activity using Bayesian inferential reasoning and learning. A Bayesian Belief Network was used to monitor the agents' knowledge about their environment, and cases are recorded for the network training using the expectation-maximisation or gradient descent algorithm. The well-trained network is then used for decision making and environmental situation prediction. Forest fire searching by multiple UAVs was the use case. UAVs are tasked to explore a forest and find a fire for urgent action by the fire wardens. The paper focused on two problems: (i) an effective agents’ path planning strategy and (ii) knowledge understanding and prediction (SA). The path planning problem, inspired by the animal mode of foraging using the Lévy distribution augmented with Bayesian reasoning, is fully described in this paper. Results show that the Lévy flight strategy performs better than the previous fixed-pattern (e.g., parallel sweeps) approaches in terms of energy and time utilisation. We also introduce a waypoint assessment strategy called k-previous waypoints assessment. It improves the performance of the ordinary Lévy flight by saving the agent’s resources and mission time through redundant search avoidance. The agents (UAVs) report their mission knowledge to a central server for interpretation and prediction purposes. Bayesian reasoning and learning were used for the SA, and the results show effectiveness in different environment scenarios in terms of prediction and effective knowledge representation. The prediction accuracy was measured using the learning error rate, logarithmic loss, and Brier score, and the results show that even a small amount of agent mission data can be used for prediction within the same or a different environment. Finally, we describe a situation-based knowledge visualization and prediction technique for heterogeneous multi-UAV missions. While this paper demonstrates the linkage of Bayesian reasoning and learning with SA and an effective searching strategy, future work will focus on simplifying the architecture.
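As an illustration of the Lévy-flight element of the strategy, the sketch below draws heavy-tailed step lengths with Mantegna's algorithm, a common way to generate Lévy-stable steps; it is not the authors' exact UAV planner, and the exponent value is an assumption.

```python
# Minimal sketch of Levy-flight step generation using Mantegna's algorithm:
# step = u / |v|^(1/beta), producing many short steps and occasional long jumps.
# Illustrative only; not the paper's implementation.
import math
import numpy as np

rng = np.random.default_rng(42)

def levy_step(beta: float = 1.5) -> float:
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u)
    v = rng.normal(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

print([round(levy_step(), 3) for _ in range(5)])
```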
Keywords: Lévy flight, situation awareness, multi-agent system, multi-robot coordination, autonomous system, swarm intelligence.
Prospects of Iraq’s Maritime Openness and Their Effect on Its Economy
Authors: Mohanad Hammad
Abstract:
Port institutions serve as a link connecting the land areas that receive goods and the areas from where ships sail. These areas hold great significance for the conversion of goods into commodities of economic value, capable of meeting the needs of society. The development of ports constitutes a fundamental component of the comprehensive economic development process. Recognizing this fact, developing countries have always resorted to this infrastructural element to resolve the numerous problems they face, taking into account its contribution to the reformation of their economic conditions. Iraqi ports have played a major role in boosting commercial movement in Iraq, as they are the starting point of its oil exports and a key constituent in fulfilling the consumer and production needs of the various economic sectors of Iraq. With the Gulf wars and the economic blockade, Iraqi ports have continued to deteriorate and have become unable to perform their functions as first-generation ports, prompting Iraq to use the ports of neighboring countries such as Jordan's Aqaba commercial port. Meanwhile, Iraqi ports face strong competition from the ports of neighboring countries, which have achieved progress and advancement as opposed to the declining performance and efficiency of Iraqi ports. The great developments in the economic conditions of Iraq place too great a burden on Iraqi maritime transport and ports, which require development in order to be able to meet the challenges arising from the fierce international and regional competition in the markets. Therefore, it is necessary to find appropriate solutions in support of the role that can be played by Iraqi ports in serving Iraq's foreign trade transported by sea and in keeping up with the development of foreign trade. Thus, this research aims at tackling the current situation of the Iraqi ports and their commercial activity and studying the problems and obstacles they face. The research also studies the future prospects of these ports, the potential of maritime openness for Iraq under the fierce competition of neighboring ports, and the possibility of enhancing Iraqi ports’ competitiveness. Among the results produced by this research is the future scenario it proposes for Iraqi ports, mainly represented in the establishment of Al-Faw Port, which will contribute to a greater openness of maritime transport in Iraq, and the rehabilitation and expansion of existing ports. This research seeks to develop solutions for Iraqi ports so that they can be repositioned as a vital means of promoting economic development.
Keywords: Transport, port, regional openness, development.
Exercise and Cognitive Function: Time Course of the Effects
Authors: Simon B. Cooper, Stephan Bandelow, Maria L. Nute, John G. Morris, Mary E. Nevill
Abstract:
Previous research has indicated a variable effect of exercise on adolescents’ cognitive function. However, comparisons between studies are difficult to make due to differences in: the mode, intensity and duration of exercise employed; the components of cognitive function measured (and the tests used to assess them); and the timing of the cognitive function tests in relation to the exercise. Therefore, the aim of the present study was to assess the time course (10 and 60 min post-exercise) of the effects of 15 min of intermittent exercise on cognitive function in adolescents. Forty-five adolescents were recruited to participate in the study and completed two main trials (exercise and resting) in a counterbalanced crossover design. Participants completed 15 min of intermittent exercise (in cycles of 1 min exercise, 30 s rest). A battery of computer-based cognitive function tests (Stroop test, Sternberg paradigm and visual search test) was completed 30 min pre- and 10 and 60 min post-exercise (to assess attention, working memory and perception, respectively). The findings of the present study indicate that, on the baseline level of the Stroop test, response times 10 min following exercise were slower than at any other time point on either trial (trial by session time interaction, p = 0.0308). However, this slowing of responses also tended to produce enhanced accuracy 10 min post-exercise on the baseline level of the Stroop test (trial by session time interaction, p = 0.0780). Similarly, on the complex level of the visual search test there was a slowing of response times 10 min post-exercise (trial by session time interaction, p = 0.0199). However, this was not coupled with an improvement in accuracy (trial by session time interaction, p = 0.2349). The mid-morning bout of exercise did not affect response times or accuracy across the morning on the Sternberg paradigm. In conclusion, the findings of the present study suggest an equivocal effect of exercise on adolescents' cognitive function. The mid-morning bout of exercise appears to cause a speed-accuracy trade-off immediately following exercise on the Stroop test (participants become slower but more accurate), whilst slowing response times on the visual search test and having no effect on performance on the Sternberg paradigm. Furthermore, this work highlights the importance of the timing of the cognitive function tests relative to the exercise and of the components of cognitive function examined in future studies.
Keywords: Adolescents, cognitive function, exercise.
Dynamic Simulation of IC Engine Bearings for Fault Detection and Wear Prediction
Authors: M. D. Haneef, R. B. Randall, Z. Peng
Abstract:
Journal bearings used in IC engines are prone to premature failures and are likely to fail earlier than their rated life due to highly impulsive and unstable operating conditions and frequent starts/stops. Vibration signature extraction and wear debris analysis techniques are prevalent in industry for condition monitoring of rotary machinery. However, both techniques involve a great deal of technical expertise, time, and cost. Limited literature is available on the application of these techniques for fault detection in reciprocating machinery, due to the complex nature of impact forces that confounds the extraction of fault signals for vibration-based analysis and wear prediction. In the present study, a simulation model was developed to investigate the bearing wear behaviour resulting from different operating conditions, to complement the vibration analysis. In the current simulation, the dynamics of the engine were established first, based on which the hydrodynamic journal bearing forces were evaluated by numerical solution of the Reynolds equation. In addition, the essential outputs of interest in this study, critical to determining wear rates, are the tangential velocity and oil film thickness between the journals and the bearing sleeve, which, if not maintained appropriately, have a detrimental effect on bearing performance. Archard’s wear prediction model was used in the simulation to calculate the wear rate of the bearings with specific location information, as all determinative parameters were obtained with reference to crank rotation. The oil film thickness obtained from the model was used as a criterion to determine whether the lubrication is sufficient to prevent contact between the journal and the bearing, which would cause accelerated wear. A limiting value of 1 μm was used as the minimum oil film thickness needed to prevent contact. The increased wear rate with growing severity of operating conditions is analogous and comparable to the rise in amplitude of the squared envelope of the referenced vibration signals. Thus, on one hand, the developed model demonstrates its capability to explain wear behaviour and, on the other hand, it also helps to establish a correlation between wear-based and vibration-based analysis. Therefore, the model provides a cost-effective and quick approach to predicting impending wear in IC engine bearings under various operating conditions.
Keywords: Condition monitoring, IC engine, journal bearings, vibration analysis, wear prediction.
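To make the wear criterion concrete, the sketch below applies Archard's wear law together with the 1 μm minimum-oil-film check mentioned above; the numerical inputs are illustrative, not engine data.

```python
# Minimal sketch: Archard's wear law (wear volume proportional to load times
# sliding distance over hardness) plus the 1-micron oil-film criterion used to
# flag metal-to-metal contact. Input values are illustrative only.
def archard_wear_volume(k: float, load_n: float, sliding_m: float, hardness_pa: float) -> float:
    return k * load_n * sliding_m / hardness_pa     # wear volume in m^3

def contact_expected(oil_film_m: float, limit_m: float = 1e-6) -> bool:
    return oil_film_m < limit_m                     # below ~1 um -> accelerated wear

print(archard_wear_volume(k=1e-7, load_n=5_000.0, sliding_m=100.0, hardness_pa=1.2e9))
print(contact_expected(0.8e-6))                     # True: film thinner than 1 um
```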
Piezoelectric Bimorph Harvester Based on Different Lead Zirconate Titanate Materials to Enhance Energy Collection
Authors: Irene Perez-Alfaro, Nieves Murillo, Carlos Bernal, Daniel Gil-Hernandez
Abstract:
Nowadays, the increasing applicability of Internet of Things (IoT) systems has changed the way the world around us is perceived. The massive interconnection of systems by means of sensing, processing and communication allows a multitude of data to be at our fingertips. In this way, countless advances have been made in different fields such as personal care, predictive maintenance in industry, quality control in production processes, security, and in everything imaginable. However, all these electronic systems have in common the need to be electrically powered. In this context, batteries and wires are the most commonly used solutions, but they are not a definitive solution in some applications, because of the attainability, the serviceability, or the performance requirements. Therefore, the need arises to look for other types of solutions based on energy harvesting and long-life electronics. Energy harvesting can be defined as the action of capturing energy from the environment and storing it for instantaneous or later use. Among the materials capable of harvesting energy from the environment, such as thermoelectrics, electromagnetics, photovoltaics or triboelectrics, the most suitable is the piezoelectric material. The phenomenon of piezoelectricity is one of the most powerful sources for energy harvesting, ranging from a few microwatts to hundreds of watts, depending on certain factors such as material type, geometry, excitation frequency, and mechanical and electrical configurations, among others. In this research work, an exhaustive study is carried out on how different types of piezoelectric materials and electrical configurations influence the maximum power that a bimorph harvester is able to extract from mechanical vibrations. A series of experiments has been carried out in which the manufactured bimorph specimens are excited under fixed inertial vibrational conditions. In addition, in order to evaluate the dependence of the maximum transferred power, different load resistors are tested. In this way, the pure active power that achieves the maximum power transfer can be approximated. In this paper, we present the design of low-cost energy harvesting solutions based on piezoelectric smart materials with tunable frequency. The results obtained show the differences in energy extraction between the PZT materials studied and their electrical configurations. The aim of this work is to gain a better understanding of the behavior of piezoelectric materials and the design process of bimorph PZT harvesters in order to optimize environmental energy extraction.
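As an illustration of the load-resistor sweep used to approximate the maximum transferred power, the sketch below assumes a simple resistive source model of the harvester; the source voltage and impedance are assumed values, not measurements.

```python
# Minimal sketch, assuming a resistive source model of the harvester: sweep load
# resistors and locate the one that maximises transferred power,
# P = V^2 * R_load / (R_source + R_load)^2, maximal near R_load = R_source.
import numpy as np

v_oc = 10.0         # open-circuit voltage amplitude, V (assumed)
r_source = 50e3     # harvester internal resistance, ohm (assumed)
r_loads = np.logspace(3, 6, 200)                   # 1 kOhm .. 1 MOhm sweep

power = v_oc**2 * r_loads / (r_source + r_loads) ** 2
best = r_loads[np.argmax(power)]
print(f"max power {power.max()*1e3:.2f} mW at R_load ~ {best/1e3:.0f} kOhm")
```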
Keywords: Bimorph harvesters, electrical impedance, energy harvesting, piezoelectric, smart material.
Capital Accumulation and Unemployment in Namibia, Nigeria, and South Africa
Authors: Abubakar Dikko
Abstract:
The research investigates the causes of unemployment in Namibia, Nigeria and South Africa and the role of capital accumulation in reducing the unemployment profile of these economies, as proposed by post-Keynesian economics. This is conducted through an extensive review of the literature on NAIRU models, focused on the post-Keynesian view of unemployment within the NAIRU framework. The NAIRU (non-accelerating inflation rate of unemployment) model has become a dominant framework used in the macroeconomic analysis of unemployment. The study adopts the post-Keynesian argument that capital accumulation is a major determinant of unemployment. Unemployment remains the fundamental socio-economic challenge facing African economies and has been a burden to the citizens of those economies. Namibia, Nigeria, and South Africa are great African nations battling with high unemployment rates. High unemployment has even led citizens to chase away foreigners, claiming that they have taken away their jobs. The study proposes that there is a strong relationship between capital accumulation and unemployment in Namibia, Nigeria, and South Africa, and that capital accumulation is responsible for the high unemployment rates in these countries. For the economies to achieve a steady-state level of employment and a satisfactory level of economic growth and development, capital accumulation needs to take place. The countries in the study have been selected after critical research and investigation, based on the following criteria: African economies with unemployment rates above 15% and with about 40% of their workforce unemployed, which is the critical level of unemployment in Africa as expressed by the International Labour Organization (ILO); and, finally, African countries experiencing slow growth in gross fixed capital formation. Adequate statistical measures have been employed using time-series analysis, and the results revealed that capital accumulation is the main driver of unemployment performance in the chosen African countries. An increase in the accumulation of capital causes unemployment to fall significantly. The results of the research work will be useful and relevant to the federal governments and the ministries, departments and agencies (MDAs) of Namibia, Nigeria and South Africa in resolving the issue of high and persistent unemployment rates in their economies, which is a great burden that slows the growth and development of developing economies. The results can also be useful to the World Bank, the African Development Bank and the International Labour Organization (ILO) in their further research and studies on how to tackle unemployment in developing and emerging economies.
Keywords: Capital accumulation, NAIRU, post-Keynesian economics, unemployment.
Clean Sky 2 – Project PALACE: Aeration’s Experimental Sound Velocity Investigations for High-Speed Gerotor Simulations
Authors: Benoît Mary, Thibaut Gras, Gaëtan Fagot, Yvon Goth, Ilyes Mnassri-Cetim
Abstract:
A Gerotor pump is composed of an external and an internal gear with conjugate cycloidal profiles. From the suction to the delivery ports, the fluid is transported inside cavities formed by the teeth and driven by the shaft. From a geometric and conceptual standpoint, it is worth noting that the internal gear has one tooth less than the external one. Simcenter Amesim v.16 includes a new submodel (THCDGP0) for modelling the hydraulic behaviour of Gerotor pumps. This submodel considers leakages between teeth tips using Poiseuille and Couette flow contributions. From the 3D CAD model of the studied pump, the “CAD import” tool extracts the main geometrical characteristics, and the submodel THCDGP0 computes the evolution of each cavity volume and their relative position according to the suction or delivery areas. This module, based on international publications, presents robust results up to 6 000 rpm for pressures greater than atmospheric level. For higher rotational speeds or lower pressures, oil aeration and cavitation effects are significant and sharply degrade the pump’s performance. The liquid used in hydraulic systems always contains some gas, which is dissolved in the liquid at high pressure and tends to be released in a free form (i.e. undissolved, as bubbles) when pressure drops. In addition to gas release and dissolution, the liquid itself may vaporize due to cavitation. To model the relative density of the equivalent fluid, a modified Henry’s law is applied in Simcenter Amesim v.16 to predict the fraction of undissolved gas or vapor. Three parietal pressure sensors have been set up upstream from the pump to estimate the speed of sound in the oil. Analytical models have been compared with the experimental sound speed to estimate the occluded gas content. The Simcenter Amesim v.16 model was fed with the findings of these analyses, which successfully improved the simulation results up to 14 000 rpm. This work provides a sound foundation for designing the next Gerotor pump generation, reaching a high rotation range of more than 25 000 rpm. The results of this improved module will be compared to tests on this new pump demonstrator.
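One analytical model of the kind compared with the measured sound speed is Wood's equation for a bubbly liquid, sketched below; the fluid properties are typical assumed values, and the study may have used a different formulation.

```python
# Minimal sketch of Wood's equation for a gas/liquid mixture: the mixture sound
# speed drops sharply with a small fraction of undissolved gas. Property values
# below are typical assumptions, not test data.
import math

def wood_sound_speed(alpha: float, rho_l=850.0, c_l=1400.0, rho_g=1.2, c_g=340.0) -> float:
    """alpha: volume fraction of free (undissolved) gas in the oil."""
    rho_mix = alpha * rho_g + (1 - alpha) * rho_l
    compress = alpha / (rho_g * c_g**2) + (1 - alpha) / (rho_l * c_l**2)
    return 1.0 / math.sqrt(rho_mix * compress)

for a in (0.0, 0.001, 0.01):
    print(a, round(wood_sound_speed(a), 1))   # even 1% gas collapses the sound speed
```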
Keywords: Gerotor pump, high speed, simulations, aeronautic, aeration, cavitation.
Utilization of Rice Husk Ash with Clay to Produce Lightweight Coarse Aggregates for Concrete
Authors: Shegufta Zahan, Muhammad A. Zahin, Muhammad M. Hossain, Raquib Ahsan
Abstract:
Rice Husk Ash (RHA) is one of the agricultural waste byproducts widely available in the world and contains a large amount of silica. In Bangladesh, stones cannot be used as coarse aggregate in infrastructure works as they are not available and need to be imported from abroad. As a result, bricks are mostly used as coarse aggregate in concrete as they are cheaper and easily produced here. Clay is the raw material for producing brick. Due to rapid urban growth and the industrial revolution, demand for brick is increasing, which leads to a decrease in topsoil. This study aims to produce lightweight block aggregates with sufficient strength utilizing RHA at low cost and to use them as an ingredient of concrete. RHA, because of its pozzolanic behavior, can be utilized to produce better quality block aggregates at lower cost, replacing clay content in the bricks. The whole study can be divided into three parts. In the first part, characterization tests on RHA and clay were performed to determine their properties. Six different types of RHA from different mills were characterized by XRD and SEM analysis. Their fineness was determined by conducting a fineness test. The XRD results confirmed the amorphous state of RHA. The characterization test for clay identified the sample as “silty clay” with a specific gravity of 2.59 and 14% optimum moisture content. In the second part, blocks were produced with the six different types of RHA in different combinations by volume with clay. The mixtures were manually compacted in molds before being subjected to oven drying at 120 °C for 7 days. After that, the dried blocks were placed in a furnace at 1200 °C to produce the final blocks. Loss on ignition, apparent density, crushing strength, efflorescence and absorption tests were conducted on the blocks to compare their performance with bricks. For 40% RHA, the crushing strength was found to be 60 MPa, whereas the crushing strength of brick was observed to be 48.1 MPa. In the third part, the crushed blocks were used as coarse aggregate in concrete cylinders and compared with brick concrete cylinders. Specimens were cured for 7 days and 28 days. The highest compressive strength of the block cylinders was 26.1 MPa for 7 days of curing and 34 MPa for 28 days of curing. On the other hand, for the brick cylinders, the compressive strength for 7 days and 28 days of curing was 20 MPa and 30 MPa, respectively. These research findings can help relieve the increasing demand on topsoil and also turn a waste product into a valuable one.
Keywords: Characterization, furnace, pozzolanic behavior, rice husk ash.
Surface Topography Assessment Techniques based on an In-process Monitoring Approach of Tool Wear and Cutting Force Signature
Authors: A. M. Alaskari, S. E. Oraby
Abstract:
The quality of a machined surface is becoming more and more important to justify the increasing demands of sophisticated component performance, longevity, and reliability. Usually, any machining operation leaves its own characteristic evidence on the machined surface in the form of finely spaced micro irregularities (surface roughness) left by the associated indeterministic characteristics of the different elements of the system: tool, machine, workpart and cutting parameters. However, one of the most influential sources in machining affecting surface roughness is the instantaneous state of the tool edge. The main objective of the current work is to relate the in-process immeasurable cutting edge deformation and surface roughness to more reliable, easy-to-measure force signals using robust non-linear time-dependent regression modeling techniques. Time-dependent modeling is beneficial when modern machining systems, such as adaptive control techniques, are considered, where the state of the machined surface and the health of the cutting edge are monitored, assessed and controlled online using real-time information provided by the variability encountered in the measured force signals. Correlation between wear propagation and roughness variation is developed throughout the different edge lifetimes. The surface roughness is further evaluated in the light of the variation in both the static and the dynamic force signals. Consistent correlation is found between surface roughness variation and tool wear progress within its initial and constant regions. At the first few seconds of cutting, the expected and well known trend of the effect of the cutting parameters is observed. Surface roughness is positively influenced by the level of the feed rate and negatively by the cutting speed. As cutting continues, roughness is affected, to different extents, by the rather localized wear modes either on the tool nose or on its flank areas. Moreover, it seems that roughness varies as the wear attitude transfers from one mode to another and, in general, it is shown that it improves as wear increases, but with possible corresponding workpart dimensional inaccuracy. The dynamic force signals are found reasonably sensitive to simulate either the progressive or the random modes of tool edge deformation. While the frictional force components, feeding and radial, are found informative regarding progressive wear modes, the vertical (power) component is found to be a more representative carrier of system instability resulting from the edge's random deformation.
Keywords: Dynamic force signals, surface roughness (finish), tool wear and deformation, tool wear modes (nose, flank)
A Self Supervised Bi-directional Neural Network (BDSONN) Architecture for Object Extraction Guided by Beta Activation Function and Adaptive Fuzzy Context Sensitive Thresholding
Authors: Siddhartha Bhattacharyya, Paramartha Dutta, Ujjwal Maulik, Prashanta Kumar Nandi
Abstract:
A multilayer self organizing neural network (MLSONN) architecture for binary object extraction, guided by a beta activation function and characterized by backpropagation of errors estimated from the linear indices of fuzziness of the network output states, is discussed. Since the MLSONN architecture is designed to operate in a single point fixed/uniform thresholding scenario, it does not take into cognizance the heterogeneity of image information in the extraction process. The performance of the MLSONN architecture with representative values of the threshold parameters of the beta activation function employed is also studied. A three layer bidirectional self organizing neural network (BDSONN) architecture comprising fully connected neurons, for the extraction of objects from a noisy background and capable of incorporating the underlying image context heterogeneity through variable and adaptive thresholding, is proposed in this article. The input layer of the network architecture represents the fuzzy membership information of the image scene to be extracted. The second layer (the intermediate layer) and the final layer (the output layer) of the network architecture deal with the self supervised object extraction task by bi-directional propagation of the network states. Each layer except the output layer is connected to the next layer following a neighborhood based topology. The output layer neurons are, in turn, connected to the intermediate layer following a similar topology, thus forming a counter-propagating architecture with the intermediate layer. The novelty of the proposed architecture is that the assignment/updating of the inter-layer connection weights is done using the relative fuzzy membership values at the constituent neurons in the different network layers. Another interesting feature of the network lies in the fact that the processing capabilities of the intermediate and the output layer neurons are guided by a beta activation function, which uses image context sensitive adaptive thresholding arising out of the fuzzy cardinality estimates of the different network neighborhood fuzzy subsets, rather than resorting to fixed and single point thresholding. An application of the proposed architecture for object extraction is demonstrated using a synthetic and a real life image. The extraction efficiency of the proposed network architecture is evaluated by a proposed system transfer index characteristic of the network.
Keywords: Beta activation function, fuzzy cardinality, multilayer self organizing neural network, object extraction.
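For reference, the sketch below computes the linear index of fuzziness, the quantity from which the MLSONN error estimates mentioned above are derived; this is the standard definition, and the network itself is not reproduced here.

```python
# Minimal sketch: linear index of fuzziness of a set of membership values,
# 2/n * sum(min(mu, 1 - mu)); 0 for a crisp output, close to 1 for a very
# fuzzy one. Standard definition; network details are not reproduced.
import numpy as np

def linear_index_of_fuzziness(mu) -> float:
    mu = np.asarray(mu, dtype=float)
    return float(2.0 / mu.size * np.minimum(mu, 1.0 - mu).sum())

print(linear_index_of_fuzziness([0.0, 1.0, 1.0, 0.0]))   # 0.0: crisp output
print(linear_index_of_fuzziness([0.5, 0.4, 0.6, 0.5]))   # 0.9: very fuzzy
```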
Effectiveness and Performance of Spatial Communication within Composite Interior Space: The Wayfinding System in the Saudi National Museum as a Case Study
Authors: Afnan T. Bagasi, Donia M. Bettaieb, Abeer Alsobahi
Abstract:
The wayfinding system affects the course of a museum journey for visitors, both directly and indirectly. The design aspects of this system play an important role, making it an effective communication system within the museum space. However, translating the concepts that pertain to its design, and which are based on integration and connectivity in museum space design, such as intelligibility, lacks customization in the form of specific design considerations with reference to the most important approaches. These approaches link the organizational and practical aspects to the semiotic and semantic aspects related to space syntax by targeting the visual and perceived consistency of visitors. In this context, the present study aims to identify how to apply the concept of intelligibility by employing integration and connectivity to design a wayfinding system in museums as a kind of composite interior space. Using the available plans and images to extrapolate the considerations used to design the wayfinding system in the Saudi National Museum as a case study, a descriptive analytical method was used to understand the basic organizational and morphological principles of the museum space through the main aspects of space design (the morphological and the pragmatic). The study’s methodology is thus based on the description and analysis of these principles at the level of the major morphological and pragmatic design layers (based on available pictures and diagrams), and on an inductive reading of the applied level of intelligibility in the spatial layout of the Hall of Islam and Arabia at the Saudi National Museum, within the framework of a case study, through the levels of verification of the properties of the concepts of connectivity and integration. The results indicated that the application of the characteristics of intelligibility is weak on both the pragmatic and morphological levels. Based on the concepts of connectivity and integration, we conclude the following: (1) a high level of reflection of the properties of connectivity at the pragmatic level; (2) a weak level of reflection of the properties of connectivity at the morphological level; and (3) weakness in the level of reflection of the properties of integration in the space sample as a result of a weakness in the application at the morphological and pragmatic levels. The study’s findings will assist designers, professionals, and researchers in the field of museum design in understanding the significance of the wayfinding system by delving into it through museum spaces and by highlighting the most essential aspects using a clear analytical method.
Keywords: wayfinding system, museum journey, intelligibility, integration, connectivity, interior design
Bidirectional Pendulum Vibration Absorbers with Homogeneous Variable Tangential Friction: Modelling and Design
Authors: Emiliano Matta
Abstract:
Passive resonant vibration absorbers are among the most widely used dynamic control systems in civil engineering. They typically consist of a single-degree-of-freedom mechanical appendage of the main structure, tuned to one structural target mode through frequency and damping optimization. One classical scheme is the pendulum absorber, whose mass is constrained to move along a curved trajectory and is damped by viscous dashpots. Even though the principle is well known, the search for improved arrangements is still under way. In recent years this investigation inspired a type of bidirectional pendulum absorber (BPA), consisting of a mass constrained to move along an optimal three-dimensional (3D) concave surface. For such a BPA, the surface principal curvatures are designed to ensure a bidirectional tuning of the absorber to both principal modes of the main structure, while damping is produced either by horizontal viscous dashpots or by vertical friction dashpots, connecting the BPA to the main structure. In this paper, a variant of the BPA is proposed, where damping originates from the variable tangential friction force which develops between the pendulum mass and the 3D surface as a result of a spatially-varying friction coefficient pattern. Namely, a friction coefficient is proposed that varies along the pendulum surface in proportion to the modulus of the 3D surface gradient. With such an assumption, the dissipative model of the absorber can be proven to be nonlinear homogeneous in the small displacement domain. The resulting homogeneous BPA (HBPA) has a fundamental advantage over conventional friction-type absorbers, because its equivalent damping ratio turns out to be independent of the amplitude of oscillations, and therefore its optimal performance does not depend on the excitation level. On the other hand, the HBPA is more compact than viscously damped BPAs because it does not need the installation of dampers. This paper presents the analytical model of the HBPA and an optimal methodology for its design. Numerical simulations of single- and multi-story building structures under wind and earthquake loads are presented to compare the HBPA with classical viscously damped BPAs. It is shown that the HBPA is a promising alternative to existing BPA types and that homogeneous tangential friction is an effective means of realizing systems provided with amplitude-independent damping.
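To illustrate the proposed friction pattern, the sketch below evaluates a coefficient proportional to the modulus of the surface gradient on an assumed paraboloid surface; the surface shape and the proportionality constant are illustrative assumptions, not the paper's optimized design.

```python
# Minimal sketch of the friction pattern described in the text: a coefficient
# proportional to the modulus of the 3D surface gradient, evaluated on an
# assumed paraboloid z = x^2/(2*rx) + y^2/(2*ry) with constant c (assumptions).
import numpy as np

def friction_coefficient(x: float, y: float, rx: float = 2.0, ry: float = 3.0,
                         c: float = 0.05) -> float:
    dzdx, dzdy = x / rx, y / ry                 # gradient of the assumed surface
    return c * np.hypot(dzdx, dzdy)             # mu grows away from the rest position

print(friction_coefficient(0.0, 0.0))           # 0 at the centre: no friction force
print(round(friction_coefficient(0.4, 0.3), 4)) # larger mu away from the centre
```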
Keywords: Amplitude-independent damping, Homogeneous friction, Pendulum nonlinear dynamics, Structural control, Vibration resonant absorbers.
25 Study of Polyphenol Profile and Antioxidant Capacity in Italian Ancient Apple Varieties by Liquid Chromatography
Authors: A. M. Tarola, R. Preti, A. M. Girelli, P. Campana
Abstract:
Safeguarding, studying and enhancing biodiversity play an important and indispensable role in re-launching agriculture. Ancient local varieties are therefore a precious resource for genetic and health improvement. In order to protect biodiversity through the recovery and valorization of autochthonous varieties, in this study we analyzed 12 samples of four ancient apple cultivars representative of Friuli Venezia Giulia, selected by local farmers involved in a project for the recovery of ancient apple cultivars. The aim of this study is to evaluate the polyphenolic profile and the antioxidant capacity that characterize the organoleptic and functional qualities of this fruit species, in addition to its beneficial health properties. In particular, for each variety, the following compounds were analyzed, both in the skin and in the pulp, to highlight any differences between the edible parts of the apple: gallic acid, catechin, chlorogenic acid, epicatechin, caffeic acid, coumaric acid, ferulic acid, rutin, phlorizin, phloretin and quercetin. The analysis of individual phenolic compounds was performed by High Performance Liquid Chromatography (HPLC) coupled with a diode array UV detector (DAD), the antioxidant capacity was estimated using an in vitro assay based on a free radical scavenging method, and the total phenolic content was determined using the Folin-Ciocalteu method. The results show that catechins are the most abundant polyphenols, reaching values of 140-200 μg/g in the pulp and 400-500 μg/g in the skin, with epicatechin prevailing. Catechins and phlorizin, a dihydrochalcone typical of apples, are always present in larger quantities in the peel. Total phenolic content was positively correlated with antioxidant activity in both apple pulp (r² = 0.850) and peel (r² = 0.820). Comparison of the results highlighted differences between the varieties analyzed and between the edible parts (pulp and peel) of the apple. In particular, the apple peel is richer in polyphenolic compounds than the pulp, and flavonols are present exclusively in the peel. In conclusion, polyphenols, being antioxidant substances, confirm the benefits of fruit in the diet, especially in the prevention and treatment of degenerative diseases. They also proved to be a good marker for the characterization of different apple cultivars. The importance of protecting biodiversity in agriculture was also highlighted, through the exploitation of native products and ancient, now forgotten, apple varieties.
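The reported r² values express a simple linear correlation between total phenolic content and antioxidant capacity; the snippet below shows how such a coefficient is typically computed, using placeholder numbers rather than the measured dataset of the study.

```python
# Illustrative r^2 calculation for the phenolics-antioxidant relationship.
# The arrays below are placeholder values, not the study's measurements.
import numpy as np

total_phenolics = np.array([310., 355., 402., 460., 515., 580.])   # hypothetical, mg GAE/100 g
antioxidant     = np.array([1.9,  2.3,  2.6,  3.1,  3.3,  3.9])    # hypothetical, mmol TE/100 g

slope, intercept = np.polyfit(total_phenolics, antioxidant, 1)      # least-squares line
predicted = slope * total_phenolics + intercept
ss_res = np.sum((antioxidant - predicted) ** 2)
ss_tot = np.sum((antioxidant - antioxidant.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"r^2 = {r_squared:.3f}")   # strength of the linear relationship
```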
Keywords: Apple, biodiversity, polyphenols, antioxidant activity, HPLC-DAD, characterization.
24 Green Synthesis of Nanosilver-Loaded Hydrogel Nanocomposites for Antibacterial Application
Authors: D. Berdous, H. Ferfera-Harrar
Abstract:
Superabsorbent polymers (SAPs), or hydrogels with a three-dimensional hydrophilic network structure, are high-performance water-absorbent and water-retention materials. The in situ synthesis of metal nanoparticles within a polymeric network as antibacterial agents for bio-applications is an approach that takes advantage of the free space existing within the network, which not only acts as a template for the nucleation of nanoparticles, but also provides long-term stability and reduces their toxicity by delaying their oxidation and release. In this work, SAP/nanosilver nanocomposites were successfully developed by a unique green process at room temperature, which involves the in situ formation of silver nanoparticles (AgNPs) within hydrogels used as templates. The aim of this study is to investigate whether these AgNPs-loaded hydrogels are potential candidates for antimicrobial applications. Firstly, the superabsorbents were prepared through radical copolymerization via grafting and crosslinking of acrylamide (AAm) onto a chitosan backbone (Cs), using potassium persulfate as initiator and N,N'-methylenebisacrylamide as the crosslinker. They were then hydrolyzed to obtain superabsorbents with ampholytic properties and maximum swelling capacity. Lastly, the AgNPs were biosynthesized and entrapped into the hydrogels through a simple, eco-friendly and cost-effective method using aqueous silver nitrate as the silver precursor and Curcuma longa tuber-powder extracts as both reducing and stabilizing agent. The formed superabsorbent nanocomposites (Cs-g-PAAm)/AgNPs were characterized by X-ray Diffraction (XRD), UV-visible Spectroscopy, Attenuated Total Reflectance Fourier Transform Infrared Spectroscopy (ATR-FTIR), Inductively Coupled Plasma (ICP), and Thermogravimetric Analysis (TGA). The microscopic surface structure analyzed by Transmission Electron Microscopy (TEM) showed spherical AgNPs with sizes in the range of 3-15 nm. The extent of nanosilver loading decreased with increasing Cs content in the network. The silver-loaded hydrogel was thermally more stable than its unloaded dry counterpart. The swelling equilibrium degree (Q) and centrifuge retention capacity (CRC) in deionized water were affected by both the Cs content and the entrapped AgNPs. The nanosilver-embedded hydrogels exhibited antibacterial activity against Escherichia coli and Staphylococcus aureus bacteria. These comprehensive results suggest that the elaborated AgNPs-loaded nanomaterials could be used to produce valuable wound dressings.
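For reference, the swelling equilibrium degree (Q) and centrifuge retention capacity (CRC) are commonly obtained gravimetrically, as in the short sketch below; the weights used are hypothetical and only illustrate the calculation.

```python
# Typical gravimetric definitions of Q and CRC; sample weights are hypothetical,
# not measurements from the study.
def swelling_degree(w_swollen_g: float, w_dry_g: float) -> float:
    """Equilibrium swelling degree Q in grams of absorbed water per gram of dry hydrogel."""
    return (w_swollen_g - w_dry_g) / w_dry_g

def centrifuge_retention(w_after_centrifuge_g: float, w_dry_g: float) -> float:
    """CRC: water retained per gram of dry gel after centrifugation of the swollen sample."""
    return (w_after_centrifuge_g - w_dry_g) / w_dry_g

# Hypothetical Cs-g-PAAm/AgNPs sample: 0.10 g of dry gel swollen in deionized water.
print(f"Q   = {swelling_degree(45.2, 0.10):.0f} g/g")
print(f"CRC = {centrifuge_retention(31.6, 0.10):.0f} g/g")
```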
Keywords: Antibacterial activity, nanocomposites, silver nanoparticles, superabsorbent hydrogel.
23 A Grid Synchronization Method Based on Adaptive Notch Filter for SPV System with Modified MPPT
Authors: Priyanka Chaudhary, M. Rizwan
Abstract:
This paper presents a grid synchronization technique based on an adaptive notch filter for an SPV (Solar Photovoltaic) system, together with MPPT (Maximum Power Point Tracking) techniques. An efficient grid synchronization technique offers proficient detection of the various components of the grid signal, such as phase and frequency, and also acts as a barrier against harmonics and other disturbances in the grid signal. A reference phase signal synchronized with the grid voltage is provided by the grid synchronization technique to make the system comply with grid codes and power quality standards. Hence, the grid synchronization unit plays an important role in grid-connected SPV systems. Since the output of the PV array fluctuates with meteorological parameters such as irradiance, temperature and wind, MPPT control is required to track the maximum power point of the PV array and maintain a constant DC voltage at the VSC (Voltage Source Converter) input. In this work, a variable step-size P&O (Perturb and Observe) MPPT technique with a DC/DC boost converter has been used at the first stage of the system. This algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e. zone 0, zone 1 and zone 2. A fine tracking step size is used in zone 0, while zone 1 and zone 2 require a large step size in order to obtain a high tracking speed. Further, an adaptive notch filter (ANF) based control technique is proposed for the VSC in the PV generation system. The ANF approach is used to synchronize the interfaced PV system with the grid in order to maintain the amplitude, phase and frequency parameters as well as to improve power quality. This technique offers compensation of harmonic currents and reactive power with both linear and nonlinear loads. To maintain a constant DC-link voltage, a PI controller is also implemented and presented in this paper. The complete system has been designed, developed and simulated using the SimPowerSystems and Simulink toolboxes of MATLAB. The performance analysis of the three-phase grid-connected solar photovoltaic system has been carried out on the basis of various parameters such as PV output power, PV voltage, PV current, DC-link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current, and the power supplied by the voltage source converter. The results obtained from the proposed system are found satisfactory.
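A minimal sketch of the zone-based variable step-size P&O logic described above follows; the zone thresholds on |dP/dV| and the step sizes are illustrative assumptions, not the tuned values of the paper.

```python
# Sketch of a variable step-size Perturb and Observe update: the magnitude of dP/dV
# selects one of three zones, with a fine step near the MPP (zone 0) and larger steps
# farther away (zones 1 and 2). Thresholds and step sizes are illustrative only.
def pando_step(v_prev, p_prev, v_now, p_now,
               th1=0.5, th2=2.0, steps=(0.1, 0.5, 1.0)):
    """Return the next voltage perturbation (V) for the boost-converter reference."""
    dv = v_now - v_prev
    dp = p_now - p_prev
    slope = dp / dv if abs(dv) > 1e-9 else 0.0

    # Zone selection from |dP/dV|: a small slope means the operating point is near the MPP.
    if abs(slope) < th1:
        step = steps[0]          # zone 0: fine step for low oscillation around the MPP
    elif abs(slope) < th2:
        step = steps[1]          # zone 1: larger step
    else:
        step = steps[2]          # zone 2: largest step for fast tracking

    # Classic P&O direction logic: keep perturbing uphill on the P-V curve.
    if dp == 0:
        return 0.0
    direction = 1.0 if (dp > 0) == (dv > 0) else -1.0
    return direction * step

# Example: previous sample (29.0 V, 180 W), current sample (29.5 V, 181 W).
print(f"next voltage perturbation: {pando_step(29.0, 180.0, 29.5, 181.0):+.2f} V")
```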
Keywords: Solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique.
22 Concept of a Pseudo-Lower Bound Solution for Reinforced Concrete Slabs
Authors: M. De Filippo, J. S. Kuang
Abstract:
In the construction industry, reinforced concrete (RC) slabs represent fundamental elements of buildings and bridges. Different methods are available for analysing the structural behaviour of slabs. In the early decades of the last century, the yield-line method was proposed as an attempt to solve this problem. Problems with simple geometry could easily be solved by traditional hand analyses incorporating plasticity theories. Nowadays, advanced finite element (FE) analyses have found their way into applications in many engineering fields owing to the wide range of geometries to which they can be applied. In such analyses, the choice of an elastic or a plastic constitutive model completely changes the approach of the analysis itself. Elastic methods are popular due to their easy applicability to automated computations. However, elastic analyses are limited since they do not consider any aspect of material behaviour beyond the yield limit, which turns out to be an essential aspect of RC structural performance. Non-linear analyses modelling plastic behaviour, by contrast, give very reliable results; per contra, this type of analysis is computationally quite expensive, i.e. not well suited to solving everyday engineering problems. In past years, many researchers have worked on filling this gap between easy-to-implement elastic methods and computationally complex plastic analyses. This paper aims at proposing a numerical procedure through which a pseudo-lower bound solution, not violating the yield criterion, is achieved. The advantages of moment distribution are taken into account, hence the increase in strength provided by plastic behaviour is considered. The lower bound solution is improved by detecting over-yielded moments, which are used to artificially govern the moment distribution among the remaining non-yielded elements. The proposed technique obeys Nielsen's yield criterion. The outcome of this analysis provides a simple, accurate and non-time-consuming tool for predicting the lower-bound solution of the collapse load of RC slabs. By using this method, structural engineers can find the fracture patterns and the ultimate load-bearing capacity. The collapse-triggering mechanism is found by detecting yield-lines. An application to the simple case of a square clamped slab is shown, and a good match was found with the exact values of the collapse load.
Keywords: Computational mechanics, lower bound method, reinforced concrete slabs, yield-line.
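The redistribution idea can be illustrated with a deliberately simplified scalar sketch: moments exceeding the yield value are clipped and the excess is reassigned to elements that still have reserve capacity. The actual procedure works on the full (mx, my, mxy) field, enforces equilibrium and checks Nielsen's criterion; none of that is reproduced here, and the numbers are arbitrary.

```python
# Conceptual scalar illustration of moment redistribution: clip over-yielded moments
# and spread the excess to elements with reserve capacity, in proportion to that
# reserve. Equilibrium corrections and Nielsen's biaxial yield check are omitted.
import numpy as np

def redistribute(moments, m_yield, tol=1e-9, max_iter=100):
    m = np.asarray(moments, dtype=float).copy()
    for _ in range(max_iter):
        excess = np.clip(m - m_yield, 0.0, None)        # over-yielded portion per element
        if excess.sum() < tol:
            break                                        # nothing left to redistribute
        m = np.minimum(m, m_yield)                       # clip to the yield value
        reserve = np.clip(m_yield - m, 0.0, None)        # remaining capacity elsewhere
        if reserve.sum() < tol:
            break                                        # no capacity left: collapse reached
        m += excess.sum() * reserve / reserve.sum()      # spread the excess proportionally
    return m

elastic_moments = [12.0, 30.0, 18.0, 26.0, 9.0]          # kNm/m from an elastic run (arbitrary)
print(redistribute(elastic_moments, m_yield=25.0))
```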
21 Application of Artificial Intelligence to Schedule Operability of Waterfront Facilities in Macro Tide Dominated Wide Estuarine Harbour
Authors: A. Basu, A. A. Purohit, M. M. Vaidya, M. D. Kudale
Abstract:
Mumbai has traditionally been the epicenter of India's trade and commerce, and the existing major ports situated in the Thane estuary, namely Mumbai Port and Jawaharlal Nehru Port (JN), are also developing their waterfront facilities. Various developments in this region over the past decades have changed the tidal flux entering and leaving the estuary. The intake at Pir-Pau faces a shortage of water owing to the advancing shoreline, while the jetty near Ulwe faces ship-scheduling problems due to the shallower depths between JN Port and Ulwe Bunder. Solving these problems requires information on tide levels over a long duration, obtained through field measurements. However, field measurement is a tedious and costly affair; therefore, artificial intelligence was applied to predict water levels by training networks on the tide data measured over one lunar tidal cycle. Two-layered feed-forward Artificial Neural Networks (ANNs) with back-propagation training algorithms, namely Gradient Descent (GD) and Levenberg-Marquardt (LM), were used to predict the yearly tide levels at the waterfront structures at Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe and Vashi over one lunar tidal cycle (2013) were used to train, validate and test the neural networks. The trained networks, which achieved high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tide for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give a reasonably accurate estimation of the tide. Hence, the trained network was used to predict the yearly tide data (2015) for Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau were predicted using a neural network trained with the measured tide data (2000) of Apollo and Pir-Pau. The analysis of the measured data and the study reveal the following. The measured tidal data at Pir-Pau, Vashi and Ulwe indicate a maximum amplification of the tide of about 10-20 cm, with a phase lag of 10-20 minutes with reference to the tide at Apollo Bunder (Mumbai). The LM training algorithm is faster than GD, and the performance of the network increases with the number of neurons in the hidden layer. The tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water levels, which can be used to plan pumping operations at Pir-Pau and improve ship scheduling at Ulwe.
Keywords: Artificial neural network, back-propagation, tide data, training algorithm.
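The sketch below mimics the prediction setup on synthetic data: a small feed-forward network learns the mapping from a reference-station tide to an amplified, lagged target-station tide. Scikit-learn's gradient-based MLPRegressor stands in for the paper's GD/LM-trained network (Levenberg-Marquardt would require, e.g., MATLAB's Neural Network Toolbox or a custom routine), and the sinusoidal "tides" are stand-ins for the measured lunar-cycle records.

```python
# Conceptual sketch: map a reference-station tide (Apollo Bunder in the study) to a
# target-station tide (Ulwe/Pir-Pau) with a small feed-forward network.
# Synthetic M2-only tides replace the measured data; parameters are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

t = np.arange(0, 30 * 24, 0.5)                                # half-hourly samples, ~1 lunar month (h)
ref = 2.0 * np.sin(2 * np.pi * t / 12.42)                     # reference-station tide (m)
target = 1.10 * 2.0 * np.sin(2 * np.pi * (t - 0.25) / 12.42)  # ~10% amplified, ~15 min lag

# Use the current and two previous reference readings as input features.
X = np.column_stack([ref[2:], ref[1:-1], ref[:-2]])
y = target[2:]

model = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                     solver="adam", max_iter=3000, random_state=0)
model.fit(X, y)
r = np.corrcoef(model.predict(X), y)[0, 1]
print(f"training correlation R = {r:.3f}")                    # the study reports R ~ 0.998
```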
20 Self-Sensing Concrete Nanocomposites for Smart Structures
Authors: A. D'Alessandro, F. Ubertini, A. L. Materazzi
Abstract:
In the field of civil engineering, Structural Health Monitoring is a topic of growing interest. Effective monitoring instruments permit control of the working conditions of structures and infrastructures through the identification of behavioral anomalies due to incipient damage, especially in areas of high environmental hazard such as earthquake-prone regions. While traditional sensors can be applied only at a limited number of points, providing partial information for structural diagnosis, novel transducers may allow diffuse sensing. Thanks to the new tools and materials provided by nanotechnology, new types of multifunctional sensors are emerging. In particular, cement-matrix composite materials capable of diagnosing their own state of strain and stress can be obtained by the addition of specific conductive nanofillers. Because of the nature of the material they are made of, these new nano-modified cementitious transducers can be embedded within concrete elements, transforming the structures themselves into sets of distributed sensors. This paper presents the results of research on a new self-sensing nanocomposite and on the implementation of smart sensors for Structural Health Monitoring. The developed nanocomposite was obtained by inserting multi-walled carbon nanotubes into a cementitious matrix. The insertion of such conductive carbon nanofillers provides the base material with piezoresistive characteristics and a marked sensitivity to mechanical changes. The self-sensing ability is achieved by correlating the variation of external stress or strain with the variation of certain electrical properties, such as electrical resistance or conductivity. Through the measurement of such electrical characteristics, the performance and the working conditions of an element or a structure can be monitored. Among conductive carbon nanofillers, carbon nanotubes appear particularly promising for the realization of self-sensing cement-matrix materials. Some issues, related to the nanofiller dispersion and to the influence of the amount of nano-inclusions in the cement matrix, need to be carefully investigated, since the strain sensitivity of the resulting sensors is influenced by such factors. This work analyzes the dispersion of the carbon nanofillers, the physical properties of the fresh mix, the electrical properties of the hardened composites and the sensing properties of the realized sensors. The experimental campaign focuses specifically on their dynamic characterization and their applicability to the monitoring of full-scale elements. The results of the electromechanical tests with both slowly varying and dynamic loads show that the developed nanocomposite sensors can be effectively used for the health monitoring of structures.
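The self-sensing principle reduces, in its simplest form, to inferring strain from the fractional resistance change through a gauge factor; the short sketch below illustrates this, with a placeholder gauge factor and resistance values that are not calibration data from the study.

```python
# Minimal illustration of piezoresistive self-sensing: strain is estimated from the
# fractional change in electrical resistance via a gauge factor. Values are placeholders.
def strain_from_resistance(r_now_ohm: float, r_unstrained_ohm: float,
                           gauge_factor: float = 150.0) -> float:
    """Axial strain estimated from the fractional resistance variation (dR/R) / GF."""
    fractional_change = (r_now_ohm - r_unstrained_ohm) / r_unstrained_ohm
    return fractional_change / gauge_factor

# Example: a CNT-cement sensor whose resistance drops from 12.00 kOhm to 11.97 kOhm
# under compression (resistance typically decreases in compression).
eps = strain_from_resistance(11_970.0, 12_000.0)
print(f"estimated strain = {eps * 1e6:.0f} microstrain")
```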
Keywords: Carbon nanotubes, self-sensing nanocomposites, smart cement-matrix sensors, structural health monitoring.
19 The Resource-Based View of Organization and Innovation: Recognition of Significant Relationship in an Organization
Authors: Francis Deinmodei W. Poazi, Jasmine O. Tamunosiki-Amadi, Maurice Fems
Abstract:
In recent times, the resource-based view (RBV) of strategic management has attracted sizeable attention, yet scholarly and managerial discourse and debate on it remain limited. This paper therefore offers critical reasoning on, and analysis of, the relationship between RBV and organizational innovation. The study examines the salient aspects of RBV that underpin an organization's capacity for innovation. To this end, the paper draws on relevant academic discourse and empirical evidence, thereby laying the groundwork for future empirical research. The study is built on the following premises. Firstly, RBV regards resources as heterogeneous, which allows organisations to gain competitive advantage; in other words, competitive advantage is achieved when resources are used distinctively and more valuably than those of the organization's envisaged competitors. Secondly, RBV helps determine the resources actually available in the organization and locate the capabilities within them, so that greater profitability, sustainable growth and success can be attained in an ever more competitive and emerging market. Methodologically, the study adopts both qualitative and quantitative approaches in order to gather a broad sample of opinion for establishing and identifying key strategic organizational resources that enable managers of resources to gain competitive advantage and generate sustainable growth in profit. Furthermore, a comparative approach and analysis were used to examine the performance of RBV within the organization. The findings include the following: there is a clear nexus between RBV and the growth of competitively viable organizations; most, though not all, organizations possess heterogeneous resources, yet few deliberately and intelligently adopt the tenets of RBV to strengthen that heterogeneity and thereby gain competitive advantage. Other findings concern managerial perceptions of RBV with respect to the application and transformation of resources toward profitable ends. Against this backdrop, the importance of RBV cannot be overemphasized; the study holds that RBV is a focal and distinct inside-out strategic approach, one that encourages sourcing or generating resources internally and applying those internally sourced resources diligently to increase or gain competitive advantage.
Keywords: Competitive advantage, innovation, organisation, recognition, resource-based view.
18 Solid State Drive End to End Reliability Prediction, Characterization and Control
Authors: Mohd Azman Abdul Latif, Erwan Basiron
Abstract:
A flaw in, or drift from, the expected operational performance of one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. It is therefore important to ensure the required quality of each individual component through qualification testing specified by standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team, drawn from all the key stakeholders, embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustained in mass production. This paper discusses a comprehensive development framework that covers the SSD end to end, from design to assembly, in-line inspection and in-line testing, and that is able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD goes through an intense reliability-margin investigation focused on assembly process attributes, process equipment control, in-process metrology, and a forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build a reliability prediction model. Next, for design validation, the reliability prediction tool, specifically a solder joint simulator, is established. The SSDs are stratified into non-operating and operating tests focused on solder joint reliability and connectivity/component latent failures, with prevention through design intervention and containment through the Temperature Cycle Test (TCT). Some of the SSDs are subjected to physical solder joint analysis, namely Dye and Pry (DP) and cross-section analysis. The results are fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven to work, the monitor phase is implemented, whereby Design for Assembly (DFA) rules are updated. At this stage, the design change, process and equipment parameters are under control. Predictable product reliability early in product development enables on-time delivery of qualification samples to customers, optimizes product development validation and development resources, and avoids forced late investment to patch end-of-life product failures. Understanding the critical-to-reliability parameters earlier allows focus on increasing product margin, which increases customer confidence in product reliability.
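Although the paper relies on its own solder-joint simulator, a common back-of-the-envelope companion to TCT planning is a Norris-Landzberg acceleration-factor estimate relating test cycles to field cycles; the sketch below uses classic illustrative constants and assumed temperature profiles, and is not the methodology of the paper.

```python
# Norris-Landzberg acceleration factor for thermal cycling of solder joints.
# Constants n, m and Ea are the classic illustrative (SnPb-era) values; the temperature
# profiles are assumed examples, not conditions from the study.
import math

K_BOLTZMANN_EV = 8.617e-5  # eV/K

def norris_landzberg_af(dT_test, dT_field, f_test, f_field,
                        Tmax_test_C, Tmax_field_C,
                        n=1.9, m=1.0 / 3.0, Ea_eV=0.12):
    """Acceleration factor of the test cycle relative to the field cycle."""
    thermal = (dT_test / dT_field) ** n                          # temperature-swing term
    freq = (f_field / f_test) ** m                               # cycling-frequency term
    arrhenius = math.exp(Ea_eV / K_BOLTZMANN_EV *
                         (1.0 / (Tmax_field_C + 273.15) - 1.0 / (Tmax_test_C + 273.15)))
    return thermal * freq * arrhenius

# Example: -40/125 C test cycles (2 cycles/day) vs. a 25/60 C field profile (1 cycle/day).
af = norris_landzberg_af(dT_test=165, dT_field=35, f_test=2, f_field=1,
                         Tmax_test_C=125, Tmax_field_C=60)
print(f"acceleration factor ~ {af:.0f}x")
```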
Keywords: e2e reliability prediction, SSD, TCT, Solder Joint Reliability, NUDD, connectivity issues, qualifications, characterization and control.
17 Complementing Assessment Processes with Standardized Tests: A Work in Progress
Authors: Amparo Camacho
Abstract:
ABET-accredited programs must assess the development of student learning outcomes (SOs) in engineering programs. Different institutions implement different strategies for this assessment, and they are usually designed "in house." This paper presents a proposal for including standardized tests to complement the ABET assessment model in an engineering college made up of six distinct engineering programs. The engineering college formulated a model of quality assurance in education, to be implemented throughout the six engineering programs, to regularly assess and evaluate the achievement of SOs in each program offered. The model uses diverse techniques and sources of data to assess student performance and to implement actions of improvement based on the results of this assessment. The model is called the "Assessment Process Model" and it includes SOs A through K, as defined by ABET. SOs can be divided into two categories: "hard skills" and "professional skills" (soft skills). The first category includes abilities such as applying knowledge of mathematics, science, and engineering and designing and conducting experiments, as well as analyzing and interpreting data. The second category, "professional skills", includes communicating effectively and understanding professional and ethical responsibility. Within the Assessment Process Model, various tools were used to assess SOs related to both "hard" and "soft" skills. The assessment tools designed included rubrics, surveys, questionnaires, and portfolios. In addition to these instruments, the Engineering College decided to use tools that systematically gather consistent quantitative data. For this reason, an in-house exam was designed and implemented, based on the curriculum of each program. Even though this exam has been administered over various academic periods, it is not currently considered standardized. In 2017, the Engineering College included three standardized tests: one to assess mathematical and scientific reasoning and two more to assess reading and writing abilities. With these exams, the college hopes to obtain complementary information that can help better measure the development of both the hard and soft skills of students in the different engineering programs. In the first semester of 2017, the three exams were given to three sample groups of students from the six engineering programs. Students in the sample groups were drawn from the first, fifth, and tenth semester cohorts. At the time of submission of this paper, the engineering college has descriptive statistical data and is working with statisticians on a more in-depth and detailed analysis of the sample students' achievement on the three exams. The overall objective of including standardized exams in the assessment model is to identify more precisely the least developed SOs, in order to define and implement the educational strategies necessary for students to achieve them in each engineering program.
Keywords: Assessment, hard skills, soft skills, standardized tests.