Search results for: energy performance index
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8711

41 AI-Based Techniques for Online Social Media Network Sentiment Analysis: A Methodical Review

Authors: A. M. John-Otumu, M. M. Rahman, O. C. Nwokonkwo, M. C. Onuoha

Abstract:

Online social media networks have long served as a primary arena for group conversations, gossip, and text-based information sharing and distribution. Natural language processing techniques are now commonly applied to text classification and unbiased decision making, yet properly classifying this textual information within a given context remains difficult. As a result, a systematic review of previous literature on sentiment classification and AI-based techniques was conducted. The study was carried out to gain a better understanding of how to design and develop a robust and more accurate sentiment classifier that can correctly distinguish social media text in a given context between hate speech and inverted compliments with a high level of accuracy, using the knowledge gained from evaluating the different artificial intelligence techniques reviewed. The study evaluated over 250 articles from digital sources such as the ACM Digital Library, Google Scholar, and IEEE Xplore, and narrowed them down to 52 articles. Findings revealed that deep learning approaches such as Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Bidirectional Encoder Representations from Transformers (BERT), and Long Short-Term Memory (LSTM) outperformed various machine learning techniques in terms of accuracy. A large dataset is also required to develop a robust sentiment classifier. Results further revealed that data can be obtained from sources such as Twitter, movie reviews, Kaggle, the Stanford Sentiment Treebank (SST), and SemEval Task 4, depending on the required domain. Hybrid deep learning techniques such as CNN+LSTM, CNN+Gated Recurrent Unit (GRU), and CNN+BERT outperformed both single deep learning techniques and machine learning techniques. The Python programming language outperformed Java in terms of development simplicity and AI-based library functionality. Finally, the study recommends these findings as a basis for building robust sentiment classifiers in the future.
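
As a minimal illustration of the hybrid deep-learning approach the review favours (not the reviewed authors' code; the layer sizes, vocabulary size and training data are illustrative assumptions), a CNN+LSTM text classifier might be sketched as follows:

```python
# Hypothetical sketch of a hybrid CNN+LSTM sentiment/hate-speech classifier (illustrative sizes).
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed vocabulary size after tokenization
# Inputs are assumed to be integer-encoded, padded token sequences.

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),            # token embeddings
    layers.Conv1D(64, 5, activation="relu"),      # local n-gram features (CNN part)
    layers.MaxPooling1D(2),
    layers.LSTM(64),                              # sequential context (LSTM part)
    layers.Dense(1, activation="sigmoid"),        # hate speech vs. not, binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, ...) would require a labelled corpus, e.g. tweets or SST.
```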

Keywords: Artificial Intelligence, Natural Language Processing, Sentiment Analysis, Social Network, Text.

40 Exercise and Cognitive Function: Time Course of the Effects

Authors: Simon B. Cooper, Stephan Bandelow, Maria L. Nute, John G. Morris, Mary E. Nevill

Abstract:

Previous research has indicated a variable effect of exercise on adolescents' cognitive function. However, comparisons between studies are difficult to make due to differences in: the mode, intensity and duration of exercise employed; the components of cognitive function measured (and the tests used to assess them); and the timing of the cognitive function tests in relation to the exercise. Therefore, the aim of the present study was to assess the time course (10 and 60 min post-exercise) of the effects of 15 min of intermittent exercise on cognitive function in adolescents. 45 adolescents were recruited to participate in the study and completed two main trials (exercise and resting) in a counterbalanced crossover design. Participants completed 15 min of intermittent exercise (in cycles of 1 min exercise, 30 s rest). A battery of computer-based cognitive function tests (Stroop test, Sternberg paradigm and visual search test) was completed 30 min pre- and 10 and 60 min post-exercise (to assess attention, working memory and perception, respectively). The findings of the present study indicate that on the baseline level of the Stroop test, response times 10 min following exercise were slower than at any other time point on either trial (trial by session time interaction, p = 0.0308). However, this slowing of responses also tended to produce enhanced accuracy 10 min post-exercise on the baseline level of the Stroop test (trial by session time interaction, p = 0.0780). Similarly, on the complex level of the visual search test there was a slowing of response times 10 min post-exercise (trial by session time interaction, p = 0.0199). However, this was not coupled with an improvement in accuracy (trial by session time interaction, p = 0.2349). The mid-morning bout of exercise did not affect response times or accuracy across the morning on the Sternberg paradigm. In conclusion, the findings of the present study suggest an equivocal effect of exercise on adolescents' cognitive function. The mid-morning bout of exercise appears to cause a speed-accuracy trade-off immediately following exercise on the Stroop test (participants become slower but more accurate), whilst slowing response times on the visual search test and having no effect on performance on the Sternberg paradigm. Furthermore, this work highlights the importance of the timing of the cognitive function tests relative to the exercise and of the components of cognitive function examined in future studies.

Keywords: Adolescents, cognitive function, exercise.

39 Review of Carbon Materials: Application in Alternative Energy Sources and Catalysis

Authors: Marita Pigłowska, Beata Kurc, Maciej Galiński

Abstract:

The application of carbon materials in branches of the electrochemical industry grows each year due to the many interesting properties they possess: a well-developed specific surface, porosity, high sorption capacity, good adsorption properties, low bulk density, electrical conductivity and chemical resistance. All these properties allow for their effective use, among others, in supercapacitors, which can store electric charges of the order of 100 F thanks to carbon electrodes constituting the capacitor plates. Carbons (including expanded graphite, carbon black, graphite carbon fibers and activated carbon) are commonly used in electrochemical methods of removing oil derivatives, such as phenols and their derivatives, from water after tanker disasters by electrochemical anodic oxidation. Phenol can occupy practically the entire surface of the carbon material and leave the water free of hydrophobic impurities. Regeneration of such electrodes is also not complicated: it is carried out by electrochemical methods that unblock the pores and reduce resistances, thus reactivating the electrodes for subsequent adsorption processes. Graphite is commonly used as an anode material in lithium-ion cells, but due to the limited capacity it offers (372 mAh g⁻¹), new solutions are sought that meet capacitive, efficiency and economic criteria. Increasingly, biodegradable and green materials, biomass and waste (including agricultural waste) are used in order to reuse them, reduce greenhouse effects and, above all, meet the biodegradability criterion necessary for the production of lithium-ion cells as chemical power sources. The most common of these materials are cellulose, starch, wheat, rice, and corn waste, e.g., from agricultural, paper and pharmaceutical production. Such products are subjected to appropriate treatments depending on the desired application (chemical, thermal or electrochemical). Starch is a biodegradable polysaccharide consisting of polymeric units, amylose and amylopectin, that build the ordered (linear) and amorphous (branched) structure of the polymer. Carbon is also used as a catalyst. Elemental carbon has become available in many nano-structured forms representing the hybridization combinations found in the primary carbon allotropes, and the materials can be enriched with a large number of surface functional groups. There are many examples of catalytic applications of carbon in the literature, but the development of this field has been hampered by the lack of a conceptual approach combining structure and function and by a limited understanding of material synthesis. In the context of catalytic applications, carbon properties and parameters such as conductivity range and surface bonding should be characterized; such data, along with surface and texture information, can form the basis for the rational design of carbon catalyst supports.

Keywords: carbon materials, catalysis, BET, capacitors, lithium ion cell

38 Dynamic Simulation of IC Engine Bearings for Fault Detection and Wear Prediction

Authors: M. D. Haneef, R. B. Randall, Z. Peng

Abstract:

Journal bearings used in IC engines are prone to premature failure and are likely to fail earlier than their rated life due to highly impulsive and unstable operating conditions and frequent starts/stops. Vibration signature extraction and wear debris analysis techniques are prevalent in industry for condition monitoring of rotating machinery. However, both techniques involve a great deal of technical expertise, time, and cost. Limited literature is available on the application of these techniques to fault detection in reciprocating machinery, due to the complex nature of the impact forces that confound the extraction of fault signals for vibration-based analysis and wear prediction. In the present study, a simulation model was developed to investigate bearing wear behaviour under different operating conditions, to complement the vibration analysis. In the simulation, the dynamics of the engine were established first, on the basis of which the hydrodynamic journal bearing forces were evaluated by numerical solution of the Reynolds equation. The essential outputs of interest, critical to determining wear rates, are the tangential velocity and the oil film thickness between the journals and the bearing sleeve, which, if not maintained appropriately, have a detrimental effect on bearing performance. Archard's wear prediction model was used in the simulation to calculate the wear rate of the bearings with specific location information, as all determinative parameters were obtained with reference to crank rotation. The oil film thickness obtained from the model was used as a criterion to determine whether the lubrication is sufficient to prevent contact between the journal and the bearing, which would cause accelerated wear. A limiting value of 1 μm was used as the minimum oil film thickness needed to prevent contact. The increase in wear rate with growing severity of operating conditions is analogous and comparable to the rise in amplitude of the squared envelope of the reference vibration signals. Thus, on one hand, the developed model demonstrates its capability to explain wear behaviour and, on the other hand, it helps to establish a correlation between wear-based and vibration-based analyses. The model therefore provides a cost-effective and quick approach to predicting impending wear in IC engine bearings under various operating conditions.
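
As a minimal sketch of how Archard's wear law and the 1 μm minimum-film-thickness criterion described above could be combined per crank-angle step (the wear coefficient, hardness and kinematic histories below are illustrative assumptions, not the authors' values):

```python
# Illustrative per-step wear estimate using Archard's law: dV = K * W * ds / H.
import numpy as np

K = 1e-7          # assumed dimensionless wear coefficient
H = 1.5e9         # assumed bearing-surface hardness, Pa
H_MIN = 1e-6      # minimum oil film thickness before contact, m (1 micron)

def wear_volume(load_N, sliding_speed, film_thickness, dt):
    """Accumulate Archard wear only in the steps where the film is thinner than H_MIN."""
    contact = film_thickness < H_MIN            # metal-to-metal contact flag per step
    ds = sliding_speed * dt                     # sliding distance in each step
    dV = np.where(contact, K * load_N * ds / H, 0.0)
    return dV.sum()

# Example over one crank revolution (synthetic load, speed and film histories):
theta = np.linspace(0.0, 2 * np.pi, 360)
load = 5e3 * (1.0 + 0.8 * np.cos(theta))        # bearing load history, N
v_t = 3.0 * np.ones_like(theta)                 # journal tangential speed, m/s
h_oil = 2e-6 - 1.5e-6 * np.cos(theta)           # film thickness history, m
print(wear_volume(load, v_t, h_oil, dt=1e-4))   # total wear volume over the cycle, m^3
```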

Keywords: Condition monitoring, IC engine, journal bearings, vibration analysis, wear prediction.

37 Construction Port Requirements for Floating Offshore Wind Turbines

Authors: Alan Crowle, Philipp Thies

Abstract:

As the floating offshore wind turbine industry continues to develop and grow, the capabilities of established port facilities need to be assessed as to their ability to support the expanding construction and installation requirements. This paper assesses current infrastructure requirements and the projected changes to port facilities that may be required to support the floating offshore wind industry. Understanding the infrastructure needs of the floating offshore renewable industry will help to identify the port-related requirements. Floating offshore wind turbines can be installed further out to sea and in deeper waters than traditional fixed offshore wind arrays, meaning they can take advantage of stronger winds. Separate ports are required for substructure construction, fit-out of the turbines, moorings, subsea cables and maintenance. Large areas are required for the laydown of mooring equipment, inter-array cables, turbine blades and nacelles. The capabilities of established port facilities to support floating wind farms are assessed by evaluating the size of the substructures, the height of the wind turbine with regard to the cranes needed for fitting the blades, the distance to the offshore site and the offshore installation vessel characteristics. The paper discusses the advantages and disadvantages of using large land-based cranes, inshore floating crane vessels or offshore crane vessels at the fit-out port for the installation of the turbine. Water depth requirements for the import of materials and the export of the completed structures are also considered. There are additional costs associated with any emerging technology; however, part of the popularity of floating offshore wind turbines stems from the cost savings over permanent structures such as fixed wind turbines. Floating offshore wind turbine developers can benefit from lighter, more cost-effective equipment which can be assembled in port and towed to site, rather than relying on large, expensive installation vessels to transport and erect fixed-bottom turbines. The ability to assemble floating offshore wind turbine equipment onshore minimises highly weather-dependent operations such as offshore heavy lifts and assembly, saving time and costs and reducing safety risks for offshore workers. Maintenance of barges and semi-submersibles can take place in safer onshore conditions. Offshore renewables, such as floating wind, can take advantage of the wealth of experience accumulated in the offshore oil and gas sector, while oil and gas operators can deploy this experience as they enter the renewables space. The floating offshore wind industry is in the early stages of development, and port facilities are required for substructure fabrication, turbine manufacture, turbine construction and maintenance support. The paper discusses the potential floating wind substructures, as this provides a snapshot of the requirements at the present time, and the technological developments required for commercial deployment. Scaling effects of demonstration-scale projects are addressed; however, the primary focus is on commercial-scale (30+ unit) floating wind energy farms.

Keywords: Floating offshore wind turbine, port logistics, installation, construction.

36 Capital Accumulation and Unemployment in Namibia, Nigeria, and South Africa

Authors: Abubakar Dikko

Abstract:

The research investigates the causes of unemployment in Namibia, Nigeria and South Africa and the role of capital accumulation in reducing the unemployment profile of these economies, as proposed by post-Keynesian economics. This is conducted through an extensive review of the literature on NAIRU models, focusing on the post-Keynesian view of unemployment within the NAIRU framework. The NAIRU (non-accelerating inflation rate of unemployment) model has become a dominant framework used in macroeconomic analysis of unemployment. The study adopts the post-Keynesian argument that capital accumulation is a major determinant of unemployment. Unemployment remains the fundamental socio-economic challenge facing African economies and has been a burden on the citizens of those economies. Namibia, Nigeria, and South Africa are major African nations battling high unemployment rates; high unemployment has even led citizens to drive out foreigners, claiming that they have taken away their jobs. The study proposes that there is a strong relationship between capital accumulation and unemployment in Namibia, Nigeria, and South Africa, and that inadequate capital accumulation is responsible for the high unemployment rates in these countries. For these economies to achieve a steady-state level of employment and a satisfactory level of economic growth and development, capital accumulation needs to take place. The countries in the study were selected after careful research and investigation, based on the following criteria: African economies with unemployment rates above 15% and with about 40% of their workforce unemployed, the critical level of unemployment in Africa as expressed by the International Labour Organization (ILO); and, finally, African countries experiencing slow growth in gross fixed capital formation. Adequate statistical measures were employed using time-series analysis, and the results revealed that capital accumulation is the main driver of unemployment performance in the chosen African countries: an increase in the accumulation of capital causes unemployment to fall significantly. The results of the research will be useful and relevant to the federal governments and the ministries, departments and agencies (MDAs) of Namibia, Nigeria and South Africa in resolving the issue of high and persistent unemployment rates in their economies, which is a great burden that slows the growth and development of developing economies. The results can also be useful to the World Bank, the African Development Bank and the International Labour Organization (ILO) in their further research and studies on how to tackle unemployment in developing and emerging economies.
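
A minimal sketch of the kind of time-series relationship the study tests (purely illustrative: the data below are synthetic and the simple OLS specification is an assumption, not the study's actual econometric model):

```python
# Hypothetical OLS of unemployment on growth of gross fixed capital formation (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2016)
capital_growth = rng.normal(2.0, 1.0, years.size)                                # % growth of capital formation (synthetic)
unemployment = 30.0 - 1.5 * capital_growth + rng.normal(0.0, 1.0, years.size)    # % unemployed (synthetic)

X = np.column_stack([np.ones_like(capital_growth), capital_growth])              # intercept + regressor
beta, *_ = np.linalg.lstsq(X, unemployment, rcond=None)
print(f"intercept = {beta[0]:.2f}, slope = {beta[1]:.2f}")   # negative slope: faster accumulation, lower unemployment
```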

Keywords: Capital accumulation, NAIRU, post-Keynesian economics, unemployment.

35 Clean Sky 2 – Project PALACE: Aeration’s Experimental Sound Velocity Investigations for High-Speed Gerotor Simulations

Authors: Benoît Mary, Thibaut Gras, Gaëtan Fagot, Yvon Goth, Ilyes Mnassri-Cetim

Abstract:

A gerotor pump is composed of an external and an internal gear with conjugate cycloidal profiles. From the suction to the delivery ports, the fluid is transported inside cavities formed by the teeth and driven by the shaft. From a geometric and conceptual standpoint, it is worth noting that the internal gear has one tooth less than the external one. Simcenter Amesim v.16 includes a new submodel for modelling the hydraulic behaviour of gerotor pumps (THCDGP0). This submodel considers leakages between teeth tips using Poiseuille and Couette flow contributions. From the 3D CAD model of the studied pump, the "CAD import" tool extracts the main geometrical characteristics, and the THCDGP0 submodel computes the evolution of each cavity volume and its relative position with respect to the suction or delivery areas. This module, based on international publications, gives robust results up to 6 000 rpm for pressures greater than atmospheric. For higher rotational speeds or lower pressures, oil aeration and cavitation effects become significant and sharply degrade the pump's performance. The liquid used in hydraulic systems always contains some gas, which is dissolved in the liquid at high pressure and tends to be released in free form (i.e. undissolved, as bubbles) when the pressure drops. In addition to gas release and dissolution, the liquid itself may vaporize due to cavitation. To model the relative density of the equivalent fluid, a modified Henry's law is applied in Simcenter Amesim v.16 to predict the fraction of undissolved gas or vapor. Three parietal pressure sensors were set up upstream of the pump to estimate the speed of sound in the oil. Analytical models were compared with the experimental sound speed to estimate the occluded gas content. The Simcenter Amesim v.16 model was fed with the results of these analyses, which successfully improved the simulation results up to 14 000 rpm. This work provides a sound foundation for designing the next generation of gerotor pumps, reaching rotational speeds above 25 000 rpm. The results of the improved module will be compared with tests on this new pump demonstrator.
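
One common analytical model for the speed of sound in an aerated liquid, which could be compared against the measured sound speed to back out the undissolved gas fraction, is Wood's equation; the sketch below uses illustrative fluid properties, not project data, and is not the authors' specific model:

```python
# Wood's equation sketch: sound speed of oil containing a small undissolved gas fraction.
import numpy as np

rho_oil, c_oil = 850.0, 1400.0     # assumed oil density (kg/m^3) and sound speed (m/s)
rho_gas, c_gas = 1.2, 340.0        # assumed gas (air) density and sound speed

def wood_sound_speed(alpha):
    """alpha: volume fraction of undissolved gas (0..1)."""
    rho_mix = alpha * rho_gas + (1.0 - alpha) * rho_oil
    compress = alpha / (rho_gas * c_gas**2) + (1.0 - alpha) / (rho_oil * c_oil**2)
    return 1.0 / np.sqrt(rho_mix * compress)

for alpha in (0.0, 0.001, 0.01, 0.05):
    print(f"gas fraction {alpha:.3f}: c = {wood_sound_speed(alpha):.0f} m/s")
# Even a 1% free-gas fraction drops the mixture sound speed dramatically,
# which is why aeration degrades high-speed gerotor performance.
```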

Keywords: Gerotor pump, high speed, simulations, aeronautic, aeration, cavitation.

34 Utilization of Rice Husk Ash with Clay to Produce Lightweight Coarse Aggregates for Concrete

Authors: Shegufta Zahan, Muhammad A. Zahin, Muhammad M. Hossain, Raquib Ahsan

Abstract:

Rice Husk Ash (RHA) is an agricultural waste byproduct available widely around the world and contains a large amount of silica. In Bangladesh, stones cannot be used as coarse aggregate in infrastructure works as they are not locally available and need to be imported, so bricks are mostly used as coarse aggregate in concrete because they are cheaper and easily produced there. Clay is the raw material for producing brick. Due to rapid urban growth and industrialization, demand for brick is increasing, which leads to a loss of topsoil. This study aims to produce lightweight block aggregates of sufficient strength utilizing RHA at low cost and to use them as an ingredient of concrete. Because of its pozzolanic behavior, RHA can be utilized to produce better-quality block aggregates at lower cost, replacing part of the clay content of the bricks. The study can be divided into three parts. In the first part, characterization tests on RHA and clay were performed to determine their properties. Six different types of RHA from different mills were characterized by XRD and SEM analysis, and their fineness was determined by a fineness test. The XRD results confirmed the amorphous state of the RHA. The characterization test for the clay identified the sample as "silty clay" with a specific gravity of 2.59 and an optimum moisture content of 14%. In the second part, blocks were produced with the six different types of RHA in different volumetric combinations with clay. The mixtures were manually compacted in molds before being oven dried at 120 °C for 7 days; the dried blocks were then fired in a furnace at 1200 °C to produce the final blocks. Loss-on-ignition, apparent density, crushing strength, efflorescence and absorption tests were conducted on the blocks to compare their performance with that of bricks. For 40% RHA, the crushing strength was found to be 60 MPa, whereas the crushing strength of brick was 48.1 MPa. In the third part, the crushed blocks were used as coarse aggregate in concrete cylinders, which were compared with brick-aggregate concrete cylinders. Specimens were cured for 7 days and 28 days. The highest compressive strength of the block cylinders was 26.1 MPa after 7 days of curing and 34 MPa after 28 days. For the brick cylinders, the compressive strengths at 7 and 28 days were 20 MPa and 30 MPa, respectively. These findings can help reduce the growing demand for topsoil and also turn a waste product into a valuable one.

Keywords: Characterization, furnace, pozzolanic behavior, rice husk ash.

33 Surface Topography Assessment Techniques based on an In-process Monitoring Approach of Tool Wear and Cutting Force Signature

Authors: A. M. Alaskari, S. E. Oraby

Abstract:

The quality of a machined surface is becoming more and more important to justify the increasing demands on sophisticated component performance, longevity, and reliability. Usually, any machining operation leaves its own characteristic evidence on the machined surface in the form of finely spaced micro-irregularities (surface roughness) left by the indeterministic characteristics of the different elements of the system: tool, machine, workpart and cutting parameters. One of the most influential sources affecting surface roughness in machining, however, is the instantaneous state of the tool edge. The main objective of the current work is to relate the in-process immeasurable cutting edge deformation and surface roughness to more reliable, easy-to-measure force signals using robust non-linear time-dependent regression modelling techniques. Time-dependent modelling is beneficial when modern machining systems, such as adaptive control techniques, are considered, where the state of the machined surface and the health of the cutting edge are monitored, assessed and controlled online using real-time information provided by the variability encountered in the measured force signals. Correlation between wear propagation and roughness variation is developed throughout the different edge lifetimes. The surface roughness is further evaluated in the light of the variation in both the static and the dynamic force signals. Consistent correlation is found between surface roughness variation and tool wear progress within its initial and constant regions. In the first few seconds of cutting, the expected and well-known trend of the effect of the cutting parameters is observed: surface roughness is positively influenced by the level of the feed rate and negatively by the cutting speed. As cutting continues, roughness is affected, to different extents, by the rather localized wear modes either on the tool nose or on its flank areas. Moreover, roughness appears to vary as the wear attitude transfers from one mode to another and, in general, it is shown to improve as wear increases, but with possible corresponding workpart dimensional inaccuracy. The dynamic force signals are found to be reasonably sensitive indicators of either the progressive or the random modes of tool edge deformation. While the frictional force components, feeding and radial, are found informative regarding progressive wear modes, the vertical (power) component is found to be a more representative carrier of system instability resulting from the edge's random deformation.
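
A minimal sketch of a non-linear, time-dependent regression relating surface roughness to a measured force component (the power-law functional form, the variable names and the synthetic data are illustrative assumptions, not the authors' fitted model):

```python
# Illustrative fit of roughness Ra as a power-law function of feed force and cutting time.
import numpy as np
from scipy.optimize import curve_fit

def ra_model(X, a, b, c):
    feed_force, time = X
    return a * feed_force**b * time**c            # assumed time-dependent power-law form

# Synthetic measurements standing in for force-signal and stylus-roughness data:
feed_force = np.array([180, 200, 220, 250, 280, 320], dtype=float)   # N
cut_time   = np.array([10,  30,  60,  90, 120, 150], dtype=float)    # s
ra_meas    = 0.02 * feed_force**0.8 * cut_time**0.15                 # microns, fabricated for the example

popt, _ = curve_fit(ra_model, (feed_force, cut_time), ra_meas, p0=(0.01, 1.0, 0.1))
print("fitted (a, b, c):", popt)                  # recovered exponents for the roughness model
```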

Keywords: Dynamic force signals, surface roughness (finish), tool wear and deformation, tool wear modes (nose, flank)

32 Effectiveness and Performance of Spatial Communication within Composite Interior Space: The Wayfinding System in the Saudi National Museum as a Case Study

Authors: Afnan T. Bagasi, Donia M. Bettaieb, Abeer Alsobahi

Abstract:

The wayfinding system affects the course of a museum journey for visitors, both directly and indirectly. The design aspects of this system play an important role in making it an effective communication system within the museum space. However, translating the concepts that pertain to its design, which are based on integration and connectivity in museum space design, such as intelligibility, lacks customization in the form of specific design considerations with reference to the most important approaches: those that link the organizational and practical aspects to the semiotic and semantic aspects of space syntax by targeting the visual and perceived consistency of visitors. In this context, the present study aims to identify how to apply the concept of intelligibility, by employing integration and connectivity, to the design of a wayfinding system in museums as a kind of composite interior space. Using the available plans and images to extrapolate the considerations used to design the wayfinding system in the Saudi National Museum as a case study, a descriptive analytical method was used to understand the basic organizational and morphological principles of the museum space through the main aspects of space design (the morphological and the pragmatic). The study's methodology is based on describing and analysing the basic organizational and morphological principles of the museum space at the level of the major morphological and pragmatic design layers (based on the available pictures and diagrams), and on an inductive assessment of the applied level of intelligibility in the spatial layout of the Hall of Islam and Arabia at the National Museum of Saudi Arabia, within the framework of a case study, through levels of verification of the properties of the concepts of connectivity and integration. The results indicated that the application of the characteristics of intelligibility is weak at both the pragmatic and the morphological level. Based on the concepts of connectivity and integration, we conclude the following: (1) a high level of reflection of the properties of connectivity at the pragmatic level; (2) a weak level of reflection of the properties of connectivity at the morphological level; and (3) weakness in the level of reflection of the properties of integration in the sample space as a result of weak application at both the morphological and the pragmatic level. The study's findings will assist designers, professionals, and researchers in the field of museum design in understanding the significance of the wayfinding system by examining it within museum spaces, highlighting the most essential aspects using a clear analytical method.
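
For readers unfamiliar with how connectivity and integration are quantified in space syntax, the following is a minimal sketch on a toy adjacency graph of museum spaces; the formulae follow standard space-syntax definitions, but the room layout is a made-up example, not the Saudi National Museum plan:

```python
# Toy space-syntax measures: connectivity (node degree) and integration (from mean depth).
import networkx as nx

# Hypothetical adjacency graph: which exhibition spaces open directly into which.
G = nx.Graph([("Entrance", "Lobby"), ("Lobby", "Hall A"), ("Lobby", "Hall B"),
              ("Hall A", "Hall B"), ("Hall B", "Gallery"), ("Gallery", "Exit")])

k = G.number_of_nodes()
for node in G.nodes:
    connectivity = G.degree[node]                               # number of directly connected spaces
    depths = nx.single_source_shortest_path_length(G, node)     # topological steps to every other space
    mean_depth = sum(depths.values()) / (k - 1)
    ra = 2.0 * (mean_depth - 1.0) / (k - 2.0)                   # relative asymmetry
    integration = 1.0 / ra if ra > 0 else float("inf")          # higher value = better integrated space
    print(f"{node:9s} connectivity={connectivity} integration={integration:.2f}")
```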

Keywords: wayfinding system, museum journey, intelligibility, integration, connectivity, interior design

31 Bidirectional Pendulum Vibration Absorbers with Homogeneous Variable Tangential Friction: Modelling and Design

Authors: Emiliano Matta

Abstract:

Passive resonant vibration absorbers are among the most widely used dynamic control systems in civil engineering. They typically consist of a single-degree-of-freedom mechanical appendage of the main structure, tuned to one structural target mode through frequency and damping optimization. One classical scheme is the pendulum absorber, whose mass is constrained to move along a curved trajectory and is damped by viscous dashpots. Even though the principle is well known, the search for improved arrangements is still under way. In recent years, this investigation has inspired a type of bidirectional pendulum absorber (BPA), consisting of a mass constrained to move along an optimal three-dimensional (3D) concave surface. For such a BPA, the surface principal curvatures are designed to ensure a bidirectional tuning of the absorber to both principal modes of the main structure, while damping is produced either by horizontal viscous dashpots or by vertical friction dashpots connecting the BPA to the main structure. In this paper, a variant of the BPA is proposed, in which damping originates from the variable tangential friction force which develops between the pendulum mass and the 3D surface as a result of a spatially varying friction coefficient pattern. Namely, a friction coefficient is proposed that varies along the pendulum surface in proportion to the modulus of the 3D surface gradient. With this assumption, the dissipative model of the absorber can be proven to be nonlinear homogeneous in the small-displacement domain. The resulting homogeneous BPA (HBPA) has a fundamental advantage over conventional friction-type absorbers, because its equivalent damping ratio is independent of the amplitude of oscillation, and therefore its optimal performance does not depend on the excitation level. On the other hand, the HBPA is more compact than viscously damped BPAs because it does not require the installation of dampers. This paper presents the analytical model of the HBPA and an optimal methodology for its design. Numerical simulations of single- and multi-story building structures under wind and earthquake loads are presented to compare the HBPA with classical viscously damped BPAs. It is shown that the HBPA is a promising alternative to existing BPA types and that homogeneous tangential friction is an effective means of realizing systems with amplitude-independent damping.
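
A minimal sketch of the friction pattern the paper proposes, a friction coefficient proportional to the modulus of the 3D surface gradient, evaluated here on a generic paraboloid surface (the surface parameters and proportionality constant are illustrative assumptions, not the paper's design values):

```python
# Friction coefficient varying with the surface gradient: mu(x, y) = c * |grad z(x, y)|.
import numpy as np

c = 0.5                          # assumed proportionality constant
kx, ky = 1.0 / 4.0, 1.0 / 6.0    # assumed principal curvature parameters of the concave surface

def surface_z(x, y):
    return 0.5 * (kx * x**2 + ky * y**2)     # generic concave (paraboloid) pendulum surface

def friction_coefficient(x, y):
    grad = np.array([kx * x, ky * y])        # analytic gradient of surface_z
    return c * np.linalg.norm(grad)

# Near the origin the gradient (and hence the friction force) vanishes and grows with displacement,
# which is what makes the equivalent damping ratio amplitude-independent (homogeneous model).
for x, y in [(0.0, 0.0), (0.1, 0.0), (0.2, 0.3)]:
    print(f"mu({x:.1f}, {y:.1f}) = {friction_coefficient(x, y):.4f}")
```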

Keywords: Amplitude-independent damping, Homogeneous friction, Pendulum nonlinear dynamics, Structural control, Vibration resonant absorbers.

30 A Methodology to Virtualize Technical Engineering Laboratories: MastrLAB-VR

Authors: Ivana Scidà, Francesco Alotto, Anna Osello

Abstract:

Due to the importance given today to innovation, the education sector is evolving thanks to digital technologies. Virtual Reality (VR) is a potential teaching tool offering many advantages in the field of training and education, as it allows users to acquire theoretical knowledge and practical skills through an immersive experience in less time than the traditional educational process. These assumptions lay the foundations for a new educational environment that is engaging and stimulating for students. Starting from the objective of strengthening the innovative teaching offer and the learning processes, the case study of this research concerns the digitalization of MastrLAB, a High Quality Laboratory (HQL) belonging to the Department of Structural, Building and Geotechnical Engineering (DISEG) of the Polytechnic of Turin, a centre specialized in experimental mechanical tests on traditional and innovative building materials and on the structures made with them. MastrLAB-VR has been developed: an innovative training tool designed with the aim of educating the class, in total safety, on the techniques of use of the machinery, thus reducing the dangers arising from the performance of potentially hazardous activities. The virtual laboratory, dedicated to the students of the Building and Civil Engineering courses of the Polytechnic of Turin, has been designed to simulate in a realistic way the experimental approach to the structural tests foreseen in their courses of study: from tensile tests to relaxation tests, from steel qualification tests to resilience tests on elements at ambient conditions or at characterizing temperatures. The research proposes a methodology for the virtualization of technical laboratories through the application of Building Information Modelling (BIM), starting from the creation of a digital model. The process includes the creation of a stand-alone application which, with Oculus Rift technology, allows the user to explore the environment and interact with objects through the use of joypads. The application was tested as a prototype on volunteers, and the acquisition of the educational notions presented in the experience was assessed through a multiple-choice virtual quiz, producing an overall evaluation report. The results show that MastrLAB-VR is suitable for both beginners and experts and will be adopted experimentally for other laboratories of the University departments.

Keywords: Building Information Modelling, digital learning, education, virtual laboratory, virtual reality.

29 Health and Greenhouse Gas Emission Implications of Reducing Meat Intakes in Hong Kong

Authors: Cynthia Sau Chun Yip, Richard Fielding

Abstract:

High meat, and especially red meat, intakes are significantly and positively associated with a multiple burden of diseases and with high greenhouse gas (GHG) emissions. This study investigated population meat intake patterns in Hong Kong and quantified the burden-of-disease and GHG emission outcomes of modelling adjustments of Hong Kong population meat intakes to recommended healthy levels. It compared age- and sex-specific population meat, fruit and vegetable intakes, obtained from a population survey among adults aged 20 years and over in Hong Kong in 2005-2007, against the intake recommendations suggested in the Modelling System to Inform the Revision of the Australian Guide to Healthy Eating (AGHE-2011-MS) technical document. The study found that intakes of meat and meat alternatives, especially red meat, among Hong Kong males aged 20 years and over are significantly higher than recommended. Red meat intakes among females aged 50-69 years and other meat and alternatives intakes among those aged 20-59 years are also higher than recommended. Taking the 2005-07 age- and sex-specific population meat intakes as baselines, three counterfactual scenarios of adjusting Hong Kong adult population meat intakes to the AGHE-2011-MS and pre-2011 AGHE recommendations by the year 2030 were established. The consequent energy intake gaps were substituted with additional legume, fruit and vegetable intakes. Cradle-to-ready-to-eat lifecycle assessment emission modelling was used to quantify the GHG emission outcomes associated with Hong Kong meat intakes, and a comparative risk assessment burden-of-disease model was used to quantify the health outcomes. Adjusting meat intakes to recommended levels could reduce Hong Kong GHG emissions by 17%-44% compared with baseline meat intake emissions, and prevent 2,519 to 7,012 premature deaths in males and 53 to 1,342 in females, as well as multiple burdens of disease, compared with the baseline meat intake scenario. Whereas previous co-benefit studies compared lump-sum meat intake reductions and outcome measures across the entire population and used emission factors and relative risks from individual studies, this study used age- and sex-specific input and output measures, with emission factors and relative risks obtained from high-quality meta-analyses and meta-reviews respectively, and took government dietary recommendations into account. The evaluations in this study are therefore of better quality and more reflective of real-life practices. Going beyond previous co-benefit studies, this study pinpointed age- and sex-specific population and meat-type-specific intervention points and leverages. Comparison with similar studies in Australia also showed that intervention points and leverages among populations of different geographic and cultural backgrounds can differ, and that globalization also globalizes the emission effects of meat consumption. More region- and culture-specific evaluations are recommended to promote more sustainable meat consumption and enhance global food security.
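
A minimal arithmetic sketch of the kind of counterfactual scenario calculation performed (the per-kilogram emission factors and the intake figures below are illustrative placeholders, not the study's data or results):

```python
# Illustrative scenario: GHG change when part of red meat intake is replaced by legumes.
EF = {"red_meat": 27.0, "poultry": 6.9, "legumes": 2.0}            # assumed kg CO2e per kg of food

baseline = {"red_meat": 70.0, "poultry": 30.0, "legumes": 5.0}     # g/day per capita (assumed)
scenario = {"red_meat": 40.0, "poultry": 30.0, "legumes": 35.0}    # intake adjusted toward guideline levels

def daily_emissions(intake_g_per_day):
    return sum(EF[item] * grams / 1000.0 for item, grams in intake_g_per_day.items())

base, adj = daily_emissions(baseline), daily_emissions(scenario)
print(f"baseline {base:.2f} kg CO2e/day, scenario {adj:.2f} kg CO2e/day, "
      f"reduction {100 * (base - adj) / base:.0f}%")
```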

Keywords: Burden of diseases, greenhouse gas emissions, Hong Kong diet, sustainable meat consumption.

28 Study of Polyphenol Profile and Antioxidant Capacity in Italian Ancient Apple Varieties by Liquid Chromatography

Authors: A. M. Tarola, R. Preti, A. M. Girelli, P. Campana

Abstract:

Safeguarding, studying and enhancing biodiversity play an important and indispensable role in re-launching agriculture, and ancient local varieties are a precious resource for genetic and health improvement. In order to protect biodiversity through the recovery and valorization of autochthonous varieties, this study analyzed 12 samples of four ancient apple cultivars representative of Friuli Venezia Giulia, selected by local farmers working on a project for the recovery of ancient apple cultivars. The aim of this study is to evaluate the polyphenolic profile and the antioxidant capacity that characterize the organoleptic and functional qualities of this fruit species, which also has beneficial properties for health. In particular, for each variety, the following compounds were analyzed, both in the skin and in the pulp: gallic acid, catechin, chlorogenic acid, epicatechin, caffeic acid, coumaric acid, ferulic acid, rutin, phlorizin, phloretin and quercetin, to highlight any differences between the edible parts of the apple. The analysis of individual phenolic compounds was performed by High Performance Liquid Chromatography (HPLC) coupled with a diode array UV detector (DAD), the antioxidant capacity was estimated using an in vitro assay based on a free radical scavenging method, and the total phenolic content was determined using the Folin-Ciocalteau method. The results show that catechins are the most abundant polyphenols, reaching 140-200 μg/g in the pulp and 400-500 μg/g in the skin, with a prevalence of epicatechin. Catechins and phlorizin, a dihydrochalcone typical of apples, are always present in larger quantities in the peel. Total phenolic content was positively correlated with antioxidant activity in apple pulp (r² = 0.850) and peel (r² = 0.820). Comparing the results, differences between the varieties analyzed and between the edible parts (pulp and peel) of the apple were highlighted. In particular, apple peel is richer in polyphenolic compounds than pulp, and flavonols are present exclusively in the peel. In conclusion, polyphenols, being antioxidant substances, confirm the benefits of fruit in the diet, especially for the prevention and treatment of degenerative diseases, and they also proved to be a good marker for the characterization of different apple cultivars. The importance of protecting biodiversity in agriculture was also highlighted through the exploitation of native products and now-forgotten ancient apple varieties.

Keywords: Apple, biodiversity, polyphenols, antioxidant activity, HPLC-DAD, characterization.

27 Clinical and Methodological Issues in the Research on the Rape Myth

Authors: Ana Pauna, Zbigniew Pleszewski

Abstract:

The purpose of this study is to revisit the concept of rape as represented by professionals in the literature, as well as its perception (beliefs and attitudes) in the population at large, and to propose methodological improvements to its measurement tool. Rape is a serious crime threatening the victim's physical and mental health and integrity, and as such is legally prosecuted in all modern societies. The problem is not in accepting or rejecting rape as a criminal act, but rather in the vagueness of its interpretations and "justifications" maintained in the mentality of modern societies, known in the literature as the phenomenon of the "rape myth". The rape myth can be studied from different perspectives: criminology, sociology, ethics, medicine and psychology. Its investigation requires rigorous scientific objectivity, free of passion (victims of rape are at risk of emotional bias), free of activism (social activists, even if well-intentioned, are also biased), and free of any pre-emptive assumptions or prejudices. To apply a rigorous scientific procedure, we need a solid, valid and reliable measurement. Rape is a form of heterosexual or homosexual aggression, violently forcing the victim to give in to the sexual activity of the aggressor against her or his will. Human beings always try to "understand" or find a reason justifying their acts. The psychological literature provides multiple clinical and experimental examples of this, such as the famous studies by Milgram on the level of electroshock delivered by the "teacher" to the "learner" when "scientifically justifiable", or the studies on the behavior of "prisoners" and "guards", among many other experiments and field observations. Sigmund Freud described the phenomenon of unconscious justification and called it rationalization. The multiple justifications, rationalizations and repeated opinions about sexual behavior contribute to a myth maintained in society. What kind of "rationale" do our societies apply to "understand" non-consensual sexual behavior? There are many; to mention a few:
• Sex is a ludic activity for both participants, therefore, even if not consented to, it should bring pleasure to both.
• Everybody wants sex, but only men are allowed to manifest it openly while women have to pretend the opposite; thus men have to initiate sexual behavior and women are expected to follow.
• A person who strongly needs sex is free to manifest it and struggle to get it; the person who does not want it must not reveal her/his sexual attraction and should avoid risky situations, otherwise she/he is perceived as a promiscuous seducer.
• A person who does not fight against the sexual initiator unconsciously accepts the rape (does this explain why homosexual rapes are reported less frequently than rapes against women?).
• Women who are raped deserve it because their wardrobe is revealing and seductive and they "willingly" go to highly risky places (alleys, dark roads, etc.).
• Men need to vent their sexual energy, and if they are deprived of a partner their urge to have sex is difficult to control.
• Men are supposed to initiate and insist, even by force, on having sex (their testosterone makes them both sexual and aggressive).
The paper overviews numerous cultural beliefs about masculine versus feminine behavior and their impact on the rape myth.

Keywords: Rape Myth components, psycho-social factors, testing, Likert-type scale

26 Green Synthesis of Nanosilver-Loaded Hydrogel Nanocomposites for Antibacterial Application

Authors: D. Berdous, H. Ferfera-Harrar

Abstract:

Superabsorbent polymers (SAPs), or hydrogels, with a three-dimensional hydrophilic network structure are high-performance water absorbent and retention materials. The in situ synthesis of metal nanoparticles within a polymeric network as antibacterial agents for bio-applications is an approach that takes advantage of the free space existing within the network, which not only acts as a template for nucleation of the nanoparticles but also provides long-term stability and reduces their toxicity by delaying their oxidation and release. In this work, SAP/nanosilver nanocomposites were successfully developed by a unique green process at room temperature, which involves the in situ formation of silver nanoparticles (AgNPs) within hydrogels used as templates. The aim of this study is to investigate whether these AgNP-loaded hydrogels are potential candidates for antimicrobial applications. Firstly, the superabsorbents were prepared through radical copolymerization via grafting and crosslinking of acrylamide (AAm) onto a chitosan backbone (Cs) using potassium persulfate as initiator and N,N'-methylenebisacrylamide as the crosslinker. They were then hydrolyzed to achieve superabsorbents with ampholytic properties and the highest swelling capacity. Lastly, the AgNPs were biosynthesized and entrapped in the hydrogels through a simple, eco-friendly and cost-effective method, using aqueous silver nitrate as the silver precursor and Curcuma longa tuber-powder extract as both reducing and stabilizing agent. The formed superabsorbent nanocomposites (Cs-g-PAAm)/AgNPs were characterized by X-ray Diffraction (XRD), UV-visible Spectroscopy, Attenuated Total Reflectance Fourier Transform Infrared Spectroscopy (ATR-FTIR), Inductively Coupled Plasma (ICP), and Thermogravimetric Analysis (TGA). The microscopic surface structure analyzed by Transmission Electron Microscopy (TEM) showed spherical AgNPs with sizes in the range of 3-15 nm. The extent of nanosilver loading decreased with increasing Cs content in the network. The silver-loaded hydrogel was thermally more stable than the unloaded dry hydrogel counterpart. The equilibrium swelling degree (Q) and centrifuge retention capacity (CRC) in deionized water were affected by both the Cs content and the entrapped AgNPs. The nanosilver-embedded hydrogels exhibited antibacterial activity against Escherichia coli and Staphylococcus aureus bacteria. These comprehensive results suggest that the elaborated AgNP-loaded nanomaterials could be used to produce valuable wound dressings.

Keywords: Antibacterial activity, nanocomposites, silver nanoparticles, superabsorbent hydrogel.

25 A Grid Synchronization Method Based on Adaptive Notch Filter for SPV System with Modified MPPT

Authors: Priyanka Chaudhary, M. Rizwan

Abstract:

This paper presents a grid synchronization technique based on an adaptive notch filter for an SPV (Solar Photovoltaic) system, along with MPPT (Maximum Power Point Tracking) techniques. An efficient grid synchronization technique offers proficient detection of the various components of the grid signal, such as phase and frequency, and also acts as a barrier against harmonics and other disturbances in the grid signal. A reference phase signal synchronized with the grid voltage is provided by the grid synchronization technique to bring the system in line with grid codes and power quality standards. Hence, the grid synchronization unit plays an important role in grid-connected SPV systems. As the output of the PV array fluctuates with meteorological parameters such as irradiance, temperature and wind, MPPT control is required to track the maximum power point of the PV array in order to maintain a constant DC voltage at the VSC (Voltage Source Converter) input. In this work, a variable-step-size P&O (Perturb and Observe) MPPT technique with a DC/DC boost converter has been used in the first stage of the system. This algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e. zone 0, zone 1 and zone 2. A fine tracking step size is used in zone 0, while zones 1 and 2 require a large step size in order to obtain a high tracking speed. Further, an adaptive notch filter based control technique is proposed for the VSC in the PV generation system. The adaptive notch filter (ANF) approach is used to synchronize the interfaced PV system with the grid, to maintain the amplitude, phase and frequency parameters, and to improve power quality. This technique offers compensation of harmonic currents and reactive power with both linear and nonlinear loads. A PI controller is also implemented, and presented in this paper, to maintain a constant DC link voltage. The complete system has been designed, developed and simulated using the SimPowerSystems and Simulink toolboxes of MATLAB. The performance analysis of the three-phase grid-connected solar photovoltaic system has been carried out on the basis of various parameters such as PV output power, PV voltage, PV current, DC link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current, and the power supplied by the voltage source converter. The results obtained from the proposed system are found to be satisfactory.
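
A minimal sketch of a variable-step Perturb and Observe update with the three-zone idea described above (the zone thresholds and step sizes are illustrative assumptions, not the authors' tuning):

```python
# Variable-step P&O MPPT sketch: step size selected from |dP/dV| zones (illustrative thresholds).
def mppt_step(v_prev, p_prev, v_now, p_now, v_ref):
    """Return the next PV voltage reference given two consecutive (V, P) samples."""
    dP, dV = p_now - p_prev, v_now - v_prev
    slope = dP / dV if dV != 0 else 0.0

    # Zone selection on |dP/dV|: near the MPP (zone 0) perturb finely, far away (zones 1-2) coarsely.
    if abs(slope) < 0.5:
        step = 0.05      # zone 0: fine step, volts
    elif abs(slope) < 5.0:
        step = 0.5       # zone 1: medium step
    else:
        step = 1.0       # zone 2: large step for fast tracking

    # Classic P&O direction logic: keep perturbing in the same direction if power rose, reverse otherwise.
    if dP >= 0:
        v_ref += step if dV >= 0 else -step
    else:
        v_ref -= step if dV >= 0 else -step
    return v_ref
```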

Keywords: Solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique.

24 Concept of a Pseudo-Lower Bound Solution for Reinforced Concrete Slabs

Authors: M. De Filippo, J. S. Kuang

Abstract:

In the construction industry, reinforced concrete (RC) slabs represent fundamental elements of buildings and bridges, and different methods are available for analysing their structural behaviour. In the early years of the last century, the yield-line method was proposed to tackle this problem: simple geometries could easily be solved by traditional hand analyses based on plasticity theory. Nowadays, advanced finite element (FE) analyses have found their way into many engineering fields due to the wide range of geometries to which they can be applied. In such cases, the choice between an elastic and a plastic constitutive model completely changes the approach of the analysis itself. Elastic methods are popular due to their easy applicability in automated computations; however, elastic analyses are limited since they do not consider any aspect of the material behaviour beyond its yield limit, which turns out to be an essential aspect of RC structural performance. Non-linear analyses modelling plastic behaviour, by contrast, give very reliable results, but this type of analysis is computationally quite expensive, i.e. not well suited to solving everyday engineering problems. In recent years, many researchers have worked on filling this gap between easy-to-implement elastic methods and computationally complex plastic analyses. This paper proposes a numerical procedure through which a pseudo-lower bound solution, not violating the yield criterion, is achieved. The advantages of moment redistribution are taken into account, hence the increase in strength provided by plastic behaviour is considered. The lower bound solution is improved by detecting over-yielded moments, which are used to artificially govern the moment distribution among the remaining non-yielded elements. The proposed technique obeys Nielsen's yield criterion. The outcome of this analysis provides a simple, accurate and non-time-consuming tool for predicting the lower-bound collapse load of RC slabs. Using this method, structural engineers can find the fracture patterns and the ultimate load-bearing capacity, with the collapse-triggering mechanism found by detecting yield-lines. An application to the simple case of a square clamped slab is shown, and a good match was found with the exact values of the collapse load.
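
A minimal sketch of the redistribution idea: over-yielded moments are clipped to the yield value and the excess is redistributed among the non-yielded elements. This is a one-dimensional simplification with assumed numbers; the actual procedure operates on FE moment fields and enforces Nielsen's criterion:

```python
# Simplified 1D illustration of clipping over-yielded moments and redistributing the excess.
import numpy as np

m_yield = 100.0                                    # assumed yield moment, kNm per unit width
m = np.array([120.0, 95.0, 80.0, 110.0, 60.0])     # illustrative elastic moment field

for _ in range(50):                                # iterate until no element exceeds yield
    excess = np.clip(m - m_yield, 0.0, None).sum() # total moment above yield
    if excess == 0.0:
        break
    m = np.minimum(m, m_yield)                     # clip over-yielded moments to the yield value
    reserve = m_yield - m                          # remaining capacity of each element
    if reserve.sum() == 0.0:                       # no capacity left: the slab is at collapse
        break
    m += excess * reserve / reserve.sum()          # redistribute excess in proportion to reserve

print(m)   # redistributed ("pseudo-lower bound") moment field
```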

Keywords: Computational mechanics, lower bound method, reinforced concrete slabs, yield-line.

23 A Development of English Pronunciation Using Principles of Phonetics for English Major Students at Loei Rajabhat University

Authors: Pongthep Bunrueng

Abstract:

This action research accentuates the outcome of a development in English pronunciation, using principles of phonetics, for English major students at Loei Rajabhat University. The research is split into 5 separate modules: 1) Organs of Speech and How to Produce Sounds, 2) Monophthongs, 3) Diphthongs, 4) Consonant Sounds, and 5) Suprasegmental Features. Each module followed a four-step action research process: 1) Planning, 2) Acting, 3) Observing, and 4) Reflecting. The research targeted 2nd year students majoring in English Education at Loei Rajabhat University during the 2011 academic year. A mixed methodology employing both quantitative and qualitative research was used, putting theory into action and taking segmental features up to suprasegmental features. Multiple tools were employed, including the following documents: pre-test and post-test papers, evaluation and assessment papers, group work assessment forms, a presentation grading form, an observation of participants form and a participant self-reflection form.

All 5 modules for the target group showed that results from the post-tests were higher than those of the pre-tests, with statistical significance at the 0.01 level. Target groups attained results ranging from low to moderate and from moderate to high performance, and the participants who attained low to moderate results had to re-sit the second round. During the first development stage, participants attended classes with group participation, in which they addressed planning through mutual co-operation and the sharing of responsibility. Analytic induction of the strong points of this operation illustrated that learner cognition, comprehension, application, and group practices were all present, whereas the weak results of some participants could be attributed to biological differences, differences in life and learning, or individual differences in responsiveness and self-discipline.

Participants who required re-treatment in Spiral 2 received the same treatment again. After the second treatment, test results from the 5 modules showed that the participants attained higher scores than in the pre-test, and their assessment and development stages also showed improved results. They showed greater confidence in participating in activities, produced higher quality work, and correctly followed the instructions for each activity. Analytic induction of the strong and weak points of this operation remains the same as for Spiral 1, though there were improvements to problems which existed prior to the second treatment.

Keywords: Action research, English pronunciation, phonetics, segmental features, suprasegmental features.

22 Application of Artificial Intelligence to Schedule Operability of Waterfront Facilities in Macro Tide Dominated Wide Estuarine Harbour

Authors: A. Basu, A. A. Purohit, M. M. Vaidya, M. D. Kudale

Abstract:

Mumbai has traditionally been the epicentre of India's trade and commerce, and its existing major ports, Mumbai Port and Jawaharlal Nehru Port (JN), situated in the Thane estuary, are developing their waterfront facilities. Various developments over the past decades in this region have changed the tidal flux entering and leaving the estuary. The intake at Pir-Pau faces a shortage of water in view of the advancement of the shoreline, while the jetty near Ulwe faces ship-scheduling problems due to the shallower depths between JN Port and Ulwe Bunder. In order to solve these problems, it is essential to have information about tide levels over a long duration from field measurements. However, field measurement is a tedious and costly affair, so artificial intelligence was applied to predict water levels by training networks on tide data measured over one lunar tidal cycle. A two-layer feed-forward Artificial Neural Network (ANN) with back-propagation training algorithms, namely Gradient Descent (GD) and Levenberg-Marquardt (LM), was used to predict the yearly tide levels at the waterfront structures at Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe, and Vashi over a lunar tidal cycle (2013) were used to train, validate and test the neural networks. These trained networks, having high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tides for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give a reasonably accurate estimate of the tide. Hence, the trained network was used to predict the yearly tide data (2015) for Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau were predicted using a neural network trained on the measured tide data (2000) of Apollo and Pir-Pau. The analysis of the measured data and the study reveal the following. The measured tidal data at Pir-Pau, Vashi and Ulwe indicate a maximum amplification of the tide by about 10-20 cm, with a phase lag of 10-20 minutes, with reference to the tide at Apollo Bunder (Mumbai). The LM training algorithm is faster than GD, and the performance of the network increases with the number of neurons in the hidden layer. The tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water levels, enabling the operation of pumping at Pir-Pau to be planned and the ship schedule at Ulwe to be improved.
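
A minimal sketch of a small feed-forward network of the kind described, trained here with scikit-learn's L-BFGS solver as a stand-in for the Levenberg-Marquardt training used in the study (scikit-learn does not provide LM); the tide series, lags and layer size are illustrative assumptions on synthetic data:

```python
# Feed-forward ANN sketch: predict the tide at a secondary station from the reference-station tide.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(0, 30 * 24, 0.5)                                   # half-hourly samples over ~one lunar month, hours
apollo = 2.0 * np.sin(2 * np.pi * t / 12.42)                     # synthetic semi-diurnal tide at the reference port, m
ulwe = 1.15 * np.roll(apollo, 1) + 0.05 * rng.normal(size=t.size)  # amplified, lagged tide at the target site (synthetic)

# Present and lagged reference levels as input features:
X = np.column_stack([apollo, np.roll(apollo, 1), np.roll(apollo, 2)])
model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=2000, random_state=0)
model.fit(X, ulwe)
print("R on training data:", np.corrcoef(model.predict(X), ulwe)[0, 1])
```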

Keywords: Artificial neural network, back-propagation, tide data, training algorithm.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1711
21 Self-Sensing Concrete Nanocomposites for Smart Structures

Authors: A. D'Alessandro, F. Ubertini, A. L. Materazzi

Abstract:

In the field of civil engineering, Structural Health Monitoring is a topic of growing interest. Effective monitoring instruments permit the control of the working conditions of structures and infrastructures through the identification of behavioural anomalies due to incipient damage, especially in areas of high environmental hazard such as earthquakes. While traditional sensors can be applied only at a limited number of points, providing only partial information for structural diagnosis, novel transducers may allow diffuse sensing. Thanks to the new tools and materials provided by nanotechnology, new types of multifunctional sensors are emerging. In particular, cement-matrix composite materials capable of sensing their own state of strain and stress can be obtained by adding specific conductive nanofillers. Because of the nature of the material they are made of, these new nano-modified cementitious transducers can be embedded within concrete elements, transforming the structures themselves into sets of distributed sensors. This paper presents the results of research on a new self-sensing nanocomposite and on the implementation of smart sensors for Structural Health Monitoring. The nanocomposite was obtained by dispersing multi-walled carbon nanotubes within a cementitious matrix. The insertion of such conductive carbon nanofillers provides the base material with piezoresistive characteristics and a marked sensitivity to mechanical changes. The self-sensing ability is achieved by correlating the variation of the external stress or strain with the variation of electrical properties such as the electrical resistance or conductivity. Through the measurement of such electrical characteristics, the performance and the working conditions of an element or a structure can be monitored. Among conductive carbon nanofillers, carbon nanotubes appear particularly promising for the realization of self-sensing cement-matrix materials. Some issues related to nanofiller dispersion and to the influence of the amount of nano-inclusions in the cement matrix need to be carefully investigated, since the strain sensitivity of the resulting sensors is influenced by such factors. This work analyses the dispersion of the carbon nanofillers, the physical properties of the fresh mix, the electrical properties of the hardened composites and the sensing properties of the realized sensors. The experimental campaign focuses specifically on their dynamic characterization and their applicability to the monitoring of full-scale elements. The results of the electromechanical tests with both slowly varying and dynamic loads show that the developed nanocomposite sensors can be effectively used for the health monitoring of structures.
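
To make the sensing principle concrete, the sketch below converts a measured resistance trace into strain under the common linear piezoresistive assumption ΔR/R0 = GF·ε. The gauge factor, baseline resistance and resistance trace are illustrative assumptions, not values reported in the paper.

```python
import numpy as np

# Minimal sketch of the strain-sensing principle: relate the fractional change
# in electrical resistance of the nanocomposite to axial strain through an
# (assumed) constant gauge factor calibrated beforehand.
GAUGE_FACTOR = 150.0          # illustrative value for a CNT-cement composite
R0 = 1.2e4                    # unloaded electrical resistance (ohm), assumed

def strain_from_resistance(R):
    """Estimate strain from measured resistance, assuming dR/R0 = GF * strain."""
    return (R - R0) / (R0 * GAUGE_FACTOR)

# Usage: a slowly varying load produces a resistance trace; recover the strain.
resistance_trace = R0 * (1 + GAUGE_FACTOR * 50e-6 * np.sin(np.linspace(0, 2 * np.pi, 9)))
print(np.round(strain_from_resistance(resistance_trace) * 1e6, 1), "microstrain")
```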

Keywords: Carbon nanotubes, self-sensing nanocomposites, smart cement-matrix sensors, structural health monitoring.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 3458
20 Dependence of Densification, Hardness and Wear Behaviors of Ti6Al4V Powders on Sintering Temperature

Authors: Adewale O. Adegbenjo, Elsie Nsiah-Baafi, Mxolisi B. Shongwe, Mercy Ramakokovhu, Peter A. Olubambi

Abstract:

The sintering step in powder metallurgy (P/M) processes is very sensitive, as it determines to a large extent the properties of the final component. Over the past decade, spark plasma sintering has been used extensively to consolidate a wide range of materials, including metallic alloy powders. This non-conventional sintering method has proven advantageous over conventional sintering methods, offering full densification of materials, high heating rates, low sintering temperatures, and short sintering cycles. Ti6Al4V is regarded as the most widely used α+β alloy owing to its impressive mechanical performance in service environments; as a light metal alloy it also offers the fuel efficiency needed in the aerospace and automobile industries. The P/M route is a promising method for the fabrication of Ti6Al4V parts because of its cost and material-loss reductions and its ability to produce near-net and intricate shapes. However, the use of this alloy has been largely limited by its relatively poor hardness and wear properties. The effect of sintering temperature on the densification, hardness, and wear behaviour of spark plasma sintered Ti6Al4V powders was investigated in the present study. Sintering of the alloy powders was performed in the 650-850°C temperature range at a constant heating rate, applied pressure and holding time of 100°C/min, 50 MPa and 5 min, respectively. Density measurements were carried out according to Archimedes' principle, and microhardness tests were performed on sectioned, as-polished surfaces at a load of 100 gf and a dwell time of 15 s. Dry sliding wear tests were performed at sliding loads of 5, 15, 25 and 35 N using a ball-on-disc tribometer with WC as the counterface material. Microstructural characterization of the sintered samples and wear tracks was carried out using SEM and EDX techniques. The density and hardness of the sintered samples increased with increasing sintering temperature. Near-full densification (99.6% of the theoretical density) and a Vickers micro-indentation hardness of 360 HV were attained at 850°C. The coefficient of friction (COF) and wear depth improved significantly with increased sintering temperature under all the loading conditions examined, except at 25 N, indicating better mechanical properties at high sintering temperatures. Worn-surface analyses showed that the wear mechanism was a combination of adhesive and abrasive wear, with the former prevalent.
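
As a small illustration of the densification metric quoted above, the sketch below computes relative density from an Archimedes-type measurement (dry and suspended masses). The masses, water density and theoretical density used are illustrative assumptions, not the study's measurements.

```python
# Minimal sketch of the relative-density calculation used after sintering,
# based on Archimedes' principle; masses below are illustrative, not measured.
RHO_WATER = 0.9978          # g/cm^3 at ~22 degrees C
RHO_TH_TI64 = 4.43          # theoretical density of Ti6Al4V, g/cm^3

def relative_density(m_dry, m_suspended):
    """Bulk density from dry and suspended-in-water masses, as % of theoretical."""
    rho_bulk = m_dry * RHO_WATER / (m_dry - m_suspended)
    return 100.0 * rho_bulk / RHO_TH_TI64

print(f"{relative_density(8.820, 6.826):.1f} % of theoretical")
```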

Keywords: Hardness, powder metallurgy, Spark plasma sintering, wear.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1577
19 The Resource-Base View of Organization and Innovation: Recognition of Significant Relationship in an Organization

Authors: Francis Deinmodei W. Poazi, Jasmine O. Tamunosiki-Amadi, Maurice Fems

Abstract:

In recent times the resource-based view (RBV) of strategic management has attracted sizeable attention, yet scholarly and managerial discourse on it remains limited. This paper therefore offers a critical analysis of the relationship between the RBV and organizational innovation. The study examines those salient aspects of the RBV that underpin an organization's capacity for innovative capability, drawing on relevant academic discourse and empirical evidence; in doing so, it lays the groundwork for future empirical research. The study is guided by and built on the following points of strength and significance. First, the RBV regards resources as heterogeneous, which is a point of strength and allows organisations to gain competitive advantage: competitive advantage is achieved when resources are utilized in a distinctively valuable manner beyond that envisaged by the organization's competitors. Second, the RBV is influential in determining the real resources available within the organization, with a view to locating the capabilities that, when applied, attract greater profitability and thereby more sustainable growth and success in an ever more competitive and emerging market. Methodologically, the study adopts both qualitative and quantitative approaches in order to obtain a broad sample of opinion for establishing and identifying key strategic organizational resources, enabling managers of resources to gain competitive advantage and to generate a sustainable increase in profit. A comparative approach and analysis were also used to examine the performance of the RBV within organizations. The findings include the following: there is a clear nexus between the RBV and the growth of competitively viable organizations; most organizations hold heterogeneous resources, but not all deliberately adopt the tenets of the RBV to strengthen that heterogeneity and thereby gain competitive advantage; and managerial perceptions of the RBV differ with respect to the application and transformation of resources to achieve a profitable end. Against this backdrop, the importance of the RBV cannot be overemphasized; the study holds that the RBV is a distinct, inside-out approach that encourages sourcing or generating resources internally and applying them diligently to increase or gain competitive advantage.

Keywords: Competitive advantage, innovation, organisation, recognition, resource-based view.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2159
18 Solid State Drive End to End Reliability Prediction, Characterization and Control

Authors: Mohd Azman Abdul Latif, Erwan Basiron

Abstract:

A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. It is therefore important to ensure the required quality of each individual component through qualification testing specified by standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost to product manufacturers. A highly technical team drawn from all key stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustained in mass production. This paper discusses a comprehensive development framework covering the SSD end to end, from design to assembly, in-line inspection and in-line testing, which is able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD goes through an intense reliability-margin investigation focused on assembly process attributes, process equipment control and in-process metrology, while also taking the forward-looking product roadmap into account. Once these pillars are completed, the next step is to perform process characterization and build a reliability prediction model. Next, for design validation, a reliability prediction tool, specifically a solder joint simulator, is established. The SSDs are stratified into non-operating and operating tests focused on solder joint reliability and connectivity/component latent failures, prevented through design intervention and contained through the Temperature Cycle Test (TCT). Some of the SSDs are subjected to physical solder joint analysis, namely Dye and Pry (DP) and cross-section analysis, and the results are fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven working, the monitor phase is implemented, in which the Design for Assembly (DFA) rules are updated. At this stage, the design changes and the process and equipment parameters are under control. Predictable product reliability early in product development enables on-time sample qualification delivery to the customer, optimizes product development validation and development resources, and avoids forced late investment to patch end-of-life product failures. Understanding the critical-to-reliability parameters earlier allows the focus to be placed on increasing the product margin, which increases customer confidence in product reliability.
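
By way of illustration only (this is not the authors' solder-joint model), the sketch below estimates a Temperature Cycle Test acceleration factor for solder joints using the Norris-Landzberg form of the Coffin-Manson relation, a common approach to relating accelerated thermal cycling to field conditions. Every parameter value shown is an assumption.

```python
import math

def norris_landzberg_af(dT_test, dT_field, f_test, f_field, Tmax_test, Tmax_field,
                        n=1.9, m=1.0 / 3.0, Ea=0.122, k=8.617e-5):
    """AF = (dT_test/dT_field)^n * (f_field/f_test)^m
            * exp(Ea/k * (1/Tmax_field - 1/Tmax_test)), temperatures in deg C."""
    return ((dT_test / dT_field) ** n
            * (f_field / f_test) ** m
            * math.exp(Ea / k * (1.0 / (Tmax_field + 273.15) - 1.0 / (Tmax_test + 273.15))))

# Usage: -40/+85 deg C test cycles (24 cycles/day) vs. an assumed mild 20 deg C
# field swing at 2 cycles/day peaking at 40 deg C.
af = norris_landzberg_af(dT_test=125, dT_field=20, f_test=24, f_field=2,
                         Tmax_test=85, Tmax_field=40)
print(f"acceleration factor ~ {af:.0f}")
```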

Keywords: e2e reliability prediction, SSD, TCT, Solder Joint Reliability, NUDD, connectivity issues, qualifications, characterization and control.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 399
17 Complementing Assessment Processes with Standardized Tests: A Work in Progress

Authors: Amparo Camacho

Abstract:

ABET-accredited programs must assess the development of student learning outcomes (SOs) in engineering programs. Different institutions implement different strategies for this assessment, and these are usually designed “in house.” This paper presents a proposal for including standardized tests to complement the ABET assessment model in an engineering college made up of six distinct engineering programs. The engineering college formulated a model of quality assurance in education to be implemented throughout the six engineering programs to regularly assess and evaluate the achievement of SOs in each program offered. The model uses diverse techniques and sources of data to assess student performance and to implement actions of improvement based on the results of this assessment. The model is called the “Assessment Process Model” and it includes SOs A through K, as defined by ABET. SOs can be divided into two categories: “hard skills” and “professional skills” (soft skills). The first includes abilities such as applying knowledge of mathematics, science, and engineering and designing and conducting experiments, as well as analyzing and interpreting data. The second category, “professional skills”, includes communicating effectively and understanding professional and ethical responsibility. Within the Assessment Process Model, various tools were used to assess SOs related to both “hard” and “soft” skills. The assessment tools designed included rubrics, surveys, questionnaires, and portfolios. In addition to these instruments, the Engineering College decided to use tools that systematically gather consistent quantitative data. For this reason, an in-house exam was designed and implemented, based on the curriculum of each program. Even though this exam was administered during various academic periods, it is not currently considered standardized. In 2017, the Engineering College included three standardized tests: one to assess mathematical and scientific reasoning and two more to assess reading and writing abilities. With these exams, the college hopes to obtain complementary information that can help better measure the development of both hard and soft skills of students in the different engineering programs. In the first semester of 2017, the three exams were given to three sample groups of students from the six different engineering programs. Students in the sample groups were drawn from the first-, fifth-, and tenth-semester cohorts. At the time of submission of this paper, the engineering college has descriptive statistical data and is working with statisticians to produce a more in-depth and detailed analysis of the sample students' achievement on the three exams. The overall objective of including standardized exams in the assessment model is to identify more precisely the least developed SOs in order to define and implement the educational strategies necessary for students to achieve them in each engineering program.

Keywords: Assessment, hard skills, soft skills, standardized tests.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 803
16 Investigating the Process Kinetics and Nitrogen Gas Production in Anammox Hybrid Reactor with Special Emphasis on the Role of Filter Media

Authors: Swati Tomar, Sunil Kumar Gupta

Abstract:

Anammox is a novel and promising technology that has changed the traditional concept of biological nitrogen removal. The process facilitates direct oxidation of ammoniacal nitrogen under anaerobic conditions with nitrite as the electron acceptor, without the addition of external carbon sources. The present study investigated the feasibility of an Anammox Hybrid Reactor (AHR) combining the dual advantages of suspended and attached growth media for the biodegradation of ammoniacal nitrogen in wastewater. The experimental set-up consisted of four 5 L AHRs inoculated with a mixed seed culture of anoxic and activated sludge (1:1). The process was established by feeding the reactors with synthetic wastewater containing NH4-N and NO2-N in the ratio 1:1 at a hydraulic retention time (HRT) of 1 day. The reactors were gradually acclimated to higher ammonium concentrations until they attained pseudo-steady-state removal at a total nitrogen concentration of 1200 mg/L. During this period, the performance of the AHR was monitored at twelve different HRTs varying from 0.25 to 3.0 d, with the nitrogen loading rate (NLR) increasing from 0.4 to 4.8 kg N/m³·d. The AHR demonstrated significantly higher nitrogen removal (95.1%) at the optimal HRT of 1 day. The filter media in the AHR contributed an additional 27.2% ammonium removal as well as a 72% reduction in the sludge washout rate. This may be attributed to the functional mechanism of the filter media, which acts as a mechanical sieve and reduces the sludge washout rate manyfold. This enhances the biomass retention capacity of the reactor by 25%, which is the key parameter for successful operation of high-rate bioreactors. The effluent nitrate concentration, one of the bottlenecks of the anammox process, was also kept low (42.3-52.3 mg/L). Process kinetics was evaluated using first-order and Grau second-order models. The first-order substrate removal rate constant was found to be 13.0 d⁻¹. Model validation revealed that the Grau second-order model was more precise and predicted the effluent nitrogen concentration with the least error (1.84±10%). A new mathematical model based on mass balance was developed to predict N2 gas production in the AHR. The mass balance model derived from total nitrogen showed a significantly higher correlation (R² = 0.986) and predicted N2 gas with the least error (0.12±8.49%). SEM study of the biomass indicated a heterogeneous population of cocci and rod-shaped bacteria with average diameters of 1.2-1.5 µm. Owing to the enhanced nitrogen removal efficiency, the meagre production of effluent nitrate and its ability to retain high biomass, the AHR proved a highly competitive reactor configuration for treating nitrogen-laden wastewater.
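
To illustrate how such kinetic constants can be extracted, the sketch below fits the linearized first-order (completely mixed reactor) and Grau second-order models to influent/effluent nitrogen data. The HRT and effluent values are invented for illustration and are not the paper's measurements; the resulting constants will therefore differ from those reported above.

```python
import numpy as np

# Hypothetical HRT (d) and effluent total-N (mg/L) observations at S0 = 1200 mg/L.
S0 = 1200.0
hrt = np.array([0.25, 0.5, 1.0, 2.0, 3.0])
Se = np.array([420.0, 210.0, 60.0, 35.0, 25.0])      # illustrative values only

# First-order model for a completely mixed reactor: Se = S0 / (1 + k1 * HRT).
# Linearized as (S0/Se - 1) = k1 * HRT; least-squares slope through the origin gives k1.
k1 = np.sum((S0 / Se - 1.0) * hrt) / np.sum(hrt ** 2)

# Grau second-order model, linearized as (S0 * HRT) / (S0 - Se) = a + b * HRT.
y = S0 * hrt / (S0 - Se)
b, a = np.polyfit(hrt, y, 1)
Se_grau = S0 * (1.0 - hrt / (a + b * hrt))

print(f"first-order k1 ~ {k1:.1f} 1/d")
print("Grau-predicted effluent N (mg/L):", np.round(Se_grau, 1))
```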

Keywords: Anammox, filter media, kinetics, nitrogen removal.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2551
15 Climate Safe House: A Community Housing Project Tackling Catastrophic Sea Level Rise in Coastal Communities

Authors: Chris Fersterer, Col Fay, Tobias Danielmeier, Kat Achterberg, Scott Willis

Abstract:

New Zealand, an island nation, has an extensive coastline peppered with small communities of iconic buildings known as baches. Post-WWII, these modest buildings were constructed by their owners as retreats; they were generally small and low cost, often used recycled materials, and often fell below currently acceptable building standards. In the latter part of the 20th century, real estate prices in many of these communities remained low, and these areas became permanent residences for people attracted to this affordable lifestyle choice. The Blueskin Resilient Communities Trust (BRCT) is an organisation that recognises the vulnerability of low-lying settlements, which are now prone to an increased flood threat brought about by climate change and sea level rise. Some of the inhabitants of Blueskin Bay, Otago, NZ have already found their properties to be uninsurable because of the increased frequency of flood events, and property values have slumped accordingly. Territorial authorities also acknowledge this increased risk and have created additional compliance measures for new buildings that are less than 2 m above tidal peaks. Community resilience becomes an additional concern where inhabitants are attracted to a lifestyle associated with a specific location and its people, a lifestyle that cannot be replicated in a suburban or city context. Traditional models of social housing fail to provide the sense of community connectedness and identity enjoyed by the current residents of Blueskin Bay. BRCT have partnered with the Otago Polytechnic Design School to design a new form of community housing that can react to this environmental change. It is a longitudinal project incorporating participatory approaches as a means of getting people 'on board', understanding complex systems and co-developing solutions. In the first period, they are seeking industry support and funding to develop a transportable and fully self-contained housing model that exploits current technologies. BRCT also hope that the building will become an educational tool to highlight the climate change issues facing us today. This paper uses the Climate Safe House (CSH) as a case study for education in architectural sustainability through experiential learning offered as part of the Otago Polytechnic's Bachelor of Design. Students engage with the project through research methodologies including site surveys, resident interviews, data sourced from government agencies and physical modelling. The process involves collaboration across design disciplines, including product and interior design, and also involves connections with industry, both within the educational institution and with stakeholder industries introduced through BRCT. This project offers a rich learning environment where students become engaged through project-based learning within a community of practice spanning architecture, construction, energy and other related fields. The design outcomes are expressed in a series of public exhibitions and forums where community input is sought in a truly participatory process.

Keywords: Community resilience, problem based learning, project based learning, case study.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 968
14 From Primer Generation to Chromosome Identification: A Primer Generation Genotyping Method for Bacterial Identification and Typing

Authors: Wisam H. Benamer, Ehab A. Elfallah, Mohamed A. Elshaari, Farag A. Elshaari

Abstract:

A challenge for laboratories is to provide bacterial identification and antibiotic sensitivity results within a short time. Hence, advancement in the required technology is desirable to improve timing, accuracy and quality. Even with current advances in methods for both phenotypic and genotypic identification of bacteria, there is still a need for methods that improve the accuracy and turnaround time of bacteriology laboratories. The hypothesis introduced here is based on the assumption that the chromosome of any bacterium contains unique sequences that can be used for its identification and typing. The outcome of a pilot study designed to test this hypothesis is reported in this manuscript. Methods: The complete chromosome sequences of several bacterial species were downloaded for use as search targets for unique sequences. Visual Basic and SQL Server 2014 were used to generate a complete set of 18-base primers, a process that started with reverse translation of six randomly chosen amino acids to limit the number of generated primers. In addition, software was designed to scan the downloaded chromosomes for matches with the generated primers, and the resulting hits were classified according to the number of matching chromosomal sites, i.e., unique or otherwise. Results: All primers with identical or similar sequences in the selected genome sequence(s) were classified according to the number of hits in the chromosome search. Those identical to a single site on a single bacterial chromosome were referred to as unique. Most generated primer sequences, however, were identical to multiple sites on one or more chromosomes. Following scanning, the generated primers were classified according to their ability to differentiate between medically important bacteria, and the initial results look promising. Conclusion: A simple strategy that starts by generating primers was introduced; the primers were used to screen bacterial genomes for matches. Primers uniquely identical to a specific DNA sequence on a specific bacterial chromosome were selected. The identified unique sequences can be used in different molecular diagnostic techniques, possibly to identify bacteria. In addition, a single primer that identifies multiple sites in a single chromosome can be exploited for region or genome identification. Although draft genome sequences of isolates enable high-throughput primer design using alignment strategies, which enhances diagnostic performance compared with traditional molecular assays, the present method allows the generated primers to be used to identify an organism before a draft sequence is completed. In addition, the generated primers can be used to build a bank of primers for easy access when identifying bacteria.
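
A minimal sketch of the idea in Python (the original implementation used Visual Basic and SQL Server): reverse-translate a six-residue peptide into all candidate 18-base primers and keep those that hit a target chromosome exactly once. The codon table is deliberately truncated, and the peptide, chromosome and function names are invented for illustration.

```python
from itertools import product

# Partial back-translation table (standard genetic code); extend to all 20
# amino acids for real use -- shown truncated here for brevity.
CODONS = {
    "M": ["ATG"],
    "W": ["TGG"],
    "F": ["TTT", "TTC"],
    "K": ["AAA", "AAG"],
    "D": ["GAT", "GAC"],
    "E": ["GAA", "GAG"],
}

def candidate_primers(peptide):
    """All 18-base primers encoding a 6-residue peptide."""
    assert len(peptide) == 6
    for combo in product(*(CODONS[aa] for aa in peptide)):
        yield "".join(combo)

def count_hits(primer, chromosome):
    """Count exact matches of an 18-mer in a chromosome string."""
    count, pos = 0, chromosome.find(primer)
    while pos != -1:
        count += 1
        pos = chromosome.find(primer, pos + 1)
    return count

# Usage: keep only primers with exactly one hit ("unique") in a target genome.
chromosome = "ATG" * 1000 + "ATGTGGTTTAAAGATGAA" + "GGC" * 1000   # toy sequence
unique = [p for p in candidate_primers("MWFKDE")
          if count_hits(p, chromosome) == 1]
print(len(unique), "unique primer(s) found")
```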

Keywords: Bacteria chromosome, bacterial identification, sequence, primer generation.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1046
13 Roughness and Hardness of 60/40 Cu-Zn Alloy

Authors: Pavana Manvikar, G. K. Purohit

Abstract:

The functional performance of machined components often depends on surface topography, hardness, and the nature of the stress and strain induced on the surface. Invariably, the surfaces of metallic components obtained by turning, milling and similar operations contain irregularities such as machining marks, which are responsible for this dependence. Surface finishing/coating processes used to improve surface quality and texture are classified as chip-removal or chip-less processes. Burnishing is a chip-less cold-working process carried out to obtain improvements in surface finish, hardness and resistance to fatigue and corrosion that are not obtainable by other surface coating and surface treatment processes. It is a very simple but effective method that improves surface characteristics and is reported to introduce compressive residual stresses.

Of late, considerable attention has been paid to post-machining finishing operations such as burnishing. During burnishing, the micro-irregularities start to deform plastically: initially the crests are gradually flattened and zones of reduced deformation are formed. When all the crests have deformed, the valleys between the micro-irregularities start moving towards the newly formed surface. The grain structure is thereby condensed, producing a smoother and harder surface with superior load-carrying and wear-resistance capabilities.

Burnishing can be performed on a lathe with a highly polished ball- or roller-type tool that is traversed under force over a rotating or stationary workpiece. Often, several passes are used to obtain a workpiece surface with the desired finish and hardness.

This paper presents the findings of an experimental investigation into the effect of ball burnishing parameters, namely burnishing speed, feed, force and number of passes, on the surface roughness (Ra) and micro-hardness (Hv) of a 60/40 copper-zinc alloy, using a two-level fractional factorial design of experiments (DoE). Mathematical models were developed to predict the surface roughness and hardness produced by burnishing in terms of the above process parameters. A ball-type tool, designed and constructed from a high-chrome steel (HRC 63, Ra = 0.012 µm), was used for burnishing fine-turned cylindrical bars (Ra = 0.68-0.78 µm, 145 HV). The fitted models are:

Ra = 0.305 - 0.005X1 - 0.0175X2 + 0.0525X4 + 0.0125X1X4 - 0.02X2X4 - 0.0375X3X4

Hv = 160.625 - 2.375X1 + 5.125X2 + 1.875X3 + 4.375X4 - 1.625X1X4 + 4.375X2X4 - 2.375X3X4

The highest surface micro-hardness (175 HV) was obtained at 400 rpm, 2 passes, 0.05 mm/rev and 15 kgf, and the best surface finish (0.20 µm) was achieved at 30 kgf, 0.1 mm/rev, 112 rpm and a single pass. In other words, surface finish improved by 350% and micro-hardness improved by 21% compared with the as-machined condition.
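
A small sketch of how such fitted models can be exercised: it evaluates the Ra and Hv regressions above at the 2⁴ corner points of the design to locate the best predicted finish and hardness. It assumes X1-X4 are coded factor levels (-1/+1); the abstract does not state the factor-to-variable mapping or the coding, so the output is illustrative only.

```python
import itertools

# Response-surface models quoted in the abstract; X1..X4 are assumed to be
# coded factor levels (-1 = low, +1 = high) for speed, feed, force and passes.
def Ra(x1, x2, x3, x4):
    return (0.305 - 0.005 * x1 - 0.0175 * x2 + 0.0525 * x4
            + 0.0125 * x1 * x4 - 0.02 * x2 * x4 - 0.0375 * x3 * x4)

def Hv(x1, x2, x3, x4):
    return (160.625 - 2.375 * x1 + 5.125 * x2 + 1.875 * x3 + 4.375 * x4
            - 1.625 * x1 * x4 + 4.375 * x2 * x4 - 2.375 * x3 * x4)

# Scan the 2^4 corner points to locate the best predicted finish and hardness.
corners = list(itertools.product([-1, 1], repeat=4))
best_finish = min(corners, key=lambda c: Ra(*c))
best_hardness = max(corners, key=lambda c: Hv(*c))
print("lowest predicted Ra :", round(Ra(*best_finish), 3), "um at", best_finish)
print("highest predicted Hv:", round(Hv(*best_hardness), 1), "HV at", best_hardness)
```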

Keywords: Ball burnishing, surface roughness, micro-hardness.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2532
12 A Web and Cloud-Based Measurement System Analysis Tool for the Automotive Industry

Authors: C. A. Barros, Ana P. Barroso

Abstract:

Any industrial company needs to determine the amount of variation that exists within its measurement process and guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability, reproducibility and stability. This issue is critical for automotive industry suppliers, who are required to be certified to the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of its mandatory tools. Frequently, the measurement system in companies is not connected to the equipment and does not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to ensure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all the statistical tests proposed in the MSA-4 reference manual from AIAG as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the associated errors. The tool ensures traceability of all performed tests and can be used in quality laboratories and on production lines. In addition, it monitors MSAs over time, allowing both the analysis of deviations in the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented. First, a benchmarking analysis of current competitors and commercial MSA solutions was performed with respect to the Industry 4.0 paradigm. Next, an analysis of the size of the target market for the MSA tool was carried out. Afterwards, data flow and traceability requirements were analysed in order to implement an IoT data network that interconnects with the equipment, preferably wirelessly. The MSA web solution was designed under UI/UX principles, and an API was developed in Python to run the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to ensure real-time management of the 'big data'. The main results of this R&D project are: the web and cloud-based MSA tool; the Python API; new algorithms for the market; and the UI/UX style guide of the tool. The proposed MSA tool adds value to the state of the art, as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry has triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical, and food industries are already validating it.
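
As a minimal, illustrative sketch of two of the MSA checks mentioned above (bias and repeatability for a single appraiser measuring a reference part), the snippet below uses invented measurements; the reference value, tolerance and the 6-sigma spread convention are assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative repeated measurements of one certified reference part (mm).
reference = 10.000          # certified value of the reference part
tolerance = 0.200           # part tolerance width
trials = np.array([10.003, 9.998, 10.001, 10.004, 9.999,
                   10.002, 10.000, 10.003, 9.997, 10.001])

bias = trials.mean() - reference                  # systematic offset
repeatability = trials.std(ddof=1)                # equipment variation (EV), 1 sigma
pct_ev = 100 * (6 * repeatability) / tolerance    # 6-sigma spread as a % of tolerance

print(f"bias          = {bias:+.4f} mm")
print(f"repeatability = {repeatability:.4f} mm (1 sigma)")
print(f"%EV           = {pct_ev:.1f}% of tolerance")
```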

Keywords: Automotive industry, Industry 4.0, internet of things, IATF 16949:2016, measurement system analysis.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 993