Search results for: system development lifecycle.
295 Maximization of Lifetime for Wireless Sensor Networks Based on Energy Efficient Clustering Algorithm
Authors: Frodouard Minani
Abstract:
Over the last decade, wireless sensor networks (WSNs) have been used in many areas such as health care, agriculture, defense, the military, disaster-hit areas and so on. A WSN consists of a base station (BS) and a number of wireless sensor nodes that monitor temperature, pressure or motion under different environmental conditions. The key parameter in designing a protocol for WSNs is energy efficiency, because energy is the scarcest resource of sensor nodes and determines their lifetime. Maximizing a sensor node’s lifetime is therefore an important issue in the design of applications and protocols for WSNs. Clustering of sensor nodes is an effective topology-control approach that helps achieve this goal. In this paper, an energy-efficient protocol based on an energy-efficient clustering algorithm is presented to prolong the network lifetime. The Low Energy Adaptive Clustering Hierarchy (LEACH) is a cluster-based routing protocol used to lower energy consumption and improve the lifetime of WSNs. Minimizing energy dissipation and maximizing network lifetime are thus central concerns in the design of applications and protocols for WSNs. The proposed system maximizes the lifetime of the WSN by choosing the farthest cluster head (CH) instead of the closest CH and by forming clusters according to metrics such as node density, residual energy and inter-cluster distance. Comparisons between the proposed protocol and comparative protocols were carried out in different scenarios, and the simulation results show that the proposed protocol outperforms the comparative protocols in the various scenarios.
Keywords: Base station, clustering algorithm, energy efficient, wireless sensor networks.
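To make the cluster-formation step concrete, the sketch below scores candidate cluster heads by the three metrics named in the abstract (residual energy, node density and inter-cluster distance) and greedily picks the best candidates. It is an illustrative sketch only, not the authors' protocol: the weights, field size, neighbourhood radius and node attributes are assumed values.
```python
import math
import random

# Hypothetical 100 m x 100 m field with 50 nodes; energies are assumed values in J.
random.seed(1)
nodes = [{"id": i,
          "x": random.uniform(0, 100),
          "y": random.uniform(0, 100),
          "energy": random.uniform(0.2, 1.0)}
         for i in range(50)]

def distance(a, b):
    return math.hypot(a["x"] - b["x"], a["y"] - b["y"])

def density(node, radius=20.0):
    """Number of neighbours within an assumed 20 m cluster radius."""
    return sum(1 for n in nodes if n is not node and distance(node, n) <= radius)

def ch_score(node, current_chs, w1=0.5, w2=0.3, w3=0.2):
    """Higher is better: favour high residual energy, high local density and
    large separation from already-chosen cluster heads (inter-cluster distance)."""
    sep = min((distance(node, ch) for ch in current_chs), default=100.0)
    return (w1 * node["energy"]
            + w2 * density(node) / len(nodes)
            + w3 * sep / 100.0)

# Greedily pick 5 cluster heads for this round.
cluster_heads = []
for _ in range(5):
    best = max((n for n in nodes if n not in cluster_heads),
               key=lambda n: ch_score(n, cluster_heads))
    cluster_heads.append(best)

print([ch["id"] for ch in cluster_heads])
```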
294 Honey Contamination in the Republic of Kazakhstan
Authors: B. Sadepovich Maikanov, Z. Shabanbayevich Adilbekov, R. Husainovna Mustafina, L. Tyulegenovna Auteleyeva
Abstract:
This study provides detailed information about contaminants of honey in the Republic of Kazakhstan. The requirements of the technical regulation ‘Requirements to safety of honey and bee products’ and GOST 19792-2001 were taken into account in this research. Contamination of honey by antibiotics was determined by immune-enzyme analysis (IEA) using a Ridder analyzer and test systems produced by Tecna. Voltammetry (TaLab device) was used to determine contamination by salts of heavy metals, and gamma-beta spectrometry (‘Progress BG’ system), with preliminary ashing of the honey sample, was used to determine radioactive contamination. Residues of chloramphenicol were detected in 24% of the investigated products, streptomycin in 22%, sulfanilamide in 7.3% and tylosin in 2.4%, while combined contamination was noted in 12%. Geographically, the greatest degree of contamination of honey with antibiotics occurs in Northern Kazakhstan (54.4%) and Southern Kazakhstan (50%), and the lowest in Central and Eastern Kazakhstan (30% and 25%, respectively). Generally, pollution by heavy metals is within acceptable limits, but contamination by lead is highest in the Akmola region. The level of radioactive cesium and strontium is also within acceptable concentrations. The highest radioactivity in terms of cesium was observed in the East Kazakhstan region, 49.00±10 Bq/kg, while in Akmola, North Kazakhstan and Almaty it was 12.00±5, 11.05±3 and 19.0±8 Bq/kg, respectively, against a norm of 100 Bq/kg. In terms of strontium, the radioactivity in the East Kazakhstan region is 25.03±15 Bq/kg, while in the Akmola, North Kazakhstan and Almaty regions it is 12.00±3, 10.2±4 and 1.0±2 Bq/kg, respectively, against a norm of 80 Bq/kg. This accumulation is mainly associated with environmental degradation and with the feeding and treatment of bees. Moreover, in the process of collecting nectar, external substances can penetrate honey. Overall, this research determines the factors and causes of honey contamination.
Keywords: Antibiotics, contamination of honey, honey, radionuclides.
293 A Modelling Study of the Photochemical and Particulate Pollution Characteristics above a Typical Southeast Mediterranean Urban Area
Authors: Kiriaki-Maria Fameli, Vasiliki D. Assimakopoulos, Vasiliki Kotroni
Abstract:
The Greater Athens Area (GAA) faces photochemical and particulate pollution episodes as a result of the combined effects of local pollutant emissions, regional pollution transport, synoptic circulation and topographic characteristics. The area has undergone significant changes since the Athens 2004 Olympic Games because of large-scale infrastructure works that led to the shift of population to areas previously characterized as rural, the increase of the traffic fleet and the operation of highways. However, few recent modelling studies have been performed due to the lack of an accurate, updated emission inventory. The photochemical modelling system MM5/CAMx was applied in order to study the photochemical and particulate pollution characteristics above the GAA for two distinct ten-day periods in the summers of 2006 and 2010, during which air pollution episodes occurred. A new updated emission inventory was used, based on official data. Comparison of modelled results with measurements revealed the importance and accuracy of the new Athens emission inventory as compared to previous modelling studies. The model managed to reproduce the local meteorological conditions and the daily fluctuations of ozone and particulates at different locations across the GAA. Higher ozone levels were found at suburban and rural areas as well as over the sea at the south of the basin. Concerning PM10, high concentrations were computed at the city centre and the southeastern suburbs, in agreement with measured data. Source apportionment analysis showed that different sources contribute to the ozone levels, with local sources (traffic, port activities) affecting its formation.
Keywords: Photochemical modelling, urban pollution, greater Athens area, MM5/CAMx.
292 Impact of Revenue Reform on Vulnerable Communities in Tonga
Authors: Pauliasi Tony Fakahau
Abstract:
This paper provides an overview of the impact of the revenue reform programme on vulnerable communities in the Kingdom of Tonga. Economic turmoil and mismanagement during the late 1990s forced the government to seek technical and financial assistance from the Asian Development Bank to undertake a comprehensive Economic and Public Sector Reform (EPSR) programme. The EPSR is a Western model recommended by donor agencies as the solution to Tonga’s economic challenges. The EPSR programme included public sector reform, private sector growth, and revenue generation. Tax reform was the main tool for revenue generation, which set out to strengthen tax compliance and administration as well as implement a value-added consumption tax. The EPSR is based on Western values and ideology but failed to recognise that Tongan cultural values are important to the local community. Two participant groups were interviewed. Participant group one consisted of 51 people representing vulnerable communities. Participant group two consisted of six people from the government and business sector who were from the elite of Tongan society. The Kakala Research Methodology provided the framework for the research, and the Talanoa Research Method was used to conduct semi-structured interviews in the homes of the first group and in the workplaces of the second group. The research found a heavy burden of the consumption tax on the purchasing power of participant group one (vulnerable participants), having an impact on nearly every financial transaction they made. Participant group one’s main financial priorities were kavenga fakalotu (obligations to the church), kavenga fakafāmili (obligations to the family) and kavenga fakafonua (obligations to cultural events for the village, nobility, and royalty). The findings identified inequalities of the revenue reform, especially from consumption tax, for vulnerable people and communities compared to the elite of society. The research concluded that government and donor agencies need ameliorating policies to reduce the burden of tax on vulnerable groups more susceptible to the impact of revenue reform.
Keywords: Tax reform, Tonga vulnerable community revenue, revenue reform, public sector reform.
291 Frank Norris’ McTeague: An Entropic Melodrama
Authors: Mohsen Masoomi, Fazel Asadi Amjad, Monireh Arvin
Abstract:
According to Naturalistic principles, human destiny, in the form of blind chance and determinism, entraps the individual, so man is a defenceless creature unable to escape from the ruthless paws of a stoical universe. In Naturalism, nonetheless, melodrama mirrors a conscious alternative with a peculiar function. A typical American Naturalistic character thus cannot be a subject for social criticism of American society, since such characters are victims not of the ongoing virtual slavery, the capitalist system, nor of a ruined milieu, but of their own volition and, more importantly, their character frailty. From a Postmodern viewpoint, each Naturalistic work can encompass some entropic trends and changes culminating in entire failure and devastation. Frank Norris in McTeague displays the futile struggles of ordinary men and how they end up brutes. McTeague encompasses intoxication, abuse, violation, and ruthless homicides. Norris’ depictions of the falling individual as a demon represent the entropic dimension of Naturalistic novels. McTeague’s defeat is somewhat his own fault, the result of his own blunders and resolution, not the result of sheer accident. Throughout the novel, each character is a kind of insane quester indicating McTeague’s decadence and, by inference, the decadence of Western civilisation. McTeague seems to signal Norris’ solicitude for a community fabricated from negative human demeanours and conducts carrying acute symptoms of infectious dehumanisation. The aim of this article is to illustrate how one specific negative human disposition can gradually, like a running fire, spread everywhere and burn everything in its path. The authors apply the concept of entropy metaphorically to describe the individual devolutions that necessarily comprise community entropy in McTeague, a dying universe.
Keywords: Animal imagery, entropy, Gypsy, melodrama.
290 Political Economy of Integrated Soil Fertility Management in the Okavango Delta, Botswana
Authors: Oluwatoyin D. Kolawole, Oarabile Mogobe, Lapologang Magole
Abstract:
Although many factors play a significant role in agricultural production and productivity, the importance of soil fertility cannot be underestimated. The extent to which small farmers are able to manage the fertility of their farmlands is crucial in agricultural development particularly in sub-Saharan Africa (SSA). This paper assesses the nutrient status of selected farmers’ fields in relation to how government policy addresses the allocation of and access to agricultural inputs (e.g. chemical fertilizers) in a unique social-ecological environment of the Okavango Delta in northern Botswana. It also analyses small farmers and soil scientists’ perceptions about the political economy of integrated soil fertility management (ISFM) in the area. A multi-stage sampling procedure was used to elicit quantitative and qualitative information from 228 farmers and 9 soil researchers through the use of interview schedules and questionnaires, respectively. Knowledge validation workshops and focus group discussions (FGDs) were also used to collect qualitative data from farmers. Thirty-three composite soil samples were collected from 30 farmers’ plots in three farming communities of Makalamabedi, Nokaneng and Mohembo for laboratory analysis. While meeting points exist, farmers and scientists have divergent perspectives on soil fertility management. Laboratory analysis carried out shows that most soils in the wetland and the adjoining dry-land/upland surroundings are low in essential nutrients as well as in cation exchange capacity (CEC). Although results suggest the identification and use of appropriate inorganic fertilizers, the low CEC is an indication that holistic cultural practices, which are beyond mere chemical fertilizations, are critical and more desirable for improved soil health and sustainable livelihoods in the area. Farmers’ age (t= -0.728; p≤0.10); their perceptions about the political economy (t = -0.485; p≤0.01) of ISFM; and their preference for the use of local knowledge in soil fertility management (t = -10.254; p≤0.01) had a significant relationship with how they perceived their involvement in the implementation of ISFM.
Keywords: Access, Botswana, ecology, inputs, Okavango Delta, policy, scientists, small farmers, soil fertility.
289 Driving What’s Next: The De La Salle Lipa Social Innovation in Quality Education Initiatives
Authors: Dante Jose R. Amisola, Glenford M. Prospero
Abstract:
'Driving What’s Next' is a strong campaign of the new administration of De La Salle Lipa in promoting social innovation in quality education. The new leadership embeds social innovation in quality education in the institutional directions and initiatives, addressing real-world challenges with real-world solutions. This study aims to qualify the commitment of the institution to extend the Lasallian quality human and Christian education to all, as expressed in the institution’s new mission-vision statement. The Classic Grounded Theory methodology is employed in the process of generating concepts in reference to the documents, a series of meetings, focus group discussions and other related activities that account for the conceptualization and formulation of the new mission-vision along with the new education innovation framework. Notably, Driving What’s Next is the emergent theory that encapsulates the commitment to giving quality human and Christian education to all. It directs the new leadership in driving social innovation in quality education initiatives. Correspondingly, Driving What’s Next is continually resolved through four interrelated strategies, also termed the institution's four strategic directions, namely: (1) driving social innovation in quality education, (2) embracing our shared humanity and championing social inclusion and justice initiatives, (3) creating sustainable futures and (4) engaging diverse stakeholders in our shared mission. Significantly, the four strategic directions capture and integrate the 17 UN sustainable development goals, making the innovative curriculum locally and globally relevant. To conclude, the main concern of the new administration, and the way it is continually resolved, is to provide meaningful and fun learning experiences and to promote a new way of learning in the light of 21st-century skills among the members of the academic community, including stakeholders and extended communities at large; these skills are defined as learning together and by association (collaboration), learning through engagement (communication), learning by design (creativity) and learning with social impact (critical thinking).
Keywords: De La Salle Lipa, Driving What’s Next, social innovation in quality education, DLSL mission - vision, strategic directions.
288 Haemodynamics Study in Subject Specific Carotid Bifurcation Using FSI
Authors: S. M. Abdul Khader, Anurag Ayachit, Raghuvir Pai, K. A. Ahmed, V. R. K. Rao, S. Ganesh Kamath
Abstract:
Numerical simulation has made tremendous advances in investigating the blood flow phenomenon through elastic arteries. Such studies can be useful in demonstrating disease progression and the hemodynamics of cardiovascular diseases such as atherosclerosis. In the present study, a patient-specific case diagnosed with a partially stenosed complete right ICA and a normal left carotid bifurcation without any atherosclerotic plaque formation is considered. A 3D patient-specific carotid bifurcation model is generated from CT scan data using MIMICS-4.0, and numerical analysis is performed using the FSI solver in ANSYS-14.5. The blood flow is assumed to be incompressible, homogeneous and Newtonian, while the artery wall is assumed to be linearly elastic. The two-way sequentially coupled transient FSI analysis is performed using the FSI solver for three pulse cycles. The hemodynamic parameters such as flow pattern, wall shear stress, pressure contours and arterial wall deformation are studied at the bifurcation and at critical zones such as the stenosis. The variation in flow behavior is studied throughout the pulse cycle. The simulation results also reveal that there is a considerable change in the flow behavior in the stenosed carotid in contrast to the normal carotid bifurcation system. The investigation also demonstrates the disturbed flow pattern, especially at the bifurcation and the stenosed zone, elevating the hemodynamic quantities, particularly during peak systole and the later part of the pulse cycle. The results obtained agree well with clinical observation and demonstrate the potential of patient-specific numerical studies in the prognosis of disease progression and plaque rupture.
Keywords: Fluid-Structure Interaction, arterial stenosis, Wall Shear Stress, Carotid Artery Bifurcation.
287 A Secure Auditing Framework for Load Balancing in Cloud Environment
Authors: R. Geetha, T. Padmavathy
Abstract:
A security audit is an important aspect to be considered by a cloud service customer. It is basically a certification process that audits the controls delivering the security requirements. Security audits are conducted by trained and qualified staff belonging to an independent auditing organization, and they must be carried out against a standard of security controls. A proper check is made that the cloud user has appropriate reporting and logging facilities within the customer's system, hence ensuring an appropriate business and operational flow of data through the cloud service. We propose a cloud-based secure auditing framework which enables a trusted authority to securely store its secret data on semi-trusted cloud service providers and to selectively share that data with a wide range of data receivers, so as to reduce the key-management complexity for authority owners and data receivers. Unlike previous cloud-based data frameworks, data owners upload their secret data to the cloud under a combined static and dynamic auditing scheme. A further feature is that if any data receiver needs to download an individual file, the receiver sends the request to the authority, which holds the access control. If the owner must share the original file with the data receiver, the receiver's request is acknowledged. Once the acknowledgement of the request is complete, the recipient downloads the original file, and the file transfer time and the download time, with their dates, are monitored by the auditor. In addition to the deduplication concept, reduced cloud storage usage through dynamic document distribution has been proposed.
Keywords: Cloud computing, cloud storage auditing, data integrity, key exposure.
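As a minimal illustration of the static auditing idea behind such a framework (a verifier checks that stored files have not been altered and logs when they are checked), the sketch below hashes file blocks at upload time and re-verifies them later. It is an assumption-laden toy, not the proposed framework: encryption, key management, access control and dynamic auditing are omitted.
```python
import hashlib
from datetime import datetime

def block_digests(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and return the SHA-256 digest of each."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

# Owner side: compute and keep digests before uploading to the cloud.
original = b"secret records " * 1000
stored_digests = block_digests(original)

# Auditor side: later, fetch the stored copy and verify block by block,
# logging the audit time (the abstract describes monitoring times and dates).
def audit(stored_copy: bytes, expected):
    actual = block_digests(stored_copy)
    mismatches = [i for i, (a, e) in enumerate(zip(actual, expected)) if a != e]
    print(f"{datetime.now().isoformat()} audited {len(expected)} blocks, "
          f"{len(mismatches)} mismatching")
    return not mismatches and len(actual) == len(expected)

print(audit(original, stored_digests))           # True: data intact
tampered = original[:10] + b"X" + original[11:]
print(audit(tampered, stored_digests))           # False: integrity violation detected
```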
286 Oil-Water Two-Phase Flow Characteristics in Horizontal Pipeline – A Comprehensive CFD Study
Authors: Anand B. Desamala, Ashok Kumar Dasamahapatra, Tapas K. Mandal
Abstract:
In the present work, detailed analysis on flow characteristics of a pair of immiscible liquids through horizontal pipeline is simulated by using ANSYS FLUENT 6.2. Moderately viscous oil and water (viscosity ratio = 107, density ratio = 0.89 and interfacial tension = 0.024 N/m) have been taken as system fluids for the study. Volume of Fluid (VOF) method has been employed by assuming unsteady flow, immiscible liquid pair, constant liquid properties, and co-axial flow. Meshing has been done using GAMBIT. Quadrilateral mesh type has been chosen to account for the surface tension effect more accurately. From the grid independent study, we have selected 47037 number of mesh elements for the entire geometry. Simulation successfully predicts slug, stratified wavy, stratified mixed and annular flow, except dispersion of oil in water, and dispersion of water in oil. Simulation results are validated with horizontal literature data and good conformity is observed. Subsequently, we have simulated the hydrodynamics (viz., velocity profile, area average pressure across a cross section and volume fraction profile along the radius) of stratified wavy and annular flow at different phase velocities. The simulation results show that in the annular flow, total pressure of the mixture decreases with increase in oil velocity due to the fact that pipe cross section is completely wetted with water. Simulated oil volume fraction shows maximum at the centre in core annular flow, whereas, in stratified flow, maximum value appears at upper side of the pipeline. These results are in accord with the actual flow configuration. Our findings could be useful in designing pipeline for transportation of crude oil.
Keywords: CFD, Horizontal pipeline, Oil-water flow, VOF technique.
285 A Compact Via-less Ultra-Wideband Microstrip Filter by Utilizing Open-Circuit Quarter Wavelength Stubs
Authors: Muhammad Yasir Wadood, Fatemeh Babaeian
Abstract:
With the development of ultra-wideband (UWB) systems, there is a high demand for UWB filters with low insertion loss and wide bandwidth that have a planar structure compatible with the other components of the UWB system. A microstrip interdigital filter is a great option for designing UWB filters. However, the presence of via holes in this structure creates difficulties in the fabrication procedure of the filter. Especially in the higher frequency band, any misalignment of the drilled via hole with the microstrip stubs causes large errors in the measurement results compared to the desired results. Moreover, in this case (high-frequency designs), the line width of the stubs is very narrow, so highly precise small via holes are required, which increases the cost of fabrication significantly. Also, in this case, there is a risk of fabrication errors. To combat this issue, in this paper a via-less UWB microstrip filter is proposed, designed on the basis of a modification of a conventional interdigital bandpass filter. The novel approaches in this filter design are 1) replacement of each via hole with a quarter-wavelength open-circuit stub to avoid the complexity of manufacturing, 2) use of a bend structure to reduce the unwanted coupling effects and 3) minimisation of the size. Using the proposed structure, a UWB filter operating in the frequency band of 3.9-6.6 GHz (1-dB bandwidth) is designed and fabricated. The promising results of the simulation and measurement are presented in this paper. The selected substrate for these designs was Rogers RO4003 with a thickness of 20 mils, a common substrate in most industrial projects. The compact size of the proposed filter is highly beneficial for applications which require very miniature hardware.
Keywords: Band-pass filters, inter-digital filter, microstrip, via-less.
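A quarter-wavelength open-circuit stub presents (ideally) a short circuit at its design frequency, which is why it can stand in for a grounded via. The arithmetic below estimates the physical stub length at the centre of the reported 3.9-6.6 GHz passband; the effective permittivity is an assumed value for a 20-mil RO4003-like line, not a figure from the paper.
```python
import math

c = 299_792_458.0          # speed of light, m/s
f0 = (3.9e9 + 6.6e9) / 2   # mid-band frequency of the reported 1-dB passband, Hz
eps_eff = 2.9              # assumed effective permittivity for the microstrip line

guided_wavelength = c / (f0 * math.sqrt(eps_eff))
stub_length = guided_wavelength / 4
print(f"quarter-wave open stub length ≈ {stub_length * 1e3:.2f} mm")
```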
284 CRYPTO COPYCAT: A Fashion Centric Blockchain Framework for Eliminating Fashion Infringement
Authors: Magdi Elmessiry, Adel Elmessiry
Abstract:
The fashion industry represents a significant portion of the global gross domestic product; however, it is plagued by cheap imitators that infringe on trademarks and thereby destroy the fashion industry's hard work and investment. While the copycats are eventually found and stopped, by then the damage has already been done: sales are missed, and direct and indirect jobs are lost. The infringer thrives on two main facts: the time it takes to discover them and the lack of tracking technologies that can help the consumer distinguish them. Blockchain is a new emerging technology that provides a distributed, encrypted, immutable and fault-resistant ledger. Blockchain presents a ripe technology to resolve the infringement epidemic facing the fashion industry. The significance of the study is that a new approach leveraging state-of-the-art blockchain technology coupled with artificial intelligence is used to create a framework addressing the fashion infringement problem. It transforms the current focus on legal enforcement, which is difficult at best, to consumer awareness, which is far more effective. The framework, Crypto CopyCat, creates an immutable digital asset representing the actual product to empower the customer with a near real-time query system. This combination emphasizes the consumer's awareness and appreciation of the product's authenticity, while providing real-time feedback to the producer regarding fake replicas. The main findings of this study are that implementing this approach can delay the fake products' penetration of the original product's market, thus allowing the original product the time to take advantage of the market. The shift in fake adoption results in reduced returns, which impedes the copycat market and moves the emphasis to original product innovation.
Keywords: Fashion, infringement, Blockchain, artificial intelligence, textiles supply.
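A minimal sketch of the append-only, hash-chained registry idea on which such a framework rests: each product record links to the previous one, so any alteration breaks the chain, and a consumer query only needs the product digest. The record fields and chain structure are illustrative assumptions, not the Crypto CopyCat implementation.
```python
import hashlib
import json
import time

def digest(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

chain = []  # append-only ledger of product registrations (in-memory stand-in)

def register_product(serial: str, brand: str, description: str) -> str:
    record = {
        "serial": serial,
        "brand": brand,
        "description": description,
        "timestamp": time.time(),
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,  # link to previous entry
    }
    record["hash"] = digest({k: v for k, v in record.items() if k != "hash"})
    chain.append(record)
    return record["hash"]

def verify_product(serial: str, claimed_hash: str) -> bool:
    """Near-real-time consumer query: does the claimed digest match a registered item?"""
    return any(r["serial"] == serial and r["hash"] == claimed_hash for r in chain)

h = register_product("SKU-001", "ExampleBrand", "silk scarf, batch 7")
print(verify_product("SKU-001", h))         # True: authentic registration found
print(verify_product("SKU-001", "f" * 64))  # False: digest does not match the ledger
```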
283 The Concept of Birthday: A Theoretical, Historical, and Social Overview, in Judaism and Other Cultures
Authors: Orly Redlich
Abstract:
In the age of social distance, added to an individualistic and competitive worldview, it has become important to find a way to promote closeness and personal touch. The sense of social belonging and the existence of positive interaction with others have recently become a considerable necessity. Therefore, this theoretical paper reviews one of the familiar and common concepts among different cultures around the world: the birthday. The paper has a theoretical contribution that deepens the understanding of the birthday concept. Birthday rituals are historical and universal events, which have been noted since the prehistoric eras. In ancient history, birthday rituals were reserved solely for kings and nobility members, but over the years, birthday celebrations have evolved into a worldwide tradition. Some of the familiar birthday customs and symbols are currently common among most cultures, while some cultures have adopted unique birthday customs of their own, which characterize their values and traditions. The birthday concept has a unique significance in Judaism as well, historically, religiously, and socially: it is considered a lucky day and a private holiday for the celebrant. Therefore, the present paper reviews diverse birthday customs around the world in different cultures, including Judaism, and marks important birthdays throughout history. The paper also describes how the concept of the birthday appears over the years in songs, novels, and art, and presents quotes from distinguished sages. The theoretical review suggests that the birthday has a special meaning as a time-mark in the cycle of life, and as a means of socialization in human development. Moreover, the birthday serves as a symbol of belonging and group cohesiveness, a day on which the celebrant's sense of belonging and sense of importance are strengthened and nurtured. Thus, the reappearance of these elements in a family or group interaction during the birthday ceremony allows the celebrant to absorb positive impressions about himself. In view of the extensive theoretical review, it seems that the unique importance of birthdays can serve as the foundation for intervention programs that may affect the participants’ sense of belonging and empowerment. In the group aspect, perhaps it can also yield therapeutic factors within a group. Concrete recommendations are presented at the end of the paper.
Keywords: Birthday, universal events, rituals, positive interaction, group cohesiveness.
282 Developing a Research Culture in the Faculty of Engineering and Information Technology at the Central University of Technology, Free State: Implications for Knowledge Management
Authors: Mpho A. Mbeo, Patient Rambe
Abstract:
The 13th year of the Central University of Technology, Free State’s (CUT) transition from a vocational and professional training orientation institution (i.e. a technikon) into a university with a strong research focus has been neither a smooth nor an easy one. At the heart of this transition was the need to transform the psychological faculties of the academic and research staff complement, who were accustomed to training graduates for industrial placement. The lack of a research culture that fully embraces a solid ethos of conducting cutting-edge research needs to be addressed. The induction and socialisation of academic staff into the development and execution of cutting-edge research also required the provision of research support and the creation of an academic environment conducive to research, both for emerging and non-research-active academics. Drawing on ten cases, consisting of four heads of departments, three seasoned researchers, and three novice researchers, this study explores the challenges faced in establishing a strong research culture at the university. Furthermore, it gives an account of the extent to which the current research interventions have addressed the perceived “missing research culture”, and the implications of these interventions for knowledge management. Evidence suggests that an ideal institutional research environment, consisting of mentorship of novice researchers by seasoned researchers and a balanced effort across teaching and research responsibilities, should be supported by strong research-oriented leadership. Furthermore, the recruitment of research-passionate staff and the adoption of a salary structure that encourages the retention of excellent scholars should be matched by a coherent research incentive culture to grow research publication outputs. This is critical for building new knowledge and entrenching knowledge management founded on communities of practice and scholarly networking through the documentation and communication of research findings. The study concludes that the multiple policy documents set for the different domains of research may be creating pressure on researchers to engage in research activities and increase output at the expense of research quality.
Keywords: Central University of Technology, performance, publication, research culture, university.
281 A Damage Level Assessment Model for Extra High Voltage Transmission Towers
Authors: Huan-Chieh Chiu, Hung-Shuo Wu, Chien-Hao Wang, Yu-Cheng Yang, Ching-Ya Tseng, Joe-Air Jiang
Abstract:
Power failure resulting from tower collapse due to violent seismic events might bring enormous and inestimable losses. The Chi-Chi earthquake, for example, struck Taiwan strongly and caused huge damage to the power system on September 21, 1999. Nearly 10% of extra high voltage (EHV) transmission towers were damaged in that earthquake. Therefore, the seismic hazards of EHV transmission towers should be monitored and evaluated. The ultimate goal of this study is to establish a damage level assessment model for EHV transmission towers. The earthquake data provided by the Taiwan Central Weather Bureau serve as a reference and lay the foundation for the earthquake simulations and analyses that follow. Some parameters related to the damage level of each point of an EHV tower are simulated and analyzed using the data from monitoring stations once an earthquake occurs. Through the Fourier transform, the seismic wave is analyzed and decomposed into different frequencies, and the data are shown in a response spectrum. With this method, the seismic frequency which damages EHV towers the most is clearly identified. An estimation model is built to determine the damage level caused by a future seismic event. Finally, instead of relying on visual observation by inspectors, the proposed model can provide a power company with the damage information of a transmission tower. Using the model, the manpower required for visual observation can be reduced, and the accuracy of the damage level estimation can be substantially improved. Such a model is greatly useful for health and construction monitoring because of the advantages of long-term evaluation of structural characteristics and long-term damage detection.
Keywords: Smart grid, EHV transmission tower, response spectrum, damage level monitoring.
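The frequency-domain step described in the abstract (transforming a recorded seismic wave to find the frequencies that excite a tower most) can be illustrated with a Fourier transform of a synthetic accelerogram, as in the sketch below; the signal, sampling rate and the tower's assumed fundamental frequency are placeholders, not monitoring-station data.
```python
import numpy as np

fs = 100.0                                    # sampling rate, Hz (assumed)
t = np.arange(0, 20, 1 / fs)                  # 20 s synthetic record
# Synthetic ground acceleration: 1.2 Hz and 3.5 Hz components plus noise.
accel = (0.8 * np.sin(2 * np.pi * 1.2 * t)
         + 0.3 * np.sin(2 * np.pi * 3.5 * t)
         + 0.05 * np.random.default_rng(0).standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(accel))
freqs = np.fft.rfftfreq(accel.size, d=1 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
print(f"dominant excitation frequency ≈ {dominant:.2f} Hz")

# A damage-relevant check: how close is the excitation to the tower's
# (assumed) fundamental frequency?
tower_f1 = 1.3                                   # Hz, illustrative value
print(f"separation from tower fundamental: {abs(dominant - tower_f1):.2f} Hz")
```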
280 Microwave-Assisted Alginate Extraction from Portuguese Saccorhiza polyschides – Influence of Acid Pretreatment
Authors: Mário Silva, Filipa Gomes, Filipa Oliveira, Simone Morais, Cristina Delerue-Matos
Abstract:
Brown seaweeds are abundant along the Portuguese coastline and represent an almost unexploited marine economic resource. One of the most common species, easily available for harvesting on the northwest coast, is Saccorhiza polyschides, which grows on the lower shore and coastal rocky reefs. It is used almost exclusively by local farmers as a natural fertilizer, but it contains a substantial amount of valuable compounds, particularly alginates, natural biopolymers of high interest for many industrial applications. Alginates are natural polysaccharides present in the cell walls of brown seaweed, highly biocompatible, with particular properties that make them of high interest for the food, biotechnology, cosmetics and pharmaceutical industries. Conventional extraction processes are based on thermal treatment; they are lengthy and consume high amounts of energy and solvents. In recent years, microwave-assisted extraction (MAE) has shown enormous potential to overcome the major drawbacks of conventional (thermal- and/or solvent-based) plant material extraction techniques, and it has also been successfully applied to the extraction of agar, fucoidans and alginates. In the present study, the acid pretreatment of the brown seaweed Saccorhiza polyschides for subsequent microwave-assisted extraction (MAE) of alginate was optimized. Seaweeds were collected in northwest Portuguese coastal waters of the Atlantic Ocean between May and August 2014. Experimental design was used to assess the effect of temperature and acid pretreatment time on alginate extraction. Response surface methodology allowed the determination of the optimum MAE conditions: 40 mL of HCl 0.1 M per g of dried seaweed with constant stirring at 20ºC during 14 h. Optimal acid pretreatment conditions have significantly enhanced the MAE of alginates from Saccorhiza polyschides, thus contributing to the development of a viable, more environmentally friendly alternative to conventional processes.
Keywords: Acid pretreatment, Alginate, Brown seaweed, Microwave-assisted extraction, Response surface methodology.
279 Status of Thyroid Function and Iron Overload in Adolescents and Young Adults with Beta-Thalassemia Major Treated with Deferoxamine in Jordan
Authors: Fawzi Irshaid, Kamal Mansi
Abstract:
Thyroid dysfunction is one of the most frequently reported complications of chronic blood transfusion therapy in patients with beta-thalassemia major (BTM). However, the occurrence of thyroid dysfunction and its possible association with iron overload in BTM patients is still under debate. Therefore, this study aimed to investigate the status of thyroid function and iron overload in adolescent and young adult patients with BTM in the Jordanian population. Thirty-six BTM patients aged 12-28 years and matched controls were included in this study. All patients had been receiving frequent blood transfusions to maintain a pretransfusion hemoglobin concentration above 10 g dl-1, together with deferoxamine at a dose of 45 mg kg-1 day-1 (8 h, 5-7 days/week) by subcutaneous infusion. Blood samples were drawn from patients and controls. The status of thyroid function and iron overload was evaluated by measurement of serum free thyroxine (FT4), triiodothyronine (FT3), thyrotropin (TSH) and serum ferritin levels. A number of hematological and biochemical parameters were also measured. It was found that the mean values of hematocrit, serum ferritin, hemoglobin, FT3, zinc and copper were significantly higher in the patients than in the controls (P<0.05). On the other hand, the mean values of leukocytes, FT4 and TSH were similar to those of the controls. In addition, our data also indicated that all of the above examined parameters were not significantly affected by the patients' age and gender. The deferoxamine approach for removing excess iron from our BTM patients did not normalize the values of serum ferritin, copper and zinc, suggesting poor compliance with deferoxamine chelation therapy. Thus, we recommend the use of a combination of deferoxamine and deferiprone to reduce the risk of excess iron in our patients. Furthermore, thyroid dysfunction appears to be a rare complication, because our patients showed normal mean levels of serum TSH and FT4. However, the high mean levels of serum ferritin, zinc and copper might be seen as potential risk factors for the initiation and development of thyroid dysfunction and other diseases. Therefore, further studies must be carried out at yearly intervals with larger sample numbers to detect subclinical cases of thyroid dysfunction.
Keywords: beta-thalassemia major, deferoxamine, iron overload, triiodothyronine, zinc.
278 Supervisory Controller with Three-State Energy Saving Mode for Induction Motor in Fluid Transportation
Authors: O. S. Ebrahim, K. O. Shawky, M. O. Ebrahim, P. K. Jain
Abstract:
The induction motor (IM) driving the pump is the main consumer of electricity in a typical fluid transportation system (FTS). Changing the connection of the stator windings from delta to star at no load can achieve noticeable active and reactive energy savings. This paper proposes a supervisory hysteresis liquid-level control with a three-state energy saving mode (ESM) for the IM in an FTS that includes a storage tank. The IM pump drive comprises a modified star/delta switch and a hydromantic coupler. Three ESM states are defined alongside normal running and, by analogy with computer power-saving modes, are named as follows: sleeping mode, in which the motor runs at no load with the delta stator connection; hibernate mode, in which the motor runs at no load with the star connection; and motor shutdown, the third energy-saver mode. A logic flow chart is synthesized to select the motor state at no load for the best energetic cost reduction, considering the motor thermal capacity used. An artificial neural network (ANN) state estimator, based on a recurrent architecture, is constructed and trained in order to provide fault-tolerant capability for the supervisory controller. Wald's sequential test is used for sensor fault detection. Theoretical analysis, preliminary experimental testing and computer simulations are performed to show the effectiveness of the proposed control in terms of reliability, power quality and energy/coenergy cost reduction, with the suggestion of power factor correction.
Keywords: Artificial Neural Network, ANN, Energy Saving Mode, ESM, Induction Motor, IM, star/delta switch, supervisory control, fluid transportation, reliability, power quality.
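A minimal sketch of the no-load mode-selection logic described above: a hysteresis level check decides whether the pump must run, and at no load the supervisor chooses among sleeping (delta), hibernate (star) and shutdown using the expected idle time and the motor thermal capacity already used. The thresholds and idle-time limits are illustrative assumptions, not the synthesized flow chart.
```python
from enum import Enum

class Mode(Enum):
    RUN = "normal running (delta, loaded)"
    SLEEP = "sleeping: no load, delta connection"
    HIBERNATE = "hibernate: no load, star connection"
    SHUTDOWN = "motor shutdown"

def select_mode(tank_level: float, low: float, high: float,
                expected_idle_s: float, thermal_used: float) -> Mode:
    """Hysteresis level control plus a three-state energy-saving choice at no load.

    tank_level      -- current liquid level (same units as low/high thresholds)
    expected_idle_s -- predicted idle duration before the pump is needed again
    thermal_used    -- fraction of motor thermal capacity already consumed (0..1)
    """
    if tank_level <= low:
        return Mode.RUN                   # refill: the pump must run
    if tank_level < high:
        return Mode.SLEEP                 # short idle likely: stay ready in delta
    # Tank full: pick the cheapest state the motor can tolerate.
    if expected_idle_s > 1800 or thermal_used > 0.8:
        return Mode.SHUTDOWN              # long idle or hot motor: switch off
    if expected_idle_s > 300:
        return Mode.HIBERNATE             # medium idle: star connection saves most at no load
    return Mode.SLEEP

print(select_mode(tank_level=0.95, low=0.3, high=0.9,
                  expected_idle_s=600, thermal_used=0.4).value)
```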
277 Developing Manufacturing Process for the Graphene Sensors
Authors: Abdullah Faqihi, John Hedley
Abstract:
Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. They can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, in addition to environmental monitoring. The development of biosensors aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample. A biosensor carries out biological detection via a linked transducer and transmits the biological response as an electrical signal; stability, selectivity, and sensitivity are the dynamic and static characteristics that affect and dictate the quality and performance of biosensors. In this research, an experimental study of the laser scribing technique for processing graphene oxide inside a vacuum chamber is presented. The processing of graphene oxide (GO) was achieved using the laser scribing technique. The effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. A GO solution was coated onto a LightScribe DVD. The laser scribing technique was applied to reduce the GO layers and generate rGO. The micro-details of the morphological structures of rGO and GO were visualised and examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode model, made under normal atmospheric conditions, whereas the second was a graphene electrode fabricated under vacuum using a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process. The parameters assessed include the layer thickness and the continuous environment. The results presented show high accuracy and repeatability, achieving low-cost productivity.
Keywords: Laser scribing, LightScribe DVD, graphene oxide, scanning electron microscopy.
276 Exploration of Hydrocarbon Unconventional Accumulations in the Argillaceous Formation of the Autochthonous Miocene Succession in the Carpathian Foredeep
Authors: Wojciech Górecki, Anna Sowiżdżał, Grzegorz Machowski, Tomasz Maćkowski, Bartosz Papiernik, Michał Stefaniuk
Abstract:
The article shows results of a project which aims at evaluating the possibilities for effective development and exploitation of natural gas from the argillaceous series of the Autochthonous Miocene in the Carpathian Foredeep. To achieve this objective, the research team developed a unique methodology of processing and interpretation, based on world trends but adjusted to the data, local variations and petroleum characteristics of the area. In order to determine the zones in which maximum volumes of hydrocarbons might have been generated and preserved as shale gas reservoirs, as well as to identify the most preferable well sites where the largest gas accumulations are anticipated, a number of tasks were accomplished. The evaluation of the petrophysical properties and hydrocarbon saturation of the Miocene complex is based on laboratory measurements as well as the interpretation of well logs and archival data. The studies apply mercury porosimetry (MICP), micro-CT and nuclear magnetic resonance imaging (using the Rock Core Analyzer). For a prospective location (e.g. the central part of the Carpathian Foredeep, the Brzesko-Wojnicz area), reprocessing and reinterpretation of detailed seismic survey data with the use of integrated geophysical investigations have been carried out. The construction of quantitative, structural and parametric models for selected areas of the Carpathian Foredeep is performed on the basis of integrated, detailed 3D computer models. Modelling is carried out with Schlumberger’s Petrel software. Finally, prospective zones are spatially contoured in the form of a regional 3D grid, which will be the framework for generation modelling and comprehensive parametric mapping, allowing for spatial identification of the most prospective zones of unconventional gas accumulation in the Carpathian Foredeep. Preliminary results of the research work indicate a potentially prospective area for the occurrence of unconventional gas accumulations in the Polish part of the Carpathian Foredeep.
Keywords: Autochthonous Miocene, Carpathian Foredeep, Poland, shale gas.
275 Enhancing Teaching of Engineering Mathematics
Authors: Tajinder Pal Singh
Abstract:
Teaching of mathematics to engineering students is an open-ended problem in education. The main goal of mathematics learning for engineering students is the ability to apply a wide range of mathematical techniques and skills in their engineering classes and later in their professional work. Many undergraduate engineering students and faculty feel that no effort is made to demonstrate the applicability of the various topics of mathematics that are taught, thus making mathematics seem irrelevant to some engineering faculty and their students. The lack of understanding of concepts in engineering mathematics may hinder the understanding of other concepts or even subjects. However, for most undergraduate engineering students, mathematics is one of the most difficult courses in their field of study. Many engineering students never understood mathematics, or never liked it, because it was too abstract for them and they could never relate to it. Only a right balance of application-based and concept-based teaching can fulfill the objectives of teaching mathematics to engineering students. It will surely improve and enhance their problem-solving and creative-thinking skills. In this paper, some practical (informal) ways of making mathematics teaching application-based for engineering students are discussed. An attempt is made to understand the present state of teaching mathematics in engineering colleges. The weaknesses and strengths of the current teaching approach are elaborated. Some of the causes of the unpopularity of mathematics are analyzed, and a few pragmatic suggestions have been made. Faculty in mathematics courses should spend more time discussing the applications as well as the conceptual underpinnings rather than focusing solely on strategies and techniques to solve problems. They should also introduce more ‘word’ problems, as these problems are commonly encountered in engineering courses. Overspecialization in engineering education should not occur at the expense of (or by diluting) mathematics and the basic sciences. The role of engineering education is to provide fundamental (basic) knowledge and to teach students a simple methodology of self-learning and self-development. All these issues would be better addressed if mathematics and engineering faculty joined hands to plan and design the learning experiences for the students who take their classes. When faculty members stop competing against each other and start competing against the situation, they will perform better. Without creating any administrative hassle, these suggestions can be used by any young, inexperienced mathematics faculty member to inspire engineering students to learn engineering mathematics effectively.
Keywords: Application based learning, conceptual learning, engineering mathematics, word problem.
274 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters
Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev
Abstract:
Humanity is confronted more and more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain the earliest possible signals about events which are occurring or may occur, and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some consequences of disasters. To solve the first problem, methods of selecting and processing texts from the Internet are developed. Information in Romanian is of special interest for us. In order to obtain the mentioned tools, several steps should be followed, divided into a preparatory stage and a processing stage. Throughout the first stage, we manually collected over 724 news articles and classified them into 10 categories of social disasters; this corpus amounts to more than 150 thousand words. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which will help in the classification and identification of texts related to the field of social disasters. To solve the second problem, the formalism of Petri nets has been used. We deal with the problem of evacuating inhabitants in useful time. Analysis methods such as the reachability or coverability tree and the invariants technique will be used to determine dynamic properties of the modelled systems. To perform a case study of the properties of the evacuation system extended by adding time, the analysis modules of PIPE, such as Generalized Stochastic Petri Nets (GSPN) Analysis, Simulation, State Space Analysis, and Invariant Analysis, have been used. These modules helped us to obtain the average number of persons situated in the rooms and other quantitative properties and characteristics related to the system's dynamics.
Keywords: Lexicon of disasters, modelling, Petri nets, text annotation, social disasters.
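A minimal sketch of the first task (flagging and categorising texts with a controlled vocabulary of disaster keywords). The categories and keywords here are invented English placeholders, not the 300-keyword Romanian vocabulary built in the project.
```python
# Controlled vocabulary: category -> keywords (placeholders for the real lexicon).
VOCABULARY = {
    "fire": {"fire", "smoke", "burn", "flames"},
    "flood": {"flood", "overflow", "inundation", "rising water"},
    "explosion": {"explosion", "blast", "detonation"},
}

def classify(text: str):
    """Return the disaster categories whose keywords appear in the text,
    ranked by the number of matched keywords."""
    lowered = text.lower()
    scores = {cat: sum(1 for kw in kws if kw in lowered)
              for cat, kws in VOCABULARY.items()}
    return sorted(((c, s) for c, s in scores.items() if s > 0),
                  key=lambda cs: -cs[1])

news = "Thick smoke and flames were reported after a blast at the depot."
print(classify(news))   # [('fire', 2), ('explosion', 1)]
```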
273 Chatter Stability Characterization of Full-Immersion End-Milling Using a Generalized Modified Map of the Full-Discretization Method, Part 1: Validation of Results and Study of Stability Lobes by Numerical Simulation
Authors: Chigbogu G. Ozoegwu, Sam N. Omenyi
Abstract:
The objective of this work is to generate and discuss the stability results of the fully-immersed end-milling process with the parameters: tool mass m = 0.0431 kg, tool natural frequency ωn = 5700 rad s^-1, damping factor ξ = 0.002 and workpiece cutting coefficient C = 3.5x10^7 N m^-7/4. Different numbers of teeth are considered for the end-milling. Both 1-DOF and 2-DOF chatter models of the system are generated on the basis of a non-linear force law. Chatter stability analysis is carried out using a modified form (generalized for both the 1-DOF and 2-DOF models) of the recently developed method called full-discretization. The full-immersion three-tooth end-milling, together with higher-toothed end-milling processes, has secondary Hopf bifurcation lobes (SHBL’s) that exhibit one turning (minimum) point each. Each such SHBL is demarcated by its minimum point into two portions: (i) the Lower Spindle Speed Portion (LSSP), in which bifurcations occur in the right half portion of the unit circle centred at the origin of the complex plane, and (ii) the Higher Spindle Speed Portion (HSSP), in which bifurcations occur in the left half portion of the unit circle. Comments are made regarding why bifurcation lobes should generally get bigger and more visible with increasing spindle speed and why flip bifurcation lobes (FBL’s) could be invisible in the low-speed stability chart but visible in the high-speed stability chart of the fully-immersed three-tooth miller.
Keywords: Chatter, flip bifurcation, modified full-discretization map stability lobe, secondary Hopf bifurcation.
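For orientation, a common textbook form of a single-degree-of-freedom regenerative chatter model under a three-quarter power (non-linear) force law is sketched below in the notation of the listed parameters (m, ωn, ξ, C). The depth of cut w, nominal feed per tooth h0 and tooth-passing delay τ are symbols introduced here for illustration; the milling-specific directional factors of the paper's actual 1-DOF and 2-DOF models are omitted.
```latex
% Illustrative 1-DOF regenerative chatter model with a three-quarter power
% cutting force law; the delayed term x(t - tau) captures surface regeneration,
% with tau the tooth-passing period of an N-tooth cutter at spindle speed Omega (rpm).
\ddot{x}(t) + 2\xi\omega_n \dot{x}(t) + \omega_n^2 x(t)
  = \frac{C\,w}{m}\,\bigl[\,h_0 + x(t-\tau) - x(t)\,\bigr]^{3/4},
\qquad \tau = \frac{60}{N\,\Omega}
```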
272 Structural Parsing of Natural Language Text in Tamil Using Phrase Structure Hybrid Language Model
Authors: Selvam M, Natarajan. A M, Thangarajan R
Abstract:
Parsing is important in linguistics and natural language processing for understanding the syntax and semantics of a natural language grammar. Parsing natural language text is challenging because of problems like ambiguity and inefficiency. Also, the interpretation of natural language text depends on context-based techniques. A probabilistic component is essential to resolve ambiguity in both syntax and semantics, thereby increasing the accuracy and efficiency of the parser. The Tamil language has some inherent features which are more challenging. In order to obtain solutions, a lexicalized and statistical approach is applied to the parsing with the aid of a language model. Statistical models mainly focus on the semantics of the language and are suitable for large vocabulary tasks, whereas structural methods focus on syntax and model small vocabulary tasks. A trigram-based statistical language model for Tamil with a medium vocabulary of 5000 words has been built. Though statistical parsing gives better performance through trigram probabilities and large vocabulary size, it has some disadvantages, like a focus on semantics rather than syntax and a lack of support for free word order and long-term relationships. To overcome these disadvantages, a structural component is incorporated into the statistical language model, which leads to the implementation of a hybrid language model. This paper has attempted to build a phrase-structured hybrid language model which resolves the above-mentioned disadvantages. In the development of the hybrid language model, a new part-of-speech tag set for Tamil with more than 500 tags, offering wider coverage, has been developed. A phrase-structured treebank has been developed with 326 Tamil sentences, covering more than 5000 words. A hybrid language model has been trained with the phrase-structured treebank using the immediate-head parsing technique. A lexicalized and statistical parser which employs this hybrid language model and the immediate-head parsing technique gives better results than pure grammar-based and trigram-based models.
Keywords: Hybrid Language Model, Immediate Head Parsing, Lexicalized and Statistical Parsing, Natural Language Processing, Parts of Speech, Probabilistic Context Free Grammar, Tamil Language, Tree Bank.
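A minimal sketch of the trigram estimation such a statistical component relies on, using maximum-likelihood counts on a toy English corpus; the real model would be trained on the Tamil data and would add smoothing and lexicalisation.
```python
from collections import Counter

corpus = [
    ["the", "dog", "chased", "the", "cat"],
    ["the", "cat", "chased", "the", "mouse"],
]   # toy stand-in for the Tamil training sentences

trigrams, bigrams = Counter(), Counter()
for sent in corpus:
    padded = ["<s>", "<s>"] + sent + ["</s>"]
    for i in range(len(padded) - 2):
        trigrams[tuple(padded[i:i + 3])] += 1
        bigrams[tuple(padded[i:i + 2])] += 1

def p_trigram(w1, w2, w3):
    """Maximum-likelihood estimate P(w3 | w1, w2); real systems add smoothing."""
    denom = bigrams[(w1, w2)]
    return trigrams[(w1, w2, w3)] / denom if denom else 0.0

print(p_trigram("chased", "the", "cat"))   # 0.5 on this toy corpus
```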
271 The Carbon Footprint Model as a Plea for Cities towards Energy Transition: The Case of Algiers Algeria
Authors: Hachaichi Mohamed Nour El-Islem, Baouni Tahar
Abstract:
Environmental sustainability, more than a trans-disciplinary and scientific issue, is the main problem that characterizes all modern cities nowadays. In developing countries, this concern is expressed in a plethora of critical urban ills: traffic congestion, air pollution, noise, urban decay, and increases in energy consumption and CO2 emissions, which blemish cities’ landscapes and might threaten citizens’ health and welfare. As in other developing-world cities, the rapid growth of Algiers’ population and the increase in city-scale phenomena eventually lead to increases in daily trips, energy consumption and CO2 emissions. In addition, the lack of proper and sustainable planning of the city’s infrastructure is one of the most relevant issues from which Algiers suffers. The aim of this contribution is to estimate the carbon deficit of the city of Algiers, Algeria, using the Ecological Footprint Model (carbon footprint). In order to achieve this goal, the amount of CO2 from fuel combustion has been calculated and aggregated into five sectors (agriculture, industry, residential, tertiary and transportation); as well, Algiers’ biocapacity (CO2-uptake land) has been calculated to determine the ecological overshoot. This study shows that Algiers’ transport system is not sustainable and generates more than 50% of Algiers’ total carbon footprint, which cannot be sequestered by the local forest land. The aim of this research is to show that the carbon footprint assessment might be a relevant indicator for designing sustainable strategies/policies striving to reduce CO2 by acting on energy consumption in the transportation sector and reducing the use of fossil fuels as the main energy input.
Keywords: Biocapacity, carbon footprint, ecological footprint assessment, energy consumption.
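As an illustration of the kind of accounting described in this abstract, the following minimal sketch (not the authors' model; the sector figures, biocapacity value and sequestration factor are hypothetical placeholders) aggregates sectoral CO2 emissions, converts them into a carbon footprint in global hectares, and compares the result with biocapacity to obtain the overshoot.

```python
# Illustrative carbon-footprint accounting sketch (hypothetical numbers, not the paper's data).

SEQUESTRATION_T_CO2_PER_GHA = 2.6   # assumed average CO2 uptake of forest land (t CO2 / gha / yr)

def carbon_footprint_gha(sector_emissions_t):
    """Convert sectoral CO2 emissions (tonnes/yr) into a carbon footprint in global hectares."""
    total_t = sum(sector_emissions_t.values())
    return total_t / SEQUESTRATION_T_CO2_PER_GHA

# Hypothetical sectoral emissions for a city (t CO2 / yr)
emissions = {
    "agriculture": 120_000,
    "industry": 850_000,
    "residential": 1_400_000,
    "tertiary": 600_000,
    "transportation": 3_200_000,
}

footprint = carbon_footprint_gha(emissions)
biocapacity_gha = 450_000           # hypothetical locally available CO2-uptake land
overshoot = footprint - biocapacity_gha
transport_share = emissions["transportation"] / sum(emissions.values())

print(f"Carbon footprint: {footprint:,.0f} gha")
print(f"Ecological overshoot: {overshoot:,.0f} gha")
print(f"Transport share of emissions: {transport_share:.0%}")
```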
270 The Impact of Protein Content on Athletes’ Body Composition
Authors: G. Vici, L. Cesanelli, L. Belli, R. Ceci, V. Polzonetti
Abstract:
Several factors contribute to success in sport, and diet is one of them. Evidence-based sport nutrition guidelines underline the importance of the balance and timing of macro- and micro-nutrients in order to improve an athlete's physical status and performance. Nevertheless, a high protein content is commonly found in the diets of resistance-training athletes, with a carbohydrate intake that is insufficient or not well planned. The aim of the study was to evaluate the impact of different protein and carbohydrate contents of the diet on body composition and sport performance in a group of resistance-training athletes. Subjects were divided into a study group (n=16) and a control group (n=14). For a period of 4 months, both groups followed the same resistance-training fitness program, with the study group following a specific diet and the control group following an ad libitum diet. Body composition was evaluated through anthropometric measurements (weight, height, body circumferences and skinfolds) and bioimpedance analysis. Physical strength and training status were evaluated through the one-repetition maximum test (1RM). Protein intake in the study group was found to be lower than in the control group. There was a statistically significant increase in body weight, fat-free mass and body cell mass in the study group with respect to the control group, while fat mass remained almost constant. Statistically significant changes were observed in quadriceps and biceps circumferences, with an increase in the study group. The 1RM test showed an improvement in the study group's strength but no changes in the control group. People usually consume high-protein diets to achieve muscle mass development. This study shows that a protein intake fixed at 1.7 g/kg/day can meet the individual's needs. In parallel, an increased intake of carbohydrates, with attention to the quality and timing of consumption, enabled the desired results to be obtained with a training protocol supporting a hypertrophic strategy. Therefore, the key point seems to be the planning of a structured program from both a nutritional and a training point of view.
Keywords: Body composition, diet, exercise, protein.
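A worked illustration of the 1.7 g/kg/day protein target mentioned above follows; the body weight used is a hypothetical example, not study data.

```python
# Worked example of the 1.7 g/kg/day protein target (hypothetical athlete, not study data).
PROTEIN_G_PER_KG_PER_DAY = 1.7

def daily_protein_g(body_weight_kg: float) -> float:
    """Daily protein target in grams for a given body weight."""
    return PROTEIN_G_PER_KG_PER_DAY * body_weight_kg

print(daily_protein_g(80.0))  # an 80 kg athlete -> 136 g protein per day
```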
269 A Risk Assessment Tool for the Contamination of Aflatoxins on Dried Figs based on Machine Learning Algorithms
Authors: Kottaridi Klimentia, Demopoulos Vasilis, Sidiropoulos Anastasios, Ihara Diego, Nikolaidis Vasileios, Antonopoulos Dimitrios
Abstract:
Aflatoxins are highly poisonous and carcinogenic compounds produced by species of the genus Aspergillus that can infect a variety of agricultural foods, including dried figs. Biological and environmental factors, such as the population, pathogenicity and aflatoxinogenic capacity of the strains, the topography, and the soil and climate parameters of the fig orchards, are believed to have a strong effect on aflatoxin levels. Existing methods for aflatoxin detection and measurement, such as high-performance liquid chromatography (HPLC) and enzyme-linked immunosorbent assay (ELISA), can provide accurate results, but the procedures are usually time-consuming, sample-destructive and expensive. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the health and financial impact of a contaminated crop. Consequently, there is interest in developing a tool that predicts aflatoxin levels based on topography and soil analysis data of fig orchards. This paper describes the development of a risk assessment tool for the contamination of aflatoxins on dried figs, based on the location and altitude of the fig orchards, the population of the fungus Aspergillus spp. in the soil, and soil parameters such as pH, saturation percentage (SP), electrical conductivity (EC), organic matter, particle size analysis (sand, silt, clay), concentration of the exchangeable cations (Ca, Mg, K, Na), extractable P and trace elements (B, Fe, Mn, Zn and Cu), by employing machine learning methods. In particular, the proposed method integrates three machine learning techniques, i.e., dimensionality reduction on the original dataset (Principal Component Analysis), metric learning (Mahalanobis Metric for Clustering) and the k-nearest neighbors learning algorithm (KNN), into an enhanced model, with a mean performance of 85% in terms of the Pearson correlation coefficient (PCC) between observed and predicted values.
Keywords: Aflatoxins, Aspergillus spp., dried figs, k-nearest neighbors, machine learning, prediction.
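The following sketch illustrates a pipeline of the kind described in this abstract (dimensionality reduction, a Mahalanobis-type metric, and KNN regression). It is not the authors' tool: the data are random placeholders and the Mahalanobis matrix is taken from the covariance of the PCA-reduced features rather than learned with MMC.

```python
# Illustrative PCA + Mahalanobis-metric + KNN regression sketch (stand-in for the paper's approach).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 15))   # hypothetical soil/topography features
y = rng.normal(size=120)         # hypothetical aflatoxin levels

# 1) Standardize and reduce dimensionality
X_std = StandardScaler().fit_transform(X)
Z = PCA(n_components=5).fit_transform(X_std)

# 2) Mahalanobis metric from the reduced training data (covariance-based, not MMC-learned)
VI = np.linalg.inv(np.cov(Z, rowvar=False))

# 3) KNN regression under that metric
knn = KNeighborsRegressor(n_neighbors=5, metric="mahalanobis", metric_params={"VI": VI})
knn.fit(Z, y)

# Evaluate with the Pearson correlation coefficient between observed and predicted values
y_pred = knn.predict(Z)
pcc = np.corrcoef(y, y_pred)[0, 1]
print(f"PCC on training data (illustrative only): {pcc:.2f}")
```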
268 Study on Seismic Performance of Reinforced Soil Walls to Modify the Pseudo Static Method
Authors: Majid Yazdandoust
Abstract:
This study suggests a displacement-based design method, using finite difference numerical modeling, for soil retaining walls reinforced with steel strips. Dynamic loading characteristics such as duration, frequency and peak ground acceleration, the geometrical characteristics of the reinforced soil structure, and the type of the site are considered in order to correct the pseudo-static method and, finally, to introduce the pseudo-static coefficient as a function of the seismic performance level and the peak ground acceleration. For this purpose, the influence of the dynamic loading characteristics, the reinforcement length, the height of the reinforced system and the type of the site on the seismic behavior of steel-strip reinforced soil retaining walls is investigated. Numerical results illustrate that the seismic response of this type of wall is highly dependent on the cumulative absolute velocity, the maximum acceleration, the wall height and the reinforcement length, so that the reinforcement length can be identified as the main factor governing the shape of failure. Consideration of the loading parameters, the geometric parameters of the wall and the type of the site shows that the method used in this study leads to more efficient designs than other methods, which are usually based on the limit-equilibrium concept. The outputs show the over-estimation of equilibrium design methods in comparison with the displacement-based methods proposed here.
Keywords: Pseudo static coefficient, seismic performance design, numerical modeling, steel strip reinforcement, retaining walls, cumulative absolute velocity, failure shape.
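The sketch below only illustrates the general idea of expressing a pseudo-static coefficient as a function of performance level and peak ground acceleration; the functional form and reduction factors are hypothetical placeholders and are not the calibrated relation of this study.

```python
# Illustrative placeholder: pseudo-static coefficient as a function of the seismic
# performance level and the peak ground acceleration (PGA, in g). The reduction
# factors are hypothetical examples, NOT the study's calibrated values.

REDUCTION_FACTOR = {
    "no_damage": 1.0,                       # no displacement allowed
    "limited_displacement": 0.7,            # some permanent displacement tolerated
    "collapse_prevention": 0.5,             # larger displacement tolerated
}

def pseudo_static_coefficient(pga_g: float, performance_level: str) -> float:
    """Horizontal pseudo-static coefficient k_h = factor(level) * PGA (PGA given in g)."""
    return REDUCTION_FACTOR[performance_level] * pga_g

print(pseudo_static_coefficient(0.35, "limited_displacement"))  # e.g. 0.245
```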
267 Synthesis and Characterization of ZnO and Fe3O4 Nanocrystals from Oleate-based Organometallic Compounds
Authors: PoiSim Khiew, WeeSiong Chiu, ThianKhoonTan, Shahidan Radiman, Roslan Abd-Shukor, Muhammad Azmi Abd-Hamid, ChinHua Chia
Abstract:
Magnetic and semiconductor nanomaterials exhibit novel magnetic and optical properties owing to their unique size- and shape-dependent effects. As the size shrinks down to the nanoscale region, various anomalous properties that are normally absent in the bulk start to dominate. The ability to harness these anomalous properties for the design of various advanced electronic devices depends strictly on the synthetic strategy. Hence, current research has focused on developing rational synthetic control to produce high-quality nanocrystals by using an organometallic approach to tune both the size and the shape of the nanomaterials. In order to elucidate the growth mechanism, transmission electron microscopy was employed as a powerful tool for performing time-resolved morphological and structural characterization of magnetic (Fe3O4) and semiconductor (ZnO) nanocrystals. The current synthetic approach is found to produce nanostructures with well-defined shapes. We have found that oleic acid is an effective capping ligand for preparing oxide-based nanostructures without any agglomeration, even at high temperature. The oleate-based precursors and capping ligands are fatty acid compounds derived from natural palm oil with low toxicity. In comparison with other synthetic approaches to producing nanostructures, the current method offers an effective route to oxide-based nanomaterials with well-defined shapes and good monodispersity. The nanocrystals are well separated from each other without any stacking effect. In addition, the as-synthesized nanopellets are chemically and physically stable compared with previously reported nanomaterials. Further development and extension of the current synthetic strategy are being pursued to combine both of these materials into a nanocomposite that will be used as a “smart magnetic nanophotocatalyst” for industrial wastewater treatment.
Keywords: Metal oxide nanomaterials, nanophotocatalyst, organometallic synthesis, morphology control.
266 Turbine Follower Control Strategy Design Based on Developed FFPP Model
Authors: Ali Ghaffari, Mansour Nikkhah Bahrami, Hesam Parsa
Abstract:
In this paper, a comprehensive model of a fossil-fueled power plant (FFPP) is developed in order to evaluate the performance of a newly designed turbine follower controller. Considering the drawbacks of previous works, an overall model is developed to minimize the error between each subsystem model output and the experimental data obtained at the actual power plant. The developed model is organized into two main subsystems, namely the boiler and the turbine. Considering the characteristics of each FFPP subsystem, different modeling approaches are developed. For the economizer, evaporator, superheater and reheater, first-order models are determined based on the principles of mass and energy conservation, and simulations verify the accuracy of the developed models. Due to the nonlinear characteristics of the attemperator, a new model based on a genetic-fuzzy system utilizing the Pittsburgh approach is developed, showing promising performance compared with models derived with other methods such as ANFIS. The optimization constraints are handled using penalty functions. The effect of increasing the number of rules and membership functions on the performance of the proposed model is also studied and evaluated. The turbine model is developed based on the equation of adiabatic expansion. The parameters of all evaluated models are tuned by means of evolutionary algorithms. Based on the developed model, a fuzzy PI controller is designed and then successfully implemented in the turbine follower control strategy of the plant. In this control strategy, instead of keeping the control parameters constant, they are adjusted online with regard to the error and the error rate. It is shown that the response of the system improves significantly and that fuel consumption decreases considerably.
Keywords: Attemperator, evolutionary algorithms, fossil fuelled power plant (FFPP), fuzzy set theory, gain scheduling.
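A minimal sketch of the gain-scheduling idea described above (PI gains adjusted online from the error and the error rate) follows; the scheduling rule and gain ranges are hypothetical placeholders standing in for the paper's genetic-fuzzy system, not its tuned controller.

```python
# Minimal gain-scheduled PI controller sketch: gains adapt online to the error and error rate.
# The piecewise scheduling below is a hypothetical placeholder, not the paper's fuzzy rule base.

class AdaptivePI:
    def __init__(self, kp_range=(0.5, 2.0), ki_range=(0.05, 0.4), dt=0.1):
        self.kp_min, self.kp_max = kp_range
        self.ki_min, self.ki_max = ki_range
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def _schedule(self, error, error_rate):
        # Crude stand-in for fuzzy inference: larger |error| or |error_rate| -> gains near the top
        severity = min(1.0, abs(error) + abs(error_rate))
        kp = self.kp_min + severity * (self.kp_max - self.kp_min)
        ki = self.ki_min + severity * (self.ki_max - self.ki_min)
        return kp, ki

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        error_rate = (error - self.prev_error) / self.dt
        kp, ki = self._schedule(error, error_rate)
        self.integral += error * self.dt
        self.prev_error = error
        return kp * error + ki * self.integral  # control signal, e.g. governor valve demand

# Usage sketch: u = AdaptivePI().step(setpoint=1.0, measurement=0.8)
```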