Search results for: travel salesman problem
5982 Industrial Wastewater Sludge Treatment in Chongqing, China
Authors: Victor Emery David Jr., Jiang Wenchao, Yasinta John, Md. Sahadat Hossain
Abstract:
Sludge originates from the process of wastewater treatment. It is the byproduct of wastewater treatment, containing concentrated heavy metals, poorly biodegradable trace organic compounds, and potentially pathogenic organisms (viruses, bacteria, etc.), which are usually difficult to treat or dispose of. China, like other countries, is no stranger to the challenges posed by an increase in wastewater. Treatment and disposal of sludge have been a problem for most cities in China, and this problem has been exacerbated by other issues such as lack of technology and funding. Suitable methods for such climatic conditions are still unavailable for modern cities in China. Against this background, this paper describes the methods used for the treatment and disposal of industrial sludge and suggests a suitable treatment and disposal method for Chongqing, China. The research found that the highest treatment rate of sludge in Chongqing was 10.08%. The industrial waste piping system is not separated from the domestic system. Considering the proliferation of industry and urbanization, the production of sludge in Chongqing is likely to increase; if this sludge is not properly managed, it may lead to adverse health and environmental effects. Disposal costs and methods for Chongqing were also included in this paper's analysis. Research showed that incineration is the most expensive method of sludge disposal in Chongqing and in China generally, so alternatives such as composting were subsequently considered. Composting represents a relatively cheap waste disposal method given the vast population and the current technological and economic conditions of Chongqing, and of China at large.
Keywords: Chongqing/China, disposal, industrial, sludge, treatment
Procedia PDF Downloads 321
5981 Arithmetic Operations Based on Double Base Number Systems
Authors: K. Sanjayani, C. Saraswathy, S. Sreenivasan, S. Sudhahar, D. Suganya, K. S. Neelukumari, N. Vijayarangan
Abstract:
The Double Base Number System (DBNS) is an emerging system for representing a number using two bases, namely 2 and 3, with applications in Elliptic Curve Cryptography (ECC) and the Digital Signature Algorithm (DSA). The previous binary representation method used only base 2. DBNS uses an approximation algorithm, namely the greedy algorithm. With this algorithm, the number of digits required to represent a large number is smaller than with the standard binary method using base 2; hence, computational speed is increased and computation time is reduced. The standard binary method uses the binary digits 0 and 1 to represent a number, whereas the DBNS method uses the digit 1 alone to represent any number (canonical form). The greedy algorithm can represent a number in two ways: using only positive summands, or using both positive and negative summands. In this paper, these arithmetic operations are applied to elliptic curve cryptography. The elliptic curve discrete logarithm problem is the foundation of most day-to-day elliptic curve cryptography, and it appears to be considerably harder than the ordinary discrete logarithm problem. In the elliptic curve digital signature algorithm, key generation requires 160 bits of data when standard binary representation is used, whereas the number of bits required to generate the key can be reduced with the help of double base number representation. In this paper, a new technique is proposed to generate the key during encryption and to extract the key during decryption.
Keywords: cryptography, double base number system, elliptic curve cryptography, elliptic curve digital signature algorithm
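The greedy double-base expansion described in the abstract can be sketched as follows. This is an illustrative reimplementation using only positive summands; the function name and return format are ours, not the paper's.

```python
def greedy_dbns(n):
    """Greedy double-base expansion: repeatedly subtract the largest
    number of the form 2**a * 3**b that does not exceed the remainder.
    Positive summands only; returns a list of (a, b) exponent pairs."""
    terms = []
    while n > 0:
        best, best_ab = 0, None
        a = 0
        while 2 ** a <= n:
            b = 0
            while 2 ** a * 3 ** b <= n:
                if 2 ** a * 3 ** b > best:
                    best, best_ab = 2 ** a * 3 ** b, (a, b)
                b += 1
            a += 1
        terms.append(best_ab)
        n -= best
    return terms
```

For example, 127 needs seven set bits in plain binary (1111111), but the greedy double-base expansion uses only three summands: 108 + 18 + 1, i.e., 2²·3³ + 2·3² + 1, illustrating the digit-count reduction the abstract refers to.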
Procedia PDF Downloads 396
5980 Intellectual Capital as Resource Based Business Strategy
Authors: Vidya Nimkar Tayade
Abstract:
Introduction: The intellectual capital of an organization is a key factor in its success. Many companies invest a huge amount in their research and development activities. Any innovation is helpful not only to that particular company but also to many other companies, to the industry, and to mankind as a whole. Companies undertake innovative changes to increase their capital profitability and, indirectly, the pay packages of their employees. The quality of human capital can also improve through such positive changes, as employees become more skilled and experienced through such innovations and inventions. To study ways of increasing intangible capital, the author has referred to several books and case studies; different charts and tables are also referred to. Case studies are especially important because they are proven and established techniques: they enable students to apply theoretical concepts in real-world situations and offer solutions to open-ended problems with multiple potential solutions. There are three different strategies for increasing intellectual capital: the research push (technology push) strategy, the market pull strategy, and the open innovation strategy. Research push strategy: research is undertaken and innovation is achieved on its own. After the invention, the inventing company protects it and finds buyers for it; in this way, the invention is pushed into the market. Research and development are undertaken first, and the outcome of this research is then commercialized. Market pull strategy: commercial opportunities are identified first, and research is concentrated in that particular area; research is undertaken to solve a particular problem. It becomes easier to commercialize this type of invention, because the problem is identified first and research and development activities are carried out in that direction.
Open innovation strategy: in this type of research, more than one company enters into a research agreement, and the benefits of the outcome are shared by the participating companies. Internal and external ideas and technologies are involved; these ideas are coordinated and then commercialized. Due to globalization, people from outside the company are also invited to undertake research and development activities. The remuneration of employees of both companies can increase, and the benefit of commercializing the invention is also shared by both companies. Conclusion: In modern times, not only tangible assets but also intangible assets can be commercialized. The benefits of an invention can be shared by more than one company, competition can become more meaningful, and the pay packages of employees can improve. It is the need of the hour to adopt such strategies to benefit employees, competitors, and stakeholders.
Keywords: innovation, protection, management, commercialization
Procedia PDF Downloads 168
5979 The Problem of Suffering: Job, The Servant and Prophet of God
Authors: Barbara Pemberton
Abstract:
Now that people of all faiths are experiencing suffering due to many global issues, shared narratives may provide common ground in which true understanding of each other can take root. This paper considers the all-too-common problem of suffering and addresses how adherents of the three great monotheistic religions seek understanding, and the appropriate believer's response, from the same story found within their respective sacred texts. Most scholars from each of these three traditions (Judaism, Christianity, and Islam) consider the writings of the Tanakh/Old Testament to at least contain divine revelation. While they may not agree on the extent of the revelation or the method of its delivery, they do share stories, as well as a common desire to glean God's message for God's people from the pages of the text. One such shared story is that of Job, the servant of Yahweh, called Ayyub, the prophet of Allah, in the Qur'an. Job is described as a pious, righteous man who loses everything (family, possessions, and health) when his faith is tested. Three friends come to console him. Through it all, Job remains faithful to his God, who rewards him by restoring all that was lost. All three hermeneutic communities consider Job an archetype of the human response to suffering and regard his response to his situation as exemplary. The story of Job addresses more than the problem of evil; at stake in the story is Job's very relationship to his God. Some exegetes believe that Job was adapted into the Jewish milieu by a gifted redactor who used the original ancient tale as the "frame" of the biblical account (chapters 1, 2, and 42:7-17) and then enlarged the story with the poetic dialogues of the center section, creating a complex work with numerous possible interpretations. Within the poetic center, Job goes so far as to question God, a response to which Jews relate, finding strength in dialogue, even in wrestling with God.
Muslims embrace only the Job of the biblical narrative frame, as further identified through the Qur'an and the prophetic traditions, considering the center section an errant human addition not representative of a true prophet of Islam. The Qur'anic injunction against questioning God also renders the center theologically suspect. Christians likewise draw various responses from the story of Job. While many believers may agree with the Islamic perspective of God's ultimate sovereignty, others join their Jewish neighbors in questioning God, anticipating not answers but rather an awareness of His presence, with peace and hope becoming a reality experienced through the indwelling presence of God's Holy Spirit. Related questions are as endless as the possible responses. This paper considers a few of the many Jewish, Christian, and Islamic insights drawn from the ancient story, in the hope that adherents within each tradition will use it to better understand the other faiths' approach to suffering.
Keywords: suffering, Job, Qur'an, Tanakh
Procedia PDF Downloads 186
5978 Optimization of Economic Order Quantity of Multi-Item Inventory Control Problem through Nonlinear Programming Technique
Authors: Prabha Rohatgi
Abstract:
To obtain efficient control over the huge inventory of drugs in the pharmacy department of a hospital, medicines are generally categorized first on the basis of their cost, using 'ABC' (Always Better Control) analysis, and then on the basis of their criticality, using 'VED' (Vital, Essential, Desirable) analysis, for prioritization. About one-third of the annual expenditure of a hospital is spent on medicines. To minimize inventory investment, hospital management may wish to keep the medicines inventory low, as medicines are perishable items. The main aim of every hospital is to provide better services to patients under limited resources. To achieve a satisfactory level of health care services for outdoor patients, a hospital has to keep an eye on the wastage of medicines, because expired medicines cause a great loss of money that was allocated for a particular period of time. The objective of this study is to identify the categories of medicines requiring intensive managerial control. In this paper, to minimize the total inventory cost and the cost associated with wastage due to expiry of medicines, an inventory control model is used as an estimation tool, and a nonlinear programming technique is then applied under a limited budget and a fixed number of orders to be placed in a limited time period. Numerical computations are given, showing that by using scientific methods in hospital services we can manage inventory more effectively under limited resources and provide better health care services. Secondary data were collected from a hospital to provide empirical evidence.
Keywords: ABC-VED inventory classification, multi-item inventory problem, nonlinear programming technique, optimization of EOQ
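The classical single-item EOQ model underlying this kind of optimization can be sketched as below; the symbols follow the textbook formulation and the numbers in the usage note are invented for illustration, not taken from the paper's hospital data.

```python
from math import sqrt

def eoq(demand, order_cost, holding_cost):
    """Economic order quantity Q* = sqrt(2*D*S / H), where D is annual
    demand, S is the fixed cost per order, and H is the annual holding
    cost per unit."""
    return sqrt(2 * demand * order_cost / holding_cost)

def total_cost(q, demand, order_cost, holding_cost):
    """Annual ordering cost (D/Q)*S plus annual holding cost (Q/2)*H."""
    return demand / q * order_cost + q / 2 * holding_cost
```

At Q* the ordering and holding components balance exactly; for example, with D = 1000 units/year, S = 50, and H = 2, Q* is about 224 units. The multi-item problem in the paper layers a budget constraint and an order-count constraint on top of such cost functions, which is what makes a nonlinear programming technique necessary.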
Procedia PDF Downloads 256
5977 The Development of Directed-Project Based Learning as Language Learning Model to Improve Students' English Achievement
Authors: Tri Pratiwi, Sufyarma Marsidin, Hermawati Syarif, Yahya
Abstract:
The 21st-century skills being highly promoted today are creativity and innovation, critical thinking and problem solving, and communication and collaboration. Communication is one of the essential skills that students should master, and to master communication skills, students must first master language skills. Language skills are one of the main supporting factors in improving a person's communication skills, because by learning language skills students become capable of communicating well and correctly, so that the message, and the way it is delivered, can be conveyed clearly and easily understood by the listener. However, it cannot be denied that less-than-optimal English learning outcomes are a problem frequently found in the implementation of the learning process. This research aimed to improve students' language skills by developing a learning model for the English subject for eighth graders of SMP N 1 Uram Jaya through the implementation of Directed-Project Based Learning (DPjBL). The study follows a Research and Development (R&D) design using the ADDIE development model. The researcher collected data through observation, questionnaires, interviews, tests, and documentation, which were then analyzed qualitatively and quantitatively. The results showed that DPjBL is effective to use, as seen from the difference in scores between the pretest and posttest of the control class and the experimental class. The questionnaire results showed that, in general, students and teachers approved of the DPjBL learning model. This learning model can increase students' English achievement.
Keywords: language skills, learning model, Directed-Project Based Learning (DPjBL), English achievement
Procedia PDF Downloads 165
5976 Nutritional Status of Middle School Students and Their Selected Eating Behaviours
Authors: K. Larysz, E. Grochowska-Niedworok, M. Kardas, K. Brukalo, B. Calyniuk, R. Polaniak
Abstract:
Eating behaviours and habits are among the main factors affecting health. Abnormal nutritional status is a growing problem related to nutritional errors, and the number of adolescents presenting excess body weight is also rising. The body's demand for all nutrients increases during the period of intensive development, i.e., during puberty. A varied, well-balanced diet and the elimination of unhealthy habits are two of the key factors contributing to the proper development of a young body. The aim of the study was to assess the nutritional status and selected eating behaviours and habits of adolescents attending middle school. An original questionnaire comprising 24 questions was administered, and a total of 401 correctly completed questionnaires were qualified for assessment. Body mass index (BMI) was calculated. Furthermore, the frequency of breakfast consumption, the number of meals per day, the types of snacks and sweetened beverages, and the frequency of consuming fruit and vegetables, dairy products, and fast food were assessed. The results were analysed statistically. The study showed that malnutrition was more of a problem than overweight or obesity among middle school students. More than 71% of middle school students have breakfast, whereas almost 30% of adolescents skip this meal. Up to 57.6% of respondents most often consume sweets at school, and 37% of adolescents consume sweetened beverages daily or almost every day. Most of the respondents consume an optimal number of meals daily. Only 24.7% of respondents consume fruit and vegetables more than once daily. The majority of respondents (49.4%) declared that they consumed fast food several times a month. A satisfactory frequency of consuming dairy products was reported by 32.7% of middle school students. The conclusions of our study are: 1. Malnutrition is more of a problem than overweight or obesity among middle school students, who consume excessive amounts of sweets, sweetened beverages, and fast food. 2. The consumption of fruit and vegetables was too low in the study group, and the intake of dairy products was also low in some cases. 3. A statistically significant correlation was found between the frequency of fast food consumption and the intake of sweetened beverages, and a low correlation was found between nutritional status and the number of meals per day: the number of meals consumed decreased with increasing nutritional status.
Keywords: adolescent, malnutrition, nutrition, nutritional status, obesity
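Correlations of the kind reported here are conventionally computed with the Pearson product-moment coefficient. The sketch below is generic, with invented numbers purely for illustration; it is not the study's actual dataset or analysis script.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length
    samples; returns a value in [-1, 1]."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

For instance, passing per-respondent fast-food frequency as `x` and sweetened-beverage intake as `y` yields a single coefficient whose magnitude indicates how strongly the two habits co-vary.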
Procedia PDF Downloads 135
5975 The Effects of Science, Technology, Engineering and Math Problem-Based Learning on Native Hawaiians and Other Underrepresented, Low-Income, Potential First-Generation High School Students
Authors: Nahid Nariman
Abstract:
The prosperity of any nation depends on its ability to use human potential, in particular, to offer an education that builds learners' competencies to become effective workforce participants and true citizens of the world. Ever since the Second World War, the United States has been a dominant player in the world politically, economically, socially, and culturally. The rapid rise of technological advancement and consumer technologies have made it clear that science, technology, engineering, and math (STEM) play a crucial role in today’s world economy. Exploring the top qualities demanded from new hires in the industry—i.e., problem-solving skills, teamwork, dependability, adaptability, technical and communication skills— sheds light on the kind of path that is needed for a successful educational system to effectively support STEM. The focus of 21st century education has been to build student competencies by preparing them to acquire and apply knowledge, to think critically and creatively, to competently use information, be able to work in teams, to demonstrate intellectual and moral values as well as cultural awareness, and to be able to communicate. Many educational reforms pinpoint various 'ideal' pathways toward STEM that educators, policy makers, and business leaders have identified for educating the workforce of tomorrow. This study will explore how problem-based learning (PBL), an instructional strategy developed in the medical field and adopted with many successful results in K-12 through higher education, is the proper approach to stimulate underrepresented high school students' interest in pursuing STEM careers. In the current study, the effect of a problem-based STEM model on students' attitudes and career interests was investigated using qualitative and quantitative methods. The participants were 71 low-income, native Hawaiian high school students who would be first-generation college students. 
They were attending a summer STEM camp developed as the result of a collaboration between the University of Hawaii and the Upward Bound Program. The project, funded by the National Science Foundation's Innovative Technology Experiences for Students and Teachers (ITEST) program, used PBL as an approach to challenge students to engage in solving hands-on, real-world problems in their communities. Pre-surveys were administered before the camp and post-surveys on the last day of the program to learn about the implementation of the PBL STEM model, and a Career Interest Questionnaire was used to investigate students' career interests. After the summer camp, a representative selection of students participated in focus group interviews to discuss their opinions about the PBL STEM camp. The findings revealed a significantly positive increase in students' attitudes towards STEM disciplines and STEM careers. The interview results also revealed that students identified PBL as an effective form of instruction for their learning and for the development of their 21st-century skills. PBL was acknowledged for making the class more enjoyable and for raising students' interest in STEM careers, while also helping them develop teamwork and communication skills in addition to scientific knowledge. As a result, the integration of PBL and a STEM learning experience was shown to positively affect students' interest in STEM careers.
Keywords: problem-based learning, science education, STEM, underrepresented students
Procedia PDF Downloads 124
5974 A Collective Intelligence Approach to Safe Artificial General Intelligence
Authors: Craig A. Kaplan
Abstract:
If AGI proves to be a "winner-take-all" scenario, where the first company or country to develop AGI dominates, then the first AGI must also be the safest. The safest, and fastest, path to Artificial General Intelligence (AGI) may be to harness the collective intelligence of multiple AI and human agents in an AGI network. This approach has roots in seminal ideas from four of the scientists who founded the field of Artificial Intelligence: Allen Newell, Marvin Minsky, Claude Shannon, and Herbert Simon. Extrapolating key insights from these founders of AI, and combining them with the work of modern researchers, results in a fast and safe path to AGI. The seminal ideas discussed are: 1) Society of Mind (Minsky), 2) Information Theory (Shannon), 3) Problem Solving Theory (Newell & Simon), and 4) Bounded Rationality (Simon). Society of Mind describes a collective intelligence approach that can be used with AI and human agents to create an AGI network. Information Theory helps address the critical issue of how an AGI system will increase its intelligence over time. Problem Solving Theory provides a universal framework that AI and human agents can use to communicate efficiently, effectively, and safely. Bounded Rationality helps us better understand not only the capabilities of superintelligent AGI but also how humans can remain relevant in a world where the intelligence of AGI vastly exceeds that of its human creators. Each key idea can be combined with recent work in the fields of Artificial Intelligence, Machine Learning, and Large Language Models to accelerate the development of a working, safe AGI system.
Keywords: AI agents, collective intelligence, Minsky, Newell, Shannon, Simon, AGI, AGI safety
Procedia PDF Downloads 92
5973 Web Data Scraping Technology Using Term Frequency Inverse Document Frequency to Enhance the Big Data Quality on Sentiment Analysis
Authors: Sangita Pokhrel, Nalinda Somasiri, Rebecca Jeyavadhanam, Swathi Ganesan
Abstract:
Tourism is a booming industry with huge future potential for global wealth and employment. Countless data are generated over social media sites every day, creating numerous opportunities to bring more insight to decision-makers. The integration of Big Data technology into the tourism industry allows companies to determine where their customers have been and what they like. This information can then be used by businesses, such as those managing visitor centers or hotels, and tourists can get a clear idea of places before visiting. Natural language is processed by analysing the sentiment features of online reviews from tourists, and we supply an enhanced long short-term memory (LSTM) framework for sentiment feature extraction from travel reviews. We constructed a web review database using a crawler and web scraping techniques for experimental validation, to evaluate the effectiveness of our methodology. The text of the sentences was first classified with the VADER and RoBERTa models to obtain the polarity of the reviews. In this paper, we studied feature extraction methods such as count vectorization and TF-IDF vectorization, and implemented a Convolutional Neural Network (CNN) classifier for sentiment analysis, to decide whether the tourist's attitude towards a destination is positive, negative, or simply neutral based on the review text posted online. The results demonstrated that, after pre-processing and cleaning the dataset, the CNN algorithm achieved an accuracy of 96.12% for positive and negative sentiment analysis.
Keywords: count vectorization, convolutional neural network, crawler, data technology, long short-term memory, web scraping, sentiment analysis
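The TF-IDF vectorization step mentioned above can be sketched in plain Python. This is a minimal unsmoothed variant written for illustration; production pipelines typically use a library implementation such as scikit-learn's TfidfVectorizer, which adds smoothing and normalization options.

```python
from collections import Counter
from math import log

def tfidf_vectors(docs):
    """Weight each term by term frequency times inverse document
    frequency: tf(t, d) * log(N / df(t)). Terms occurring in every
    document get weight 0 under this unsmoothed idf."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))  # document frequency counts each doc once
    vectors = []
    for tokens in tokenized:
        tf = Counter(tokens)
        vectors.append(
            {t: (c / len(tokens)) * log(n / df[t]) for t, c in tf.items()}
        )
    return vectors
```

Because the idf term downweights words shared by all reviews, a word like "hotel" that appears in every review contributes nothing, while rarer sentiment-bearing words dominate the feature vectors fed to the classifier.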
Procedia PDF Downloads 88
5972 Attitude-Behavior Consistency: A Descriptive Study in the Context of Climate Change and Acceptance of Psychological Findings by the Public
Authors: Nita Mitra, Pranab Chanda
Abstract:
In this paper, the issue of attitude-behavior consistency is addressed in the context of climate change. Scientists (about 98 percent) opine that human behavior has a significant role in climate change, and such climate change is harmful to human life. Thus, it is natural to conclude that only a change in human behavior can avoid harmful consequences. Government and non-government organizations are taking steps to bring about the desired changes in behavior. However, it seems that although these efforts are achieving changes in attitudes to some degree, they are failing to produce the corresponding behavioral changes, which has been a great concern for environmentalists. Psychologists have noted the problem as a particular case of the general psychological problem of making attitude and behavior consistent with each other. The present study continues a previous work by the same author based upon descriptive research on the status of attitudes and behavior regarding climate change among the people of a foothill region of the Himalayas in India. The observations confirm the mismatch between the attitudes and behavior of the people of the region with respect to climate change. In the process, an attitude-behavior mismatch was also noticed with respect to the acceptance of psychological findings by the public: people were found to be interested in psychology as an important subject, but reluctant to take the observations of psychologists seriously. A comparative study in this regard was made with similar studies done elsewhere. Finally, an attempt is made to interpret the observations within the framework of Bandura's observational learning and Lewin's theory of behavior change.
Keywords: acceptance of psychological variables, attitude-behavior consistency, behavior change, climate change, observational learning
Procedia PDF Downloads 157
5971 Climate Change: A Critical Analysis on the Relationship between Science and Policy
Authors: Paraskevi Liosatou
Abstract:
Climate change is considered to be of global concern, amplified by the fact that, by its nature, it cannot be spatially limited. This makes intergovernmental decision-making procedures necessary. At the intergovernmental level, institutions such as the United Nations Framework Convention on Climate Change and the Intergovernmental Panel on Climate Change develop efforts, methods, and practices in order to plan and suggest climate mitigation and adaptation measures. These measures are based on specific scientific findings and methods, making clear the strong connection between science and policy. In particular, these scientific recommendations offer a series of practices, methods, and choices that mitigate the problem by aiming at the indirect mitigation of the causes and the factors amplifying climate change. Moreover, the modern production and economic context does not take into consideration the social, political, environmental, and spatial dimensions of the problem. This work studies the decision-making processes operating at the international and European levels. In this context, it considers the policy tools that have been implemented by various intergovernmental organizations. The methodology followed is based mainly on a critical study of the standards and processes concerning the connections and cooperation between science and policy, as well as of the skeptical debates that have developed. The findings of this work focus on the links between science and policy developed by the institutional and scientific mechanisms concerned with climate change mitigation. It also analyses the dimensions and factors of the science-policy framework; in this way, it points out the causes that maintain skepticism in current scientific circles.
Keywords: climate change, climate change mitigation, climate change skepticism, IPCC, skepticism
Procedia PDF Downloads 136
5970 Local Identities to Global in the Centre of Isan, Thailand: Promoting Local Development and Community Participation
Authors: Thammanoon Raveepong, Craig Wheway
Abstract:
This paper stems from a multifaceted research project that began with the opening of the Green Market in Ban Laow sub-district, Kosum Phisai, Mahasarakham, with the support of the Kosum Phisai Governor. The project involves key stakeholders among villagers who have become engaged in linking local identity to a more global identity, to help ameliorate falling agricultural incomes and casualised work. There have been fifteen formal meetings involving local government stakeholders, which took place at the local university, local schools, a public meeting at Ban-Don-Toom, and village meeting shelters. These events hosted 176 local stakeholders, consisting of the District Governor, 7 chairpersons/heads of the District Development Council, a health promotion group, retired district government staff, 4 sub-district local government members, the City Development Council, 2 representatives from the Mahasarakham Provincial Culture Council, 4 principals of local schools, 11 village heads, 15 scholars from local and national universities, 132 villagers, and 4 staff from public relations units. The goal of the project was to initiate a variety of local projects, including the promotion of local healthy food, farm/homestay accommodation, local uniqueness, travel guides (in book form and as guide youths), and the proposed development of community tourism, with the aim of utilising local people and activities to tap into the growing alternative tourism market. This paper aims to document the progress thus far and the challenges of working with local communities that have lacked the expertise to link to the global economy and derive economic benefits for their communities.
Keywords: community-based tourism, community participation, local identity, Mahasarakham province
Procedia PDF Downloads 338
5969 Commercial Winding for Superconducting Cables and Magnets
Authors: Glenn Auld Knierim
Abstract:
Automated robotic winding of high-temperature superconductors (HTS) addresses the precision, efficiency, and reliability critical to the commercialization of products. Today's HTS materials are mature and commercially promising but require manufacturing attention. In particular, because of the exaggerated rectangular cross-section (very thin by very wide), winding precision is critical to address the stress that can crack the fragile ceramic superconductor (SC) layer and destroy the SC properties. Damage potential is highest during peak operations, where winding stress magnifies operational stress. Another challenge is operational parameters, such as magnetic field alignment, affecting design performance. Winding process performance, including precision, capability for geometric complexity, and efficient repeatability, is required for commercial production of current HTS. Due to winding limitations, current HTS magnets focus on simple pancake configurations; HTS motors, generators, MRI/NMR, fusion, and other projects are awaiting robotically wound solenoid, planar, and spherical magnet configurations. As with conventional power cables, full transposition winding is required for long-length alternating current (AC) and pulsed power cables. Robotic production is required for transposition: periodically swapping cable conductors and placing them into precise positions, which provides the minimized reactance that power utilities require. A full transposition SC cable, in theory, has no transmission length limits for AC and variable transient operation, due to no resistance (a problem with conventional cables), negligible reactance (a problem for helically wound HTS cables), and no long-length manufacturing issues (a problem with both stamped and twisted stacked HTS cables). The Infinity Physics team is solving these manufacturing problems by developing automated manufacturing to produce the first-ever reliable, utility-grade commercial SC cables and magnets.
Robotic winding machines combine mechanical and process design, specialized sensing and observers, and state-of-the-art optimization and control sequencing to carefully manipulate individual fragile SCs, especially HTS, shaping previously unattainable, complex geometries with electrical geometry equivalent to commercially available conventional conductor devices.
Keywords: automated winding manufacturing, high temperature superconductor, magnet, power cable
Procedia PDF Downloads 140
5968 Evaluation of Surface Roughness Condition Using App Roadroid
Authors: Diego de Almeida Pereira
Abstract:
The roughness index of a road is considered the most important parameter for the quality of the pavement, as it is closely related to the comfort and safety of road users. This condition can be established by means of a functional evaluation of pavement surface deviations, measured by the International Roughness Index (IRI), an index that emerged from the international evaluation of pavements coordinated by the World Bank; the current limit value for the acceptance of roads in Brazil is 2.7 m/km. This work makes use of the e.IRI parameter, obtained with the Roadroid app for smartphones running the Android operating system. This application was chosen for its practical user interaction, its own cloud data storage, and the support it gives to universities all around the world. Data were collected once a month for six months. The studies began in March 2018, the season of precipitation that worsens road conditions, which also provided the opportunity to track the damage and the quality of the interventions performed. About 350 kilometers of sections of four federal highways were analyzed: BR-020, BR-040, BR-060, and BR-070, which connect the Federal District (the area where Brasilia is located) and its surroundings, chosen for their economic and tourist importance; two are under federal administration and two under private concession. Like much of the road network, the analyzed stretches are surfaced with Hot Mix Asphalt (HMA). This research therefore presents a contrastive discussion between the comfort and safety conditions of the roads under private concession, on which users pay a fee to the concessionaires to travel on a road that meets the minimum requirements for use, and the quality of service offered on the roads under Federal Government jurisdiction.
Finally, the data collected by the National Department of Transport Infrastructure (DNIT) by means of a laser profilometer are contrasted with the data obtained by Roadroid, checking the app's applicability, practicality, and cost-effectiveness in view of its limitations.
Keywords: Roadroid, international roughness index, Brazilian roads, pavement
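A minimal sketch of the kind of comparison described above, with purely illustrative numbers (not the study's measurements): per-section e.IRI values from the app are checked against the 2.7 m/km acceptance limit and correlated with reference profilometer readings.

```python
# Hypothetical sketch: compare app-derived e.IRI against laser profilometer IRI
# and flag sections above the Brazilian acceptance limit of 2.7 m/km.
# All section values below are illustrative, not measured data.

ACCEPTANCE_LIMIT = 2.7  # m/km, limit for receiving roads in Brazil

def pearson_r(x, y):
    """Plain Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def flag_sections(iri_values, limit=ACCEPTANCE_LIMIT):
    """Return indices of road sections exceeding the roughness limit."""
    return [i for i, v in enumerate(iri_values) if v > limit]

# Illustrative e.IRI (app) and IRI (profilometer) for six sections
app_iri = [1.9, 2.4, 3.1, 2.8, 2.2, 3.5]
ref_iri = [2.0, 2.3, 3.0, 2.9, 2.1, 3.6]

r = pearson_r(app_iri, ref_iri)    # agreement between the two instruments
bad = flag_sections(app_iri)       # sections failing the 2.7 m/km criterion
```

A high correlation between the two series is the kind of evidence the paper uses to argue for the app's applicability as a low-cost alternative to the profilometer.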
Procedia PDF Downloads 85
5967 Climate Change and Food Security: The Legal Aspects with Special Focus on the European Union
Authors: M. Adamczak-Retecka, O. Hołub-Śniadach
Abstract:
The danger of climate change is now a global problem, and as such it has strategic priority also for the European Union. Europe and European citizens try to do their best to cut greenhouse gas emissions, and they strongly encourage other nations and regions to follow the same path. The European Commission and a number of Member States have developed adaptation strategies in order to help strengthen the EU's resilience to the inevitable impacts of climate change. The EU has long been a driving force in international negotiations on climate change and was instrumental in the development of the UN Framework Convention on Climate Change. As the world's leading donor of development aid, the EU also provides substantial funding to help developing countries tackle the climate change problem. Global warming influences human health, biodiversity, and ecosystems, but also many social and economic sectors. The aim of this paper is to focus on the impact of climate change on food security. Food security challenges are directly related to globalization and climate change. This means that current and future food policy is exposed to all of these cross-cutting pressures and must be linked with the environmental and climate targets that are supposed to be achieved. In the 7th EAP, the new general Union Environment Action Programme to 2020, called "Living well, within the limits of our planet", the EU has agreed to step up its efforts to protect natural capital, stimulate resource-efficient, low-carbon growth and innovation, and safeguard people's health and wellbeing, while respecting the Earth's natural limits.
Keywords: climate change, food security, sustainable food consumption, climate governance
Procedia PDF Downloads 180
5966 Study of a Lean Premixed Combustor: A Thermo Acoustic Analysis
Authors: Minoo Ghasemzadeh, Rouzbeh Riazi, Shidvash Vakilipour, Alireza Ramezani
Abstract:
In this study, thermo-acoustic oscillations of a lean premixed combustor have been investigated, and a one-dimensional code was developed for this purpose. The linearized equations of motion are solved for perturbations with time dependence e^(iωt). Two flame models were considered in this paper, and the effects of mean flow and boundary conditions were also investigated. After manipulating the flame heat release equation together with the equations of flow perturbation within the main components of the combustor model (i.e., plenum, premix duct, and combustion chamber), and by imposing proper boundary conditions between the components of the model, a system of eight homogeneous equations is obtained. This simplification of the main components of the combustor model is convenient, since low-frequency acoustic waves are not affected by bends; moreover, some elements in the combustor are smaller than the wavelength of the propagated acoustic perturbations. A convection time is also assumed to characterize the time required for acoustic velocity fluctuations to travel from the point of injection to the location of the flame front in the combustion chamber. The influence of an extended flame model on the acoustic frequencies of the combustor was also investigated, assuming the effect of flame speed, as a function of equivalence ratio perturbation, on the rate of flame heat release. The abovementioned system of equations has an associated eigenvalue equation with complex roots. The sign of the imaginary part of these roots determines whether the disturbances grow or decay, and the real part gives the frequency of the modes. The results show a reasonable agreement between the dominant frequencies predicted by the present model and those calculated in previous related studies.
Keywords: combustion instability, dominant frequencies, flame speed, premixed combustor
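The stability criterion can be made concrete with a small sketch. For a time dependence e^(iωt), a complex root ω = ωr + i·ωi gives an amplitude envelope e^(−ωi·t), so modes with negative imaginary part grow and those with positive imaginary part decay. The roots below are illustrative placeholders, not output of the paper's eight-equation system.

```python
import math

# For perturbations ~ e^{i w t}, |e^{i w t}| = e^{-Im(w) t}:
# Im(w) < 0 -> growing (unstable) mode, Im(w) > 0 -> decaying mode.

def classify_modes(roots):
    """Split complex eigenfrequencies into growing and decaying modes."""
    growing = [w for w in roots if w.imag < 0]
    decaying = [w for w in roots if w.imag > 0]
    return growing, decaying

def mode_frequency_hz(w):
    """Oscillation frequency in Hz from the real part of the root."""
    return abs(w.real) / (2 * math.pi)

# Hypothetical complex roots of a combustor dispersion relation (rad/s)
roots = [2 * math.pi * 120 + 4j, 2 * math.pi * 310 - 7j]
growing, decaying = classify_modes(roots)
```

Here the 310 Hz mode would be flagged as thermo-acoustically unstable, which is exactly the classification step described in the abstract.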
Procedia PDF Downloads 379
5965 Educators’ Adherence to Learning Theories and Their Perceptions on the Advantages and Disadvantages of E-Learning
Authors: Samson T. Obafemi, Seraphin D. Eyono-Obono
Abstract:
Information and Communication Technologies (ICTs) are pervasive nowadays, including in education, where they are expected to improve the performance of learners. However, the hope placed in ICTs to find viable solutions to the problem of poor academic performance in schools in the developing world has not yet yielded the expected benefits. This problem serves as the motivation for this study, whose aim is to examine the perceptions of educators on the advantages and disadvantages of e-learning. This aim is subdivided into two types of research objectives. Objectives on the identification and design of theories and models are achieved using content analysis and literature review, whereas the objective on the empirical testing of such theories and models is achieved through a survey of educators from different schools in the Pinetown District of the South African KwaZulu-Natal province. SPSS is used to quantitatively analyse the data collected by the questionnaire of this survey, using descriptive statistics and Pearson correlations, after assessing the validity and reliability of the data. The main hypothesis driving this study is that there is a relationship between the demographics of educators and their adherence to learning theories on the one hand, and their perceptions of the advantages and disadvantages of e-learning on the other hand, as argued by existing research; but this research views these learning theories from three perspectives: educators' adherence to self-regulated learning, to constructivism, and to progressivism. This hypothesis was fully confirmed by the empirical study, except for the demographic factors, where teachers' level of education was found to be the only demographic factor affecting the perceptions of educators on the advantages and disadvantages of e-learning.
Keywords: academic performance, e-learning, learning theories, teaching and learning
Procedia PDF Downloads 273
5964 Outbreak of Cholera, Jalgaon District, Maharashtra, 2013
Authors: Yogita Tulsian, A. Yadav
Abstract:
Background: India reports 3,600 cholera cases annually. In August 2013, a cholera outbreak was reported in Jalgaon district, Maharashtra state. We sought to describe the epidemiological characteristics, identify risk factors, and recommend control measures. Methods: We collected existing stool and water testing laboratory results and conducted a 1:1 matched case-control study. A cholera case was defined as a resident of Vishnapur or Malapur village with onset of acute watery diarrhea on or after 1 July 2013. Controls were matched by age, gender, and village, and had not experienced any diarrhea for 3 months. We collected socio-demographic characteristics, clinical presentation, and food/travel/water exposure history, and conducted conditional logistic regression. Results: Of 50 people who met the cholera case definition, 40 (80%) were from Vishnapur village and 30 (60%) were female. The median age was 8.5 years (range: 0.3-75). Twenty (45%) cases were hospitalized, twelve (60%) with severe dehydration. Three of five stool samples revealed Vibrio cholerae O1 El Tor, Ogawa, and samples from 7 of 14 Vishnapur water sources contained fecal coliforms. Cases from Vishnapur were significantly more likely to drink from identified contaminated water sources (matched odds ratio (MOR) 3.5; 95% confidence interval (CI): 1-13) or from a river/canal (MOR = 18.4; 95% CI: 2-504). Cases from Malapur were more likely to drink from a river/canal (MOR = 6.2; 95% CI: 0.6-196). Cases from both villages were significantly more likely to visit the forest (MOR 6.3; 95% CI: 2-30) or another village (MOR 3.5; 95% CI: 0.9-17). Conclusions: This outbreak was caused by Vibrio cholerae, likely through contamination of water in Vishnapur village and/or through drinking river/canal water. We recommended safe drinking water for forest visitors and all residents of these villages, as well as regular water testing.
Keywords: cholera, case control study, contaminated water, river
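For a 1:1 matched design like the one above, the matched odds ratio reduces to the ratio of discordant pairs (pairs where only the case was exposed over pairs where only the control was exposed), with a Wald interval on the log scale. The pair counts below are illustrative, chosen only so that the point estimate echoes the MOR of 3.5 reported above; they are not the study's data.

```python
import math

def matched_odds_ratio(pairs_case_only, pairs_control_only, z=1.96):
    """MOR and 95% Wald CI for a 1:1 matched case-control design.

    pairs_case_only: discordant pairs where only the case was exposed
    pairs_control_only: discordant pairs where only the control was exposed
    (Concordant pairs do not contribute to the MOR.)
    """
    mor = pairs_case_only / pairs_control_only
    se = math.sqrt(1.0 / pairs_case_only + 1.0 / pairs_control_only)
    lo = math.exp(math.log(mor) - z * se)
    hi = math.exp(math.log(mor) + z * se)
    return mor, (lo, hi)

# Illustrative counts: 14 pairs with only the case exposed to the
# contaminated source, 4 with only the control exposed -> MOR = 3.5
mor, ci = matched_odds_ratio(14, 4)
```

A lower confidence bound above 1, as here, is what makes an exposure "significantly more likely" in the abstract's sense; conditional logistic regression generalizes this to several exposures at once.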
Procedia PDF Downloads 361
5963 An Application of Path Planning Algorithms for Autonomous Inspection of Buried Pipes with Swarm Robots
Authors: Richard Molyneux, Christopher Parrott, Kirill Horoshenkov
Abstract:
This paper aims to demonstrate how various algorithms can be implemented within swarms of autonomous robots to provide continuous inspection within underground pipeline networks. Current methods of fault detection within pipes are costly, time consuming, and inefficient. As such, solutions tend toward a more reactive approach, repairing faults as opposed to proactively seeking leaks and blockages. The paper presents an efficient inspection method, showing that autonomous swarm robotics is a viable way of monitoring underground infrastructure. Tailored adaptations of various Vehicle Routing Problems (VRP) and path-planning algorithms provide a customised inspection procedure for complicated networks of underground pipes. The performance of multiple algorithms is compared to determine their effectiveness and feasibility. Notable inspirations come from ant colonies and stigmergy, graph theory, the k-Chinese Postman Problem (k-CPP), and traffic theory. Unlike most swarm behaviours, which rely on fast communication between agents, underground pipe networks are a highly challenging communication environment with extremely limited communication ranges. This is due to the extreme variability in pipe conditions and the relatively high attenuation of the acoustic and radio waves with which robots would usually communicate. This paper illustrates how to optimise the inspection process and how to increase the frequency with which the robots pass each other, without compromising the routes they are able to take to cover the whole network.
Keywords: autonomous inspection, buried pipes, stigmergy, swarm intelligence, vehicle routing problem
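The Chinese Postman flavour of the problem can be sketched in a few lines. This is a generic single-robot CPP cost bound, not the paper's k-robot algorithm: every pipe must be traversed, and the extra cost beyond the total pipe length comes from pairing up odd-degree junctions via shortest paths (brute-force pairing here, acceptable only for tiny networks).

```python
import itertools

def floyd_warshall(n, edges):
    """All-pairs shortest paths on an undirected weighted graph."""
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def cpp_tour_cost(n, edges):
    """Total pipe length + cheapest pairing of odd-degree junctions."""
    deg = [0] * n
    total = 0.0
    for u, v, w in edges:
        deg[u] += 1
        deg[v] += 1
        total += w
    odd = [v for v in range(n) if deg[v] % 2 == 1]
    d = floyd_warshall(n, edges)
    # Brute-force min-weight pairing of odd vertices (overcounts orderings,
    # which is harmless for the minimum; fine for tiny networks only).
    extra = 0.0 if not odd else min(
        sum(d[a][b] for a, b in zip(p[::2], p[1::2]))
        for p in itertools.permutations(odd)
    )
    return total + extra

# Tiny pipe network: junctions 0..3, edges (u, v, pipe length)
edges = [(0, 1, 2), (1, 2, 2), (2, 3, 2), (3, 0, 2), (0, 2, 3)]
cost = cpp_tour_cost(4, edges)
```

Here junctions 0 and 2 have odd degree, so one extra traversal of the shortest 0-2 path (length 3) is added to the total pipe length of 11, giving a tour cost of 14. The k-CPP then splits such a tour among k robots.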
Procedia PDF Downloads 166
5962 Employing a System of Systems Approach in the Maritime RobotX Challenge: Incorporating Information Technology Students in the Development of an Autonomous Catamaran
Authors: Adam Jenkins
Abstract:
The Maritime RobotX Challenge provides a platform for postgraduate students conducting research in autonomous robotic systems to participate in an international competition. Although targeted at postgraduate students, the problem domain lends itself to a wide range of levels of student expertise. In 2022, undergraduate Information Technology students from the University of South Australia undertook the challenge, utilizing a System of Systems approach to the project's architecture. Each student group produced an independent solution to an identified task, which was then implemented on a Single Board Computer (SBC). A Central Control System then engaged each solution when appropriate, allowing the encapsulated SBC systems to manage each task as it was encountered. This approach facilitated collaboration among the multiple independent student teams over an 18-month period, and the fundamental system-agnostic architecture allowed for both the variance in student solutions and the limitations caused by the global electronics shortage. By adopting this approach, Information Technology teams were able to work independently yet produce an effective solution, leveraging their expertise to develop and construct an autonomous catamaran capable of meeting the competition's demanding requirements while producing a high level of engagement. The System of Systems approach is recommended to other universities interested in competing at this level and engaging students in a real-world problem.
Keywords: case study, robotics, education, programming, system of systems, multi-disciplinary collaboration
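A minimal sketch of the architecture described above, with hypothetical task names and handlers (the competition's actual tasks and interfaces are not specified here): each team's solution registers as an independent handler standing in for an SBC, and the central controller engages whichever handler claims the current task.

```python
class CentralControlSystem:
    """Engages encapsulated subsystem solutions by task name."""

    def __init__(self):
        self._handlers = {}

    def register(self, task_name, handler):
        """Each encapsulated subsystem registers the task it solves."""
        self._handlers[task_name] = handler

    def engage(self, task_name, *args):
        """Engage the matching subsystem, or report the task as unhandled."""
        handler = self._handlers.get(task_name)
        if handler is None:
            return f"no subsystem for task: {task_name}"
        return handler(*args)

# Hypothetical subsystems produced by independent student teams
ccs = CentralControlSystem()
ccs.register("navigate_gate", lambda: "heading set between buoys")
ccs.register("dock", lambda bay: f"docking in bay {bay}")

result = ccs.engage("dock", 2)
```

Because the controller only knows task names, not implementations, a team can be swapped out (or its hardware substituted during a parts shortage) without touching the rest of the system, which is the system-agnostic property the abstract highlights.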
Procedia PDF Downloads 76
5961 Online Think–Pair–Share in a Third-Age Information and Communication Technology Course
Authors: Daniele Traversaro
Abstract:
Problem: Senior citizens have been facing a challenging reality as a result of strict public health measures designed to protect people from the COVID-19 outbreak. These include the risk of social isolation due to the inability of the elderly to integrate with technology. Never before have information and communication technology (ICT) skills become essential for their everyday life. Although third-age ICT education and lifelong learning are widely supported by universities and governments, there is a lack of literature on which teaching strategy/methodology to adopt in an entirely online ICT course aimed at third-age learners. This contribution aims to present an application of the Think-Pair-Share (TPS) learning method in an ICT third-age virtual classroom with an intergenerational approach to conducting online group labs and review activities. This collaborative strategy can help increase student engagement, promote active learning and online social interaction. Research Question: Is collaborative learning applicable and effective, in terms of student engagement and learning outcomes, for an entirely online third-age ICT introductory course? Methods: In the TPS strategy, a problem is posed by the teacher, students have time to think about it individually, and then they work in pairs (or small groups) to solve the problem and share their ideas with the entire class. We performed four experiments in the ICT course of the University of the Third Age of Genova (University of Genova, Italy) on the Microsoft Teams platform. The study cohort consisted of 26 students over the age of 45. Data were collected through online questionnaires. Two have been proposed, one at the end of the first activity and another at the end of the course. They consisted of five and three close-ended questions, respectively. 
The answers were on a Likert scale (from 1 to 4), except for two questions (which asked for the number of correct answers given individually and in groups) and a field for free comments/suggestions. Results: The results show that groups perform better than individual students (with scores greater by an order of magnitude) and that most students found it helpful to work in groups and interact with their peers. Insights: From these early results, it appears that TPS is applicable to an online third-age ICT classroom and useful for promoting discussion and active learning. Nevertheless, our experimentation has a number of limitations. Above all, the results highlight the need for more data to allow a statistical analysis that determines the effectiveness of this methodology in terms of student engagement and learning outcomes; this is left as a future direction.
Keywords: collaborative learning, information technology education, lifelong learning, older adult education, think-pair-share
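The individual-versus-group comparison reported above can be sketched as follows. The scores are purely illustrative stand-ins for the questionnaire's "correct answers given individually and in groups" items, not the study's data.

```python
# Hedged sketch of the TPS comparison: mean correct answers per student
# working alone (Think) vs. after pairing and sharing (Pair/Share).
# Numbers are illustrative only.

def mean(xs):
    return sum(xs) / len(xs)

individual_correct = [2, 3, 1, 2, 3, 2]  # per student, before pairing
group_correct = [4, 5, 4, 5, 4, 5]       # same students, after Pair/Share

gain = mean(group_correct) - mean(individual_correct)
improved = sum(g > i for i, g in zip(individual_correct, group_correct))
```

With real data, the same paired structure (each student measured in both conditions) is what would feed the statistical test the authors identify as future work.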
Procedia PDF Downloads 188
5960 Vehicle Activity Characterization Approach to Quantify On-Road Mobile Source Emissions
Authors: Hatem Abou-Senna, Essam Radwan
Abstract:
Transportation agencies and researchers in the past have estimated emissions using one average speed and volume on a long stretch of roadway. Other methods provided better accuracy utilizing annual average estimates. Travel demand models provided an intermediate level of detail through average daily volumes. Currently, higher accuracy can be established utilizing microscopic analyses by splitting the network links into sub-links and utilizing second-by-second trajectories to calculate emissions. The need to accurately quantify transportation-related emissions from vehicles is essential. This paper presents an examination of four different approaches to capture the environmental impacts of vehicular operations on a 10-mile stretch of Interstate 4 (I-4), an urban limited access highway in Orlando, Florida. First, (at the most basic level), emissions were estimated for the entire 10-mile section 'by hand' using one average traffic volume and average speed. Then, three advanced levels of detail were studied using VISSIM/MOVES to analyze smaller links: average speeds and volumes (AVG), second-by-second link drive schedules (LDS), and second-by-second operating mode distributions (OPMODE). This paper analyzes how the various approaches affect predicted emissions of CO, NOx, PM2.5, PM10, and CO2. The results demonstrate that obtaining precise and comprehensive operating mode distributions on a second-by-second basis provides more accurate emission estimates. Specifically, emission rates are highly sensitive to stop-and-go traffic and the associated driving cycles of acceleration, deceleration, and idling. Using the AVG or LDS approach may overestimate or underestimate emissions, respectively, compared to an operating mode distribution approach.Keywords: limited access highways, MOVES, operating mode distribution (OPMODE), transportation emissions, vehicle specific power (VSP)
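The second-by-second idea can be illustrated with a small sketch. The VSP expression below is the widely cited light-duty approximation (after Jimenez-Palacios), not necessarily MOVES' exact coefficients, and the three bins are deliberately much coarser than MOVES' OPMODE bins; both are assumptions for illustration.

```python
# Hedged sketch: compute vehicle specific power (VSP) from a one-second
# speed trace and bin each second into a coarse operating mode.

def vsp(v, a, grade=0.0):
    """VSP in kW/tonne; v in m/s, a in m/s^2, grade as a fraction.

    Common light-duty approximation (coefficients are assumptions here):
    VSP = v * (1.1*a + 9.81*grade + 0.132) + 0.000302 * v^3
    """
    return v * (1.1 * a + 9.81 * grade + 0.132) + 0.000302 * v ** 3

def op_mode(v, a):
    """Very coarse operating-mode bins: idle / braking / cruise-accel."""
    if v < 0.5:
        return "idle"
    if a < -0.5:
        return "braking"
    return "cruise/accel"

speeds = [0.0, 3.0, 8.0, 15.0, 15.0, 9.0]               # m/s, one per second
accels = [s1 - s0 for s0, s1 in zip(speeds, speeds[1:])]  # m/s^2
modes = [op_mode(v, a) for v, a in zip(speeds[1:], accels)]
```

Counting seconds per bin over a whole trajectory yields the operating mode distribution that the OPMODE approach feeds to the emission model; the stop-and-go sensitivity noted above shows up as time shifting between the braking/idle and acceleration bins.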
Procedia PDF Downloads 339
5959 Effect of Joule Heating on Chemically Reacting Micropolar Fluid Flow over Truncated Cone with Convective Boundary Condition Using Spectral Quasilinearization Method
Authors: Pradeepa Teegala, Ramreddy Chetteti
Abstract:
This work emphasizes the effects of heat generation/absorption and Joule heating on chemically reacting micropolar fluid flow over a truncated cone with a convective boundary condition. For this complex fluid flow problem, a similarity solution does not exist; hence, using non-similarity transformations, the governing fluid flow equations along with the related boundary conditions are transformed into a set of non-dimensional partial differential equations. Several authors have applied the spectral quasilinearization method to solve ordinary differential equations, but here the resulting nonlinear partial differential equations are solved for a non-similarity solution by using this recently developed method (SQLM). Comparison with previously published work on special cases of the problem is performed and found to be in excellent agreement. The influence of pertinent parameters, namely the Biot number, Joule heating, heat generation/absorption, chemical reaction, the micropolar parameter, and the magnetic field, on the physical quantities of the flow is displayed through graphs, and the salient features are explored in detail. Further, the results are analyzed by comparison with two special cases, namely the vertical plate and the full cone, wherever possible.
Keywords: chemical reaction, convective boundary condition, joule heating, micropolar fluid, spectral quasilinearization method
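The quasilinearization idea behind the SQLM can be sketched on a model problem. This is a hedged illustration, not the paper's solver: it uses second-order finite differences and the Thomas algorithm in place of spectral collocation, and the test BVP u'' = 1.5u², u(0) = 4, u(1) = 1 (exact solution 4/(1+x)²) is a standard textbook case unrelated to the micropolar flow equations. The Newton-type iteration u_{r+1}'' − f_u(u_r) u_{r+1} = f(u_r) − f_u(u_r) u_r is, however, the same linearization the method relies on.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a sub-, b main, c super-diagonal)."""
    n = len(d)
    c_, d_ = c[:], d[:]
    c_[0] /= b[0]
    d_[0] /= b[0]
    for i in range(1, n):
        m = b[i] - a[i] * c_[i - 1]
        c_[i] = c_[i] / m if i < n - 1 else 0.0
        d_[i] = (d_[i] - a[i] * d_[i - 1]) / m
    for i in range(n - 2, -1, -1):
        d_[i] -= c_[i] * d_[i + 1]
    return d_

def qlm_solve(n=50, iters=8):
    """Quasilinearization for u'' = 1.5 u^2, u(0)=4, u(1)=1."""
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    ua, ub = 4.0, 1.0
    u = [ua + (ub - ua) * xi for xi in x]  # linear initial guess
    for _ in range(iters):
        # Linearized problem: u_new'' - 3 u_r u_new = -1.5 u_r^2
        a = [1.0 / h ** 2] * n
        b = [-2.0 / h ** 2 - 3.0 * ur for ur in u]
        c = [1.0 / h ** 2] * n
        d = [-1.5 * ur ** 2 for ur in u]
        d[0] -= ua / h ** 2    # fold boundary values into the RHS
        d[-1] -= ub / h ** 2
        u = thomas(a, b, c, d)
    return x, u

x, u = qlm_solve()
err = max(abs(ui - 4.0 / (1.0 + xi) ** 2) for xi, ui in zip(x, u))
```

Each iteration solves only a linear problem, and the sequence converges quadratically to the nonlinear solution; the SQLM applies the same linearization but discretizes with spectral (Chebyshev) collocation for much higher accuracy per grid point.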
Procedia PDF Downloads 346
5958 Genetic Algorithm and Multi Criteria Decision Making Approach for Compressive Sensing Based Direction of Arrival Estimation
Authors: Ekin Nurbaş
Abstract:
One of the essential challenges in array signal processing, which has drawn enormous research interest over the past several decades, is estimating the direction of arrival (DOA) of plane waves impinging on an array of sensors. In recent years, Compressive Sensing based DoA estimation methods have been proposed, and it has been discovered that Compressive Sensing (CS)-based algorithms achieve significant performance for DoA estimation even in scenarios where there are multiple coherent sources. On the other hand, the Genetic Algorithm, a method that provides a solution strategy inspired by natural selection, has been used in sparse representation problems in recent years and provides significant improvements in performance. With all of this in consideration, this paper proposes a method that combines the Genetic Algorithm (GA) and Multi-Criteria Decision Making (MCDM) approaches for Direction of Arrival (DoA) estimation in the Compressive Sensing (CS) framework. In this method, we generate a multi-objective optimization problem by splitting the norm minimization and reconstruction loss minimization parts of the Compressive Sensing algorithm. With the help of the Genetic Algorithm, multiple non-dominated solutions are achieved for the defined multi-objective optimization problem. Among the Pareto-frontier solutions, the final solution is obtained with multiple MCDM methods. Moreover, the performance of the proposed method is compared with the CS-based methods in the literature.
Keywords: genetic algorithm, direction of arrival estimation, multi criteria decision making, compressive sensing
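The selection step described above can be sketched independently of the GA itself. Given candidate solutions scored on the two split objectives (sparsity via the l1 norm, and reconstruction loss), the non-dominated set is kept and a final answer is picked by distance to the ideal point, a simple TOPSIS-like rule standing in for the paper's MCDM methods. The candidate values are illustrative.

```python
def pareto_front(points):
    """Non-dominated (objective1, objective2) points, both minimized."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

def pick_by_ideal_distance(front):
    """Choose the front member closest to the (min, min) ideal point."""
    ideal = (min(p[0] for p in front), min(p[1] for p in front))
    return min(front, key=lambda p: ((p[0] - ideal[0]) ** 2 +
                                     (p[1] - ideal[1]) ** 2) ** 0.5)

# (l1 norm, reconstruction loss) of hypothetical GA individuals
candidates = [(0.9, 0.10), (1.4, 0.05), (2.0, 0.04), (1.0, 0.30)]
front = pareto_front(candidates)
choice = pick_by_ideal_distance(front)
```

Here (1.0, 0.30) is dominated and dropped, and the ideal-distance rule trades a little reconstruction loss for a much sparser solution; different MCDM rules would pick different compromises from the same front.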
Procedia PDF Downloads 147
5957 Comparison of Elastic and Viscoelastic Modeling for Asphalt Concrete Surface Layer
Authors: Fouzieh Rouzmehr, Mehdi Mousavi
Abstract:
Hot mix asphalt concrete (HMAC) is a mixture of aggregates and bitumen. The primary ingredient that determines the mechanical properties of HMAC is the bitumen, which displays viscoelastic behavior under normal service conditions. For simplicity, asphalt concrete is often considered an elastic material, but this is far from reality at high service temperatures and longer loading times. Viscoelasticity means that the material's stress-strain relationship depends on the strain rate and loading duration. The goal of this paper is to simulate the mechanical response of flexible pavements using linear elastic and viscoelastic modeling of asphalt concrete and to predict pavement performance. A Falling Weight Deflectometer (FWD) load is simulated, and the results for elastic and viscoelastic modeling are evaluated. The viscoelastic behavior is represented by a Prony series and modeled using ANSYS software. In flexible pavement design, the tensile strain at the bottom of the surface layer and the compressive strain at the top of the last layer play an important role in the structural response of the pavement, as they determine the allowable number of load repetitions for fatigue (Nf) and rutting (Nd), respectively. The differences between the two models are investigated for fatigue cracking and rutting, which are the two main design parameters in flexible pavement design. Although the differences between the two models for rutting were negligible, for fatigue cracking the viscoelastic model results were more accurate. The results indicate that modeling the flexible pavement with an elastic material is efficient enough and gives acceptable results.
Keywords: flexible pavement, asphalt, FEM, viscoelastic, elastic, ANSYS, modeling
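The viscoelastic input takes the form of a Prony series relaxation modulus, E(t) = E_inf + Σ E_i·exp(−t/τ_i), which is the representation ANSYS accepts for such materials. The sketch below uses illustrative coefficients, not a calibrated HMAC master curve; it shows the loading-time dependence that an elastic model (a single constant modulus) cannot capture.

```python
import math

def relaxation_modulus(t, e_inf, terms):
    """Prony series E(t) = E_inf + sum_i E_i * exp(-t / tau_i), in MPa."""
    return e_inf + sum(ei * math.exp(-t / tau) for ei, tau in terms)

# Illustrative Prony pairs (E_i in MPa, tau_i in s) and long-term modulus
prony = [(4000.0, 0.01), (2500.0, 0.1), (1500.0, 1.0)]
e_inf = 500.0

instantaneous = relaxation_modulus(0.0, e_inf, prony)   # elastic limit, t -> 0
long_term = relaxation_modulus(100.0, e_inf, prony)     # approaches E_inf
```

A short FWD pulse (tens of milliseconds) samples the stiff, near-instantaneous modulus, while sustained or slow loading relaxes toward E_inf; that spread is why the elastic and viscoelastic models diverge mainly in the fatigue (tensile strain) response.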
Procedia PDF Downloads 131
5956 Structure of Consciousness According to Deep Systemic Constellations
Authors: Dmitry Ustinov, Olga Lobareva
Abstract:
The method of Deep Systemic Constellations is based on a phenomenological approach. Using the phenomenon of substitutive perception, it was established that human consciousness has a hierarchical structure, where deeper levels govern more superficial ones (the reactive level, the energy or ancestral level, the spiritual level, the magical level, and deeper levels of consciousness). Every human possesses a depth of consciousness down to the spiritual level; however, deeper levels of consciousness are not found in every person. It was found that the spiritual level of consciousness is not homogeneous and has its own internal hierarchy of sublevels (the level of formation of spiritual values, the level of the 'inner observer', the level of the 'path', the level of 'God', etc.). The depth of a person's spiritual level defines the paradigm of all his internal processes and the main motives of his movement through life. At any level of consciousness, disturbances can occur. Disturbances at a deeper level cause disturbances at more superficial levels and are manifested in the daily life of a person in feelings, behavioral patterns, psychosomatics, etc. Without removing the deepest source of a disturbance, it is impossible to completely correct its manifestation in the present moment. Thus, a destructive pattern of feeling and behavior in the present moment can exist because of a disturbance at, for example, the spiritual level of a person (although in most cases the source is at the energy level). Psychological work with superficial levels without removing the source of a disturbance cannot fully solve the problem. The method of Deep Systemic Constellations allows one to work effectively with the source of a problem located at any depth. The methodology has confirmed its effectiveness in work with more than a thousand people.
Keywords: constellations, spiritual psychology, structure of consciousness, transpersonal psychology
Procedia PDF Downloads 249
5955 Vascularized Adipose Tissue Engineering by Using Adipose ECM/Fibroin Hydrogel
Authors: Alisan Kayabolen, Dilek Keskin, Ferit Avcu, Andac Aykan, Fatih Zor, Aysen Tezcaner
Abstract:
Adipose tissue engineering is a promising field for the regeneration of soft tissue defects. However, only very thin implants can be used in vivo, since vascularization is still a problem for thick implants. Another problem is finding a biocompatible scaffold with good mechanical properties. In this study, the aim is to develop a thick vascularized adipose tissue that will integrate with the host, and to perform its in vitro and in vivo characterization. For this purpose, a hydrogel of decellularized adipose tissue (DAT) and fibroin was produced, and both endothelial cells and adipocytes differentiated from adipose derived stem cells were encapsulated in this hydrogel. Mixing DAT with fibroin allowed rapid gel formation by vortexing; it also made it possible to adjust the mechanical strength by changing the fibroin-to-DAT ratio. Based on compression tests, the gel whose DAT/fibroin ratio gave mechanical properties most similar to adipose tissue was selected for cell culture experiments. In vitro characterization showed that DAT is not cytotoxic; on the contrary, it contains many natural ECM components, which provide biocompatibility and bioactivity. Subcutaneous implantation of the hydrogels resulted in no immunogenic reaction or infection. Moreover, localized empty hydrogels gelled successfully around the host vessel in the required shape. Implantation of cell-encapsulated hydrogels and histological analyses are under study. It is expected that the endothelial cells inside the hydrogel will form a capillary network and bind to the host vessel passing through the hydrogel.
Keywords: adipose tissue engineering, decellularization, encapsulation, hydrogel, vascularization
Procedia PDF Downloads 528
5954 Stock Prediction and Portfolio Optimization Thesis
Authors: Deniz Peksen
Abstract:
This thesis aims to predict the trend movement of stock closing prices and to maximize portfolio returns by utilizing the predictions. In this context, the study aims to define a stock portfolio strategy from models created using Logistic Regression, Gradient Boosting, and Random Forest. Recently, predicting the trend of stock prices has gained a significant role in making buy and sell decisions and in generating returns with investment strategies formed by machine-learning-based decisions. There are plenty of studies in the literature on the prediction of stock prices in capital markets using machine learning methods, but most of them focus on closing prices instead of the direction of the price trend. Our study differs from the literature in terms of target definition: ours is a classification problem focusing on the market trend over the next 20 trading days. To predict the trend direction, fourteen years of data were used for training, the following three years for validation, and the last three years for testing: training data from 2002-06-18 to 2016-12-30, validation data from 2017-01-02 to 2019-12-31, and testing data from 2020-01-02 to 2022-03-17. We define the Hold Stock Portfolio, the Best Stock Portfolio, and the USD-TRY exchange rate as benchmarks that we should outperform, and we compared the return of our machine-learning-based portfolio on the test data with the returns of these benchmarks. We assessed model performance with the help of ROC-AUC scores and lift charts, and used Logistic Regression, Gradient Boosting, and Random Forest with a grid search approach to fine-tune hyperparameters. As a result of the empirical study, the existence of uptrends and downtrends of the five stocks could not be predicted by the models. When these predictions were used to define buy and sell decisions for a model-based portfolio, the portfolio failed on the test dataset.
It was found that model-based buy and sell decisions generated a stock portfolio strategy whose returns could not outperform non-model portfolio strategies on the test dataset. We found that any effort to predict a trend formulated on the stock price is a challenge; our results match the claim of Random Walk Theory that stock prices and price changes are unpredictable. Our model iterations failed on the test dataset: although we built several good models on the validation dataset, they did not generalize. We implemented Random Forest, Gradient Boosting, and Logistic Regression, and discovered that the complex models did not provide an advantage or additional performance over Logistic Regression; more complexity did not lead to better performance, so using a complex model is not the answer to the stock prediction problem. Our approach was to predict the trend instead of the price, which converted the problem into classification. However, this labeling approach did not solve the stock prediction problem either, nor did it refute the Random Walk Theory for stock prices.
Keywords: stock prediction, portfolio optimization, data science, machine learning
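The target definition and chronological split described above can be sketched as follows. The price series is synthetic and illustrative; the labeling rule (1 if the close 20 trading days ahead is higher) and the no-shuffle split are the parts taken from the text.

```python
def trend_labels(closes, horizon=20):
    """1 = uptrend over the next `horizon` trading days, 0 = otherwise."""
    return [int(closes[i + horizon] > closes[i])
            for i in range(len(closes) - horizon)]

def chrono_split(xs, train=0.7, valid=0.15):
    """Chronological split; no shuffling, to avoid look-ahead leakage."""
    n = len(xs)
    a, b = int(n * train), int(n * (train + valid))
    return xs[:a], xs[a:b], xs[b:]

# Illustrative rising price series (so every label comes out as 1 here)
closes = [100 + i + (5 if i % 7 == 0 else 0) for i in range(60)]
labels = trend_labels(closes)
tr, va, te = chrono_split(labels)
```

Keeping the three periods strictly chronological is what makes the validation/test comparison above meaningful; a random split would leak future information into training and overstate the models' skill.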
Procedia PDF Downloads 80
5953 Mix Proportioning and Strength Prediction of High Performance Concrete Including Waste Using Artificial Neural Network
Authors: D. G. Badagha, C. D. Modhera, S. A. Vasanwala
Abstract:
There is a great challenge for the civil engineering field to contribute to environmental protection by finding alternatives to cement and natural aggregates. Cement utilization in concrete contributes to global warming, so it is necessary to provide a sustainable solution by producing concrete containing waste. It is very difficult to produce a designated grade of concrete containing different ingredients and water-cement ratios, including waste, while achieving the desired fresh and hardened properties of concrete as per requirements and specifications. To achieve the desired grade of concrete, a number of trials have to be conducted, and only after evaluating the different parameters of long-term performance can the concrete be finalized for use in different applications. This research work addresses the problems of time, cost, and serviceability in the field of construction. In this work, an artificial neural network is introduced to fix the proportions of concrete ingredients, with 50% waste replacement, for M20, M25, M30, M35, M40, M45, M50, M55, and M60 grades of concrete. Using the neural network, the mix design of high performance concrete was finalized, and the main basic mechanical properties were predicted at 3, 7, and 28 days. The predicted strength was compared with the actual experimental mix design and concrete cube strength after 3, 7, and 28 days. This experimental and neural-network-based mix design can be used practically in the field to give cost-effective, time-saving, feasible, and sustainable high performance concrete for different types of structures.
Keywords: artificial neural network, high performance concrete, rebound hammer, strength prediction
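A heavily simplified, hypothetical stand-in for the kind of network described above: a one-hidden-layer tanh network (pure Python) trained by gradient descent to map (water/cement ratio, waste fraction) to a normalized strength. The training pairs are synthetic, not the study's experimental data, and the architecture is illustrative only.

```python
import math
import random

def train(data, hidden=4, lr=0.05, epochs=500, seed=0):
    """Train a tiny tanh network by per-sample gradient descent on MSE."""
    rng = random.Random(seed)
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    losses = []
    for _ in range(epochs):
        total = 0.0
        for x, t in data:
            # forward pass
            h = [math.tanh(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j])
                 for j in range(hidden)]
            y = sum(W2[j] * h[j] for j in range(hidden)) + b2
            total += (y - t) ** 2
            # backward pass (gradients of squared error)
            g = 2.0 * (y - t)
            for j in range(hidden):
                dpre = g * W2[j] * (1.0 - h[j] ** 2)  # before W2 update
                W2[j] -= lr * g * h[j]
                W1[j][0] -= lr * dpre * x[0]
                W1[j][1] -= lr * dpre * x[1]
                b1[j] -= lr * dpre
            b2 -= lr * g
        losses.append(total / len(data))
    return losses

# Synthetic pairs: (water/cement ratio, waste fraction) -> strength / 50 MPa
data = [((wc, 0.5), (60.0 - 40.0 * wc - 5.0 * 0.5) / 50.0)
        for wc in (0.35, 0.40, 0.45, 0.50, 0.55)]
losses = train(data)
```

The real study would train on measured cube strengths across the M20-M60 grades at 3, 7, and 28 days; the point of the sketch is only that a small network can learn the mix-proportion-to-strength mapping once such data exist, replacing repeated trial batches.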
Procedia PDF Downloads 155