Search results for: stefan problem
5889 Cognitive and Environmental Factors Affecting Graduate Student Perception of Mathematics
Authors: Juanita Morris
Abstract:
The purpose of this study is to examine the mediating relationships between theories of intelligence, mathematics anxiety, gender stereotype threat, metacognition, and math performance through the use of eye-tracking technology, as these factors affect student perception and problem-solving abilities. The participants will consist of (N=80) female graduate students. Instruments administered will include the Abbreviated Math Anxiety Scale and Tobii eye-tracking software; gender stereotype threat will be induced through Google images, and participants will be asked to describe their problem-solving approach, allowing metacognition to be measured. Participants will be administered mathematics problems while gender stereotype threat is shown to them through online images and their gaze is recorded with the Tobii eye-tracking software. We will explore this by asking: ‘Is mathematics anxiety associated with theories of intelligence and gender stereotype threat, and how do metacognition and math performance play a role in mediating those perspectives?’ It is hypothesized that math-anxious students are more strongly affected by gender stereotype threat and that this may play a role in their performance. Furthermore, we want to explore whether math-anxious students are more likely to be entity theorists than incremental theorists, and whether those who are math anxious will be more likely to fixate on variables associated with coefficients. Path analysis and independent-samples t-tests will be used to generate results for this study. We hope to conclude that both theories of intelligence and metacognition mediate the relationship between mathematics anxiety and gender stereotype threat.
Keywords: math anxiety, emotions, affective domains of learning, cognitive underpinnings
Procedia PDF Downloads 269
5888 Neural Graph Matching for Modification Similarity Applied to Electronic Document Comparison
Authors: Po-Fang Hsu, Chiching Wei
Abstract:
In this paper, we present a novel neural graph matching approach applied to document comparison. Document comparison is a common task in the legal and financial industries. In some cases, the most important differences may be the addition or omission of words, sentences, clauses, or paragraphs. However, detecting them is a challenging task without a record or trace of the whole editing process. Under many temporal uncertainties, we explore the potential of our approach to approximate an accurate comparison and determine which element blocks stand in an editing relation with others. First, we apply a document layout analysis that combines traditional and modern techniques to segment layouts appropriately into blocks of various types. Then we transform the issue into a problem of layout graph matching with textual awareness. Graph matching is a long-studied problem with a broad range of applications; however, unlike previous works focusing on visual images or structural layout, we also bring textual features into our model to adapt it to this domain. Specifically, based on the electronic document, we introduce an encoder to handle the visual presentation decoded from PDF. Additionally, because modifications can cause inconsistency in document layout analysis between modified documents, and because blocks can be merged and split, Sinkhorn divergence is adopted in our neural graph approach, which overcomes both of these issues with many-to-many block matching. We demonstrate this on two categories of layouts, legal agreements and scientific articles, collected from our real-case datasets.
Keywords: document comparison, graph matching, graph neural network, modification similarity, multi-modal
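To make the Sinkhorn step above concrete, here is a minimal sketch, under our own assumptions rather than the authors' implementation, of how alternating row/column normalization turns a block-similarity matrix into a soft many-to-many assignment; the similarity values and the temperature parameter are illustrative.

```python
# Minimal Sinkhorn normalization sketch (illustrative, not the paper's code).
import numpy as np

def sinkhorn(sim, n_iters=50, tau=0.1):
    """Soft assignment from a block-similarity matrix via Sinkhorn iterations."""
    P = np.exp(sim / tau)                  # temperature-scaled similarities
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)  # row normalization
        P /= P.sum(axis=0, keepdims=True)  # column normalization
    return P                               # near-doubly-stochastic matrix

sim = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.8, 0.3],
                [0.0, 0.2, 0.7]])          # blocks of doc A vs. blocks of doc B
print(sinkhorn(sim).round(2))
```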
Procedia PDF Downloads 179
5887 Industrial Wastewater Sludge Treatment in Chongqing, China
Authors: Victor Emery David Jr., Jiang Wenchao, Yasinta John, Md. Sahadat Hossain
Abstract:
Sludge originates from the process of wastewater treatment. It is the byproduct of wastewater treatment, containing concentrated heavy metals and poorly biodegradable trace organic compounds, as well as potentially pathogenic organisms (viruses, bacteria, etc.), which are usually difficult to treat or dispose of. China, like other countries, is no stranger to the challenges posed by an increase in wastewater. Treatment and disposal of sludge have been a problem for most cities in China, and this problem has been exacerbated by other issues such as lack of technology, funding, and other factors. Suitable methods for such climatic conditions are still unavailable for modern cities in China. Against this background, this paper seeks to describe the methods used for the treatment and disposal of sludge from industries and to suggest a suitable method for treatment and disposal in Chongqing, China. The research found that the highest treatment rate of sludge in Chongqing was 10.08%, and that the industrial waste piping system is not separated from the domestic system. Considering the proliferation of industry and urbanization, the production of sludge in Chongqing is likely to increase. If the sludge produced is not properly managed, this may lead to adverse health and environmental effects. Disposal costs and methods for Chongqing were also included in this paper’s analysis. The research showed that incineration is the most expensive method of sludge disposal in Chongqing and in China as a whole. Subsequent analysis therefore considered alternatives such as composting, which represents a relatively cheap waste disposal method given the vast population and the current technological and economic conditions of Chongqing, and of China at large.
Keywords: Chongqing/China, disposal, industrial, sludge, treatment
Procedia PDF Downloads 321
5886 Arithmetic Operations Based on Double Base Number Systems
Authors: K. Sanjayani, C. Saraswathy, S. Sreenivasan, S. Sudhahar, D. Suganya, K. S. Neelukumari, N. Vijayarangan
Abstract:
The Double Base Number System (DBNS) is an emerging system for representing a number using two bases, namely 2 and 3, with applications in Elliptic Curve Cryptography (ECC) and the Digital Signature Algorithm (DSA). The previous binary representation method used only base 2. DBNS uses an approximation algorithm, namely the greedy algorithm. With this algorithm, the number of digits required to represent a large number is smaller than in the standard binary method using base 2, so computational speed is increased and time is reduced. The standard binary method uses the binary digits 0 and 1 to represent a number, whereas the DBNS method uses the digit 1 alone to represent any number (canonical form). The greedy algorithm can represent a number in two ways: using only positive summands, or using both positive and negative summands. In this paper, these arithmetic operations are applied to elliptic curve cryptography. The elliptic curve discrete logarithm problem is the foundation of most day-to-day elliptic curve cryptography, and it appears to be significantly harder than the classical discrete logarithm problem. In the elliptic curve digital signature algorithm, key generation requires 160 bits of data when the standard binary representation is used, whereas the number of bits required to generate the key can be reduced with the double base number representation. In this paper, a new technique is proposed to generate the key during encryption and to extract the key during decryption.
Keywords: cryptography, double base number system, elliptic curve cryptography, elliptic curve digital signature algorithm
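As an illustration of the greedy double-base decomposition described above, the following is a minimal sketch of the signed-summand variant; the helper names and the stopping rule are our assumptions, not the paper's code.

```python
# Greedy DBNS expansion sketch: n = t1 +/- t2 +/- ... with ti = 2^a * 3^b.

def closest_two_three_term(n):
    """Return the {2,3}-integer 2^a * 3^b closest to n."""
    best, best_diff = 1, abs(n - 1)
    p3 = 1
    while p3 <= 2 * n:                 # enumerate powers of 3 up to 2n
        p = p3
        while p <= 2 * n:              # multiply in powers of 2
            if abs(n - p) < best_diff:
                best, best_diff = p, abs(n - p)
            p *= 2
        p3 *= 3
    return best

def greedy_dbns(n):
    """Signed greedy expansion; the remainder shrinks at every step."""
    terms, sign = [], 1
    while n != 0:
        t = closest_two_three_term(n)
        terms.append(sign * t)
        if t > n:                      # overshoot: next terms are subtracted
            sign, n = -sign, t - n
        else:
            n = n - t
    return terms

print(greedy_dbns(160))  # [162, -2], i.e. 160 = 2*3^4 - 2
```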
Procedia PDF Downloads 396
5885 Intellectual Capital as Resource Based Business Strategy
Authors: Vidya Nimkar Tayade
Abstract:
Introduction: The intellectual capital of an organization is a key factor in its success. Many companies invest huge amounts in their research and development activities. Any innovation is helpful not only to that particular company but also to many other companies, to industry, and to mankind as a whole. Companies undertake innovative changes to increase their capital profitability and, indirectly, the pay packages of their employees. The quality of human capital can also improve due to such positive changes: employees become more skilled and experienced as a result of such innovations and inventions. For this study of increasing intangible capital, the author referred to several books and case studies, as well as various charts and tables. Case studies are especially valuable because they describe proven and established techniques, enable students to apply theoretical concepts in real-world situations, and give solutions to open-ended problems with multiple potential solutions. There are three different strategies for increasing intellectual capital. Research push strategy (technology-pushed approach): research is undertaken and innovation is achieved on the company's own initiative; after the invention, the inventing company protects it and finds buyers for it, pushing the invention into the market. In this method, research and development is undertaken first, and its outcome is then commercialized. Market pull strategy (approach): commercial opportunities are identified first, and research is concentrated in that particular area to solve a particular problem. It becomes easier to commercialize this type of invention, because the problem is identified first and research and development activities are carried out in that direction. Open innovation strategy (approach): more than one company enters into a research agreement, and the benefits of the outcome are shared by the participating companies. Internal and external ideas and technologies are involved, coordinated, and then commercialized. Due to globalization, people from outside the company are also invited to undertake research and development activities. The remuneration of employees of the participating companies can increase, and the benefit of commercializing the invention is also shared. Conclusion: In modern times, not only tangible assets but also intangible assets can be commercialized, and the benefits of an invention can be shared by more than one company. Competition can become more meaningful, and pay packages of employees can improve. Adopting such strategies to benefit employees, competitors, and stakeholders is the need of the hour.
Keywords: innovation, protection, management, commercialization
Procedia PDF Downloads 168
5884 The Problem of Suffering: Job, The Servant and Prophet of God
Authors: Barbara Pemberton
Abstract:
Now that people of all faiths are experiencing suffering due to many global issues, shared narratives may provide common ground in which true understanding of each other may take root. This paper will consider the all too common problem of suffering and address how adherents of the three great monotheistic religions seek understanding, and the appropriate believer’s response, from the same story found within their respective sacred texts. Most scholars from each of these three traditions (Judaism, Christianity, and Islam) consider the writings of the Tanakh/Old Testament to at least contain divine revelation. While they may not agree on the extent of the revelation or the method of its delivery, they do share stories as well as a common desire to glean God’s message for God’s people from the pages of the text. One such shared story is that of Job, the servant of Yahweh, called Ayyub, the prophet of Allah, in the Qur’an. Job is described as a pious, righteous man who loses everything (family, possessions, and health) when his faith is tested. Three friends come to console him. Through it all, Job remains faithful to his God, who rewards him by restoring all that was lost. All three hermeneutic communities consider Job to be an archetype of human response to suffering, regarding Job’s response to his situation as exemplary. The story of Job addresses more than the problem of the distribution of evil; at stake in the story is Job’s very relationship to his God. Some exegetes believe that Job was adapted into the Jewish milieu by a gifted redactor who used the original ancient tale as the “frame” for the biblical account (chapters 1-2 and 42:7-17) and then enlarged the story with the complex center section of poetic dialogues, creating a work with numerous possible interpretations. Within the poetic center, Job goes so far as to question God, a response to which Jews relate, finding strength in dialogue, even in wrestling with God. Muslims only embrace the Job of the biblical narrative frame, as further identified through the Qur’an and the prophetic traditions, considering the center section an errant human addition not representative of a true prophet of Islam. The Qur’anic injunction against questioning God also renders the center theologically suspect. Christians also draw various responses from the story of Job. While many believers may agree with the Islamic perspective of God’s ultimate sovereignty, others would join their Jewish neighbors in questioning God, anticipating not answers but rather an awareness of his presence, with peace and hope becoming a reality experienced through the indwelling presence of God’s Holy Spirit. Related questions are as endless as the possible responses. This paper will consider a few of the many Jewish, Christian, and Islamic insights from the ancient story, in the hope that adherents within each tradition will use it to better understand the other faiths’ approach to suffering.
Keywords: suffering, Job, Qur'an, Tanakh
Procedia PDF Downloads 186
5883 Optimization of Economic Order Quantity of Multi-Item Inventory Control Problem through Nonlinear Programming Technique
Authors: Prabha Rohatgi
Abstract:
To obtain efficient control over the huge inventory of drugs in the pharmacy department of any hospital, medicines are generally categorized first on the basis of their cost, ‘ABC’ (Always Better Control), and then on the basis of their criticality, ‘VED’ (Vital, Essential, Desirable), for prioritization. About one-third of the annual expenditure of a hospital is spent on medicines. To minimize the inventory investment, hospital management may like to keep the medicines inventory low, as medicines are perishable items. The main aim of each and every hospital is to provide better services to patients under certain limited resources. To achieve a satisfactory level of health care services for outdoor patients, a hospital has to keep an eye on the wastage of medicines, because expired medicines cause a great loss of money from a budget that is limited and allocated for a particular period of time. The objective of this study is to identify the categories of medicines requiring intensive managerial control. In this paper, to minimize the total inventory cost and the cost associated with the wastage of money due to the expiry of medicines, an inventory control model is used as an estimation tool, and a nonlinear programming technique is then applied under a limited budget and a fixed number of orders to be placed in a limited time period. Numerical computations are given, showing that by using scientific methods in hospital services, we can manage inventory more effectively under limited resources and provide better health care services. Secondary data were collected from a hospital to give empirical evidence.
Keywords: ABC-VED inventory classification, multi item inventory problem, nonlinear programming technique, optimization of EOQ
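The following is a minimal sketch of a multi-item EOQ problem solved with a nonlinear programming routine; the demands, costs, budget, and order limit are illustrative numbers, not the paper's hospital data, and SciPy's SLSQP solver stands in for whichever NLP technique the author used.

```python
# Multi-item EOQ under a budget and an order-count constraint (illustrative).
import numpy as np
from scipy.optimize import minimize

D = np.array([1200.0, 800.0, 500.0])   # annual demand per item (assumed)
S = np.array([50.0, 40.0, 60.0])       # ordering cost per order (assumed)
H = np.array([2.0, 1.5, 3.0])          # holding cost per unit per year (assumed)
C = np.array([10.0, 25.0, 8.0])        # unit purchase price (assumed)
budget = 6000.0                        # limit on average inventory investment
max_orders = 30.0                      # limit on total orders per year

def total_cost(Q):
    # ordering cost + holding cost summed over all items
    return np.sum(D / Q * S + Q / 2.0 * H)

cons = [
    {"type": "ineq", "fun": lambda Q: budget - np.sum(C * Q / 2.0)},
    {"type": "ineq", "fun": lambda Q: max_orders - np.sum(D / Q)},
]
Q0 = np.sqrt(2 * D * S / H)            # unconstrained EOQ as starting point
res = minimize(total_cost, Q0, bounds=[(1.0, None)] * 3, constraints=cons)
print(res.x)                           # constrained order quantities
```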
Procedia PDF Downloads 256
5882 The Development of Directed-Project Based Learning as Language Learning Model to Improve Students' English Achievement
Authors: Tri Pratiwi, Sufyarma Marsidin, Hermawati Syarif, Yahya
Abstract:
The 21st-century skills highly promoted today are creativity and innovation, critical thinking and problem solving, and communication and collaboration. Communication is one of the essential skills that students should master, and to master communication skills, students must first master their language skills. Language skills are one of the main supporting factors in improving a person's communication skills, because by learning language skills students become able to communicate well and correctly, so that the message, and the way it is delivered to the listener, is conveyed clearly and easily understood. However, it cannot be denied that English learning outcomes that are less than optimal are a problem frequently found in the implementation of the learning process. This research aimed to improve students’ language skills by developing a learning model for the English subject for grade VIII students of SMP N 1 Uram Jaya through the implementation of Directed-Project Based Learning (DPjBL). The study is designed as Research and Development (R & D) using the ADDIE development model. The researcher collected data through observation, questionnaires, interviews, tests, and documentation, which were then analyzed qualitatively and quantitatively. The results showed that DPjBL is effective to use, as seen from the difference in scores between the pretest and posttest of the control class and the experimental class. The questionnaire results showed that, in general, students and teachers agreed with the DPjBL learning model. This learning model can increase the students' English achievement.
Keywords: language skills, learning model, Directed-Project Based Learning (DPjBL), English achievement
Procedia PDF Downloads 165
5881 Nutritional Status of Middle School Students and Their Selected Eating Behaviours
Authors: K. Larysz, E. Grochowska-Niedworok, M. Kardas, K. Brukalo, B. Calyniuk, R. Polaniak
Abstract:
Eating behaviours and habits are among the main factors affecting health. Abnormal nutritional status is a growing problem related to nutritional errors, and the number of adolescents presenting excess body weight is also rising. The body's demand for all nutrients increases in the period of intensive development, i.e., during puberty. A varied, well-balanced diet and the elimination of unhealthy habits are two of the key factors that contribute to the proper development of a young body. The aim of the study was to assess the nutritional status and selected eating behaviours/habits of adolescents attending middle school. An original questionnaire including 24 questions was administered, and a total of 401 correctly completed questionnaires qualified for the assessment. Body mass index (BMI) was calculated. Furthermore, the frequency of breakfast consumption, the number of meals per day, the types of snacks and sweetened beverages, as well as the frequency of consuming fruit and vegetables, dairy products, and fast food were assessed. The obtained results were analysed statistically. The study showed that malnutrition was more of a problem than overweight or obesity among middle school students. More than 71% of middle school students eat breakfast, whereas almost 30% of adolescents skip this meal. Up to 57.6% of respondents most often consume sweets at school, and a total of 37% of adolescents consume sweetened beverages daily or almost every day. Most of the respondents consume an optimal number of meals daily. Only 24.7% of respondents consume fruit and vegetables more than once daily. The majority of respondents (49.40%) declared that they consumed fast food several times a month. A satisfactory frequency of consuming dairy products was reported by 32.7% of middle school students. Conclusions of our study: 1. Malnutrition is more of a problem than overweight or obesity among middle school students, who consume excessive amounts of sweets, sweetened beverages, and fast food. 2. The consumption of fruit and vegetables was too low in the study group, and the intake of dairy products was also low in some cases. 3. A statistically significant correlation was found between the frequency of fast food consumption and the intake of sweetened beverages, and a low correlation was found between nutritional status and the number of meals per day: the number of meals consumed decreased with increasing nutritional status.
Keywords: adolescent, malnutrition, nutrition, nutritional status, obesity
Procedia PDF Downloads 135
5880 The Effects of Science, Technology, Engineering and Math Problem-Based Learning on Native Hawaiians and Other Underrepresented, Low-Income, Potential First-Generation High School Students
Authors: Nahid Nariman
Abstract:
The prosperity of any nation depends on its ability to use human potential, in particular, to offer an education that builds learners' competencies to become effective workforce participants and true citizens of the world. Ever since the Second World War, the United States has been a dominant player in the world politically, economically, socially, and culturally. The rapid rise of technological advancement and consumer technologies has made it clear that science, technology, engineering, and math (STEM) play a crucial role in today’s world economy. Exploring the top qualities demanded from new hires in industry (problem-solving skills, teamwork, dependability, adaptability, and technical and communication skills) sheds light on the kind of path that is needed for a successful educational system to effectively support STEM. The focus of 21st-century education has been to build student competencies by preparing them to acquire and apply knowledge, to think critically and creatively, to competently use information, to work in teams, to demonstrate intellectual and moral values as well as cultural awareness, and to communicate. Many educational reforms pinpoint various 'ideal' pathways toward STEM that educators, policy makers, and business leaders have identified for educating the workforce of tomorrow. This study explores how problem-based learning (PBL), an instructional strategy developed in the medical field and adopted with many successful results in K-12 through higher education, is a proper approach to stimulate underrepresented high school students' interest in pursuing STEM careers. In the current study, the effect of a problem-based STEM model on students' attitudes and career interests was investigated using qualitative and quantitative methods. The participants were 71 low-income, Native Hawaiian high school students who would be first-generation college students. They were attending a summer STEM camp developed as the result of a collaboration between the University of Hawaii and the Upward Bound Program. The project, funded by the National Science Foundation's Innovative Technology Experiences for Students and Teachers (ITEST) program, used PBL to challenge students to engage in solving hands-on, real-world problems in their communities. Pre-surveys were administered before camp and post-surveys on the last day of the program to learn about the implementation of the PBL STEM model. A Career Interest Questionnaire provided a way to investigate students’ career interests. After the summer camp, a representative selection of students participated in focus group interviews to discuss their opinions about the PBL STEM camp. The findings revealed a significantly positive increase in students' attitudes towards STEM disciplines and STEM careers. The students' interview results also revealed that students identified PBL as an effective form of instruction for their learning and for the development of their 21st-century skills. PBL was acknowledged for making the class more enjoyable and for raising students' interest in STEM careers, while also helping them develop teamwork and communication skills in addition to scientific knowledge. As a result, the integration of PBL and a STEM learning experience was shown to positively affect students’ interest in STEM careers.
Keywords: problem-based learning, science education, STEM, underrepresented students
Procedia PDF Downloads 124
5879 A Collective Intelligence Approach to Safe Artificial General Intelligence
Authors: Craig A. Kaplan
Abstract:
If Artificial General Intelligence (AGI) proves to be a “winner-take-all” scenario where the first company or country to develop AGI dominates, then the first AGI must also be the safest. The safest, and fastest, path to AGI may be to harness the collective intelligence of multiple AI and human agents in an AGI network. This approach has roots in seminal ideas from four of the scientists who founded the field of Artificial Intelligence: Allen Newell, Marvin Minsky, Claude Shannon, and Herbert Simon. Extrapolating key insights from these founders of AI, and combining them with the work of modern researchers, results in a fast and safe path to AGI. The seminal ideas discussed are: 1) Society of Mind (Minsky), 2) Information Theory (Shannon), 3) Problem Solving Theory (Newell & Simon), and 4) Bounded Rationality (Simon). Society of Mind describes a collective intelligence approach that can be used with AI and human agents to create an AGI network. Information Theory helps address the critical issue of how an AGI system will increase its intelligence over time. Problem Solving Theory provides a universal framework that AI and human agents can use to communicate efficiently, effectively, and safely. Bounded Rationality helps us better understand not only the capabilities of superintelligent AGI but also how humans can remain relevant in a world where the intelligence of AGI vastly exceeds that of its human creators. Each key idea can be combined with recent work in the fields of Artificial Intelligence, Machine Learning, and Large Language Models to accelerate the development of a working, safe AGI system.
Keywords: AI agents, collective intelligence, Minsky, Newell, Shannon, Simon, AGI, AGI safety
Procedia PDF Downloads 92
5878 Attitude-Behavior Consistency: A Descriptive Study in the Context of Climate Change and Acceptance of Psychological Findings by the Public
Authors: Nita Mitra, Pranab Chanda
Abstract:
In this paper, the issue of attitude-behavior consistency is addressed in the context of climate change. Scientists (about 98 percent) opine that human behavior has a significant role in climate change, and such climate change is harmful to human life. Thus, it is natural to conclude that only a change in human behavior can avoid harmful consequences. Government and non-government organizations are taking steps to bring about the desired changes in behavior. However, it seems that although these efforts are changing attitudes to some degree, they are failing to materialize the corresponding behavioral changes. This has been a great concern for environmentalists. Psychologists have identified the problem as a particular case of the general psychological problem of making attitude and behavior consistent with each other. The present study continues a previous work by the same author, based upon descriptive research on the status of attitudes and behavior regarding climate change among the people of a foothill region of the Himalayas in India. The observations confirm the mismatch between the attitudes and behavior of the people of the region with respect to climate change. In doing so, an attitude-behavior mismatch was also noticed with respect to the acceptance of psychological findings by the public: people were found to be interested in psychology as an important subject, but reluctant to take the observations of psychologists seriously. A comparative study in this regard has been made with similar studies done elsewhere. Finally, an attempt has been made to interpret the observations in the framework of observational learning due to Bandura and behavior change due to Lewin.
Keywords: acceptance of psychological variables, attitude-behavior consistency, behavior change, climate change, observational learning
Procedia PDF Downloads 157
5877 Climate Indices: A Key Element for Climate Change Adaptation and Ecosystem Forecasting - A Case Study for Alberta, Canada
Authors: Stefan W. Kienzle
Abstract:
The increasing number of occurrences of extreme weather and climate events has significant impacts on society and is the cause of continued and increasing loss of human and animal lives, loss of or damage to property (houses, cars), and associated stresses on the public in coping with a changing climate. A climate index breaks daily climate time series down into meaningful derivatives, such as the annual number of frost days. Climate indices allow for the spatially consistent analysis of a wide range of climate-dependent variables, which enables the quantification and mapping of historical and future climate change across regions. As trends of phenomena such as the length of the growing season change differently in different hydro-climatological regions, mapping needs to be carried out at a high spatial resolution, such as the 10 km by 10 km Canadian Climate Grid, which has interpolated daily values from 1950 to 2017 for minimum and maximum temperature and precipitation. Climate indices form the basis for the analysis and comparison of means, extremes, and trends, and for the quantification of changes and their respective confidence levels. A total of 39 temperature indices and 16 precipitation indices were computed for the period 1951 to 2017 for the Province of Alberta. Temperature indices include the annual number of days with temperatures above or below certain thresholds (0, ±10, ±20, +25, +30ºC), frost days and their timing, freeze-thaw days, growing degree days, and energy demands for air conditioning and heating. Precipitation indices include daily and accumulated 3- and 5-day extremes, days with precipitation, periods of days without precipitation, and snow and potential evapotranspiration. The rank-based nonparametric Mann-Kendall statistical test was used to determine the existence and significance levels of all associated trends. The slope of the trends was determined using the non-parametric Sen's slope test. A Google Maps interface was developed to create the website albertaclimaterecords.com, from which each of the 55 climate indices can be queried for any of the 6833 grid cells that make up Alberta. In addition to the climate indices, climate normals were calculated and mapped for four historical 30-year periods and one future period (1951-1980, 1961-1990, 1971-2000, 1981-2017, 2041-2070). While winters have warmed since the 1950s by between 4 - 5°C in the south and 6 - 7°C in the north, summers show the weakest warming during the same period, ranging from about 0.5 - 1.5°C. New agricultural opportunities exist in central regions where the number of heat units and growing degree days is increasing and the number of frost days is decreasing. While the number of days below -20ºC has roughly halved across Alberta, the growing season has expanded by between two and five weeks since the 1950s. Interestingly, the numbers of days with heat waves and cold spells have both increased between twofold and fourfold during the same period. This research demonstrates the enormous potential of using climate indices at the best regional spatial resolution possible to enable society to understand the historical and future climate changes of their region.
Keywords: climate change, climate indices, habitat risk, regional, mapping, extremes
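As an illustration of how such indices are derived, here is a minimal sketch, on placeholder data rather than the Canadian Climate Grid, of computing one temperature index (annual frost days) from a daily minimum-temperature series and estimating its Sen's slope trend, as named in the abstract:

```python
# Frost days per year (Tmin < 0 C) and a Sen's slope trend (illustrative data).
import numpy as np
import pandas as pd
from scipy.stats import theilslopes

rng = pd.date_range("1951-01-01", "2017-12-31", freq="D")
tmin = pd.Series(np.random.normal(5, 12, len(rng)), index=rng)  # placeholder series

# The same threshold-count pattern extends to +-10, +-20, +25 and +30 C indices.
frost_days = (tmin < 0).groupby(tmin.index.year).sum()

slope, intercept, lo, hi = theilslopes(frost_days.values, frost_days.index.values)
print(frost_days.head())
print(f"Sen's slope: {slope:.3f} days/year")
```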
Procedia PDF Downloads 92
5876 Climate Change: A Critical Analysis on the Relationship between Science and Policy
Authors: Paraskevi Liosatou
Abstract:
Climate change is considered to be of global concern, amplified by the fact that, by its nature, it cannot be spatially limited. This fact makes intergovernmental decision-making procedures necessary. At the intergovernmental level, institutions such as the United Nations Framework Convention on Climate Change and the Intergovernmental Panel on Climate Change develop efforts, methods, and practices in order to plan and suggest climate mitigation and adaptation measures. These measures are based on specific scientific findings and methods, making clear the strong connection between science and policy. In particular, these scientific recommendations offer a series of practices, methods, and choices that mitigate the problem by aiming at the indirect mitigation of the causes and factors amplifying climate change. Moreover, the modern production and economic context does not take into consideration the social, political, environmental, and spatial dimensions of the problem. This work studies the decision-making process operating at the international and European levels. In this context, it considers the policy tools that have been implemented by various intergovernmental organizations. The methodology followed is based mainly on a critical study of the standards and processes concerning the connections and cooperation between science and policy, as well as a consideration of the skeptical debates that have developed. The findings of this work focus on the links between science and policy developed by the institutional and scientific mechanisms concerning climate change mitigation. The work also analyses the dimensions and factors of the science-policy framework; in this way, it points out the causes that maintain skepticism in current scientific circles.
Keywords: climate change, climate change mitigation, climate change skepticism, IPCC, skepticism
Procedia PDF Downloads 136
5875 Commercial Winding for Superconducting Cables and Magnets
Authors: Glenn Auld Knierim
Abstract:
Automated robotic winding of high-temperature superconductors (HTS) addresses the precision, efficiency, and reliability critical to the commercialization of products. Today’s HTS materials are mature and commercially promising but require manufacturing attention. Owing in particular to the exaggerated rectangular cross-section (very thin by very wide), winding precision is critical to managing the stress that can crack the fragile ceramic superconductor (SC) layer and destroy its SC properties. Damage potential is highest during peak operations, where winding stress magnifies operational stress. Another challenge is that operational parameters such as magnetic field alignment affect design performance. Winding process performance, including precision, capability for geometric complexity, and efficient repeatability, is required for commercial production of current HTS. Due to winding limitations, current HTS magnets focus on simple pancake configurations; HTS motors, generators, MRI/NMR, fusion, and other projects are awaiting robotically wound solenoid, planar, and spherical magnet configurations. As with conventional power cables, full transposition winding is required for long-length alternating current (AC) and pulsed power cables. Robotic production is required for transposition: periodically swapping cable conductors and placing them into precise positions, which allows the minimized reactance required by power utilities. A full-transposition SC cable, in theory, has no transmission length limits for AC and variable transient operation due to no resistance (a problem with conventional cables), negligible reactance (a problem for helically wound HTS cables), and no long-length manufacturing issues (a problem with both stamped and twisted stacked HTS cables). The Infinity Physics team is solving these manufacturing problems by developing automated manufacturing to produce the first reliable, utility-grade commercial SC cables and magnets. Robotic winding machines combine mechanical and process design, specialized sensing and observers, and state-of-the-art optimization and control sequencing to carefully manipulate individual fragile SCs, especially HTS, into previously unattainable complex geometries with electrical geometry equivalent to commercially available conventional conductor devices.
Keywords: automated winding manufacturing, high temperature superconductor, magnet, power cable
Procedia PDF Downloads 140
5874 Climate Change and Food Security: The Legal Aspects with Special Focus on the European Union
Authors: M. Adamczak-Retecka, O. Hołub-Śniadach
Abstract:
The danger of climate change is now a global problem and as such has a strategic priority also for the European Union. Europe and European citizens try to do their best to cut greenhouse gas emissions, and they substantially encourage other nations and regions to follow the same path. The European Commission and a number of Member States have developed adaptation strategies in order to help strengthen the EU's resilience to the inevitable impacts of climate change. The EU has long been a driving force in international negotiations on climate change and was instrumental in the development of the UN Framework Convention on Climate Change. As the world's leading donor of development aid, the EU also provides substantial funding to help developing countries tackle the climate change problem. Global warming influences human health, biodiversity, and ecosystems, but also many social and economic sectors. The aim of this paper is to focus on the impact of climate change on food security. Food security challenges are directly related to globalization and climate change, which means that current and future food policy is exposed to all these cross-cutting issues and must be linked with the environmental and climate targets that are supposed to be achieved. In the 7th EAP, the new general Union Environment Action Programme to 2020 called “Living well, within the limits of our planet”, the EU has agreed to step up its efforts to protect natural capital, stimulate resource-efficient, low-carbon growth and innovation, and safeguard people’s health and wellbeing, while respecting the Earth’s natural limits.
Keywords: climate change, food security, sustainable food consumption, climate governance
Procedia PDF Downloads 180
5873 Educators’ Adherence to Learning Theories and Their Perceptions on the Advantages and Disadvantages of E-Learning
Authors: Samson T. Obafemi, Seraphin D. Eyono-Obono
Abstract:
Information and Communication Technologies (ICTs) are pervasive nowadays, including in education, where they are expected to improve the performance of learners. However, the hope placed in ICTs to find viable solutions to the problem of poor academic performance in schools in the developing world has not yet yielded the expected benefits. This problem serves as the motivation for this study, whose aim is to examine the perceptions of educators on the advantages and disadvantages of e-learning. This aim is subdivided into two types of research objectives. The objectives on the identification and design of theories and models are achieved using content analysis and literature review; the objective on the empirical testing of such theories and models is achieved through a survey of educators from different schools in the Pinetown District of the South African KwaZulu-Natal province. SPSS is used to quantitatively analyse the data collected by the survey questionnaire, using descriptive statistics and Pearson correlations, after assessing the validity and reliability of the data. The main hypothesis driving this study is that there is a relationship between educators’ demographics and their adherence to learning theories on one side, and their perceptions of the advantages and disadvantages of e-learning on the other side, as argued by existing research; but this research views these learning theories from three perspectives: educators’ adherence to self-regulated learning, to constructivism, and to progressivism. This hypothesis was fully confirmed by the empirical study, except for the demographic factors, where teachers’ level of education was found to be the only demographic factor affecting the perceptions of educators on the advantages and disadvantages of e-learning.
Keywords: academic performance, e-learning, learning theories, teaching and learning
Procedia PDF Downloads 273
5872 An Application of Path Planning Algorithms for Autonomous Inspection of Buried Pipes with Swarm Robots
Authors: Richard Molyneux, Christopher Parrott, Kirill Horoshenkov
Abstract:
This paper aims to demonstrate how various algorithms can be implemented within swarms of autonomous robots to provide continuous inspection within underground pipeline networks. Current methods of fault detection within pipes are costly, time-consuming, and inefficient. As such, solutions tend toward a more reactive approach, repairing faults, as opposed to proactively seeking leaks and blockages. The paper presents an efficient inspection method, showing that autonomous swarm robotics is a viable way of monitoring underground infrastructure. Tailored adaptations of various Vehicle Routing Problems (VRP) and path-planning algorithms provide a customised inspection procedure for complicated networks of underground pipes. The performance of multiple algorithms is compared to determine their effectiveness and feasibility. Notable inspirations come from ant colonies and stigmergy, graph theory, the k-Chinese Postman Problem (k-CPP), and traffic theory. Unlike most swarm behaviours, which rely on fast communication between agents, underground pipe networks are a highly challenging communication environment with extremely limited communication ranges. This is due to the extreme variability in pipe conditions and the relatively high attenuation of the acoustic and radio waves with which robots would usually communicate. This paper illustrates how to optimise the inspection process and how to increase the frequency with which the robots pass each other, without compromising the routes they are able to take to cover the whole network.
Keywords: autonomous inspection, buried pipes, stigmergy, swarm intelligence, vehicle routing problem
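To illustrate the Chinese Postman style route construction named above, here is a minimal sketch using networkx on a toy graph (not a real pipe network); the k-CPP extension mentioned in the abstract would then split the resulting closed walk among k robots.

```python
# Single-robot CPP-style inspection route on a toy pipe network (illustrative).
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("B", "C"), ("C", "D"),
    ("D", "A"), ("B", "D"),   # junctions (nodes) and pipe segments (edges)
])

# eulerize() duplicates a cheapest set of edges so that a closed walk
# covering every edge exists; eulerian_circuit() then extracts the route.
H = nx.eulerize(G)
route = list(nx.eulerian_circuit(H, source="A"))
print(route)  # every pipe segment is traversed at least once
```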
Procedia PDF Downloads 166
5871 Automated Adaptions of Semantic User- and Service Profile Representations by Learning the User Context
Authors: Nicole Merkle, Stefan Zander
Abstract:
Ambient Assisted Living (AAL) describes a technological and methodological stack (e.g., formal model-theoretic semantics, rule-based reasoning, and machine learning) that addresses different aspects of the behavior, activities, and characteristics of humans. Hence, a semantic representation of the user environment and its relevant elements is required in order to allow assistive agents to recognize situations and deduce appropriate actions. Furthermore, the user and his/her characteristics (e.g., physical, cognitive, preferences) need to be represented with a high degree of expressiveness in order to allow software agents a precise evaluation of the users’ context models. The correct interpretation of these context models highly depends on temporal and spatial circumstances as well as individual user preferences. In most AAL approaches, model representations of real-world situations represent the current state of a universe of discourse at a given point in time, neglecting transitions between states. The AAL domain currently lacks approaches that contemplate the dynamic adaptation of context-related representations. Semantic representations of relevant real-world excerpts (e.g., user activities) help cognitive, rule-based agents to reason and make decisions in order to help users in appropriate tasks and situations. However, rules and reasoning on semantic models are not sufficient for handling uncertainty and fuzzy situations. A certain situation can require different (re-)actions in order to achieve the best result with respect to the user and his/her needs. But what is the best result? To answer this question, we need to consider that every smart agent is required to achieve an objective, but this objective is mostly defined by domain experts, who can also fail in their estimation of what is desired by the user and what is not. Hence, a smart agent has to be able to learn from context history data and estimate or predict what is most likely in certain contexts. Furthermore, different agents with contrary objectives can cause collisions, as their actions influence the user’s context and its constituting conditions in unintended or uncontrolled ways. We present an approach for dynamically updating a semantic model with respect to the current user context that allows flexibility of the software agents and enhances their conformance in order to improve the user experience. The presented approach adapts rules by learning from sensor evidence and user actions using probabilistic reasoning approaches, based on given expert knowledge. The semantic domain model consists basically of device, service, and user profile representations. In this paper, we present how this semantic domain model can be used to compute the probability of matching rules and actions. We apply this probability estimation to compare the current domain model representation with the computed one in order to adapt the formal semantic representation. Our approach aims at minimizing the likelihood of unintended interferences in order to eliminate conflicts and unpredictable side effects by updating pre-defined expert knowledge according to the most probable context representation. This enables agents to adapt to dynamic changes in the environment, which enhances the provision of adequate assistance and positively affects user satisfaction.
Keywords: ambient intelligence, machine learning, semantic web, software agents
Procedia PDF Downloads 281
5870 Employing a System of Systems Approach in the Maritime RobotX Challenge: Incorporating Information Technology Students in the Development of an Autonomous Catamaran
Authors: Adam Jenkins
Abstract:
The Maritime RobotX Challenge provides a platform for postgraduate students conducting research in autonomous robotic systems to participate in an international competition. Although targeted at postgraduate students, the problem domain lends itself to a wide range of levels of student expertise. In 2022, undergraduate Information Technology students from the University of South Australia undertook the challenge, utilizing a System of Systems approach to the project's architecture. Each student group produced an independent solution to an identified task, which was then implemented on a Single Board Computer (SBC). A central control system then engaged each solution when appropriate, allowing the encapsulated SBC systems to manage each task as it was encountered. This approach facilitated collaboration among the multiple independent student teams over an 18-month period, and the fundamentally system-agnostic architecture allowed for both the variance in student solutions and the limitations caused by the global electronics shortage. By adopting this approach, the Information Technology teams were able to work independently yet produce an effective solution, leveraging their expertise to develop and construct an autonomous catamaran capable of meeting the competition's demanding requirements while producing a high level of engagement. The System of Systems approach is recommended to other universities interested in competing at this level and engaging students in a real-world problem.
Keywords: case study, robotics, education, programming, system of systems, multi-disciplinary collaboration
Procedia PDF Downloads 76
5869 Online Think–Pair–Share in a Third-Age Information and Communication Technology Course
Authors: Daniele Traversaro
Abstract:
Problem: Senior citizens have been facing a challenging reality as a result of strict public health measures designed to protect people from the COVID-19 outbreak. These include the risk of social isolation due to the inability of the elderly to integrate with technology. Never before have information and communication technology (ICT) skills been so essential for their everyday life. Although third-age ICT education and lifelong learning are widely supported by universities and governments, there is a lack of literature on which teaching strategy/methodology to adopt in an entirely online ICT course aimed at third-age learners. This contribution presents an application of the Think-Pair-Share (TPS) learning method in a third-age ICT virtual classroom, with an intergenerational approach to conducting online group labs and review activities. This collaborative strategy can help increase student engagement and promote active learning and online social interaction. Research question: Is collaborative learning applicable and effective, in terms of student engagement and learning outcomes, for an entirely online, third-age, introductory ICT course? Methods: In the TPS strategy, a problem is posed by the teacher, students have time to think about it individually, and then they work in pairs (or small groups) to solve the problem and share their ideas with the entire class. We performed four experiments in the ICT course of the University of the Third Age of Genova (University of Genova, Italy) on the Microsoft Teams platform. The study cohort consisted of 26 students over the age of 45. Data were collected through two online questionnaires, one at the end of the first activity and another at the end of the course, consisting of five and three closed-ended questions, respectively. The answers were on a Likert scale (from 1 to 4), except for two questions that asked for the number of correct answers given individually and in groups, and a field for free comments/suggestions. Results: The results show that groups perform better than individual students (with markedly higher scores) and that most students found it helpful to work in groups and interact with their peers. Insights: From these early results, it appears that TPS is applicable to an online third-age ICT classroom and useful for promoting discussion and active learning. Despite this, our experimentation has a number of limitations. First of all, the results highlight the need for more data to perform a statistical analysis that could determine the effectiveness of this methodology in terms of student engagement and learning outcomes, which is a future direction.
Keywords: collaborative learning, information technology education, lifelong learning, older adult education, think-pair-share
Procedia PDF Downloads 188
5868 Effect of Joule Heating on Chemically Reacting Micropolar Fluid Flow over Truncated Cone with Convective Boundary Condition Using Spectral Quasilinearization Method
Authors: Pradeepa Teegala, Ramreddy Chetteti
Abstract:
This work emphasizes the effects of heat generation/absorption and Joule heating on chemically reacting micropolar fluid flow over a truncated cone with a convective boundary condition. For this complex fluid flow problem, a similarity solution does not exist; hence, using non-similarity transformations, the governing fluid flow equations and related boundary conditions are transformed into a set of non-dimensional partial differential equations. Several authors have applied the spectral quasi-linearization method to solve ordinary differential equations, but here the resulting nonlinear partial differential equations are solved for a non-similarity solution using this recently developed method (SQLM). Comparison with previously published work on special cases of the problem is performed and found to be in excellent agreement. The influence of pertinent parameters, namely the Biot number, Joule heating, heat generation/absorption, chemical reaction, and micropolar and magnetic field parameters, on the physical quantities of the flow is displayed through graphs, and the salient features are explored in detail. Further, the results are analyzed by comparison with two special cases, namely the vertical plate and the full cone, wherever possible.
Keywords: chemical reaction, convective boundary condition, joule heating, micropolar fluid, spectral quasilinearization method
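For readers unfamiliar with quasilinearization, the following fragment sketches the Newton-type linearization underlying SQLM on an illustrative Blasius-type equation, not the paper's coupled momentum/energy/concentration system, which is handled the same way equation by equation.

```latex
% Quasilinearization sketch for the illustrative equation f''' + f f'' = 0.
% Linearize the nonlinear operator F about the previous iterate f_r:
\[
  F(f_{r+1}) \approx F(f_r)
  + \sum_{k} \left.\frac{\partial F}{\partial f^{(k)}}\right|_{f_r}
    \left( f^{(k)}_{r+1} - f^{(k)}_r \right) = 0 ,
\]
% which for f''' + f f'' = 0 gives the linear problem for the next iterate:
\[
  f'''_{r+1} + f_r\, f''_{r+1} + f''_r\, f_{r+1} = f_r\, f''_r .
\]
% The linear equation is then discretized with Chebyshev spectral
% collocation and iterated to convergence.
```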
Procedia PDF Downloads 346
5867 Genetic Algorithm and Multi Criteria Decision Making Approach for Compressive Sensing Based Direction of Arrival Estimation
Authors: Ekin Nurbaş
Abstract:
One of the essential challenges in array signal processing, which has drawn enormous research interest over the past several decades, is estimating the direction of arrival (DoA) of plane waves impinging on an array of sensors. In recent years, Compressive Sensing (CS) based DoA estimation methods have been proposed, and it has been found that CS-based algorithms achieve significant performance for DoA estimation even in scenarios with multiple coherent sources. On the other hand, the Genetic Algorithm, a method that provides a solution strategy inspired by natural selection, has been used in sparse representation problems in recent years and provides significant improvements in performance. With all of this in consideration, this paper proposes a method that combines the Genetic Algorithm (GA) and Multi-Criteria Decision Making (MCDM) approaches for DoA estimation in the CS framework. In this method, we generate a multi-objective optimization problem by splitting the norm-minimization and reconstruction-loss-minimization parts of the Compressive Sensing algorithm. With the help of the Genetic Algorithm, multiple non-dominated solutions are obtained for the defined multi-objective optimization problem. Among the Pareto-frontier solutions, the final solution is selected with multiple MCDM methods. Moreover, the performance of the proposed method is compared with the CS-based methods in the literature.
Keywords: genetic algorithm, direction of arrival estimation, multi criteria decision making, compressive sensing
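The two-objective split described above can be sketched as follows; this toy example (random measurement matrix, mutation-only evolution, and a simple weighted-sum rule standing in for the paper's MCDM step) is an illustration of the idea, not the proposed algorithm.

```python
# Two-objective sparse recovery sketch: minimize ||x||_1 and ||y - Ax||_2.
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 60
A = rng.standard_normal((m, n))            # stand-in measurement/steering matrix
x_true = np.zeros(n); x_true[[5, 23]] = 1.0
y = A @ x_true                             # noiseless observations

w = np.array([0.3, 0.7])                   # assumed MCDM weights

def objectives(x):
    # objective 1: sparsity (l1 norm); objective 2: reconstruction loss
    return np.array([np.abs(x).sum(), np.linalg.norm(y - A @ x)])

pop = rng.standard_normal((100, n)) * 0.1
for _ in range(300):                       # crude evolutionary loop
    children = pop + rng.standard_normal(pop.shape) * 0.05  # mutation only
    both = np.vstack([pop, children])
    scores = np.array([objectives(x) for x in both])
    pop = both[np.argsort(scores @ w)[:100]]  # scalarized survivor selection

scores = np.array([objectives(x) for x in pop])
dominated = np.array([(scores < s).all(axis=1).any() for s in scores])
pareto = pop[~dominated]                   # non-dominated (Pareto-frontier) set
best = pareto[np.argmin([objectives(x) @ w for x in pareto])]
print(np.argsort(-np.abs(best))[:2])       # indices of the two strongest components
```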
Procedia PDF Downloads 147
5866 Comparison of Elastic and Viscoelastic Modeling for Asphalt Concrete Surface Layer
Authors: Fouzieh Rouzmehr, Mehdi Mousavi
Abstract:
Hot mix asphalt concrete (HMAC) is a mixture of aggregates and bitumen. The primary ingredient that determines the mechanical properties of HMAC is the bitumen, which displays viscoelastic behavior under normal service conditions. For simplicity, asphalt concrete is often considered an elastic material, but this is far from reality at high service temperatures and longer loading times. Viscoelasticity means that the material's stress-strain relationship depends on the strain rate and loading duration. The goal of this paper is to simulate the mechanical response of flexible pavements using linear elastic and viscoelastic modeling of asphalt concrete and to predict pavement performance. A Falling Weight Deflectometer (FWD) load is simulated, and the results of the elastic and viscoelastic modeling are evaluated. The viscoelastic simulation uses a Prony series, modeled in the ANSYS software. In flexible pavement design, the tensile strain at the bottom of the surface layer and the compressive strain at the top of the last layer play an important role in the structural response of the pavement, governing the number of load repetitions to fatigue failure (Nf) and rutting failure (Nd), respectively. The differences between the two models are investigated for the fatigue cracking and rutting problems, the two main design parameters in flexible pavement design. Although the differences between the two models were negligible for rutting, for fatigue cracking the viscoelastic model results were more accurate. The results indicate that modeling the flexible pavement with an elastic material is efficient enough and gives acceptable results.
Keywords: flexible pavement, asphalt, FEM, viscoelastic, elastic, ANSYS, modeling
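For reference, a standard form of the Prony series named above is sketched below; the parameters are material constants fitted to relaxation test data, not values from this paper.

```latex
% Prony series representation of the viscoelastic relaxation modulus:
% E_infty is the long-term modulus, and each (E_i, tau_i) pair is a
% fitted relaxation strength and relaxation time.
\[
  E(t) \;=\; E_\infty \;+\; \sum_{i=1}^{N} E_i \, e^{-t/\tau_i}
\]
```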
Procedia PDF Downloads 131
5865 Structure of Consciousness According to Deep Systemic Constellations
Authors: Dmitry Ustinov, Olga Lobareva
Abstract:
The method of Deep Systemic Constellations is based on a phenomenological approach. Using the phenomenon of substitutive perception, it was established that human consciousness has a hierarchical structure, where deeper levels govern more superficial ones (the reactive level, the energy or ancestral level, the spiritual level, the magical level, and deeper levels of consciousness). Every human possesses a depth of consciousness down to the spiritual level; however, deeper levels of consciousness are not found in every person. It was found that the spiritual level of consciousness is not homogeneous and has its own internal hierarchy of sublevels (the level of formation of spiritual values, the level of the 'inner observer', the level of the 'path', the level of 'God', etc.). The depth of a person's spiritual level defines the paradigm of all his internal processes and the main motives of his movement through life. At any level of consciousness, disturbances can occur. Disturbances at a deeper level cause disturbances at more superficial levels and are manifested in the daily life of a person in feelings, behavioral patterns, psychosomatics, etc. Without removing the deepest source of a disturbance, it is impossible to completely correct its manifestation in the present moment. Thus, a destructive pattern of feeling and behavior in the present moment can exist because of a disturbance, for example, at the spiritual level of a person (although in most cases the source is at the energy level). Psychological work with superficial levels that does not remove the source of a disturbance cannot fully solve the problem. The method of Deep Systemic Constellations allows one to work effectively with the source of a problem located at any depth. The methodology has confirmed its effectiveness in working with more than a thousand people.
Keywords: constellations, spiritual psychology, structure of consciousness, transpersonal psychology
Procedia PDF Downloads 249
5864 Vascularized Adipose Tissue Engineering by Using Adipose ECM/Fibroin Hydrogel
Authors: Alisan Kayabolen, Dilek Keskin, Ferit Avcu, Andac Aykan, Fatih Zor, Aysen Tezcaner
Abstract:
Adipose tissue engineering is a promising field for the regeneration of soft tissue defects. However, only very thin implants can be used in vivo, since vascularization remains a problem for thick implants. Another problem is finding a biocompatible scaffold with good mechanical properties. The aim of this study is to develop a thick, vascularized adipose tissue construct that will integrate with the host, and to perform its in vitro and in vivo characterization. For this purpose, a hydrogel of decellularized adipose tissue (DAT) and fibroin was produced, and both endothelial cells and adipocytes differentiated from adipose-derived stem cells were encapsulated in this hydrogel. Mixing DAT with fibroin allowed rapid gel formation by vortexing. It also made it possible to adjust the mechanical strength by changing the fibroin-to-DAT ratio. Based on compression tests, the DAT/fibroin ratio giving mechanical properties similar to those of adipose tissue was selected for the cell culture experiments. In vitro characterization showed that DAT is not cytotoxic; on the contrary, it contains many natural ECM components that provide biocompatibility and bioactivity. Subcutaneous implantation of the hydrogels resulted in no immunogenic reaction or infection. Moreover, localized empty hydrogels gelled successfully around the host vessel with the required shape. Implantation of cell-encapsulated hydrogels and histological analyses are under study. It is expected that the endothelial cells inside the hydrogel will form a capillary network and bind to the host vessel passing through the hydrogel. Keywords: adipose tissue engineering, decellularization, encapsulation, hydrogel, vascularization
Procedia PDF Downloads 528
5863 Stock Prediction and Portfolio Optimization Thesis
Authors: Deniz Peksen
Abstract:
This thesis aims to predict the trend of the closing price of stocks and to maximize portfolio returns by utilizing these predictions. In this context, the study defines a stock portfolio strategy from models created using Logistic Regression, Gradient Boosting, and Random Forest. Recently, predicting the trend of stock prices has gained a significant role in making buy and sell decisions and in generating returns with investment strategies built on machine learning-based decisions. There are plenty of studies in the literature on the prediction of stock prices in capital markets using machine learning methods, but most of them focus on closing prices instead of the direction of the price trend. Our study differs from the literature in terms of target definition: ours is a classification problem focusing on the market trend over the next 20 trading days. To predict the trend direction, fourteen years of data were used for training, the following three years for validation, and the last three years for testing. Training data span 2002-06-18 to 2016-12-30, validation data span 2017-01-02 to 2019-12-31, and testing data span 2020-01-02 to 2022-03-17. We define the Hold Stock Portfolio, the Best Stock Portfolio, and the USD-TRY exchange rate as benchmarks to outperform, and we compared the return of our machine learning-based portfolio on the test data with the returns of these benchmarks. We assessed model performance with ROC-AUC scores and lift charts, and we used Logistic Regression, Gradient Boosting, and Random Forest with a grid search approach to fine-tune hyperparameters. As a result of the empirical study, the uptrends and downtrends of the five stocks could not be predicted by the models. When these predictions were used to define buy and sell decisions for a model-based portfolio, the portfolio failed on the test dataset: the model-based buy and sell decisions generated a stock portfolio strategy whose returns could not outperform the non-model portfolio strategies. We found that any effort to predict a trend formulated on the stock price is a challenge, and our results agree with the Random Walk Theory, which holds that stock prices and price changes are unpredictable. Our model iterations failed on the test dataset: although we built several good models on the validation dataset, they did not carry over to the test dataset. We implemented Random Forest, Gradient Boosting, and Logistic Regression, and we discovered that the complex models provided no advantage or additional performance over Logistic Regression; more complexity did not lead to better performance, so using a complex model is not the answer to the stock prediction problem. Our approach was to predict the trend instead of the price, which converted the problem into classification. However, this labeling approach does not solve the stock prediction problem either, and it neither denies nor refutes the Random Walk Theory for stock prices. Keywords: stock prediction, portfolio optimization, data science, machine learning
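To make the setup concrete, here is a minimal sketch of the 20-trading-day trend-classification pipeline described above, using the thesis's date splits. The engineered features, the synthetic price series, and the hyperparameter grid are illustrative assumptions, not the thesis's actual choices:

```python
# Minimal sketch of 20-day trend classification with date-based splits.
# Features, synthetic prices, and the grid are demonstration assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

def make_trend_dataset(close: pd.Series, horizon: int = 20) -> pd.DataFrame:
    """Label each day 1 if the close is higher `horizon` trading days later."""
    df = pd.DataFrame({"close": close})
    df["ret_1d"] = df["close"].pct_change()
    df["ret_5d"] = df["close"].pct_change(5)
    df["ma_ratio"] = df["close"] / df["close"].rolling(20).mean()
    df["target"] = (df["close"].shift(-horizon) > df["close"]).astype(int)
    return df.dropna()

# Synthetic random-walk prices standing in for one stock's closing prices.
rng = np.random.default_rng(0)
dates = pd.bdate_range("2002-06-18", "2022-03-17")
close = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, len(dates)))),
                  index=dates)

data = make_trend_dataset(close)
features = ["ret_1d", "ret_5d", "ma_ratio"]
train = data.loc["2002-06-18":"2016-12-30"]  # 14 years of training data
valid = data.loc["2017-01-02":"2019-12-31"]  # 3 years of validation data
test = data.loc["2020-01-02":"2022-03-17"]   # 3 years of test data

# Grid search over regularization strength, scored by ROC-AUC, with a
# time-ordered split so no fold trains on future data.
grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    {"C": [0.01, 0.1, 1.0, 10.0]},
                    scoring="roc_auc", cv=TimeSeriesSplit(n_splits=5))
grid.fit(train[features], train["target"])

for name, split in [("validation", valid), ("test", test)]:
    auc = roc_auc_score(split["target"],
                        grid.predict_proba(split[features])[:, 1])
    print(f"{name} ROC-AUC: {auc:.3f}")
```

On a true random walk such as this synthetic series, the test ROC-AUC hovers around 0.5, which mirrors the thesis's finding that the models fail out of sample.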
Procedia PDF Downloads 80
5862 Mix Proportioning and Strength Prediction of High Performance Concrete Including Waste Using Artificial Neural Network
Authors: D. G. Badagha, C. D. Modhera, S. A. Vasanwala
Abstract:
A great challenge for the civil engineering field is to contribute to environmental protection by finding alternatives to cement and natural aggregates. Cement utilization in concrete contributes to global warming, so sustainable solutions for producing concrete containing waste are needed. It is very difficult to produce a designated grade of concrete containing different ingredients and water-cement ratios, including waste, while achieving the desired fresh and hardened properties as per requirements and specifications. To achieve the desired grade, a number of trial mixes must be prepared, and only after evaluating the different parameters of long-term performance can the concrete be finalized for different purposes. This research work addresses the resulting problems of time, cost, and serviceability in the field of construction. In this work, an artificial neural network is introduced to fix the proportions of concrete ingredients with 50% waste replacement for the M20, M25, M30, M35, M40, M45, M50, M55, and M60 grades of concrete. Using the neural network, the mix design of high performance concrete was finalized, and the main basic mechanical properties were predicted at 3 days, 7 days, and 28 days. The predicted strength was compared with the actual experimental mix design and concrete cube strength after 3 days, 7 days, and 28 days. This experimentally validated, neural network-based mix design can be used in the field to provide cost-effective, time-saving, feasible, and sustainable high performance concrete for different types of structures. Keywords: artificial neural network, high performance concrete, rebound hammer, strength prediction
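As an illustration of this kind of model, the sketch below trains a small neural network to map mix proportions to cube strength. The input features, their ranges, and the synthetic training data are assumptions made for demonstration; the paper's actual inputs, architecture, and experimental measurements are not reproduced here:

```python
# Illustrative sketch: a small ANN mapping mix proportions to cube strength.
# Features, ranges, and synthetic data are demonstration assumptions only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 300
# Hypothetical mix variables: cement content, waste replacement fraction,
# water-cement ratio, and curing age in days.
X = np.column_stack([
    rng.uniform(300, 500, n),    # cement, kg/m^3
    rng.uniform(0.0, 0.5, n),    # waste replacement fraction
    rng.uniform(0.30, 0.50, n),  # water-cement ratio
    rng.choice([3, 7, 28], n),   # age, days
])
# Synthetic strength: grows with cement and age, drops with w/c and waste.
y = (0.08 * X[:, 0] - 40.0 * X[:, 2] + 8.0 * np.log(X[:, 3])
     - 10.0 * X[:, 1] + rng.normal(0.0, 2.0, n))

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
)
model.fit(X, y)

# Predict the 28-day strength of a trial mix with 50% waste replacement.
trial = np.array([[400.0, 0.5, 0.40, 28.0]])
print(f"Predicted cube strength: {model.predict(trial)[0]:.1f} MPa")
```

In practice the network would be trained on the measured trial-mix results rather than synthetic data, replacing repeated laboratory trials with a single prediction step.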
Procedia PDF Downloads 155
5861 Near-Infrared Hyperspectral Imaging Spectroscopy to Detect Microplastics and Pieces of Plastic in Almond Flour
Authors: H. Apaza, L. Chévez, H. Loro
Abstract:
Plastic and microplastic pollution in the human food chain is a major problem for human health that requires elaborate techniques capable of identifying their presence in different kinds of food. Hyperspectral imaging is an optical technique that can detect the presence of different elements in an image and can be used to detect plastics and microplastics in a scene. Doing so requires statistical techniques that need to be evaluated and compared in order to find the most efficient ones. In this work, two problems related to the presence of plastics are addressed: the first is to detect and identify pieces of plastic immersed in almond seeds, and the second is to detect and quantify microplastic in almond flour. To do this, we analyze hyperspectral images taken in the range of 900 to 1700 nm using four hyperspectral unmixing techniques: least squares unmixing (LSU), non-negativity constrained least squares unmixing (NCLSU), fully constrained least squares unmixing (FCLSU), and scaled constrained least squares unmixing (SCLSU). The NCLSU, FCLSU, and SCLSU techniques manage to find the regions where the plastic is located and also to quantify the amount of microplastic contained in the almond flour. The SCLSU technique estimated an abundance of 13.03% microplastics and 86.97% almond flour, compared to the 16.66% microplastics and 83.33% almond flour prepared for the experiment. The results show the feasibility of applying near-infrared hyperspectral image analysis to the detection of plastic contaminants in food. Keywords: food, plastic, microplastic, NIR hyperspectral imaging, unmixing
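For concreteness, here is a minimal sketch of fully constrained least squares unmixing (FCLSU) for a single pixel, which estimates abundances that are non-negative and sum to one. The endmember spectra and the mixed pixel are synthetic stand-ins, not measured plastic or almond-flour signatures:

```python
# Minimal FCLSU sketch for one pixel: abundances >= 0 and summing to 1.
# Endmember spectra and the mixed pixel are synthetic stand-ins.
import numpy as np
from scipy.optimize import minimize

def fclsu(pixel: np.ndarray, endmembers: np.ndarray) -> np.ndarray:
    """Solve min ||E a - pixel||^2 subject to a >= 0 and sum(a) = 1,
    where E is an (n_bands, n_endmembers) matrix of reference spectra."""
    n = endmembers.shape[1]
    result = minimize(
        lambda a: np.sum((endmembers @ a - pixel) ** 2),
        x0=np.full(n, 1.0 / n),
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],
    )
    return result.x

# Two synthetic endmember spectra over 50 bands (standing in for 900-1700 nm).
bands = np.linspace(0.0, 1.0, 50)
plastic = np.exp(-((bands - 0.3) ** 2) / 0.02)
flour = np.exp(-((bands - 0.7) ** 2) / 0.05)
E = np.column_stack([plastic, flour])

# A mixed pixel containing 17% plastic and 83% flour, plus sensor noise.
rng = np.random.default_rng(1)
pixel = 0.17 * plastic + 0.83 * flour + rng.normal(0.0, 0.005, bands.size)

a = fclsu(pixel, E)
print(f"Estimated abundances: plastic = {a[0]:.2%}, flour = {a[1]:.2%}")
```

Dropping the sum-to-one constraint yields NCLSU, and dropping both constraints yields plain LSU, so the same solver skeleton covers the unconstrained variants as well.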
Procedia PDF Downloads 130
5860 Analyzing the Heat Transfer Mechanism in a Tube Bundle Air-PCM Heat Exchanger: An Empirical Study
Authors: Maria De Los Angeles Ortega, Denis Bruneau, Patrick Sebastian, Jean-Pierre Nadeau, Alain Sommier, Saed Raji
Abstract:
Phase change materials (PCMs) present attractive features that make them a passive solution for thermal comfort in buildings during summertime. They show a large storage capacity per unit volume in comparison with other structural materials like bricks or concrete. If their use is matched with the peak load periods, they can contribute to reducing the primary energy consumption related to cooling applications. Despite these promising characteristics, they present some drawbacks. Commercial PCMs, such as paraffins, have a low thermal conductivity, which affects the overall performance of the system. In some cases, the material can be enhanced by adding other elements that improve the conductivity, but in general, a design of the unit that optimizes the thermal performance is sought. Material selection is the starting point of the design stage, and it does not leave much room for optimization: the PCM melting point depends highly on the climatic characteristics of the building location and must lie between the maximum and minimum temperatures reached during the day. The geometry of the PCM containers and their geometrical distribution are design parameters as well. They significantly affect the heat transfer, and the associated phenomena must therefore be studied exhaustively. During its lifetime, an air-PCM unit in a building must cool the space during the daytime, while the PCM melts; at night, the PCM must be regenerated to be ready for the next use; and when the system is not in service, a minimal amount of thermal exchange is desired. These functions involve both sensible and latent heat storage and release, so different types of mechanisms drive the heat transfer phenomena. An experimental test bench was designed to study the heat transfer phenomena occurring in a circular tube bundle air-PCM exchanger, with an in-line arrangement selected as the geometrical distribution of the containers. To allow visual observation, the container material and a section of the test bench were transparent, and instruments were placed on the bench to measure temperature and velocity. The PCM properties were also obtained through differential scanning calorimetry (DSC) tests. The evolution of the temperature during both cycles, melting and solidification, was recorded. The results revealed phenomena both at the local level (individual tubes) and at the overall level (the exchanger), with conduction and convection appearing as the main heat transfer mechanisms. From these results, two approaches to analyzing the heat transfer were followed. The first describes the phenomena in a single tube as a series of thermal resistances, where purely conduction-controlled heat transfer is assumed in the PCM. In the second, the temperature measurements were used to compute significant dimensionless numbers and parameters such as the Stefan, Fourier, and Rayleigh numbers and the melting fraction. These approaches allowed us to identify the heat transfer phenomena during both cycles; in particular, the presence of natural convection during melting could be inferred from the influence of the Rayleigh number on the correlations obtained. Keywords: phase change materials, air-PCM exchangers, convection, conduction
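For reference, the dimensionless numbers named above have standard definitions; the symbols below are generic, and the paper's exact characteristic length and temperature scales may differ:

```latex
\mathrm{Ste} = \frac{c_p \, \Delta T}{L_f}, \qquad
\mathrm{Fo} = \frac{\alpha \, t}{L_c^{2}}, \qquad
\mathrm{Ra} = \frac{g \, \beta \, \Delta T \, L_c^{3}}{\nu \, \alpha}
```

where c_p is the specific heat, Delta T the driving temperature difference, L_f the latent heat of fusion, alpha the thermal diffusivity, t time, L_c a characteristic length, g the gravitational acceleration, beta the thermal expansion coefficient, and nu the kinematic viscosity. A growing influence of the Rayleigh number on the melting-side correlations is the usual signature of natural convection in the liquid PCM.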
Procedia PDF Downloads 178