Search results for: proposed module
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9458

1118 Artificial Neural Network Based Model for Detecting Attacks in Smart Grid Cloud

Authors: Sandeep Mehmi, Harsh Verma, A. L. Sangal

Abstract:

Ever since the idea of using computing services as a commodity that can be delivered like other utilities, e.g. electricity and telephone, was floated, the scientific community has directed its research towards a new area called utility computing. New paradigms like cluster computing and grid computing came into existence while edging closer to utility computing. With the advent of the internet, the demand for anytime, anywhere access to resources that could be provisioned dynamically as a service gave rise to the next-generation computing paradigm known as cloud computing. Today, cloud computing has become one of the most aggressively growing computing paradigms, resulting in a growing rate of adoption in the area of IT outsourcing. Besides catering to computational and storage demands, cloud computing has economically benefitted almost all fields: education, research, entertainment, medicine, banking, military operations, weather forecasting, business and finance, to name a few. The smart grid is another discipline that stands to benefit greatly from the advantages of cloud computing. The smart grid is a new technology that has revolutionized the power sector by automating the transmission and distribution system and integrating smart devices. A cloud-based smart grid can fulfill the storage requirements of the unstructured and uncorrelated data generated by smart sensors, as well as the computational needs of self-healing, load balancing and demand response features. However, security issues such as confidentiality, integrity, availability, accountability and privacy need to be resolved for the development of the smart grid cloud. In recent years, a number of intrusion prevention techniques have been proposed for the cloud, but hackers and intruders still manage to bypass its security. Therefore, precise intrusion detection systems need to be developed in order to secure critical information infrastructure like the smart grid cloud. Considering the success of artificial neural networks in building robust intrusion detection systems, this research proposes an artificial neural network based model for detecting attacks in the smart grid cloud.
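
To illustrate the kind of classifier such a model builds on, the following minimal sketch trains a feed-forward neural network on labeled network-traffic feature vectors; the synthetic data, feature dimensions and layer sizes are illustrative assumptions and not the authors' actual architecture or dataset.

```python
# Hypothetical sketch: feed-forward ANN for flagging attack traffic in a
# smart grid cloud. Feature columns and data are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

# X: one row per network flow (duration, bytes, packet counts, flags, ...)
# y: 1 = attack, 0 = normal
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))          # stand-in for real traffic features
y = (X[:, 0] + X[:, 3] > 1).astype(int)  # stand-in for attack labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)

# Two hidden layers; sizes are illustrative only.
ann = MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                    max_iter=500, random_state=0)
ann.fit(scaler.transform(X_train), y_train)
print(classification_report(y_test, ann.predict(scaler.transform(X_test))))
```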

Keywords: artificial neural networks, cloud computing, intrusion detection systems, security issues, smart grid

Procedia PDF Downloads 317
1117 Potential Opportunity and Challenge of Developing Organic Rankine Cycle Geothermal Power Plant in China Based on an Energy-Economic Model

Authors: Jiachen Wang, Dongxu Ji

Abstract:

Geothermal power generation is a mature technology with zero carbon emissions and stable power output, which could play a vital role as an optimal base-load substitute in China's future decarbonized society. However, the development of geothermal power plants in China has stagnated for a decade due to the underestimation of geothermal energy and insufficient supporting policy. A lack of understanding of the potential value of base-load technology and of the environmental benefits is the critical reason for the disappointing policy support. This paper proposes an energy-economic model to uncover the potential benefits of developing a geothermal power plant in Puer, including the value of base-load power generation and the environmental and economic benefits. Optimization of the Organic Rankine Cycle (ORC) for maximum power output and minimum levelized cost of electricity was first conducted. This process aimed at finding the optimum working fluid, turbine inlet pressure, pinch point temperature difference and superheat degree. The optimal ORC model was then fed into the energy-economic model to simulate the potential economic and environmental benefits. The impact of the geothermal power plant was tested under three scenarios: a carbon trading market, a direct subsidy per unit of electricity generated, and no support. In addition, the geothermal reservoir requirements, including temperature and mass flow rate, for the technology to be competitive with other renewables were listed. The results indicate that the ORC power plant has significant economic and environmental benefits over other renewable power generation technologies under the carbon trading market and subsidy scenarios. At the same time, developers must locate geothermal reservoirs with a minimum temperature of 130 °C and a minimum mass flow rate of 50 kg/s to guarantee a profitable project under the no-support scenario.
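
As a rough illustration of the ORC screening step described above, the sketch below sweeps a few candidate working fluids and evaporation pressures and estimates the net specific work of a simple subcritical cycle; it assumes the CoolProp library for fluid properties, and the fluids, temperatures, efficiencies and pressure grid are illustrative placeholders rather than the paper's actual optimization.

```python
# Hypothetical sketch of the ORC screening step: sweep working fluid and
# evaporation pressure, estimate net specific work of a simple subcritical
# cycle. Fluids, temperatures and efficiencies are illustrative assumptions.
from CoolProp.CoolProp import PropsSI

T_cond = 313.15          # K, condensing temperature (assumed)
eta_turb, eta_pump = 0.80, 0.75

def net_specific_work(fluid, p_evap):
    p_cond = PropsSI("P", "T", T_cond, "Q", 0, fluid)
    # State 1: saturated vapour leaving the evaporator
    h1 = PropsSI("H", "P", p_evap, "Q", 1, fluid)
    s1 = PropsSI("S", "P", p_evap, "Q", 1, fluid)
    # State 2: turbine outlet (isentropic enthalpy corrected by efficiency)
    h2s = PropsSI("H", "P", p_cond, "S", s1, fluid)
    w_turb = eta_turb * (h1 - h2s)
    # State 3: saturated liquid leaving the condenser; pump work from there
    rho3 = PropsSI("D", "P", p_cond, "Q", 0, fluid)
    w_pump = (p_evap - p_cond) / (rho3 * eta_pump)
    return (w_turb - w_pump) / 1000.0   # kJ/kg

best = max(((f, p, net_specific_work(f, p))
            for f in ["R245fa", "Isobutane", "Isopentane"]
            for p in [6e5, 8e5, 10e5, 12e5]),
           key=lambda t: t[2])
print(f"best: {best[0]} at {best[1]/1e5:.0f} bar -> {best[2]:.1f} kJ/kg")
```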

Keywords: geothermal power generation, optimization, energy model, thermodynamics

Procedia PDF Downloads 65
1116 Phenolic Rich Dry Extracts and Their Antioxidant Activity

Authors: R. Raudonis, L. Raudonė, V. Janulis, P. Viškelis

Abstract:

Pharmacological and clinical studies have demonstrated that phenolic compounds, particularly flavonoids and phenolic acids, are responsible for a wide spectrum of therapeutic activities. Flavonoids and phenolic acids are regarded as natural antioxidants that play an important role in protecting cells from oxidative stress. Properly prepared dry extracts possess high stability and concentration of bioactive compounds, ease of standardization and quality control. The aim of this work was to determine the phenolic and antioxidant profiles of Hippophaë rhamnoides L., Betula pendula Roth., Tilia cordata Mill. and Sorbus aucuparia L. leaf dry extracts and to identify markers of antioxidant activity. Extracts were analyzed using high-performance liquid chromatography (HPLC) with a FRAP post-column assay. Dry extracts are versatile forms with a wide area of application, and the final product ensures consistent phytochemical and functional properties. Seven flavonoids (rutin, hyperoside, isorhamnetin 3-O-rutinoside, isorhamnetin 3-O-glucoside, quercetin, kaempferol and isorhamnetin) were identified in the dry extract of Hippophaë rhamnoides L. leaves. The predominant compounds were flavonol glycosides, which were chosen as markers for quantitative control of the dry extracts. Chlorogenic acid, hyperoside, rutin, quercetin and isorhamnetin were the prevailing compounds in the Betula pendula Roth. leaf extract, whereas the strongest ferric reducing activity was determined for chlorogenic acid and hyperoside. Notable amounts of protocatechuic acid and the flavonol glycosides rutin, hyperoside, quercitrin and isoquercitrin were identified in the chromatographic profile of Tilia cordata Mill. Neochlorogenic and chlorogenic acids were significantly dominant compounds in the antioxidant profile of the dry extract of Sorbus aucuparia L. leaves. The predominant compounds of the antioxidant profiles could be proposed as functional markers of the quality of phenolic-rich raw materials. The dry extracts could be further used for the manufacturing of pharmaceuticals and nutraceuticals.

Keywords: dry extract, FRAP, antioxidant activity, phenolic

Procedia PDF Downloads 506
1115 Prevalence and Clinical Significance of Antiphospholipid Antibodies in COVID-19 Patients Admitted to Intensive Care Units

Authors: Mostafa Najim, Alaa Rahhal, Fadi Khir, Safae Abu Yousef, Amer Aljundi, Feryal Ibrahim, Aliaa Amer, Ahmed Soliman Mohamed, Samira Saleh, Dekra Alfaridi, Ahmed Mahfouz, Sumaya Al-Yafei, Faraj Howady, Mohamad Yahya Khatib, Samar Alemadi

Abstract:

Background: Coronavirus disease 2019 (COVID-19) increases the risk of coagulopathy among critically ill patients. Although the presence of antiphospholipid antibodies (aPLs) has been proposed as a possible mechanism of COVID-19-induced coagulopathy, their clinical significance among critically ill patients with COVID-19 remains uncertain. Methods: This prospective observational study included patients with COVID-19 admitted to intensive care units (ICU) to evaluate the prevalence and clinical significance of aPLs, including anticardiolipin IgG/IgM, anti-β2-glycoprotein IgG/IgM, and lupus anticoagulant. The study outcomes included the prevalence of aPLs and a primary composite outcome of all-cause mortality and arterial or venous thrombosis among aPL-positive versus aPL-negative patients during their ICU stay. Multiple logistic regression was used to assess the influence of aPLs on the primary composite outcome of mortality and thrombosis. Results: A total of 60 critically ill patients were enrolled, of whom 57 (95%) were male, with a mean age of 52.8 ± 12.2 years, and the majority were from Asia (68%). Twenty-two patients (37%) were found to have positive aPLs, of whom 21 were positive for lupus anticoagulant, whereas one patient was positive for anti-β2-glycoprotein IgG/IgM. The composite outcome of mortality and thrombosis during the ICU stay did not differ between patients with positive aPLs and those with negative aPLs (4 (18%) vs. 6 (16%), aOR = 0.98, 95% CI 0.1-6.7; p-value = 0.986). Likewise, the secondary outcomes, including all-cause mortality, venous thrombosis, arterial thrombosis, discharge from ICU, time to mortality, and time to discharge from ICU, did not differ between those with positive aPLs upon ICU admission and patients with negative aPLs. Conclusion: The presence of aPLs does not seem to affect the outcomes of critically ill patients with COVID-19 in terms of all-cause mortality and thrombosis. Therefore, clinicians may not need to screen critically ill patients with COVID-19 for aPLs unless deemed clinically appropriate.
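
For readers unfamiliar with the statistical step, the following hedged sketch shows how an adjusted odds ratio for a composite outcome could be estimated with multiple logistic regression; the simulated data frame and the choice of adjustment covariate are placeholders, not the study's dataset or model specification.

```python
# Hypothetical sketch: adjusted odds ratio of the composite outcome
# (death or thrombosis) for aPL-positive vs aPL-negative patients.
# The data frame and adjustment covariates are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "apl_positive": rng.integers(0, 2, 60),
    "age": rng.normal(53, 12, 60),
    "composite": rng.integers(0, 2, 60),   # death or thrombosis during ICU stay
})

X = sm.add_constant(df[["apl_positive", "age"]])
fit = sm.Logit(df["composite"], X).fit(disp=False)

odds_ratios = np.exp(fit.params)          # adjusted odds ratios
ci = np.exp(fit.conf_int())               # 95% confidence intervals
print(odds_ratios["apl_positive"], ci.loc["apl_positive"].values)
```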

Keywords: antiphospholipid antibodies, critically ill patients, coagulopathy, coronavirus

Procedia PDF Downloads 164
1114 A Study on Neighborhood of Dwelling with Historical-Islamic Architectural Elements

Authors: M.J. Seddighi, Moradchelleh, M. Keyvan

Abstract:

The ultimate goal in building a city is to provide a pleasant, comfortable and nurturing environment as a context for public life. The city environment establishes a strong connection between people and their surrounding habitat, and mediates the social interactions between citizens themselves. An urban environment with appropriate municipal facilities is the only way to ensure proper communication between the city and its citizens, and among the citizens themselves. Complementary elements between buildings and constructions are needed to settle city life, through which movement, comfort, reactions and anxieties are adjusted and the spirit of the city is reflected. In the surging development of society, urban spaces undergo evolution, sometimes causing symbols to fade and be lost and, as a result, destroying the sense of belonging between people and their physical surroundings. Houses and living spaces are a material reflection of lifestyle; in other words, the way of life forms the symbolic essence of living spaces. Lifestyle is also a sociocultural factor, encompassing concepts of culture, morality, worldview and national character. Culture is responsible for some crucial meaningful needs, which can be wide-ranging because they depend on various causes such as the perception and interpretation of beliefs, philosophy of life, interaction with neighbors and protection against climate and enemies. The bilateral relationship between humans and nature is the main factor that needs to be properly addressed, because the approach taken towards landscape and nature has a pertinent influence on the creation and shaping of the structure of a house. The first human response to the environment is to build a 'shelter', a place of dwelling; this has been a crucial factor in all time periods. In the proposed study, dwelling along the Khorasgan Stream, an area located in one of the important historical cities of Iran, has been studied. The Khorasgan Stream is a basic constituent element of the present architectural form of Isfahan. The influence of Islamic spiritual culture and of the neighborhood with historical elements on dwelling in the selected location, and subsequently on other regions of the city, is presented.

Keywords: dwelling, neighborhood, historical, Islamic, architectural elements

Procedia PDF Downloads 411
1113 Temporality in Architecture and Related Knowledge

Authors: Gonca Z. Tuncbilek

Abstract:

Architectural research tends to define architecture in terms of its permanence. In this study, the term 'temporality' and its use in architectural discourse are revisited. The definition, proposition and efficacy of temporality occur both in architecture and in its related knowledge. Temporary architecture not only fulfills the requirements of architectural programs but also plays a significant role in generating an environment of architectural discourse. In recent decades, there has been great interest in temporary architectural practices such as installations, exhibition spaces, pavilions and expositions, inviting architects to experience and think about architecture. Temporary architecture has a significant role among architecture, the architect and architectural discourse. By experimenting with contemporary materials, methods and techniques, these works propose possibilities for future architecture. Such structures give architects a wide-ranging variety of freedoms to experience the 'new' in architecture. In addition to this experimentation, they can be considered an agent for redefining and reforming the boundaries of the architectural discipline itself. Although the definition of architecture is re-analyzed in terms of its temporality rather than its permanence, architecture in reality still relies on historically codified types and principles of formation. The concept of type can be considered across several different sciences, and there is a tendency to organize and understand the world in terms of classification in many different cultures and places. 'Type' is used as a classification tool with or without the scope of critical invention. This study considers theories of type, putting forward epistemological and discursive arguments related to the form of architecture as it relates to historical and formal disciplinary knowledge in architecture. The aim of this study is to emphasize the importance of temporality in architecture as a creative tool for revealing its position within architectural discourse. Temporary architecture offers 'new' opportunities to be analyzed in the architectural field. In brief, temporary structures allow the architect freedom to experiment in architecture. While architecture is redefined in terms of temporality, it still relies on historically codified types (pavilions, exhibitions, expositions and installations). The notion of architectural types and its varying interpretations are analyzed based on the texts of architectural theorists since the Age of Enlightenment. In investigating the classification of type in architecture, particularly temporary architecture, it is necessary to return to the discussion of the origin of knowledge and its classification.

Keywords: classification of architecture, exhibition design, pavilion design, temporary architecture

Procedia PDF Downloads 364
1112 Awareness about Authenticity of Health Care Information from Internet Sources among Health Care Students in Malaysia: A Teaching Hospital Study

Authors: Renjith George, Preethy Mary Donald

Abstract:

The use of internet sources to retrieve health care related information among health care professionals has increased tremendously as access to the internet has become easier through smartphones and tablets. Though huge amounts of data are available at a finger's touch, it is doubtful whether all the sources providing health care information adhere to evidence-based practice. The objectives of this survey were to study the prevalence of the use of internet sources to obtain health care information, to assess attitudes towards the authenticity of health care information available via internet sources, and to study the awareness of evidence-based practice in health care among medical and dental students at Melaka-Manipal Medical College. The survey was proposed as there is a limited number of studies reported in the literature, and this is the first of its kind in Malaysia. A cross-sectional survey was conducted among the medical and dental students of Melaka-Manipal Medical College. A total of 521 students, including medical and dental students in the clinical years of undergraduate study, participated in the survey. A questionnaire consisting of 14 questions was constructed based on data available from the published literature and focus group discussion, and was pre-tested for validation. Data analysis was done using SPSS. The statistical analysis of the survey results showed that internet resources for health care information are preferred as much as conventional resources among health care students. Though the majority of participants verify the authenticity of information from internet sources, a considerable percentage of candidates felt that all information from the internet can be utilised for clinical decision making or were not aware of the need to verify the authenticity of such information. 63.7% of the participants rely on evidence-based practice in health care for clinical decision making, while 34.2% were not aware of it. A minority of 2.1% did not agree with the concept of evidence-based practice. The observations of the survey reveal the increasing use of internet resources for health care information among health care students. The results warrant the need to move towards evidence-based practice in health care, as not all health care information available online may be reliable. Health care personnel should be judicious when utilising information from such resources for clinical decision making.

Keywords: authenticity, evidence based practice, health care information, internet

Procedia PDF Downloads 444
1111 SynKit: An Event-Driven and Scalable Microservices-Based Kitting System

Authors: Bruno Nascimento, Cristina Wanzeller, Jorge Silva, João A. Dias, André Barbosa, José Ribeiro

Abstract:

The increasing complexity of logistics operations stems from evolving business needs, such as the shift from mass production to mass customization, which demands greater efficiency and flexibility. In response, Industry 4.0 and 5.0 technologies provide improved solutions to enhance operational agility and better meet market demands. The management of kitting zones, combined with the use of Autonomous Mobile Robots (AMRs), faces challenges related to coordination, resource optimization, and rapid response to fluctuations in customer demand. Additionally, implementing lean manufacturing practices in this context must be carefully orchestrated by intelligent systems and human operators to maximize efficiency without sacrificing the agility required in an advanced production environment. This paper proposes and implements a microservices-based architecture integrating principles from Industry 4.0 and 5.0 with lean manufacturing practices. The architecture enhances communication and coordination between autonomous vehicles and kitting management systems, allowing more efficient resource utilization and increased scalability. The proposed architecture focuses on the modularity and flexibility of operations, enabling seamless adaptation to changing demands and the efficient allocation of resources in real time. This approach is expected to significantly improve the efficiency and scalability of logistics operations by reducing waste and optimizing resource use while improving responsiveness to demand changes. The implementation of this architecture provides a robust foundation for the continuous evolution of kitting management and process optimization. It is designed to adapt to dynamic environments marked by rapid shifts in production demands and real-time decision-making. It also ensures seamless integration with automated systems, aligning with Industry 4.0 and 5.0 needs while reinforcing lean manufacturing principles.
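
The sketch below illustrates the basic event-driven pattern the architecture relies on: services communicate only by publishing and subscribing to events, so a kitting service and an AMR-dispatch service stay decoupled. The in-memory bus, topic names and payload fields are illustrative stand-ins for a real message broker and are not taken from the SynKit implementation.

```python
# Hypothetical sketch of event-driven coordination between microservices: the
# in-memory bus stands in for a real broker; names and fields are illustrative.
from dataclasses import dataclass
from collections import defaultdict
from typing import Callable, Dict, List

@dataclass
class Event:
    topic: str
    payload: dict

class EventBus:
    def __init__(self) -> None:
        self._subs: Dict[str, List[Callable[[Event], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Event], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, event: Event) -> None:
        for handler in self._subs[event.topic]:
            handler(event)

bus = EventBus()

# Kitting service: emits a request when an order needs a kit assembled.
def request_kit(order_id: str, parts: list) -> None:
    bus.publish(Event("kit.requested", {"order_id": order_id, "parts": parts}))

# AMR dispatch service: reacts to kit requests by assigning a robot.
def dispatch_amr(event: Event) -> None:
    print(f"AMR assigned for order {event.payload['order_id']} "
          f"({len(event.payload['parts'])} parts)")

bus.subscribe("kit.requested", dispatch_amr)
request_kit("ORD-42", ["bracket", "screw-m4", "sensor"])
```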

Keywords: microservices, event-driven, kitting, AMR, lean manufacturing, industry 4.0, industry 5.0

Procedia PDF Downloads 20
1110 Analysis of Waiting Time and Driver Fatigue at Manual Toll Plaza and Suggestion of an Automated Toll Tax Collection System

Authors: Muhammad Dawood Idrees, Maria Hafeez, Arsalan Ansari

Abstract:

Toll tax collection is the earliest method of tax collection and revenue generation. This revenue is utilized for the development and maintenance of road networks and for connecting roads and highways across the country. Pakistan is a large country covering a wide area of land, and road networks and motorways are an important means of connecting its cities. Every day millions of people use motorways, and they have to stop at toll plazas to pay toll tax, as the majority of toll plazas collect toll tax manually. The purpose of this study is to calculate the waiting time of vehicles on the Karachi-Hyderabad (M-9) motorway, as Karachi is the biggest city in Pakistan and hundreds of thousands of people use this route to reach other cities. Currently, toll tax collection is a manual system, which is a major cause of long waiting times at the toll plaza. This study calculates the waiting time of vehicles, the fuel consumed during the waiting time, and the manpower employed at the toll plaza, since the whole process is manual; the manual process also leads to mental and physical fatigue of drivers. All wastages of resources are also calculated, and a feasible automatic toll tax collection system is proposed which is beneficial not only in reducing waiting time but also in reducing fuel consumption, manpower employed, and physical and mental fatigue. A cost comparison in terms of wastages is also shown between the manual and automatic (E-Z Pass) toll tax collection systems. The results of this study reveal that if an automatic toll collection system is implemented on the Karachi-Hyderabad motorway (M-9), there will be a significant reduction in the waiting time of vehicles, which leads to reductions in fuel consumption, environmental pollution, and the mental and physical fatigue of drivers. All these reductions are also calculated in terms of money (Pakistani rupees), and it is found that millions of rupees can be saved by using an automatic toll collection system, which will help improve the economy of the country.
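
As a back-of-the-envelope illustration of the comparison made in the study, the sketch below models one toll lane as an M/M/1 queue and contrasts manual and electronic service times; the arrival rate, service times, idling fuel burn and fuel price are illustrative assumptions, not the study's measured values.

```python
# Hypothetical sketch: compare average waiting time and idling-fuel cost per
# vehicle for a manual booth vs an electronic (E-Z Pass style) lane using an
# M/M/1 queue. All rates, fuel burn and prices are illustrative assumptions.
def mm1_wait_in_queue(arrival_rate, service_rate):
    """Mean time in queue (hours) for an M/M/1 lane; requires utilisation < 1."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("lane is overloaded (utilisation >= 1)")
    return rho / (service_rate - arrival_rate)

arrivals_per_hour = 500            # vehicles reaching one lane per hour
manual_rate = 60 * 60 / 6.5        # ~6.5 s per manual cash transaction
etc_rate = 60 * 60 / 2.0           # ~2 s per electronic read

idle_fuel_l_per_hour = 1.2         # litres burned while idling
fuel_price_pkr = 280               # PKR per litre

for name, mu in [("manual", manual_rate), ("electronic", etc_rate)]:
    wq_hours = mm1_wait_in_queue(arrivals_per_hour, mu)
    fuel_cost = wq_hours * idle_fuel_l_per_hour * fuel_price_pkr
    print(f"{name:>10}: wait {wq_hours*3600:5.1f} s/vehicle, "
          f"idle fuel cost ~{fuel_cost:5.1f} PKR/vehicle")
```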

Keywords: toll tax collection, waiting time, wastages, driver fatigue

Procedia PDF Downloads 146
1109 Harnessing Environmental DNA to Assess the Environmental Sustainability of Commercial Shellfish Aquaculture in the Pacific Northwest United States

Authors: James Kralj

Abstract:

Commercial shellfish aquaculture makes significant contributions to the economy and culture of the Pacific Northwest United States. The industry faces intense pressure to minimize environmental impacts as a result of Federal policies like the Magnuson-Stevens Fisheries Conservation and Management Act and the Endangered Species Act. These policies demand the protection of essential fish habitat and declare several salmon species as endangered. Consequently, numerous projects related to the protection and rehabilitation of eelgrass beds, a crucial ecosystem for countless fish species, have been proposed at both state and federal levels. Both eelgrass beds and commercial shellfish farms occupy the same physical space, and therefore understanding the effects of shellfish aquaculture on eelgrass ecosystems has become a top ecological and economic priority of both government and industry. This study evaluates the organismal communities that eelgrass and oyster aquaculture habitats support. Water samples were collected from Willapa Bay, Washington; Tillamook Bay, Oregon; Humboldt Bay, California; and Sammish Bay, Washington to compare species diversity in eelgrass beds, oyster aquaculture plots, and boundary edges between these two habitats. Diversity was assessed using a novel technique: environmental DNA (eDNA). All organisms constantly shed small pieces of DNA into their surrounding environment through the loss of skin, hair, tissues, and waste. In the marine environment, this DNA becomes suspended in the water column allowing it to be easily collected. Once extracted and sequenced, this eDNA can be used to paint a picture of all the organisms that live in a particular habitat making it a powerful technology for environmental monitoring. Industry professionals and government officials should consider these findings to better inform future policies regulating eelgrass beds and oyster aquaculture. Furthermore, the information collected in this study may be used to improve the environmental sustainability of commercial shellfish aquaculture while simultaneously enhancing its growth and profitability in the face of ever-changing political and ecological landscapes.

Keywords: aquaculture, environmental DNA, shellfish, sustainability

Procedia PDF Downloads 245
1108 Cognitive Dissonance in Robots: A Computational Architecture for Emotional Influence on the Belief System

Authors: Nicolas M. Beleski, Gustavo A. G. Lugo

Abstract:

Robotic agents are taking on more, and increasingly important, roles in society. In order to make these robots and agents more autonomous and efficient, their systems have grown considerably complex and convoluted. This growth in complexity has led recent researchers to investigate ways of explaining the AI behavior behind these systems in search of more trustworthy interactions. A current problem in explainable AI concerns the inner workings of the logic inference process and how to conduct a sensitivity analysis of the process of valuation and alteration of beliefs. In a social HRI (human-robot interaction) setup, theory of mind is crucial to ease the intentionality gap, and to achieve that we should be able to infer over observed human behaviors, such as cases of cognitive dissonance. One specific case inspired by human cognition is the role emotions play in our belief system and the effects caused when observed behavior does not match the expected outcome. In such scenarios emotions can make a person wrongly assume the antecedent P for an observed consequent Q and, as a result, incorrectly assert that P is true. This form of cognitive dissonance, where an unproven cause is taken as truth, induces changes in the belief base which can directly affect future decisions and actions. If we aim to be inspired by human thought in order to apply levels of theory of mind to these artificial agents, we must find the conditions to replicate these observable cognitive mechanisms. To achieve this, a computational architecture is proposed to model the modulating effect emotions have on the belief system, and how it affects the logic inference process and consequently the decision making of an agent. To validate the model, an experiment based on the prisoner's dilemma is currently under development. The hypothesis to be tested involves two main points: how emotions, modeled as internal argument strength modulators, can alter inference outcomes, and how explainable outcomes can be produced under specific forms of cognitive dissonance.
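
A toy sketch of the modulation idea follows: the agent observes a consequent Q, holds a rule P → Q, and an emotion level scales the strength of the fallacious abductive argument for P; above a threshold the unproven cause P is added to the belief base. The rule, weights and threshold are illustrative and do not reproduce the authors' architecture.

```python
# Toy sketch of emotion-modulated belief revision: observing Q together with
# the rule P -> Q yields an (invalid) abductive argument for P whose strength
# is scaled by the agent's emotional arousal. Numbers and names are illustrative.
ACCEPT_THRESHOLD = 0.6

def abductive_strength(base_strength: float, emotion: float) -> float:
    """Emotion in [0, 1] amplifies how convincing the fallacious argument feels."""
    return min(1.0, base_strength * (1.0 + emotion))

def revise_beliefs(beliefs: set, observed: str, rules: list, emotion: float) -> set:
    updated = set(beliefs) | {observed}
    for antecedent, consequent, base_strength in rules:
        if consequent == observed:  # affirming the consequent
            if abductive_strength(base_strength, emotion) >= ACCEPT_THRESHOLD:
                updated.add(antecedent)   # cognitive dissonance: unproven cause accepted
    return updated

rules = [("opponent_defected", "low_payoff", 0.4)]   # P -> Q with base strength
calm = revise_beliefs({"game_started"}, "low_payoff", rules, emotion=0.2)
aroused = revise_beliefs({"game_started"}, "low_payoff", rules, emotion=0.9)
print("calm:   ", sorted(calm))     # P not asserted (0.4 * 1.2 = 0.48 < 0.6)
print("aroused:", sorted(aroused))  # P asserted     (0.4 * 1.9 = 0.76 >= 0.6)
```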

Keywords: cognitive architecture, cognitive dissonance, explainable ai, sensitivity analysis, theory of mind

Procedia PDF Downloads 130
1107 Subtitling in the Classroom: Combining Language Mediation, ICT and Audiovisual Material

Authors: Rossella Resi

Abstract:

This paper describes a project carried out in an Italian school with pupils learning English, combining three didactic tools that are attested to be relevant for the success of a young learner's language curriculum: the use of technology, intralingual and interlingual mediation (according to the CEFR), and the cultural dimension. The aim of this project was to test a hands-on technological translation activity like subtitling in a formal teaching context and to exploit its potential as a motivational tool for developing listening, writing, translation and cross-cultural skills among language learners. The activities proposed involved the use of the professional subtitling software Aegisub and culture-specific films. The workshop was optional, so motivation was entirely based on the pleasure of engaging with a realistic subtitling program and on the challenge of meeting the constraints that a real-life or work situation might involve. Twelve pupils aged between 16 and 18 attended the afternoon workshop. The workshop was organized in three parts. (i) An introduction, in which the learners were introduced to the concept and constraints of subtitling and provided with a few basic rules on spotting and segmentation; during this session learners also had time to familiarize themselves with the main software features. (ii) The second part involved three subtitling activities in plenum or in groups. In the first activity the learners experienced the technical dimensions of subtitling: they were provided with a short video segment together with its transcription to be segmented and time-spotted. The second activity also involved oral comprehension: learners had to understand and transcribe a video segment before subtitling it. The third activity embedded the translation of a provided transcription, including segmentation and spotting of subtitles. (iii) The workshop ended with a small final project. At this point learners were able to master a short subtitling assignment (transcription, translation, segmenting and spotting) on their own with a similar video interview. The results of these assignments were above expectations, since the learners were highly motivated by the authentic and original nature of the assignment. The subtitled videos were evaluated and watched in the regular classroom together with other students who did not take part in the workshop.

Keywords: ICT, L2, language learning, language mediation, subtitling

Procedia PDF Downloads 414
1106 Manifestations of Moral Imagination during the COVID-19 Pandemic in the Debates of Lithuanian Parliament

Authors: Laima Zakaraite, Vaidas Morkevicius

Abstract:

The COVID-19 pandemic brought important and pressing challenges for politicians around the world. Governments, parliaments, and political leaders had to make quick decisions about containment of the pandemic, usually without clear knowledge of the factual spread of the virus, the expected speed of spread, or levels of mortality. The opinions of experts were also divided, as some advocated for 'herd immunity' without closing down the economy and public life, while others supported the idea of a strict lockdown. The debates about measures of pandemic containment were heated and involved strong moral tensions with regard to the possible outcomes. This paper proposes to study the manifestations of moral imagination in the political debates regarding the COVID-19 pandemic. Importantly, moral imagination is associated with two abilities of a decision-making actor: the ability to discern the moral aspects embedded within a situation and the ability to envision a range of possible alternative solutions to the situation from a moral perspective. The concept has been most thoroughly investigated in business management studies; however, its relevance for the study of political decision-making is also rather clear. The results of the study show to what extent politicians are able to discern the wide range of moral issues related to a situation (in this case, the consequences of the COVID-19 pandemic in a country) and how broad, especially from a moral perspective, the discussions of the possible solutions proposed for solving the problem are. Arguably, political discussions and considerations are broader and affected by a wider and more varied range of actors and ideas compared to decision making in the business management sector. However, the debates and ensuing solutions may also be restricted by ideological maxims and the advocacy of special interests. Therefore, an empirical study of policy proposals and their debates might reveal the actual breadth of moral imagination in political discussions. For this purpose, we carried out a qualitative study of the parliamentary debates related to the COVID-19 pandemic in Lithuania during the first wave (the containment of which was considered very successful) and at the beginning and subsequent acceleration of the second wave (when the spread of the virus became uncontrollable).

Keywords: decision making, moral imagination, political debates, political decision

Procedia PDF Downloads 146
1105 Investigation of Nucleation and Thermal Conductivity of Waxy Crude Oil on Pipe Wall via Particle Dynamics

Authors: Jinchen Cao, Tiantian Du

Abstract:

Waxy crude oil readily crystallizes and deposits on the pipeline wall, causing pipeline clogging and reducing oil and gas gathering and transmission efficiency. In this paper, a mesoscopic-scale dissipative particle dynamics method is employed, and four pipe wall models are constructed: a smooth wall (SW), a hydroxylated wall (HW), a rough wall (RW), and a single-layer graphene wall (GW). Snapshots of the simulation output trajectories show that paraffin molecules interact with each other to form a network structure that constrains water molecules as their nucleation sites. Meanwhile, it is observed that the paraffin molecules on the near-wall side are adsorbed horizontally between the inter-lattice gaps of the solid wall. In the pressure range of 0-50 MPa, the pressure change has little effect on the affinity properties of the SW, HW, and GW walls; for the RW wall, however, the contact angle between paraffin wax and water molecules was found to decrease with increasing pressure, while the water molecules showed the opposite trend. This phenomenon is due to the change in pressure, which leads to the transition of paraffin molecules from an amorphous to a crystalline state. Meanwhile, the minimum crystalline phase pressure (MCPP) was proposed to describe the lowest pressure at which crystallization of paraffin molecules occurs. The maximum number of crystalline clusters formed by paraffin molecules at the MCPP in each system showed N_SW (0.52 MPa) > N_HW (0.55 MPa) > N_RW (0.62 MPa) > N_GW (0.75 MPa). The MCPP on the graphene surface, with the least number of clusters formed, indicates that the addition of graphene inhibited the crystallization of paraffin deposited on the wall surface. Finally, the thermal conductivity was calculated, and the results show that on the near-wall side the thermal conductivity changes drastically due to the adsorption crystallization of paraffin waxes, while on the fluid side the thermal conductivity gradually stabilizes; the average thermal conductivities of the four wall models were 0.254, 0.249, 0.218, and 0.188 W/(m·K). This study provides a theoretical basis for improving the transport efficiency and heat transfer characteristics of waxy crude oil in terms of wall type, wall roughness, and MCPP.

Keywords: waxy crude oil, thermal conductivity, crystallization, dissipative particle dynamics, MCPP

Procedia PDF Downloads 70
1104 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping

Authors: Masato Saeki

Abstract:

Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique and has many practical applications due to its simple design. It consists of granular materials constrained to move between two ends in the cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impact of granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment; therefore, particle damping can be applied in extreme temperature environments where most conventional dampers would fail. It has been shown experimentally in many papers that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are assumed to be rigid bodies and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant, since the equation of motion for each particle must be solved at each time step. In order to improve the computational efficiency of the DEM, new algorithms are needed. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the behavior of the granular particles is the same in each divided area of the damper container, the contact force between the primary system and all particles can be taken to be equal to the product of the number of divided areas of the damper and the contact force between the primary system and the granular materials in a single divided area. This simplification makes it possible to considerably reduce the calculation time. The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level, and that the particle size and the particle material influence the damper performance.
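
The following minimal sketch illustrates the proposed acceleration in one dimension: a linear spring-dashpot contact model gives the force of the particles in a single representative cell on the primary system's wall, and that force is multiplied by the number of cells. All parameters and particle states are illustrative values, not those of the reported simulations.

```python
# Hypothetical sketch of the proposed speed-up: simulate only the particles in
# one representative cell of the damper cavity and multiply their contact force
# on the primary mass by the number of cells. Parameters are illustrative.
import numpy as np

k_c, c_c = 1.0e5, 5.0        # contact spring stiffness (N/m) and damping (N*s/m)

def contact_force(gap: float, rel_vel: float) -> float:
    """Linear spring-dashpot contact force; zero when there is no overlap."""
    if gap >= 0.0:
        return 0.0
    return -k_c * gap - c_c * rel_vel

def wall_force_fast(x_particles, v_particles, x_wall, v_wall, n_cells):
    """Force on the primary-system wall: one representative cell scaled by n_cells."""
    total = 0.0
    for xp, vp in zip(x_particles, v_particles):
        total += contact_force(xp - x_wall, vp - v_wall)
    return n_cells * total

# One representative cell with 3 particles; the first overlaps the wall at x = 0.
x = np.array([-0.0002, 0.001, 0.004])   # m
v = np.array([-0.05, 0.0, 0.02])        # m/s
print(wall_force_fast(x, v, x_wall=0.0, v_wall=0.01, n_cells=8))
```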

Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level

Procedia PDF Downloads 450
1103 A Single-Channel BSS-Based Method for Structural Health Monitoring of Civil Infrastructure under Environmental Variations

Authors: Yanjie Zhu, André Jesus, Irwanda Laory

Abstract:

Structural Health Monitoring (SHM), involving data acquisition, data interpretation and decision-making systems, aims to continuously monitor the structural performance of civil infrastructure under various in-service circumstances. The main value and purpose of SHM lie in identifying damage through the data interpretation system. Research on SHM has expanded in the last decades, and a large volume of data is recorded every day owing to the dramatic development in sensor techniques and progress in signal processing techniques. However, efficient and reliable data interpretation for damage detection under environmental variations is still a big challenge: structural damage might be masked because variations in measured data can be the result of environmental variations. This research reports a novel method based on single-channel Blind Signal Separation (BSS), which extracts environmental effects from measured data directly, without any prior knowledge of the structural loading and environmental conditions. Despite successful applications in audio processing and biomedical research, BSS has never been used to detect damage under varying environmental conditions. The proposed method optimizes and combines Ensemble Empirical Mode Decomposition (EEMD), Principal Component Analysis (PCA) and Independent Component Analysis (ICA) to separate the structural responses due to different loading conditions from a single-channel input signal; ICA is applied to the dimension-reduced output of the EEMD. A numerical simulation of a truss bridge, inspired by the New Joban Line Arakawa Railway Bridge, is used to validate this method. All results demonstrate that the single-channel BSS-based method can recover temperature effects from the mixed structural response recorded by a single sensor with convincing accuracy. This will be the foundation of further research on direct damage detection under varying environments.
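
A minimal sketch of the EEMD, PCA and ICA pipeline on a synthetic single-channel signal is given below; it assumes the PyEMD (EMD-signal) package for EEMD and scikit-learn for PCA and ICA, and the signal, number of retained components and EEMD settings are illustrative rather than those used for the truss bridge simulation.

```python
# Hypothetical sketch of the single-channel BSS pipeline: decompose one sensor
# signal with EEMD, reduce the IMFs with PCA, then unmix with ICA. Assumes the
# PyEMD (EMD-signal) and scikit-learn packages; all signals are synthetic.
import numpy as np
from PyEMD import EEMD
from sklearn.decomposition import PCA, FastICA

t = np.linspace(0, 10, 2000)
temperature_drift = 0.8 * np.sin(2 * np.pi * 0.05 * t)   # slow environmental effect
structural_resp = 0.3 * np.sin(2 * np.pi * 3.0 * t)      # faster structural response
noise = 0.05 * np.random.default_rng(0).normal(size=t.size)
signal = temperature_drift + structural_resp + noise

# 1) EEMD: one channel -> a set of intrinsic mode functions (pseudo multi-channel)
imfs = EEMD(trials=50).eemd(signal, t)

# 2) PCA: keep the components carrying most of the variance
reduced = PCA(n_components=min(3, imfs.shape[0])).fit_transform(imfs.T)

# 3) ICA: separate statistically independent sources (loading vs environment)
sources = FastICA(n_components=reduced.shape[1], random_state=0).fit_transform(reduced)
print("recovered sources:", sources.shape)   # (n_samples, n_sources)
```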

Keywords: damage detection, ensemble empirical mode decomposition (EEMD), environmental variations, independent component analysis (ICA), principal component analysis (PCA), structural health monitoring (SHM)

Procedia PDF Downloads 303
1102 Towards Learning Query Expansion

Authors: Ahlem Bouziri, Chiraz Latiri, Eric Gaussier

Abstract:

The steady growth in the size of textual document collections is a key progress-driver for modern information retrieval techniques, whose effectiveness and efficiency are constantly challenged. Given a user query, the number of retrieved documents can be overwhelmingly large, hampering their efficient exploitation by the user. In addition, retaining only relevant documents in a query answer is of paramount importance for effectively meeting the user's needs. In this situation, the query expansion technique offers an interesting solution for obtaining a complete answer while preserving the quality of retained documents. This mainly relies on an accurate choice of the terms added to the initial query. Interestingly enough, query expansion takes advantage of large text volumes by extracting statistical information about index term co-occurrences and using it to make user queries better fit the real information needs. In this respect, a promising track consists in the application of data mining methods to extract dependencies between terms, namely a generic basis of association rules between terms. The key feature of our approach is a better trade-off between the size of the mining result and the conveyed knowledge. Thus, faced with the huge number of derived association rules, and in order to select the optimal combination of query terms from the generic basis, we propose to model the problem as a classification problem and solve it using a learning algorithm such as SVM or k-means. For this purpose, we first generate a training set using a genetic algorithm based approach that explores the association rule space in order to find an optimal set of expansion terms, improving the MAP of the search results. The experiments were performed on the SDA 95 collection, a data collection for information retrieval. It was found that the results were better in terms of both MAP and NDCG. The main observation is that hybridizing text mining techniques and query expansion in an intelligent way allows us to incorporate the good features of all of them. As this is a preliminary attempt in this direction, there is large scope for enhancing the proposed method.
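
A hedged sketch of the term-selection step is shown below: candidate expansion terms are assumed to have already been mined as association rules with support and confidence values, and an SVM trained on labeled (query, candidate term) feature vectors decides which candidates to add. The features, toy training data and candidate terms are illustrative and not drawn from the SDA 95 experiments.

```python
# Hypothetical sketch of the term-selection step: candidate expansion terms come
# from mined association rules (query_term -> candidate, with support/confidence),
# and an SVM decides which candidates actually improve the query. Data are toy values.
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# One row per (query, candidate term): [rule support, rule confidence, candidate IDF]
X_train = [
    [0.020, 0.85, 3.1],   # strong rule, informative term  -> keep
    [0.002, 0.30, 1.2],   # weak rule, common term         -> drop
    [0.015, 0.70, 2.8],
    [0.001, 0.20, 0.9],
    [0.030, 0.90, 3.5],
    [0.004, 0.25, 1.0],
]
y_train = [1, 0, 1, 0, 1, 0]   # 1 = term improved MAP when added (from the GA-built training set)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

candidates = {"retrieval": [0.018, 0.80, 2.9], "the": [0.003, 0.22, 0.4]}
expansion = [term for term, feats in candidates.items() if clf.predict([feats])[0] == 1]
print("expanded query terms:", expansion)
```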

Keywords: supervised learning, classification, query expansion, association rules

Procedia PDF Downloads 322
1101 Colour and Travel: Design of an Innovative Infrastructure for Travel Applications with Entertaining and Playful Features

Authors: Avrokomi Zavitsanou, Spiros Papadopoulos, Theofanis Alexandridis

Abstract:

This paper presents the research project 'Colour & Travel', which is co-funded by the European Union and national resources through the Operational Programme "Competitiveness, Entrepreneurship and Innovation" 2014-2020, under the Single RTDI State Aid Action "RESEARCH - CREATE - INNOVATE". The research project proposes the design of an innovative, playful framework for exploring a variety of travel destinations and creating personalised travel narratives, aiming to entertain, educate, and promote culture and tourism. Gamification of the cultural and touristic environment can enhance its experiential, multi-sensory aspects and broaden the perception of the traveler. The traveler's involvement in creating and shaping personal travel narratives, and the possibility of sharing them with others, can offer an alternative, more engaging way of getting acquainted with a place. In particular, the paper presents the design of an infrastructure: (a) for the development of interactive travel guides for mobile devices, in which sites with specific points of interest will be recommended, with which the user can interact in playful ways and then create personal travel narratives; (b) for the development of innovative games within a virtual reality environment, where interaction will be offered while the user is moving within the virtual environment; and (c) for an online application where the content will be offered through the browser using modern 3D imaging technologies (WebGL). The technological products that will be developed within the proposed project can strengthen important sectors of economic and social life, such as trade, tourism, the exploitation and promotion of the cultural environment, creative industries, etc. The final applications delivered at the end of the project will guarantee an improved level of service for visitors and will be a useful tool for content creators, with increased adaptability, expansibility, and applicability in many regions of Greece and abroad. This paper aims to present the research project by referencing the state of the art and the methodological scheme, ending with a brief reflection on the expected outcomes.

Keywords: gamification, culture, tourism, AR, VR, applications

Procedia PDF Downloads 142
1100 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data

Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications operate in dynamic environments where data as well as queries are continuously moving. As a result, a tremendous amount of real-time spatial data is generated every day. The growth of the data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period, regardless of the load of the system. But with the huge amount of real-time spatial data generated, the system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data: they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence by using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the amount of data in each partition balanced and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This leads to an improvement in QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
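
To give a concrete flavour of the vertical-partitioning idea, the sketch below orders attributes by greedily chaining them according to the Hamming distance between their binary query-usage vectors and then cuts the chain into partitions; the workload, attribute names and fixed partition size are illustrative simplifications, not the VPA-RTSBD cost model itself.

```python
# Hypothetical sketch of vertical partitioning: each attribute has a binary
# usage vector over the query workload; attributes are chained greedily so
# adjacent attributes have small Hamming distance, then the chain is cut
# into partitions. Workload and attribute names are illustrative.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Rows: attributes; columns: queries (1 = query uses the attribute)
usage = {
    "obj_id":    [1, 1, 1, 1],
    "latitude":  [1, 1, 0, 0],
    "longitude": [1, 1, 0, 0],
    "speed":     [0, 1, 1, 0],
    "timestamp": [0, 0, 1, 1],
}

def order_attributes(usage):
    """Greedy matching: repeatedly append the attribute closest to the last one."""
    remaining = dict(usage)
    first = next(iter(remaining))
    order = [first]
    del remaining[first]
    while remaining:
        last_vec = usage[order[-1]]
        nxt = min(remaining, key=lambda a: hamming(remaining[a], last_vec))
        order.append(nxt)
        del remaining[nxt]
    return order

def cut_into_partitions(order, size):
    return [order[i:i + size] for i in range(0, len(order), size)]

order = order_attributes(usage)
print("attribute order:", order)
print("partitions:", cut_into_partitions(order, size=2))
```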

Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query

Procedia PDF Downloads 156
1099 How Unpleasant Emotions, Morals and Normative Beliefs of Severity Relate to Cyberbullying Intentions

Authors: Paula C. Ferreira, Ana Margarida Veiga Simão, Nádia Pereira, Aristides Ferreira, Alexandra Marques Pinto, Alexandra Barros, Vitor Martinho

Abstract:

Cyberbullying is a phenomenon of worldwide concern regarding children and adolescents' mental health and risk behavior. Bystanders can help diminish the incidence of this phenomenon if they engage in pro-social behavior. However, different social-cognitive and affective bystander reactions may surface because of the lack of contextual information and emotional cues in cyberbullying situations. Hence, this study investigated how cyberbullying bystanders' unpleasant emotions could be related to their personal moral beliefs and their behavioral intentions to cyberbully or defend the victim. It also proposed to investigate how their normative beliefs of perceived severity about cyberbullying behavior could be related to their personal moral beliefs and their behavioral intentions. Three groups of adolescents participated in this study: a first group of 402 students (5th-12th graders; Mage = 13.12; SD = 2.19; 55.7% girls) used to compute exploratory factor analyses of the instruments; a second group of 676 students (5th-12th graders; Mage = 14.10; SD = 2.74; 55.5% boys) used to run confirmatory factor analyses; and a third group (N = 397; 5th-12th graders; Mage = 13.88 years; SD = 1.45; 55.5% girls) used to perform the main analyses testing the research hypotheses. Self-report measures were used, namely the Personal moral beliefs about cyberbullying behavior questionnaire, the Normative beliefs of perceived severity about cyberbullying behavior questionnaire, the Unpleasant emotions about cyberbullying incidents questionnaire, and the Bystanders' behavioral intentions in cyberbullying situations questionnaire. Path analysis results revealed that unpleasant emotions were mediators of the relationship between adolescent cyberbullying bystanders' personal moral beliefs and their intentions to help the victims in cyberbullying situations. Moreover, adolescent cyberbullying bystanders' normative beliefs of perceived severity were mediators of the relationship between their personal moral beliefs and their intentions to cyberbully others. These findings provide insights for the development of prevention and intervention programs that promote social and emotional learning strategies as a means to prevent and intervene in cyberbullying.

Keywords: cyberbullying, normative beliefs of perceived severity, personal moral beliefs, unpleasant emotions

Procedia PDF Downloads 214
1098 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more pronounced as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists amongst accuracy, computing resources and the explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
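
A minimal sketch of the k-mer pipeline follows: each sequence is converted into relative k-mer frequencies over a fixed vocabulary and a classifier is fitted to predict the phenotype label; the toy sequences, labels, k value and choice of classifier are illustrative and not the MTB data or the models evaluated in the study.

```python
# Hypothetical sketch of the k-mer pipeline: count k-mers in each genome
# sequence, build a feature matrix, and fit a classifier to predict the
# phenotype label. Sequences and labels here are toy stand-ins.
from collections import Counter
from itertools import product
from sklearn.linear_model import LogisticRegression

def kmer_counts(seq: str, k: int) -> Counter:
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def feature_matrix(sequences, k):
    vocab = ["".join(p) for p in product("ACGT", repeat=k)]   # fixed k-mer vocabulary
    rows = []
    for seq in sequences:
        counts = kmer_counts(seq, k)
        total = max(sum(counts.values()), 1)
        rows.append([counts[km] / total for km in vocab])      # relative frequencies
    return rows

sequences = ["ACGTACGTGGCA", "ACGTTTGTGGCA", "TTTTACGAGGCA", "TTCTACGAGGCA"]
labels = [1, 1, 0, 0]   # e.g. drug-resistant vs susceptible phenotype

X = feature_matrix(sequences, k=3)
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(feature_matrix(["ACGTACGTGGCA"], k=3)))
```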

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 166
1096 Study of the Transport of ²²⁶Ra Colloidal in Mining Context Using a Multi-Disciplinary Approach

Authors: Marine Reymond, Michael Descostes, Marie Muguet, Clemence Besancon, Martine Leermakers, Catherine Beaucaire, Sophie Billon, Patricia Patrier

Abstract:

²²⁶Ra is one of the radionuclides resulting from the disintegration of ²³⁸U. Due to its half-life (1600 y) and its high specific activity (3.7 × 10¹⁰ Bq/g), ²²⁶Ra is found at the ultra-trace level in the natural environment (usually below 1 Bq/L, i.e., about 10⁻¹³ mol/L). Because of its decay into ²²²Rn, a radioactive gas with a shorter half-life (3.8 days) which is difficult to control and dangerous for humans when inhaled, ²²⁶Ra is subject to dedicated monitoring in surface waters, especially in the context of uranium mining. In natural waters, radionuclides occur in dissolved, colloidal or particulate forms. Due to the size of colloids, generally ranging between 1 nm and 1 µm, and their high specific surface areas, the colloidal fraction could be involved in the transport of trace elements, including radionuclides, in the environment. The colloidal fraction is not always easy to determine, and few existing studies focus on ²²⁶Ra. In the present study, a complete multidisciplinary approach is proposed to assess the colloidal transport of ²²⁶Ra. It includes water sampling by conventional filtration (0.2 µm) and by the innovative Diffusive Gradients in Thin Films technique to measure the dissolved fraction (<10 nm), from which the colloidal fraction can be estimated. Suspended matter in these waters was also sampled and characterized mineralogically by X-ray diffraction, infrared spectroscopy and scanning electron microscopy. All of these data, which were acquired at a rehabilitated former uranium mine, allowed a geochemical model to be built with the geochemical calculation code PhreeqC to describe, as accurately as possible, the colloidal transport of ²²⁶Ra. Colloidal transport of ²²⁶Ra was found, for some of the sampling points, to account for up to 95% of the total ²²⁶Ra measured in the water. Mineralogical characterization and the associated geochemical modelling highlight the role of barite, a barium sulfate mineral well known to trap ²²⁶Ra in its structure. Barite was shown to be responsible for the colloidal ²²⁶Ra fraction despite the presence of kaolinite and ferrihydrite, which are also known to retain ²²⁶Ra by sorption.

Keywords: colloids, mining context, radium, transport

Procedia PDF Downloads 155
1095 Balancing Electricity Demand and Supply to Protect a Company from Load Shedding: A Review

Authors: G. W. Greubel, A. Kalam

Abstract:

This paper provides a review of the technical problems facing the South African electricity system and discusses a hypothetical ‘virtual grid’ concept that may assist in solving them. The proposed solution has potential application across emerging markets with constrained power infrastructure or for companies that wish to be powered entirely by renewable energy. South Africa finds itself at a confluence of forces where the national electricity supply system is constrained, with under-supply primarily from old and failing coal-fired power stations and congested and inadequate transmission and distribution systems. Simultaneously, the country attempts to meet carbon reduction targets driven by both an alignment with international goals and consumer-driven requirements. The constrained electricity system is one aspect of an economy characterized by very low economic growth, high unemployment, and frequent and significant load shedding. The fiscus does not have the funding to build new generation capacity or strengthen the grid. The under-supply is increasingly alleviated by the penetration of wind and solar generation capacity and embedded roof-top solar. However, this increased penetration results in less inertia, less synchronous generation, and less capability for fast frequency response, with resultant instability. The renewable energy facilities assist in solving the under-supply issues but merely ‘kick the can down the road’ by neither contributing to grid stability nor substituting for the lost inertia, thus creating an expanding issue for the grid to manage. By technically balancing its electricity demand and supply, a company with facilities located across the country can be protected from the effects of load shedding, and can thus ensure financial and production performance, protect jobs, and contribute meaningfully to the economy. By treating the company’s load across the country and its various distributed generation facilities as a ‘virtual grid’, which by design provides ancillary services to the grid, a win-win situation is created for both the company and the grid.
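
The ‘virtual grid’ bookkeeping implied above can be sketched very simply: aggregate the company's site loads and distributed generation for one dispatch interval and report the net position. The site names and megawatt figures below are hypothetical.

```python
# Minimal sketch of the 'virtual grid' balancing idea (all figures are hypothetical).

site_load_mw = {"plant_a": 12.0, "plant_b": 8.5, "warehouse": 1.5}                       # demand per site
site_generation_mw = {"plant_a_solar": 9.0, "plant_b_wind": 10.5, "warehouse_pv": 0.8}   # supply per site

total_load = sum(site_load_mw.values())
total_generation = sum(site_generation_mw.values())
net_mw = total_generation - total_load

if net_mw >= 0:
    print(f"virtual grid balanced; {net_mw:.1f} MW surplus available as ancillary support")
else:
    print(f"virtual grid short by {-net_mw:.1f} MW; curtail load or import from the grid")
```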

Keywords: load shedding, renewable energy integration, smart grid, virtual grid, virtual power plant

Procedia PDF Downloads 56
1094 Matrix-Based Linear Analysis of Switched Reluctance Generator with Optimum Pole Angles Determination

Authors: Walid A. M. Ghoneim, Hamdy A. Ashour, Asmaa E. Abdo

Abstract:

In this paper, a linear analysis of a Switched Reluctance Generator (SRG) model is applied to the most common configurations (4/2, 6/4 and 8/6) for both conventional short-pitched and fully-pitched designs, in order to determine the optimum stator/rotor pole angles at which the maximum output voltage is generated per unit excitation current. This study focuses on SRG analysis and design as a proposed solution for renewable energy applications, such as wind energy conversion systems. The world’s potential to develop renewable energy technologies through dedicated scientific research was the motive behind this study, due to its positive impact on the economy and the environment. In addition, the scarcity of rare earth metals used in permanent magnets, caused by mining limitations, export bans by top producers, and environmental restrictions, limits the availability of materials for manufacturing rotating machines. This challenge gave the authors the opportunity to study, analyze and determine the optimum design of the SRG, which has the benefit of being free of permanent magnets and rotor windings, offers a flexible control system, and is compatible with any application that requires variable-speed operation. In addition, the SRG has been proved to be very efficient and reliable in both low-speed and high-speed applications. Linear analysis was performed using MATLAB simulations based on the modified generalized matrix approach for the Switched Reluctance Machine (SRM). About 90 different pole angle combinations and excitation patterns were simulated in this study, and the optimum output results for each case were recorded and presented in detail. This procedure has been proved to be applicable to any SRG configuration, dimension and excitation pattern. The results of this study provide evidence for using the 4-phase 8/6 fully-pitched SRG as the optimum configuration for the same machine dimensions at the same angular speed.
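
The pole-angle sweep can be illustrated with a simplified brute-force search. The scoring function below is only a placeholder standing in for the matrix-based linear SRG model, and the angle grids are illustrative assumptions rather than the paper's 90 simulated cases.

```python
# Simplified sketch of a pole-angle sweep with a placeholder objective (illustrative only).
import numpy as np

def output_voltage_per_amp(beta_s_deg, beta_r_deg):
    """Placeholder objective: in a linear analysis the induced EMF per unit excitation
    current scales with dL/dtheta, which grows with pole overlap; here larger overlap
    is rewarded and a large stator/rotor mismatch is penalized. Purely illustrative."""
    overlap = min(beta_s_deg, beta_r_deg)
    penalty = abs(beta_r_deg - beta_s_deg) * 0.05
    return overlap / 30.0 - penalty

stator_angles = np.arange(15.0, 31.0, 1.0)   # candidate stator pole arcs (degrees)
rotor_angles = np.arange(16.0, 36.0, 1.0)    # candidate rotor pole arcs (degrees)

best = max(
    ((bs, br, output_voltage_per_amp(bs, br)) for bs in stator_angles for br in rotor_angles),
    key=lambda t: t[2],
)
print(f"best stator/rotor pole arcs: {best[0]:.0f} deg / {best[1]:.0f} deg, score {best[2]:.2f}")
```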

Keywords: generalized matrix approach, linear analysis, renewable applications, switched reluctance generator

Procedia PDF Downloads 195
1093 Arsenic (III) Removal by Zerovalent Iron Nanoparticles Synthesized with the Help of Tea Liquor

Authors: Tulika Malviya, Ritesh Chandra Shukla, Praveen Kumar Tandon

Abstract:

Traditional methods of synthesis are hazardous to the environment, and nature-friendly processes are needed for the treatment of industrial effluents and contaminated water. The use of plant parts for synthesis provides an efficient alternative. In this paper, we report an ecofriendly and nonhazardous bio-based method to prepare zerovalent iron nanoparticles (ZVINPs) using the liquor of commercially available tea. Tea liquor as the reducing agent has several advantages over other polymeric agents. Unlike other polymers, the polyphenols present in tea extract are nontoxic and water soluble at room temperature. In addition, polyphenols can form complexes with metal ions and thereafter reduce the metals. Finally, tea extract contains molecules bearing alcoholic functional groups that can be exploited for both reduction and stabilization of the nanoparticles. Briefly, iron nanoparticles were prepared by adding 2.0 g of montmorillonite K10 (MMT K10) to 5.0 mL of a 0.10 M solution of Fe(NO₃)₃, to which an equal volume of tea liquor was then added dropwise over 20 min with constant stirring. The color of the mixture changed from whitish yellow to black, indicating the formation of iron nanoparticles. The nanoparticles were adsorbed on montmorillonite K10, which is safe and aids in the separation of the hazardous arsenic species simply by filtration. Particle sizes of 59.08 ± 7.81 nm were obtained, as confirmed by instrumental analyses including IR, XRD, SEM, and surface area studies. Removal of arsenic was carried out via a batch adsorption method. Solutions of As(III) of different concentrations were prepared by diluting a stock solution of NaAsO₂ with doubly distilled water. The required amount of in situ prepared ZVINPs supported on MMT K10 was added to a solution of the desired strength of As(III). After the solution had been stirred for the preselected time, the solid mass was filtered off. The amount of arsenic [in the form of As(V)] remaining in the filtrate was measured using an ion chromatograph. Stirring contaminated water with zerovalent iron nanoparticles supported on montmorillonite K10 for 30 min resulted in up to 99% removal of arsenic as As(III) from its solution at both high and low pH (2.75 and 11.1). It was also observed that, under similar conditions, montmorillonite K10 alone provided less than 10% removal of As(III) from water. Adsorption at low pH combined with precipitation at higher pH has been proposed as the mechanism for As(III) removal.
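
The batch-adsorption bookkeeping used to report removal efficiency amounts to a simple calculation, sketched below; the concentration values are hypothetical and chosen only to mirror the ~99% and <10% figures quoted above.

```python
# Illustrative percent-removal calculation for batch adsorption runs (hypothetical values).

def percent_removal(c_initial_mg_l, c_final_mg_l):
    """Removal efficiency of a batch adsorption run."""
    return 100.0 * (c_initial_mg_l - c_final_mg_l) / c_initial_mg_l


runs = {
    "ZVINP on MMT K10, pH 2.75": (5.0, 0.05),   # ~99% removal
    "ZVINP on MMT K10, pH 11.1": (5.0, 0.06),
    "MMT K10 alone": (5.0, 4.60),               # <10% removal
}
for label, (c0, ct) in runs.items():
    print(f"{label}: {percent_removal(c0, ct):.0f}% As removed")
```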

Keywords: arsenic removal, montmorillonite K10, tea liquor, zerovalent iron nanoparticles

Procedia PDF Downloads 128
1092 Causal-Explanatory Model of Academic Performance in Social Anxious Adolescents

Authors: Beatriz Delgado

Abstract:

Although social anxiety is one of the most prevalent disorders in adolescents and causes considerable difficulties and social distress in those affected, to date very few studies have explored the impact of social anxiety on academic adjustment in student populations. The aim of this study was to analyze the effect of social anxiety on school functioning in Secondary Education. Specifically, we examined the relationship between social anxiety and self-concept, academic goals, causal attributions, intellectual aptitudes, learning strategies, personality traits, and academic performance, with the purpose of creating a causal-explanatory model of academic performance. The sample consisted of 2,022 students in the seventh to tenth grades of Compulsory Secondary Education in Spain (M = 13.18; SD = 1.35; 51.1% boys). We found that: (a) social anxiety has a direct positive effect on internal attributional style and a direct negative effect on self-concept; it also has an indirect negative effect on internal causal attributions; (b) prior performance (first academic trimester) exerts a direct positive effect on intelligence, achievement goals, academic self-concept, and final academic performance (third academic trimester), and a direct negative effect on internal causal attributions; it also has an indirect positive effect on causal attributions (internal and external), learning goals, achievement goals, and study strategies; (c) intelligence has a direct positive effect on learning goals and academic performance (third academic trimester); (d) academic self-concept has a direct positive effect on internal and external attributional style, and an indirect effect on learning goals, achievement goals, and learning strategies; (e) internal attributional style has a direct positive effect on learning strategies and learning goals, and an indirect positive effect on achievement goals and learning strategies; (f) external attributional style has a direct negative effect on learning strategies and learning goals and a direct positive effect on internal causal attributions; (g) learning goals have a direct positive effect on learning strategies and achievement goals. The structural equation model fit the data well (CFI = .91; RMSEA = .04), explaining 93.8% of the variance in academic performance. Finally, we emphasize that the new causal-explanatory model proposed in the present study represents a significant contribution in that it includes social anxiety as an explanatory variable of cognitive-motivational constructs.
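
As a worked illustration of how direct and indirect effects combine in such a path model, the short sketch below multiplies hypothetical standardized coefficients along one chain; the numbers are not the study's estimates.

```python
# Worked example of combining path coefficients into an indirect effect (hypothetical values).

# Hypothetical standardized paths:
#   social anxiety -> academic self-concept:        -0.30
#   academic self-concept -> internal attribution:   0.40
#   internal attribution -> learning strategies:     0.35
beta_sa_to_selfconcept = -0.30
beta_selfconcept_to_internal = 0.40
beta_internal_to_strategies = 0.35

# The indirect effect along a chain is the product of its path coefficients.
indirect_sa_on_strategies = (
    beta_sa_to_selfconcept * beta_selfconcept_to_internal * beta_internal_to_strategies
)
print(f"indirect effect of social anxiety on learning strategies: {indirect_sa_on_strategies:.3f}")
# -> -0.042: a small negative effect transmitted through self-concept and attributions
```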

Keywords: academic performance, adolescence, cognitive-motivational variables, social anxiety

Procedia PDF Downloads 330
1091 A Risk Assessment Tool for the Contamination of Aflatoxins on Dried Figs Based on Machine Learning Algorithms

Authors: Kottaridi Klimentia, Demopoulos Vasilis, Sidiropoulos Anastasios, Ihara Diego, Nikolaidis Vasileios, Antonopoulos Dimitrios

Abstract:

Aflatoxins are highly poisonous and carcinogenic compounds produced by species of the genus Aspergillus that can infect a variety of agricultural foods, including dried figs. Biological and environmental factors, such as the population, pathogenicity, and aflatoxigenic capacity of the strains, and the topography, soil, and climate parameters of the fig orchards, are believed to have a strong effect on aflatoxin levels. Existing methods for aflatoxin detection and measurement, such as high-performance liquid chromatography (HPLC) and enzyme-linked immunosorbent assay (ELISA), can provide accurate results, but the procedures are usually time-consuming, sample-destructive, and expensive. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the health and financial impact of a contaminated crop. Consequently, there is interest in developing a tool that predicts aflatoxin levels based on topography and soil analysis data of fig orchards. This paper describes the development of a risk assessment tool for the contamination of dried figs with aflatoxins, based on the location and altitude of the fig orchards, the population of the fungus Aspergillus spp. in the soil, and soil parameters such as pH, saturation percentage (SP), electrical conductivity (EC), organic matter, particle size analysis (sand, silt, clay), the concentration of exchangeable cations (Ca, Mg, K, Na), extractable P, and trace elements (B, Fe, Mn, Zn and Cu), by employing machine learning methods. In particular, the proposed method integrates three machine learning techniques, i.e., dimensionality reduction on the original dataset (principal component analysis), metric learning (Mahalanobis metric for clustering), and the k-nearest neighbors learning algorithm (KNN), into an enhanced model, with a mean performance of 85% in terms of the Pearson correlation coefficient (PCC) between observed and predicted values.
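
A rough sketch of the three-stage pipeline described above (PCA for dimensionality reduction, a Mahalanobis distance in the reduced space, and k-nearest-neighbour prediction) is given below. The learned MMC metric is approximated here by the inverse covariance of the PCA scores, and the data, feature count, and neighbourhood size are synthetic assumptions, not the tool's actual configuration.

```python
# Sketch: PCA + Mahalanobis-distance KNN regression on synthetic orchard data (illustrative only).
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 15))                                    # 60 orchards x 15 soil/topography features (synthetic)
y = X[:, 0] * 2.0 + X[:, 3] + rng.normal(scale=0.5, size=60)     # synthetic aflatoxin-level proxy

X_std = StandardScaler().fit_transform(X)
X_red = PCA(n_components=5).fit_transform(X_std)                 # dimensionality reduction

VI = np.linalg.inv(np.cov(X_red, rowvar=False))                  # stand-in for a learned Mahalanobis metric
knn = KNeighborsRegressor(n_neighbors=5, algorithm="brute",
                          metric="mahalanobis", metric_params={"VI": VI})
knn.fit(X_red[:45], y[:45])
pred = knn.predict(X_red[45:])
print("Pearson r on held-out orchards:", round(pearsonr(pred, y[45:])[0], 2))
```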

Keywords: aflatoxins, Aspergillus spp., dried figs, k-nearest neighbors, machine learning, prediction

Procedia PDF Downloads 182
1090 Robotic Exoskeleton Response During Infant Physiological Knee Kinematics

Authors: Breanna Macumber, Victor A. Huayamave, Emir A. Vela, Wangdo Kim, Tamara T. Chamber, Esteban Centeno

Abstract:

Spina bifida is a type of neural tube defect that affects the nervous system and can lead to problems such as total leg paralysis. Treatment requires physical therapy and rehabilitation. Robotic exoskeletons have been used in rehabilitation to train muscle movement and assist in injury recovery; however, current models focus on adult populations rather than on infants. The proposed framework aims to couple a musculoskeletal infant model with a robotic exoskeleton using vacuum-powered artificial muscles to provide rehabilitation to infants affected by spina bifida. The study that provided the input values for the robotic exoskeleton used motion capture technology to collect data from the spontaneous kicking movement of a 2.4-month-old infant lying supine. OpenSim was used to develop the musculoskeletal model, and inverse kinematics was used to estimate hip joint angles. A total of four kicks (A, B, C, D) were selected; the selection was based on range, transient response, and stable response. Kicks had at least 5° of range of motion, with a smooth transient response and a stable period. The robotic exoskeleton used a Vacuum-Powered Artificial Muscle (VPAM) whose structure comprised cells that were clipped in a collapsed state and unclipped when desired to simulate the infant’s age. The artificial muscle works with vacuum pressure: when air is removed, the muscle contracts, and when air is added, the muscle relaxes. Bench testing was performed using a 6-month-old infant mannequin. The previously developed exoskeleton worked well with controlled ranges of motion and frequencies, which are typical of rehabilitation protocols for infants with spina bifida. However, the random kicking motion in this study contained high-frequency kicks, and the exoskeleton was not able to accurately replicate all the investigated kicks. Kick ‘A’ had a greater error when compared to the other kicks. This study has the potential to advance the field of infant rehabilitation.
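
The kick-selection criterion (at least 5° of range of motion) can be expressed as a short filter over hip-angle traces, sketched below; the traces are synthetic sine waves, not the motion-capture data from the study.

```python
# Simple sketch of the kick-selection range criterion (synthetic hip-angle traces).
import numpy as np

def has_sufficient_range(hip_angle_deg, min_range_deg=5.0):
    """A kick qualifies if the hip joint angle spans at least min_range_deg."""
    return (np.max(hip_angle_deg) - np.min(hip_angle_deg)) >= min_range_deg

t = np.linspace(0.0, 1.0, 100)
candidate_kicks = {
    "kick_A": 20.0 + 8.0 * np.sin(2 * np.pi * 3.0 * t),   # ~16 deg range, fast kick, qualifies
    "kick_B": 15.0 + 1.5 * np.sin(2 * np.pi * 1.0 * t),   # ~3 deg range, rejected
}
selected = [name for name, angle in candidate_kicks.items() if has_sufficient_range(angle)]
print("kicks retained for exoskeleton replay:", selected)
```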

Keywords: musculoskeletal modeling, pediatrics, rehabilitation, soft robotics

Procedia PDF Downloads 115
1089 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data

Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin

Abstract:

High-resolution inline inspection (ILI) tools are used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e. individual anomalies) or as clusters (i.e. a colony of corrosion anomalies). Although ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools, due to limitations of the tools and associated sizing algorithms and to the detection threshold of the tools (i.e. the minimum detectable feature dimension). Quantifying the measurement error in ILI data is crucial for corrosion management and for developing maintenance strategies that satisfy safety and economic constraints. Studies on the measurement error associated with the length of corrosion anomalies (in the longitudinal direction of the pipeline) have been scarcely reported in the literature, and this error is investigated in the present study. Limitations in the ILI tool and the clustering process can sometimes cause clustering error, which is defined as the error introduced during the clustering process by including or excluding a single anomaly or a group of anomalies in or from a cluster. Clustering error has been found to be one of the biggest contributing factors to the relatively high uncertainties associated with ILI-reported anomaly length. As such, this study focuses on developing a consistent and comprehensive framework to quantify the measurement errors in the ILI-reported anomaly length by comparing the ILI data and corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on the ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada. Data analyses showed that the measurement error associated with the ILI-reported length of anomalies without clustering error, denoted as Type I anomalies, is markedly less than that for anomalies with clustering error, denoted as Type II anomalies. A methodology employing data mining techniques is further proposed to classify Type I and Type II anomalies based on the ILI-reported corrosion anomaly information.
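
A hedged sketch of the Type I/Type II classification step is given below, using a decision tree as one plausible data-mining classifier; the features, synthetic labels, and training data are placeholders, not the Alberta pipeline dataset or the paper's methodology.

```python
# Sketch: classifying anomalies into Type I (no clustering error) vs Type II (with clustering error)
# from ILI-reported attributes, on synthetic placeholder data (illustrative only).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 200
ili_length_mm = rng.uniform(5, 300, n)          # ILI-reported anomaly length
ili_depth_pct = rng.uniform(5, 60, n)           # ILI-reported depth (% wall thickness)
neighbours_within_25mm = rng.integers(0, 6, n)  # nearby anomalies, a proxy for clustering risk

# Synthetic rule: clustering error (Type II) is more likely for long anomalies with close neighbours.
is_type_two = ((neighbours_within_25mm >= 2) & (ili_length_mm > 100)).astype(int)

X = np.column_stack([ili_length_mm, ili_depth_pct, neighbours_within_25mm])
X_train, X_test, y_train, y_test = train_test_split(X, is_type_two, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 2))
```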

Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline

Procedia PDF Downloads 307