Search results for: pick up points
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2550

2130 Educational Turn towards Digitalization by Changing Leadership, Networks and Qualification Concepts

Authors: Patricia Girrbach

Abstract:

Society is currently facing a new and far-reaching technological revolution: digitalization. In order to face the related challenges, organizations have to be prepared; they need appropriate circumstances in order to cope with current issues concerning digital transformation processes. Digitalization has emerged as a top issue for companies and business leaders, and companies are under pressure to build a positive, productive digital culture. Indeed, organizations realize that they need to address this important issue: 87 percent of organizations cite culture and engagement as one of their top challenges in any change process, but especially in the digital turn. Executives can give their company a competitive advantage and attract top talent by having a strong workplace culture that supports digitalization, as many current studies attest. Digitally oriented companies can hire more easily, have the lowest voluntary turnover rates, deliver better customer service, and are more profitable over the long run. Against this background, it is important to provide companies with starting points and practical measures for reaching this goal. The major findings are that firms need to make sense of digitalization. In this context, they should focus on internal as well as external stakeholders. Furthermore, they should create certain working conditions and support the qualification of employees, e.g., through virtual reality. These measures can create positive experiences with digitalization and thereby ensure the support of staff for the digital turn. Based on several current studies and literature research, this paper provides concrete measures that enable companies to undertake the digital turn.
The aim of this paper is therefore to provide practical starting points that support both the education of employees through digitalization and the digital turn itself within the organization.

Keywords: digitalization, industry 4.0, education 4.0, virtual reality

Procedia PDF Downloads 160
2129 Fractal Analysis of Some Bifurcations of Discrete Dynamical Systems in Higher Dimensions

Authors: Lana Horvat Dmitrović

Abstract:

The main purpose of this paper is to study the box dimension as a fractal property of bifurcations of discrete dynamical systems in higher dimensions. The paper contains a fractal analysis of the orbits near hyperbolic and non-hyperbolic fixed points of discrete dynamical systems. It is already known that in the one-dimensional case the orbit near a hyperbolic fixed point has box dimension equal to zero, while the orbit near a non-hyperbolic fixed point has strictly positive box dimension, which is connected to the non-degeneracy condition of the corresponding bifurcation. One of the main results of this paper is the generalisation of these results about the box dimension near hyperbolic and non-hyperbolic fixed points to higher dimensions. In the process of determining the box dimension, the restriction of systems to stable, unstable, and center manifolds, the Lipschitz property of the box dimension, and the notion of projective box dimension are used. The analysis of bifurcations in higher dimensions with one multiplier on the unit circle is done by using normal forms on one-dimensional center manifolds. This specific change in the box dimension of an orbit at the moment of bifurcation has already been explored for some bifurcations in one and two dimensions, where it was shown that specific values of the box dimension correspond to particular bifurcations such as the fold, flip, cusp, or Neimark-Sacker bifurcation. This paper further explores this connection between the box dimension, as a fractal property, and some specific bifurcations in higher dimensions, such as the fold-flip and flip-Neimark-Sacker bifurcations. Furthermore, an application of the results to the unit-time map of a continuous dynamical system near hyperbolic and non-hyperbolic singularities is presented. In that way, box dimensions specific to certain bifurcations of continuous systems can be obtained.
The approach to bifurcation analysis using the box dimension as a specific fractal property of orbits can lead to a better understanding of the bifurcation phenomenon. It could also be useful in detecting the existence or nonexistence of bifurcations in discrete and continuous dynamical systems.
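For reference, the box dimension invoked throughout is the Minkowski–Bouligand (box-counting) dimension: writing N(ε) for the minimal number of ε-boxes needed to cover a bounded set A,

```latex
\dim_B A = \lim_{\varepsilon \to 0^{+}} \frac{\log N(\varepsilon)}{\log(1/\varepsilon)}
```

when the limit exists; the lower and upper box dimensions are defined with lim inf and lim sup otherwise.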

Keywords: bifurcation, box dimension, invariant manifold, orbit near fixed point

Procedia PDF Downloads 255
2128 Laser Therapy in Patients with Rheumatoid Arthritis: A Clinical Trial

Authors: Joao Paulo Matheus, Renan Fangel

Abstract:

Rheumatoid arthritis is a chronic, inflammatory, systemic and progressive disease that affects the synovial joints bilaterally, causing definitive orthopedic damage. It has a higher prevalence in postmenopausal female patients. It is a disabling disease that causes joint deformities that may compromise the functionality of the affected segment. The aim of this study was to evaluate the influence of low-intensity therapeutic laser on the perception of pain and quality of life in patients with rheumatoid arthritis. This is a randomized clinical study involving 6 women with a mean age of 56.8±6.3 years. Exclusion criteria: patients with acute pain, chronic infectious disease, or an underlying acute or chronic disease. A GaAlAs laser with a wavelength of 808 nm, power of 100 mW, beam output area of 0.028 cm², and power density of 3.57 W/cm² was used. The laser was applied at pre-defined points in the interphalangeal and metacarpophalangeal joints, totaling 24 points, 2 times a week, for 4 weeks, totaling 8 sessions. The Pain Inventory (IBD) and the Visual Analogue Scale (VAS) were used to analyze pain, and the WHOQOL-bref to assess quality of life. There was no statistically significant difference in pain-inventory scores between the start (5.67±2.66) and the end (4.67±3.78) of treatment (p=0.70), nor in VAS scores (5.67±2.66 vs. 4.67±3.78; p=0.68). The overall mean quality of life obtained by the questionnaire at the start of treatment was 42.3±7.6, while at the end of treatment it was 58.5±7.6 (p=0.01); the domains of the questionnaire with significant differences were the psychological domain, 42.9±6.8 vs. 66.7±12.9 (p=0.004), the social domain, 39.9±5.7 vs. 68.1±6.3 (p=0.0005), and the environmental domain, 36.3±7.3 vs. 56.3±12.5 (p=0.003). It can be concluded that the low-intensity therapeutic laser did not produce significant changes in the pain perception of rheumatoid arthritis patients.
However, there was an improvement in patients' quality of life in the psychological, social and environmental aspects.

Keywords: laser therapy, pain, quality of life, rheumatoid arthritis

Procedia PDF Downloads 251
2127 Diagnosing Depression during Pregnancy: Identifying Risk Factors of Prenatal Depression in Polish Women

Authors: Olga Plaza, Katarzyna Kosinska-Kaczynska, Stepan Feduniw, Dominika Pazdzior, Kinga Zebrowska, Katarzyna Kwiatkowska

Abstract:

Introduction: The main causes of depression among pregnant women remain unclear. However, it is clear that pregnancy carries a higher risk of depression occurrence. Left untreated, prenatal depression can cause serious maternal and neonatal complications. Aim of the study: The aim of the study was to define potential risk factors of prenatal depression and to assess the frequency of its occurrence among pregnant women. Material and Methods: A prospective cross-sectional study was performed among 346 women. The self-composed questionnaire, consisting of 46 questions, was distributed via the Internet between November 2017 and March 2018. The questionnaire contained the Edinburgh Postnatal Depression Scale (EPDS), on which a score of 13 or more points (out of 30) suggested possible prenatal depression. Statistical analysis was performed with Pearson's chi-squared test; p < 0.05 was considered significant. Results: 37.57% (n=130) of women scored 13 or more points. Women with depressive symptoms (DS) reported lack of support from the partner (46.9% vs. 16.2%; p < 0.001) as well as from other family members (40.8% vs. 14.4%; p < 0.001), the current pregnancy being unplanned (21.5% vs. 12.5%; p=0.014), and low socio-economic status (10% vs. 0.9%; p < 0.001). Both early and advanced maternal age seemed to play a role in the occurrence of DS: 40.8% of women aged 17-24 declared symptoms (vs. 28.7%; p < 0.01), as did 6.2% of mothers aged ≥37 (vs. 0.5%; p < 0.001). Smoking during pregnancy was also more frequent among patients with DS (31.5% vs. 18.1%; p=0.004). A previous diagnosis of depression or of other mood disorders significantly increased the chance of DS occurrence (17.7% vs. 4.6%, p < 0.001, and 49.2% vs. 25%, p < 0.001, respectively). Parental diagnosis of mood disorders and of other mental disorders was also more frequent in this group of patients (24.6% vs. 15.7%, p=0.026, and 26.4% vs. 9.7%, p < 0.001, respectively).
Only 23.8% of women with DS sought help from healthcare professionals, with 21.5% receiving pharmacological treatment. Conclusions: Pregnant women often report DS. Evaluation of risk factors for DS and possible prenatal depression is essential for proper depression screening among pregnant women.
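The group comparisons above rest on the chi-squared test for 2x2 tables (symptom present/absent vs. risk factor present/absent), which has a simple closed form. A minimal sketch, using invented counts purely for illustration (these are not the study's data):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for the 2x2 contingency table
    [[a, b], [c, d]], without continuity correction:
    chi2 = n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 40/60 exposed with/without symptoms, 20/80 unexposed.
stat = chi2_2x2(40, 60, 20, 80)
```

In practice one would compare the statistic against the chi-squared distribution with 1 degree of freedom (e.g. via `scipy.stats.chi2_contingency`, which also returns the p-value directly).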

Keywords: obstetrics, polish women, prenatal care, prenatal depression, risk factors

Procedia PDF Downloads 213
2126 A Novel Method for Face Detection

Authors: H. Abas Nejad, A. R. Teymoori

Abstract:

Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised learning based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, etc., in the limited amount of training data. Moreover, processing every frame to classify emotions is not required, as the user stays neutral most of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, and thereby bypassing those frames in emotion classification, saves computational power. In this work, we propose a lightweight neutral vs. emotion classification engine, which acts as a preprocessor to traditional supervised emotion classification approaches. It dynamically learns neutral appearance at Key Emotion (KE) points using a textural statistical model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the textural statistical model. Robustness to dynamic shifts of KE points is achieved by evaluating similarities on a subset of neighborhood patches around each KE point, using prior information about the directionality of the specific facial action units acting on the respective KE point. As a result, the proposed method improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
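The keywords mention Local Binary Pattern histograms, a common choice for the kind of textural statistical model described above. A minimal sketch of the basic 3x3 LBP operator (illustrative only; the abstract does not specify the exact feature pipeline):

```python
def lbp_code(patch):
    """8-bit LBP code of a 3x3 patch: each neighbor >= center sets one bit,
    walking the 8 neighbors clockwise from the top-left corner."""
    c = patch[1][1]
    neighbors = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(neighbors):
        if patch[i][j] >= c:
            code |= 1 << bit
    return code

# Bright top row over a darker center: only the first three bits are set.
code = lbp_code([[9, 9, 9],
                 [0, 5, 0],
                 [0, 0, 0]])
print(code)  # 7
```

A histogram of such codes over a patch around each KE point then serves as the texture descriptor that is compared between the reference neutral frames and incoming frames.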

Keywords: neutral vs. emotion classification, Constrained Local Model, Procrustes analysis, Local Binary Pattern Histogram, statistical model

Procedia PDF Downloads 340
2125 Aquatic Sediment and Honey of Apis mellifera as Bioindicators of Pesticide Residues

Authors: Luana Guerra, Silvio C. Sampaio, Vladimir Pavan Margarido, Ralpho R. Reis

Abstract:

Brazil is the world's largest consumer of pesticides. The excessive use of these compounds has negative impacts on animal and human life, the environment, and food security. Bees, crucial for pollination, are exposed to pesticides during the collection of nectar and pollen, posing risks to their health and to the food chain, including honey contamination. Aquatic sediments are also affected, impacting water quality and the microbiota. Therefore, the analysis of aquatic sediments and bee honey is essential to identify environmental contamination and monitor ecosystems. The aim of this study was to use honey from honeybees (Apis mellifera) and aquatic sediment as bioindicators of environmental contamination by pesticides and to relate them to agricultural use in the surrounding areas. The sediment and honey samples were collected in two stages: the first in the Bituruna municipality region in the second half of 2022, and the second in the regions of Laranjeiras do Sul, Quedas do Iguaçu, and Nova Laranjeiras in the first half of 2023. In total, 10 collection points were selected, 5 per stage, and one sediment sample and one honey sample were collected at each point, totaling 20 samples. The honey and sediment samples were analyzed at the Laboratory of the Paraná Institute of Technology. The selected extraction method was QuEChERS, and the components present in the samples were analyzed by liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS).
The pesticides azoxystrobin, epoxiconazole, boscalid, carbendazim, haloxyfop, fomesafen, fipronil, chlorantraniliprole, imidacloprid, and bifenthrin were detected in the sediment samples from the study area in Laranjeiras do Sul, Paraná, with carbendazim showing the highest concentration (0.47 mg/kg). The honey samples obtained from the apiaries showed satisfactory results, with no detection or quantification of the analyzed pesticides, except at Point 9, where the fungicide tebuconazole was detected.

Keywords: contamination, water research, agrochemicals, beekeeping activity

Procedia PDF Downloads 37
2124 GNSS-Aided Photogrammetry for Digital Mapping

Authors: Muhammad Usman Akram

Abstract:

This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site to be used in future planning and development (P&D) or for further examination, exploration, research, and inspection. Surveying and mapping in hard-to-access and hazardous areas are very difficult with traditional techniques and methodologies; these are also time-consuming and labor-intensive and give less precision with limited data. In comparison, advanced techniques require less manpower and provide more precise output with a wide variety of data sets. In this experiment, an aerial photogrammetry technique is used in which a UAV flies over an area, captures geocoded images, and produces a three-dimensional (3-D) model. The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as ground control points (GCPs) using a Differential Global Positioning System (DGPS) in PPK or RTK mode. The raw data collected by the UAV and DGPS are then processed in digital image processing programs and computer-aided design software, from which we obtain a dense point cloud, a digital elevation model (DEM), and an orthophoto. The imagery is converted into geospatial data by digitizing over the orthophoto, and the DEM is further converted into a digital terrain model (DTM) for contour generation or a digital surface. The result is a digital map of the surveyed area. In conclusion, the processed data are compared with exact measurements taken on site; the error is accepted if it does not exceed the survey accuracy limits set by the concerned institutions.
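One of the flight parameters above, the ground sampling distance, can be estimated directly from camera geometry and flight altitude. A minimal sketch; the sensor values (13.2 mm sensor width, 5472 px image width, 8.8 mm focal length) are illustrative assumptions, not values from this work:

```python
def ground_sampling_distance(sensor_width_mm, image_width_px, focal_length_mm, altitude_m):
    """Ground distance covered by one pixel, in cm/pixel.

    GSD = (sensor width * flight altitude) / (focal length * image width);
    the factor 100 converts metres to centimetres."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Assumed 1-inch sensor (13.2 mm wide, 5472 px) with an 8.8 mm lens at 100 m.
gsd = ground_sampling_distance(13.2, 5472, 8.8, 100)
```

Note that halving the flight altitude halves the GSD, i.e., doubles the ground resolution, at the cost of covering less area per image.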

Keywords: photogrammetry, post-processing kinematics, real-time kinematics, manual data inquiry

Procedia PDF Downloads 33
2123 Exergy Analysis of a Vapor Absorption Refrigeration System Using Carbon Dioxide as Refrigerant

Authors: Samsher Gautam, Apoorva Roy, Bhuvan Aggarwal

Abstract:

Vapor absorption refrigeration systems can replace vapor compression systems in many applications, as they can operate on a low-grade heat source and are environment-friendly. Widely used refrigerants such as CFCs and HFCs cause significant global warming. Natural refrigerants can be an alternative to them, among which carbon dioxide is promising for use in automotive air conditioning systems. Its inherent safety, ability to withstand high pressure, and high heat transfer coefficient, coupled with easy availability, make it a likely choice of refrigerant. Various properties of the ionic liquid [bmim][PF₆], such as non-toxicity, stability over a wide temperature range, and the ability to dissolve gases like carbon dioxide, make it a suitable absorbent for a vapor absorption refrigeration system. In this paper, an absorption chiller consisting of a generator, condenser, evaporator, and absorber was studied at an operating temperature of 70 °C. A thermodynamic model was set up using the Peng-Robinson equation of state to predict the behavior of the refrigerant and absorbent pair at different points in the system. A MATLAB code was used to obtain the values of enthalpy and entropy at selected points in the system. The exergy destruction in each component and the exergetic coefficient of performance (ECOP) of the system were calculated by performing an exergy analysis based on the second law of thermodynamics. Graphs were plotted of the ECOP obtained under varying operating conditions, and the effect of every component on the ECOP was examined. The exergetic coefficient of performance was found to be lower than the coefficient of performance based on the first law of thermodynamics.
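As a sketch of the second-law bookkeeping involved (the abstract's MATLAB property model is not reproduced here), each heat flow can be weighted by its Carnot factor to form the exergetic COP. All numbers below are illustrative assumptions except the 70 °C generator temperature, which comes from the abstract:

```python
T0 = 298.15      # dead-state (ambient) temperature, K
T_gen = 343.15   # generator temperature, K (70 C, as in the abstract)
T_evap = 278.15  # evaporator temperature, K (assumed)
Q_gen = 14.0     # generator heat input, kW (illustrative)
Q_evap = 10.0    # cooling load, kW (illustrative)

cop = Q_evap / Q_gen                      # first-law COP
exergy_in = Q_gen * (1 - T0 / T_gen)      # exergy supplied with the driving heat
exergy_out = Q_evap * (T0 / T_evap - 1)   # exergy of the cooling effect
ecop = exergy_out / exergy_in             # exergetic COP (ECOP)
```

Because both heat flows are discounted by their Carnot factors, the ECOP comes out well below the first-law COP, mirroring the paper's conclusion.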

Keywords: [bmim][PF₆] as absorbent, carbon dioxide as refrigerant, exergy analysis, Peng-Robinson equations of state, vapor absorption refrigeration

Procedia PDF Downloads 289
2122 Formation of In-situ Ceramic Phase in N220 Nano Carbon Containing Low Carbon MgO-C Refractory

Authors: Satyananda Behera, Ritwik Sarkar

Abstract:

In the iron and steel industries, MgO–C refractories are widely used in basic oxygen furnaces, electric arc furnaces, and steel ladles due to their excellent corrosion resistance, thermal shock resistance, and other excellent hot properties. Conventional magnesia carbon refractories contain about 8-20 wt% carbon, but the use of carbon is also associated with disadvantages such as oxidation, low fracture strength, high heat loss, and higher carbon pick-up in steel. The challenge is therefore an MgO-C refractory with low carbon content that does not compromise the beneficial properties. Nano carbon, having finer particles, can mix and distribute uniformly within the entire matrix and can result in improved mechanical, thermo-mechanical, corrosion, and other refractory properties. Previous experience with the use of nano carbon in low carbon MgO-C refractories has indicated an optimum nano carbon content of around 1 wt%. This optimum content was used in MgO-C compositions with flaky graphite, followed by aluminum and silicon metal powder as antioxidants. These low carbon MgO-C refractory compositions were prepared by conventional manufacturing techniques. In parallel, a conventional MgO-C refractory containing 16 wt% flaky graphite was prepared under similar conditions. The developed products were characterized for various refractory-related properties. The nano carbon containing compositions showed better mechanical and thermo-mechanical properties and better oxidation resistance than the conventional composition. The improvement in properties is associated with the formation of in-situ ceramic phases such as aluminum carbide, silicon carbide, and magnesium aluminum spinel. The higher surface area and higher reactivity of N220 nano carbon black resulted in greater formation of in-situ ceramic phases, even at a much lower amount.
Nano carbon containing compositions were thus found to give MgO-C refractories with improved properties compared to conventional ones at a much lower total carbon content.

Keywords: N220 nano carbon black, refractory properties, conventional manufacturing techniques, conventional magnesia carbon refractories

Procedia PDF Downloads 367
2121 Graph-Theoretical Construction of Discrete-Time Share Price Paths from Matroids

Authors: Min Wang, Sergey Utev

Abstract:

The lessons of the 2007-09 global financial crisis have driven scientific research that considers the design of new methodologies and financial models for the global market. The quantum mechanics approach was introduced into unpredictable stock market modeling. One famous quantum tool is the Feynman path integral method, which was used to model insurance risk by Tamturk and Utev and adapted to formalize path-dependent option pricing by Hao and Utev. This research is based on the path-dependent calculation method, which is motivated by the Feynman path integral method. Path calculation can be studied in two ways: one is labeling, the other computational. Labeling is part of the representation of objects, and generating functions can provide many different ways of representing share price paths. In this paper, recent work on the graph-theoretical construction of individual share price paths via matroids is presented. Firstly, the theory of matroids is reviewed, the relationship between lattice path matroids and Tutte polynomials is studied, and ways to connect points in the lattice path matroid with Tutte polynomials are suggested. Secondly, it is found that a general binary tree can be validly constructed from a connected lattice path matroid rather than from a general lattice path matroid. Lastly, it is suggested that share price paths can be represented via a general binary tree, and an algorithm is developed to construct share price paths from general binary trees. A relationship is also provided between lattice integer points and Tutte polynomials of a transversal matroid. Using this connection together with the algorithm, a share price path can be constructed from a given connected lattice path matroid.
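As a toy illustration of the last step, reading share price paths off a binary tree, the sketch below maps each root-to-leaf branch pattern of a binary tree to a binomial-style price path. The tree encoding and the parameters s0, u, d are illustrative assumptions, not the authors' matroid-based algorithm:

```python
# A tree node is a (left, right) pair; None marks a leaf.
def branch_patterns(tree, prefix=()):
    """Yield every root-to-leaf branch pattern as a tuple of 'd'/'u' moves."""
    if tree is None:
        yield prefix
        return
    left, right = tree
    yield from branch_patterns(left, prefix + ('d',))
    yield from branch_patterns(right, prefix + ('u',))

def price_path(moves, s0=100.0, u=1.1, d=0.9):
    """Read a branch pattern as a discrete-time share price path."""
    prices = [s0]
    for m in moves:
        prices.append(prices[-1] * (u if m == 'u' else d))
    return prices

# A full depth-2 tree: every branch pattern becomes a two-step price path.
tree = ((None, None), (None, None))
paths = [price_path(p) for p in branch_patterns(tree)]
```

In this toy version each left branch is a down move and each right branch an up move, so a full depth-n tree reproduces the 2^n paths of an n-step binomial model; the paper's construction restricts the tree via the lattice path matroid instead.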

Keywords: combinatorial construction, graphical representation, matroid, path calculation, share price, Tutte polynomial

Procedia PDF Downloads 139
2120 Charter versus District Schools and Student Achievement: Implications for School Leaders

Authors: Kara Rosenblatt, Kevin Badgett, James Eldridge

Abstract:

There is a preponderance of information regarding the overall effectiveness of charter schools and their ability to increase academic achievement compared to traditional district schools. Most research on the topic focuses on comparing long- and short-term outcomes, academic achievement in mathematics and reading, and locale (i.e., urban vs. rural). While lingering unanswered questions regarding effectiveness continue to loom for school leaders, data on charter schools suggest that enrollment increases by 10% annually and that charter schools educate more than 2 million U.S. students across 40 states each year. Given the increasing share of U.S. students educated in charter schools, it is important to better understand possible differences in student achievement, defined in multiple ways, for students in charter schools and for those in Independent School District (ISD) settings in the state of Texas. Data were retrieved from the Texas Education Agency’s (TEA) repository of annually organized data available on the TEA website. Specific data points and definitions of achievement were based on characterizations of achievement found in the relevant literature. Specific data points included but were not limited to graduation rate, student performance on standardized testing, and teacher-related factors such as experience and longevity in the district. Initial findings indicate some similarities with the current literature on long-term student achievement in English/Language Arts; however, the findings differ substantially from other recent research on long-term student achievement in social studies. There are also a number of interesting findings related to differences in achievement between students in charters and ISDs and within different types of charter schools in Texas. In addition to findings, implications for leadership in different settings will be explored.

Keywords: charter schools, ISDs, student achievement, implications for PK-12 school leadership

Procedia PDF Downloads 129
2119 A Tool to Provide Advanced Secure Exchange of Electronic Documents through Europe

Authors: Jesus Carretero, Mario Vasile, Javier Garcia-Blas, Felix Garcia-Carballeira

Abstract:

Supporting cross-border, secure, and reliable exchange of data and documents and promoting data interoperability are critical for Europe to enhance sectors such as eFinance, eJustice, and eHealth. This work presents the status and results of the European project MADE, a research project funded by the Connecting Europe Facility Programme, to provide secure e-invoicing and e-document exchange systems among European countries in compliance with the eIDAS Regulation (Regulation EU 910/2014 on electronic identification and trust services). The main goal of MADE is to develop six new AS4 access points and SMPs in Europe to provide secure document exchange using the eDelivery DSI (Digital Service Infrastructure) among both private and public entities. Moreover, the project demonstrates the feasibility and value of the solution through several months of interoperability among the providers of the six partners in different EU countries. To achieve those goals, we followed a methodology that first set a common background for requirements in the partner countries and the European regulations. Then, the partners implemented access points in each country, including their service metadata publishers (SMPs), to give their clients access to the pan-European network. Finally, we set up interoperability tests with the other access points of the consortium. The tests include the use of each entity's production-ready information systems that process the data, to confirm all steps of the data exchange. For the access points, we chose AS4 over other existing alternatives because it supports multiple payloads, native web services, pulling facilities, lightweight client implementations, modern crypto algorithms, and more authentication types, such as username-password, X.509, and SAML authentication.
The main contribution of the MADE project is to open the path for European companies to use eDelivery services for the cross-border exchange of electronic documents following PEPPOL (Pan-European Public Procurement Online), based on the e-SENS AS4 profile. It also includes the development and integration of new components, the integration of new and existing logging and traceability solutions, and maintenance tool support for PKI. Moreover, we have found that most companies are still not ready to support these profiles, so further efforts will be needed to promote this technology among companies. The consortium includes nine partners. Two are research institutions: University Carlos III of Madrid (coordinator) and Universidad Politecnica de Valencia. The other seven (EDICOM, BIZbrains, Officient, Aksesspunkt Norge, eConnect, LMT group, Unimaze) are private entities specialized in the secure delivery of electronic documents and information integration brokerage in their respective countries. To achieve cross-border operability, they will include AS4 and SMP services in their platforms according to the EU Core Service Platform. The MADE project is instrumental in testing the feasibility of cross-border document eDelivery in Europe. If successful, not only e-invoices but many other types of documents will be securely exchanged across Europe, forming the base to extend the network to the whole of Europe. This project has been funded under the Connecting Europe Facility, agreement number INEA/CEF/ICT/A2016/1278042, Action No. 2016-EU-IA-0063.

Keywords: security, e-delivery, e-invoicing, e-document exchange, trust

Procedia PDF Downloads 267
2118 Strategic Policy Formulation to Ensure the Atlantic Forest Regeneration

Authors: Ramon F. B. da Silva, Mateus Batistella, Emilio Moran

Abstract:

Although two Forest Transition (FT) pathways exist, economic development and forest scarcity, many contexts shape the model of FT observed in each particular region. This means that local conditions, such as relief, soil quality, historic land use/cover, public policies, the engagement of society in compliance with legal regulations, and the action of enforcement agencies, represent dimensions which, combined, create contexts that enable forest regeneration. From this perspective, we can understand the regeneration of native vegetation cover in the Paraíba Valley (Atlantic Forest biome), ongoing since the 1960s. This research analyzed public information, land use/cover maps, and environmental public policies, and interviewed 17 stakeholders from federal and state agencies, municipal environmental and agricultural departments, civil society, and farmers, aiming to comprehend the contexts behind the forest regeneration in the Paraíba Valley, Sao Paulo State, Brazil. The first policy to protect forest vegetation was the Forest Code no. 4771 of 1965, but this legislation did not promote the increase of forest, only the control of deforestation; this was not enough for the Atlantic Forest biome, which reached its peak of degradation in 1985 (8% of Atlantic Forest remnants). We concluded that Brazilian environmental legislation acted in a strategic way to promote the increase of forest cover (102% regeneration between 1985 and 2011) from 1993, when Federal Decree no. 750 declared the initial and advanced stages of secondary succession protected against any kind of exploitation or degradation, ensuring the forest regeneration process. Strategic policy formulation was also observed in Sao Paulo State Law no. 6171 of 1988, which prohibited the use of fire to manage the agricultural landscape, triggering forest regeneration in former pasture areas.

Keywords: forest transition, land abandonment, law enforcement, rural economic crisis

Procedia PDF Downloads 555
2117 Uncanny Orania: White Complicity as the Abject of the Discursive Construction of Racism

Authors: Daphne Fietz

Abstract:

This paper builds on a reflection on an autobiographical experience of uncanniness during fieldwork in the white Afrikaner settlement Orania in South Africa. Drawing on Kristeva’s theory of abjection to establish a theory of Whiteness based on boundary threats, it is argued that the uncanny experience, as the emergence of the abject, points to a moment of crisis of the author’s Whiteness. The emanating abject directs the author to her closeness or convergence with Orania's inhabitants, that is, a reciprocity based on mutual Whiteness. The experienced confluence appeals to the author’s White complicity in racism. With recourse to Butler’s theory of subjectivation, the abject, White complicity, inhabits both the outside of a discourse on racism and of the 'self', as 'I' establish myself in relation to discourse. In this view, the qualities of the experienced abject are linked to the abject of discourse on racism, or, in other words, its frames of intelligibility. It then becomes clear that discourse on (overt) racism functions as a necessary counter-image through which White morality is established rather than questioned, because here, by White reasoning, the abject of complicity in racism is successfully repressed, curbed, as completely impossible in the binary construction. Hence, such discourse risks preserving racism in its pre-discursive and structural forms as long as its critique does not encompass its own location and performance in discourse. Discourse on overt racism is indispensable to White ignorance, as it covers underlying racism and pre-empts further critique. This understanding directs us towards a form of critique that necessitates self-reflection, uncertainty, and vigilance, which will be referred to as a discourse of relationality.
Such a discourse diverges from the presumption of a detached author as a point of reference; instead, it takes attachment, dependence and mutuality as its point of departure and embraces the visceral as a resource of knowledge of relationality. A discourse of relationality points to another possibility of White engagement with Whiteness and racism and further promotes a conception of responsibility which allows for and highlights dispossession and relationality, in contrast to single agency and guilt.

Keywords: abjection, discourse, relationality, the visceral, whiteness

Procedia PDF Downloads 158
2116 Structural Design for Effective Load Balancing of the Iron Frame in Manhole Lid

Authors: Byung Il You, Ryun Oh, Gyo Woo Lee

Abstract:

A manhole is a facility that provides access for the cleaning and inspection of sewers, and its covering is called the manhole lid. Manhole lids are typically made of cast iron. Due to their heavy weight, the installation and maintenance of cast iron manhole lids are not easy, and electrical shock and corrosion aging can cause critical problems. Manufacturing the manhole body and lid from fiber-reinforced composite material can reduce the weight considerably compared to a cast iron manhole, but fiber reinforcement alone can hardly sustain the heavy load, so the method of embedding an iron frame by double injection molding of the composite material has been widely proposed. Reflecting this market situation, in this study the structural design of the iron frame for a composite manhole lid was carried out. Structural analysis with computer simulation was conducted to distribute the load effectively over the iron frame. In addition, manufacturing costs were assessed by comparing the weights and the number of welding spots of the frames. Although the cross-sectional area is only up to 38% of that of the basic solid form, the maximum von Mises stress increases at least about 7 times locally near the rim, and the maximum strain in the central part of the lid increases about 5.5 times. The number of welding points, which is related to the manufacturing cost, increased gradually with more complicated shapes. A higher arch at the center of the lid might also yield better results, but considering the economic aspect of composite fabrication, we kept the arch at the center of the lid to the same thickness as the frame. Additionally, in consideration of the number of welding points, we selected the hexagonal frame as the optimal shape. Acknowledgment: These are results of a study on the 'Leaders Industry-university Cooperation' Project, supported by the Ministry of Education (MOE).

Keywords: manhole lid, iron frame, structural design, computer simulation

Procedia PDF Downloads 275
2115 A 1H NMR-Linked PCR Modelling Strategy for Tracking the Fatty Acid Sources of Aldehydic Lipid Oxidation Products in Culinary Oils Exposed to Simulated Shallow-Frying Episodes

Authors: Martin Grootveld, Benita Percival, Sarah Moumtaz, Kerry L. Grootveld

Abstract:

Objectives/Hypotheses: The adverse health effect potential of dietary lipid oxidation products (LOPs) has evoked much clinical interest. Therefore, we employed a 1H NMR-linked Principal Component Regression (PCR) chemometrics modelling strategy to explore relationships between data matrices comprising (1) aldehydic LOP concentrations generated in culinary oils/fats when exposed to laboratory-simulated shallow frying practices, and (2) the prior saturated (SFA), monounsaturated (MUFA) and polyunsaturated fatty acid (PUFA) contents of such frying media (FM), together with their heating time-points at a standard frying temperature (180 °C). Methods: Corn, sunflower, extra virgin olive, rapeseed, linseed, canola, coconut and MUFA-rich algae frying oils, together with butter and lard, were heated according to laboratory-simulated shallow-frying episodes at 180 °C, and FM samples were collected at time-points of 0, 5, 10, 20, 30, 60, and 90 min. (n = 6 replicates per sample). Aldehydes were determined by 1H NMR analysis (Bruker AV 400 MHz spectrometer). The first (dependent output variable) PCR data matrix comprised aldehyde concentration scores vectors (PC1* and PC2*), whilst the second (predictor) one incorporated those from the fatty acid content/heating time variables (PC1-PC4) and their first-order interactions. Results: Structurally complex trans,trans- and cis,trans-alka-2,4-dienals, 4,5-epoxy-trans-2-alkenals and 4-hydroxy-/4-hydroperoxy-trans-2-alkenals (group I aldehydes predominantly arising from PUFA peroxidation) strongly and positively loaded on PC1*, whereas n-alkanals and trans-2-alkenals (group II aldehydes derived from both MUFA and PUFA hydroperoxides) strongly and positively loaded on PC2*. 
PCR analysis of these scores vectors (SVs) demonstrated that PCs 1 (positively-loaded linoleoylglycerols and [linoleoylglycerol]:[SFA] content ratio), 2 (positively-loaded oleoylglycerols and negatively-loaded SFAs), 3 (positively-loaded linolenoylglycerols and [PUFA]:[SFA] content ratios), and 4 (exclusively orthogonal sampling time-points) all powerfully contributed to aldehydic PC1* SVs (p < 10⁻³ to < 10⁻⁹), as did all PC1-3 x PC4 interaction ones (p < 10⁻⁵ to < 10⁻⁹). PC2* was also markedly dependent on all the above PC SVs (PC2 > PC1 and PC3), and the interactions of PC1 and PC2 with PC4 (p < 10⁻⁹ in each case), but not the PC3 x PC4 contribution. Conclusions: NMR-linked PCR analysis is a valuable strategy for (1) modelling the generation of aldehydic LOPs in heated cooking oils and other FM, and (2) tracking their unsaturated fatty acid (UFA) triacylglycerol sources therein.
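As a rough, self-contained illustration of the two-step PCR strategy described above (synthetic data only, not the study's NMR measurements; the feature layout and coefficients are invented), the fit of principal component scores followed by regression might be sketched as:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical predictor matrix: columns stand in for fatty acid contents
# (SFA, MUFA, PUFA fractions) and heating time for 60 simulated oil samples.
X = rng.random((60, 4))
# Hypothetical response: total aldehyde concentration, driven mostly by the
# PUFA fraction and heating time (mimicking group I aldehyde behaviour).
y = 3.0 * X[:, 2] * X[:, 3] + 0.1 * rng.standard_normal(60)

# Step 1: compress the predictors into orthogonal principal component scores.
pca = PCA(n_components=4)
scores = pca.fit_transform(X)

# Step 2: regress the aldehyde response on the PC scores (as in the abstract,
# first-order interactions between score columns could also be appended).
model = LinearRegression().fit(scores, y)
r2 = model.score(scores, y)
print(f"R^2 of PCR fit: {r2:.2f}")
```

In practice fewer components than predictors would be retained, which is what distinguishes PCR from ordinary least squares on the raw variables.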

Keywords: frying oils, lipid oxidation products, frying episodes, chemometrics, principal component regression, NMR Analysis, cytotoxic/genotoxic aldehydes

Procedia PDF Downloads 172
2114 Hydro Geochemistry and Water Quality in a River Affected by Lead Mining in Southern Spain

Authors: Rosendo Mendoza, María Carmen Hidalgo, María José Campos-Suñol, Julián Martínez, Javier Rey

Abstract:

The impact of mining environmental liabilities and mine drainage on surface water quality has been investigated in the hydrographic basin of the La Carolina mining district (southern Spain). This abandoned mining district is characterized by the existence of important mineralizations of sulfoantimonides of Pb - Ag, and sulfides of Cu - Fe. All surface waters reach the main river of this mining area, the Grande River, which ends its course in the Rumblar reservoir. This waterbody is intended to supply water to 89,000 inhabitants, as well as for irrigation and livestock. Therefore, the analysis and control of the metal(loid) concentrations in these surface waters is an important issue because of the potential pollution derived from metallic mining. A hydrogeochemical campaign consisting of 20 water sampling points was carried out in the hydrographic network of the Grande River, as well as two sampling points in the Rumblar reservoir and at the main tailings impoundment draining to the river. Although acid mine drainage (pH below 4) is discharged into the Grande river from some mine adits, the pH values in the river water are always neutral or slightly alkaline. This is mainly the result of a dilution process of the small volumes of mine waters by the net alkaline waters of the river. However, during the dry season, the surface waters present high mineralization due to a constant discharge from the abandoned flooded mines and a decrease in the contribution of surface runoff. The concentrations of dissolved Cd and Pb in the water reach values of 2 and 81 µg/l, respectively, exceeding the limits established by the Environmental Quality Standard for surface water. In addition, the concentrations of dissolved As, Cu, and Pb in the waters of the Rumblar reservoir reached values of 10, 20, and 11 µg/l, respectively. These values are higher than the maximum allowable concentrations for human consumption, a circumstance that is especially alarming.

Keywords: environmental quality, hydrogeochemistry, metal mining, surface water

Procedia PDF Downloads 144
2113 Hydrographic Mapping Based on the Concept of Fluvial-Geomorphological Auto-Classification

Authors: Jesús Horacio, Alfredo Ollero, Víctor Bouzas-Blanco, Augusto Pérez-Alberti

Abstract:

Rivers have traditionally been classified, assessed and managed in terms of hydrological, chemical and/or biological criteria. In the past, geomorphological classifications played a secondary role, although proposals like the River Styles Framework, the Catchment Baseline Survey or the Stroud Rural Sustainable Drainage Project did incorporate geomorphology for management decision-making. In recent years, many studies have turned to the geomorphological component. The geomorphological processes and their associated forms determine the structure of a river system, and understanding these processes and forms is a critical component of the sustainable rehabilitation of aquatic ecosystems. The fluvial auto-classification approach suggests that a river is a self-built natural system, with processes and forms designed to effectively preserve its ecological function (hydrologic, sedimentological and biological regime). Fluvial systems are formed by a wide range of elements with multiple non-linear interactions on different spatial and temporal scales. Besides, the fluvial auto-classification concept is built using data from the river itself, so that each classification developed is peculiar to the river studied. The variables used in the classification are specific stream power and mean grain size. A discriminant analysis showed that these variables best characterize the processes and forms. The statistical technique applied makes it possible to obtain an individual discriminant equation for each geomorphological type. The geomorphological classification was developed using sites with high naturalness; each site is a control point of high ecological and geomorphological quality. Changes in the conditions of the control points will be quickly recognizable, and the right management measures can easily be applied to recover the geomorphological type. The study focused on Galicia (NW Spain), and the mapping was made by analyzing 122 control points (sites) distributed over eight river basins. 
In sum, this study provides a method for fluvial geomorphological classification that works as an open and flexible tool underlying the fluvial auto-classification concept. The hydrographic mapping is the visual expression of the results, such that each river has a particular map according to its geomorphological characteristics. Each geomorphological type is represented by a particular type of hydraulic geometry (channel width, width-depth ratio, hydraulic radius, etc.). An alteration of this geometry is indicative of a geomorphological disturbance, whether natural or anthropogenic. Hydrographic mapping is also dynamic, because its meaning changes if there is a modification in the specific stream power and/or the mean grain size, that is, in the value of their equations. The researcher has to check some of the control points annually. This procedure makes it possible to monitor the geomorphological quality of the rivers and to see if there are any alterations. The maps are useful to researchers and managers, especially for conservation work and river restoration.
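The per-type discriminant equations mentioned above can be illustrated with a small sketch on invented data; the value ranges for the two variables (specific stream power and mean grain size) and the three river types below are hypothetical, not the Galician control-point data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Hypothetical control-point data: each row is (specific stream power in
# W/m^2, mean grain size in mm) for three illustrative geomorphological types.
type_a = rng.normal([30, 5], [5, 1], (40, 2))       # low energy, fine bed
type_b = rng.normal([120, 40], [15, 6], (40, 2))    # moderate energy, gravel
type_c = rng.normal([300, 120], [30, 15], (40, 2))  # high energy, cobble
X = np.vstack([type_a, type_b, type_c])
y = np.repeat([0, 1, 2], 40)

# Fit a discriminant model; each class gets its own discriminant function,
# analogous to the per-type discriminant equations described in the abstract.
lda = LinearDiscriminantAnalysis().fit(X, y)
accuracy = lda.score(X, y)
print(f"training accuracy: {accuracy:.2f}")

# Classify a new site from its two measured variables.
new_site = np.array([[100.0, 35.0]])
predicted = int(lda.predict(new_site)[0])
print("predicted type:", predicted)
```

A degraded site whose measured variables no longer match its original type's discriminant function would flag a geomorphological disturbance, which is the monitoring use the abstract describes.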

Keywords: fluvial auto-classification concept, mapping, geomorphology, river

Procedia PDF Downloads 367
2112 Advancing the Analysis of Physical Activity Behaviour in Diverse, Rapidly Evolving Populations: Using Unsupervised Machine Learning to Segment and Cluster Accelerometer Data

Authors: Christopher Thornton, Niina Kolehmainen, Kianoush Nazarpour

Abstract:

Background: Accelerometers are widely used to measure physical activity behavior, including in children. The traditional method for processing acceleration data uses cut points, relying on calibration studies that relate the quantity of acceleration to energy expenditure. As these relationships do not generalise across diverse populations, they must be parametrised for each subpopulation, including different age groups, which is costly and makes studies across diverse populations difficult. A data-driven approach that allows physical activity intensity states to emerge from the data under study without relying on parameters derived from external populations offers a new perspective on this problem and potentially improved results. We evaluated the data-driven approach in a diverse population with a range of rapidly evolving physical and mental capabilities, namely very young children (9-38 months old), where this new approach may be particularly appropriate. Methods: We applied an unsupervised machine learning approach (a hidden semi-Markov model - HSMM) to segment and cluster the accelerometer data recorded from 275 children with a diverse range of physical and cognitive abilities. The HSMM was configured to identify a maximum of six physical activity intensity states and the output of the model was the time spent by each child in each of the states. For comparison, we also processed the accelerometer data using published cut points with available thresholds for the population. This provided us with time estimates for each child’s sedentary (SED), light physical activity (LPA), and moderate-to-vigorous physical activity (MVPA). Data on the children’s physical and cognitive abilities were collected using the Paediatric Evaluation of Disability Inventory (PEDI-CAT). 
Results: The HSMM identified two inactive states (INS, comparable to SED), two lightly active long-duration states (LAS, comparable to LPA), and two short-duration high-intensity states (HIS, comparable to MVPA). Overall, the children spent on average 237/392 minutes per day in INS/SED, 211/129 minutes per day in LAS/LPA, and 178/168 minutes per day in HIS/MVPA. We found that INS overlapped with 53% of SED, LAS overlapped with 37% of LPA and HIS overlapped with 60% of MVPA. We also looked at the correlation between the time spent by a child in either HIS or MVPA and their physical and cognitive abilities. We found that HIS was more strongly correlated with physical mobility (R² = 0.50 for HIS vs. 0.28 for MVPA), cognitive ability (R² = 0.31 vs. 0.15), and age (R² = 0.15 vs. 0.09), indicating increased sensitivity to key attributes associated with a child’s mobility. Conclusion: An unsupervised machine learning technique can segment and cluster accelerometer data according to the intensity of movement at a given time. It provides a potentially more sensitive, appropriate, and cost-effective approach to analysing physical activity behavior in diverse populations, compared to the current cut points approach. This, in turn, supports research that is more inclusive across diverse populations.
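A greatly simplified sketch of the data-driven idea, letting intensity states emerge from acceleration magnitudes rather than from external cut points (plain 1-D clustering on synthetic epochs; the study's actual HSMM additionally models state transitions and durations):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical epoch-level acceleration magnitudes (mg) for one child: a
# mixture of inactive, lightly active and high-intensity movement epochs.
mags = np.concatenate([
    rng.normal(20, 5, 500),    # inactive
    rng.normal(120, 25, 300),  # light activity
    rng.normal(400, 60, 200),  # high intensity
])

def kmeans_1d(x, k, iters=50):
    """Tiny 1-D k-means: intensity states emerge from the data itself,
    a much simplified stand-in for the hidden semi-Markov model (HSMM)."""
    centers = np.quantile(x, np.linspace(0.05, 0.95, k))  # spread initial centres
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(k)])
    labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    return centers, labels

centers, labels = kmeans_1d(mags, k=3)
# Time spent in each emergent state (epochs), low to high intensity.
time_in_state = np.bincount(labels, minlength=3)
print("state centres (mg):", np.round(centers))
print("epochs per state:", time_in_state)
```

Because the state boundaries are estimated from the child's own data, no population-specific calibration study is needed, which is the core advantage the abstract claims over cut points.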

Keywords: physical activity, machine learning, under 5s, disability, accelerometer

Procedia PDF Downloads 212
2111 Temperature-Based Detection of Initial Yielding Point in Loading of Tensile Specimens Made of Structural Steel

Authors: Aqsa Jamil, Tamura Hiroshi, Katsuchi Hiroshi, Wang Jiaqi

Abstract:

The yield point represents the upper limit of force that can be applied to a specimen without causing any permanent deformation. After yielding, the behavior of the specimen changes suddenly, including the possibility of cracking or buckling, so the accumulation of damage and the type of fracture depend on this condition. As it is difficult to accurately detect yielding at the several stress concentration points in structural steel specimens, an effort has been made in this research work to develop a convenient technique using thermography (temperature-based detection) during tensile tests for the precise detection of yield point initiation. To verify the applicability of the thermography camera, tests were conducted under different loading conditions, with deformation measured by various strain gauges and the surface temperature monitored with the thermography camera. The yield point of the specimens was estimated from the temperature dip which occurs, due to the thermoelastic effect, at the onset of plastic deformation. The scattering of the data has been checked by performing a repeatability analysis. The effects of ambient temperature and light source were checked by carrying out the tests in the daytime as well as at midnight and by calculating the signal-to-noise ratio (SNR) of the noisy data from the infrared thermography camera; from this it can be concluded that the camera is independent of the testing time and of the presence of a visible light source. Furthermore, a fully coupled thermal-stress analysis has been performed using the Abaqus/Standard exact implementation technique to validate the temperature profiles obtained from the thermography camera and to check the feasibility of numerical simulation for predicting the results extracted with the thermographic technique.
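One common way to quantify the SNR of such a temperature trace is to compare the detrended signal power with the residual noise power; the formulation and all numbers below are assumptions on synthetic data, not the paper's definition or measurements:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical surface-temperature trace from a thermography camera (°C):
# a slow thermoelastic cooling ramp plus sensor noise. The small dip near
# yield is what the method must resolve above this noise floor.
t = np.linspace(0.0, 10.0, 1000)
signal = 25.0 - 0.05 * t                    # thermoelastic cooling trend
noise = 0.02 * rng.standard_normal(t.size)  # camera noise (assumed level)
trace = signal + noise

# SNR in decibels: power of the underlying trend against the residual
# noise power left after removing a linear fit from the measured trace.
fit = np.polyval(np.polyfit(t, trace, 1), t)
residual = trace - fit
snr_db = 10.0 * np.log10(np.var(signal) / np.var(residual))
print(f"SNR: {snr_db:.1f} dB")
```

Comparing this figure between daytime and midnight recordings is one way to support the abstract's claim that the camera is insensitive to visible light.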

Keywords: signal to noise ratio, thermoelastic effect, thermography, yield point

Procedia PDF Downloads 109
2110 Motion Planning and Simulation Design of a Redundant Robot for Sheet Metal Bending Processes

Authors: Chih-Jer Lin, Jian-Hong Hou

Abstract:

Industry 4.0 is a vision of integrated industry implemented through artificial intelligence, computing, software, and Internet technologies. Its main goal is to cope with competitive pressures in the marketplace. For today’s manufacturing factories, the type of production has changed from mass production (high-quantity production with low product variety) to medium-quantity, high-variety production. To offer flexibility, better quality control, and improved productivity, robot manipulators are used to combine material processing, material handling, and part positioning into an integrated manufacturing system. To implement an automated system for sheet metal bending operations, the motion planning of a 7-degrees-of-freedom (DOF) robot is studied in this paper. A virtual reality (VR) environment of a bending cell, which consists of the robot and a bending machine, is established using the virtual robot experimentation platform (V-REP) simulator. For sheet metal bending operations, the robot only needs six DOFs for pick-and-place or tracking tasks. Since this 7-DOF robot has more DOFs than required to execute a specified task, it can be called a redundant robot, and its kinematic redundancy can be used to deal with task-priority problems. For redundant robots, the pseudo-inverse of the Jacobian is the most popular motion planning method, but pseudo-inverse methods usually lead to chaotic motion with unpredictable arm configurations as the Jacobian matrix loses rank. To overcome this problem, we formulate motion planning as an optimization problem and propose a genetic algorithm (GA) based method to solve it for the redundant robot. Simulation results validate that the proposed method is feasible for motion planning of the redundant robot in automated sheet metal bending operations.
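A toy version of the GA-based planning idea, using an invented planar 7-joint arm reaching a 2-D target rather than the authors' V-REP bending cell (link lengths, target, and GA settings are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Planar arm with 7 revolute joints (link lengths in m) and a 2-D reach
# target: a toy stand-in for the 7-DOF robot in the bending cell.
LINKS = np.array([0.3, 0.3, 0.25, 0.25, 0.2, 0.2, 0.15])
TARGET = np.array([0.9, 0.6])

def end_effector(angles):
    """Forward kinematics: cumulative joint angles give each link direction."""
    phi = np.cumsum(angles)
    return np.array([np.sum(LINKS * np.cos(phi)), np.sum(LINKS * np.sin(phi))])

def fitness(pop):
    """Cost per individual: distance from end effector to the target."""
    return np.array([np.linalg.norm(end_effector(a) - TARGET) for a in pop])

# Simple genetic algorithm over joint-angle chromosomes: truncation
# selection, blend crossover, and Gaussian mutation.
pop = rng.uniform(-np.pi, np.pi, (80, 7))
for _ in range(200):
    err = fitness(pop)
    elite = pop[np.argsort(err)[:20]]              # keep the 20 best
    parents = elite[rng.integers(0, 20, (60, 2))]  # random elite pairs
    alpha = rng.random((60, 1))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
    children += rng.normal(0.0, 0.05, children.shape)  # mutation
    pop = np.vstack([elite, children])

best = pop[np.argmin(fitness(pop))]
final_error = np.linalg.norm(end_effector(best) - TARGET)
print(f"position error: {final_error:.4f} m")
```

Because the arm is redundant, many joint configurations reach the same target; unlike a pseudo-inverse step, the GA never needs to invert a rank-deficient Jacobian.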

Keywords: redundant robot, motion planning, genetic algorithm, obstacle avoidance

Procedia PDF Downloads 149
2109 Building an Arithmetic Model to Assess Visual Consistency in Townscape

Authors: Dheyaa Hussein, Peter Armstrong

Abstract:

The phenomenon of visual disorder is prominent in contemporary townscapes. This paper provides a theoretical framework for the assessment of visual consistency in townscape in order to achieve more favourable outcomes for users. In this paper, visual consistency refers to the amount of similarity between adjacent components of townscape. The paper investigates parameters which relate to visual consistency in townscape, explores the relationships between them and highlights their significance. The paper uses arithmetic methods from outside the domain of urban design to enable the establishment of an objective approach to assessment which considers subjective indicators, including users’ preferences. These methods involve the standard deviation, colour distance and the distance between points. The paper identifies urban space as a key representative of the visual parameters of townscape. It focuses on its two components, geometry and colour, in the evaluation of the visual consistency of townscape. Accordingly, this article proposes four measurements. The first quantifies the number of vertices, which are points in three-dimensional space that are connected by lines to represent the appearance of elements. The second evaluates the visual surroundings of urban space through assessing the location of their vertices. The last two measurements calculate the visual similarity in both vertices and colour in townscape by calculating their variation using methods including the standard deviation and colour difference. The proposed quantitative assessment is based on users’ preferences towards these measurements. The paper offers a theoretical basis for a practical tool which can alter the current understanding of architectural form and its application in urban space. This tool is currently under development. 
The proposed method underpins expert subjective assessment and permits the establishment of a unified framework which adds to creativity by the achievement of a higher level of consistency and satisfaction among the citizens of evolving townscapes.
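The variation-based measurements can be illustrated with invented facade data; the vertex counts, colours, and the Euclidean colour metric below are illustrative stand-ins for the paper's own measures:

```python
import numpy as np

# Hypothetical data for a row of adjacent facades: vertex counts (geometric
# complexity) and average RGB colours, illustrating the two components,
# geometry and colour, used in the abstract's measurements.
vertex_counts = np.array([120, 130, 125, 118, 410])  # last facade is an outlier
colours = np.array([
    [200, 180, 160],
    [205, 178, 158],
    [198, 185, 162],
    [202, 181, 159],
    [40, 60, 200],  # strongly deviating colour
], dtype=float)

# Geometric consistency: a lower standard deviation of vertex counts across
# adjacent components indicates a more visually consistent streetscape.
geometry_spread = np.std(vertex_counts)

# Colour consistency: mean Euclidean distance of each facade colour from the
# street's average colour (a simple stand-in for a perceptual colour metric).
mean_colour = colours.mean(axis=0)
colour_spread = np.mean(np.linalg.norm(colours - mean_colour, axis=1))

print(f"vertex-count std deviation: {geometry_spread:.1f}")
print(f"mean colour distance: {colour_spread:.1f}")
```

Removing the deviating fifth facade from either array sharply reduces the corresponding spread, which is the kind of before/after comparison a consistency assessment would report.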

Keywords: townscape, urban design, visual assessment, visual consistency

Procedia PDF Downloads 314
2108 '3D City Model' through Quantum Geographic Information System: A Case Study of Gujarat International Finance Tec-City, Gujarat, India

Authors: Rahul Jain, Pradhir Parmar, Dhruvesh Patel

Abstract:

Planning and drawing are important aspects of civil engineering. Computer-based urban models are used for testing theories about spatial location and the interaction between land uses and related activities. The planner’s primary interest is in the creation of 3D models of buildings and in obtaining the terrain surface for urban morphological mapping, virtual reality, disaster management, fly-through generation, visualization, etc. 3D city models have a variety of applications in urban studies. Gujarat International Finance Tec-City (GIFT) is an ongoing construction site between Ahmedabad and Gandhinagar, Gujarat, India. It will be built on 3,590,000 m², with geographical coordinates of North Latitude 23°9’5’’N to 23°10’55’’N and East Longitude 72°42’2’’E to 72°42’16’’E. To develop the 3D city model of GIFT City, the base map of the city was collected from the GIFT office. A Differential Global Positioning System (DGPS) was used to collect Ground Control Points (GCPs) in the field. The GCPs were used for the registration of the base map in QGIS. The registered map was projected onto the WGS 84/UTM zone 43N grid and digitized with the help of various shapefile tools in QGIS. The approximate heights of the buildings to be built were collected from the GIFT office and entered in the attribute table of each layer created using the shapefile tools. The Shuttle Radar Topography Mission (SRTM) 1 Arc-Second Global (30 m x 30 m) grid data were used to generate the terrain of GIFT City, and the Google Satellite Map was placed in the background to obtain the exact location of the city. Various plugins and tools in QGIS were used to convert the raster layer of the base map of GIFT City into a 3D model, and the fly-through tool was used for capturing and viewing the entire area of the city in 3D. This paper discusses all of these techniques and their usefulness in 3D city model creation from the GCPs, base map, SRTM data and QGIS.

Keywords: 3D model, DGPS, GIFT City, QGIS, SRTM

Procedia PDF Downloads 248
2107 An Efficient Robot Navigation Model in a Multi-Target Domain amidst Static and Dynamic Obstacles

Authors: Michael Ayomoh, Adriaan Roux, Oyindamola Omotuyi

Abstract:

This paper presents an efficient robot navigation model in a multi-target domain amidst static and dynamic workspace obstacles. The problem is that of developing an optimal algorithm to minimize the total travel time of a robot as it visits all target points within its task domain amidst unknown workspace obstacles and finally returns to its initial position. In solving this problem, a classical algorithm was first developed to compute the optimal number of paths to be travelled by the robot within the network of paths. The principle of shortest distance between the robot and targets was used to compute the target-point visitation order amidst workspace obstacles. An algorithm premised on the standard polar coordinate system was developed to determine the length of obstacles encountered by the robot, giving room for a geometrical estimation of the total surface area occupied by an obstacle, especially when classified as a relevant obstacle, i.e. an obstacle that lies between the robot and its next visitation point. A stochastic model was developed and used to estimate the likelihood of a dynamic obstacle bumping into the robot’s navigation path, and finally, the navigation/obstacle avoidance algorithm was hinged on the hybrid virtual force field (HVFF) method. Significant modelling constraints herein include the choice of navigation path to selected target points, the possible presence of static obstacles along a desired navigation path, and the likelihood of encountering a dynamic obstacle along the robot’s path and of it remaining in this position as a static obstacle, hence resulting in a case of re-routing after routing. The proposed algorithm demonstrated a high potential for optimal solutions in terms of efficiency and effectiveness.
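The shortest-distance visitation principle can be sketched as a greedy nearest-neighbour ordering on hypothetical target coordinates (the obstacle classification and HVFF avoidance described above are omitted from this sketch):

```python
import numpy as np

# Hypothetical target coordinates in the robot's workspace (m); the robot
# starts at the origin, must visit every target, and then return home.
targets = np.array([[4.0, 1.0], [1.0, 3.0], [5.0, 4.0], [2.0, 0.5]])

def visitation_order(start, points):
    """Greedy nearest-neighbour ordering: repeatedly move to the closest
    unvisited target, following the shortest-distance principle (obstacle
    detours would lengthen individual legs but not change the idea)."""
    remaining = list(range(len(points)))
    order, pos = [], start
    while remaining:
        dists = [np.linalg.norm(points[i] - pos) for i in remaining]
        nxt = remaining.pop(int(np.argmin(dists)))
        order.append(nxt)
        pos = points[nxt]
    return order

order = visitation_order(np.zeros(2), targets)
# Total tour length including the final return leg to the initial position.
tour = np.vstack([np.zeros(2), targets[order], np.zeros(2)])
length = np.sum(np.linalg.norm(np.diff(tour, axis=0), axis=1))
print("visit order:", order, f"- tour length: {length:.2f} m")
```

A dynamic obstacle that settles on a planned leg would invalidate that leg's distance, triggering the "re-routing after routing" case the abstract mentions.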

Keywords: multi-target, mobile robot, optimal path, static obstacles, dynamic obstacles

Procedia PDF Downloads 281
2106 Experimental Analyses of Thermoelectric Generator Behavior Using Two Types of Thermoelectric Modules for Marine Application

Authors: A. Nour Eddine, D. Chalet, L. Aixala, P. Chessé, X. Faure, N. Hatat

Abstract:

Thermal power technologies such as the TEG (thermoelectric generator) attract significant attention worldwide for waste heat recovery. Despite the potential benefits of marine application, due to the permanent heat sink provided by sea water, no significant studies on this application were found. In this study, a test rig has been designed and built to test the performance of the TEG at engine operating points. The TEG device is built from commercially available materials for the sake of possible economical application. Two types of commercial TEMs (thermoelectric modules) have been studied separately on the test rig. The engine data were extracted from a commercial Diesel engine, since it shares the same principle as the marine Diesel engine in terms of engine efficiency and exhaust. An open-circuit water cooling system is used to replicate the sea water cold source. The characterization tests showed that the silicon-germanium alloy TEM proved remarkably reliable at all engine operating points, with no significant deterioration of performance even under severe variation in the hot source conditions. The performance of the bismuth-telluride alloy TEM was 100% better than that of the first type, but it showed a deterioration in power generation when the air temperature exceeds 300 °C. The temperature distribution on the heat exchange surfaces revealed no useful combination of these two types of TEM with this tube length, since the surface temperature difference between both ends is no more than 10 °C. This study exposes the prospects of TEG technology for marine engine exhaust heat recovery. Although the results suggest insufficient power generation from the low-cost commercial TEMs used, they provide valuable information for TEG device optimization, including the design of the heat exchanger and the choice of thermoelectric materials.
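For orientation, a back-of-the-envelope estimate of TEG output from the Seebeck effect; all numbers below are illustrative assumptions, not measurements from the test rig:

```python
# The open-circuit voltage of a thermoelectric module is V = S * dT, and with
# a load resistance matched to the module's internal resistance, the
# delivered power peaks at V^2 / (4 * R_int).
S = 0.05        # effective module Seebeck coefficient, V/K (assumed)
R_int = 2.0     # module internal resistance, ohm (assumed)
dT = 150.0      # hot-to-cold side temperature difference, K (assumed)

v_oc = S * dT                    # open-circuit voltage
p_max = v_oc**2 / (4.0 * R_int)  # power at matched load
print(f"open-circuit voltage: {v_oc:.1f} V, matched-load power: {p_max:.2f} W")
```

The quadratic dependence on dT is why the sea-water cold sink matters: a colder sink widens dT and raises the recoverable power disproportionately.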

Keywords: internal combustion engine application, Seebeck, thermo-electricity, waste heat recovery

Procedia PDF Downloads 244
2105 Indicators and Sustainability Dimensions of the Mediterranean Diet

Authors: Joana Margarida Bôto, Belmira Neto, Vera Miguéis, Manuela Meireles, Ada Rocha

Abstract:

The Mediterranean diet has been recognized as a sustainable model of living with benefits for the environment and human health. However, a complete assessment of its sustainability, encompassing all dimensions and aspects, has, to the best of our knowledge, not yet been carried out. This systematic literature review aimed to fill this gap by identifying and describing the indicators used to assess the sustainability of the Mediterranean diet, looking at several dimensions, and presenting the results of their application. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines methodology was used, and searches were conducted in PubMed, Scopus, Web of Science, and GreenFile. Thirty-two articles evaluating the sustainability of the Mediterranean diet were identified. The environmental impact was quantified in twenty-five of these studies, the nutritional quality was evaluated in seven studies, and the daily cost of the diet was assessed in twelve studies. A total of thirty-three indicators were identified and separated into four dimensions of sustainability, specifically, the environmental dimension (ten indicators, namely carbon, water, and ecological footprint), the nutritional dimension (eight indicators, namely Health score and Nutrient Rich Food Index), the economic dimension (one indicator, the dietary cost), and the sociocultural dimension (six indicators, for which no results were reported). Only eight of the studies used combined indicators. The Mediterranean diet was considered in all articles as a sustainable dietary pattern with a lower impact than Western diets. The carbon footprint ranged between 0.9 and 6.88 kg CO₂/d per capita, the water footprint between 600 and 5280 m³/d per capita, and the ecological footprint between 2.8 and 53.42 m²/d per capita. The nutritional quality was high, obtaining 122 points using the Health score and 12.95 to 90.6 points using the Nutrient Rich Food Index. 
The cost of the Mediterranean diet did not significantly differ from other diets and varied between 3.33 and 14.42€/d per capita. A diverse approach to evaluating the sustainability of the Mediterranean diet was found.

Keywords: Mediterranean diet, sustainability, environmental indicators, nutritional indicators

Procedia PDF Downloads 99
2104 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading

Authors: Robert Caulk

Abstract:

A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training dataset and using the parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the presentation discusses the generalized user interface. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-Lib, pandas-ta). 
The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (Catboost, LightGBM, Sklearn, etc) as well as common Neural Network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.
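The parameter-space outlier removal described above can be illustrated with a short sketch. This is not FreqAI's actual API; it is a minimal, hypothetical example in which the training window defines a per-feature envelope, and prediction points falling outside it are dropped:

```python
import numpy as np

def remove_outliers(train_X, pred_X, n_std=3.0):
    # The recent training window defines the parameter space:
    # a per-feature mean and standard deviation.
    mu = train_X.mean(axis=0)
    sigma = train_X.std(axis=0) + 1e-12
    # Keep a prediction point only if every feature lies within
    # n_std standard deviations of the training data.
    z = np.abs((pred_X - mu) / sigma)
    mask = (z < n_std).all(axis=1)
    return pred_X[mask], mask

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 4))      # recent training features
inliers = rng.uniform(-1.0, 1.0, size=(10, 4))   # points inside the space
outlier = np.full((1, 4), 10.0)                  # far outside the space
kept, mask = remove_outliers(train, np.vstack([inliers, outlier]))
```

A real deployment would recompute the envelope each time the dynamic training window slides forward, so the definition of "outlier" adapts with the market regime.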

Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration

Procedia PDF Downloads 89
2103 Pesticides Monitoring in Surface Waters of the São Paulo State, Brazil

Authors: Fabio N. Moreno, Letícia B. Marinho, Beatriz D. Ruiz, Maria Helena R. B. Martins

Abstract:

Brazil is a top consumer of pesticides worldwide, and the São Paulo State is one of the highest consumers among the Brazilian federative states. However, representative data about the occurrence of pesticides in surface waters of the São Paulo State is scarce. This paper aims to present the results of pesticide monitoring executed within the Water Quality Monitoring Network of CETESB (The Environmental Agency of the São Paulo State) during the 2018-2022 period. Surface water sampling points (21 to 25) were selected within basins of predominantly agricultural land use (5 to 85% of cultivated areas). The samples were collected throughout the year, including high-flow and low-flow conditions, with a sampling frequency of 4 to 6 times per year. Selection of pesticide molecules for monitoring followed a prioritization process based on EMBRAPA (Brazilian Agricultural Research Corporation) databases of pesticide use. Pesticide extractions from aqueous samples were performed according to USEPA methods 3510C and 3546, following quality assurance and quality control procedures. Determination of pesticides in water extracts (ng L-1) was performed by high-performance liquid chromatography coupled with mass spectrometry (HPLC-MS) and by gas chromatography with nitrogen-phosphorus (GC-NPD) and electron capture detectors (GC-ECD). The results showed higher detection frequencies (20-65%) in surface water samples for Carbendazim (fungicide), Diuron/Tebuthiuron (herbicides), and Fipronil/Imidacloprid (insecticides). The detection frequency for these pesticides was generally higher at monitoring points located in sugarcane-cultivated areas. The following pesticides were most frequently quantified above the aquatic life benchmarks for freshwater (USEPA Office of Pesticide Programs, 2023) or the Brazilian federal regulatory standards (CONAMA Resolution no. 357/2005): Atrazine, Imidacloprid, Carbendazim, 2,4-D, Fipronil, and Chlorpyrifos.
Higher median concentrations of Diuron and Tebuthiuron in the rainy months (October to March) indicated pesticide transport through surface runoff. However, measurable concentrations of Fipronil and Imidacloprid in the dry season (April to September) also indicate pathways related to subsurface or base-flow discharge after pesticide soil infiltration and leaching, or to dry deposition following pesticide air spraying. With the exception of Diuron, no temporal trends in the median concentrations of the most frequently quantified pesticides were observed. These results are important to assist policymakers in developing strategies aimed at reducing pesticide migration from agricultural areas to surface waters. Further studies will be carried out at selected points to investigate potential risks of pesticide exposure to aquatic biota.
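The two summary statistics reported above, the fraction of samples quantified above a benchmark and the rainy- versus dry-season medians, can be sketched with pandas. The records and benchmark values below are hypothetical, not CETESB data or the actual USEPA/CONAMA limits:

```python
import pandas as pd

# Hypothetical monitoring records: one row per sample and pesticide (ng/L).
df = pd.DataFrame({
    "pesticide": ["Diuron", "Diuron", "Diuron",
                  "Fipronil", "Fipronil", "Fipronil"],
    "month":     [1, 7, 11, 2, 6, 8],
    "conc_ng_L": [250.0, 40.0, 300.0, 15.0, 12.0, 9.0],
})
# Illustrative benchmark values only, not the regulatory figures.
benchmark = {"Diuron": 100.0, "Fipronil": 11.0}

# Fraction of samples quantified above the benchmark, per pesticide.
df["above"] = df.apply(lambda r: r["conc_ng_L"] > benchmark[r["pesticide"]],
                       axis=1)
freq = df.groupby("pesticide")["above"].mean()

# Rainy season: October-March; dry season: April-September.
df["season"] = df["month"].map(lambda m: "rainy" if (m >= 10 or m <= 3)
                               else "dry")
medians = df.groupby(["pesticide", "season"])["conc_ng_L"].median()
```

Comparing the seasonal medians per pesticide is what distinguishes a runoff-driven signal (rainy-season peak) from a base-flow or deposition signal (measurable dry-season concentrations).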

Keywords: pesticide monitoring, São Paulo State, water quality, surface waters

Procedia PDF Downloads 59
2102 Influence of Online Sports Events on Betting among Nigerian Youth

Authors: Babajide Olufemi Diyaolu

Abstract:

The opportunities provided by advances in technology as regards sports betting are so numerous that, from the comfort of their homes and with just a phone, Nigerian youth are found engaging in all kinds of betting. Today it is more difficult to identify a true fan, as quite a number of them became fans as a result of betting on live games. This study investigated the influence of online sports events on betting among Nigerian youth. A descriptive survey research design was used, and the population consisted of all Nigerian youth who engage in betting and live within the southwest zone of Nigeria. A simple random sampling technique was used to pick three states from the southwest zone of Nigeria. Two thousand five hundred respondents, comprising males and females, were sampled from the three states. A structured questionnaire on online sports events' contribution to sports betting (OSECSB) was used. The instrument consists of three sections: Section A seeks information on the demographic data of the respondents, Section B seeks information on online sports events, and Section C extracts information on sports betting. The modified instrument, which consists of 14 items, has a reliability coefficient of 0.74. The hypothesis was tested at a 0.05 significance level. The completed questionnaires were collated, coded, and analyzed using descriptive statistics (frequency counts, percentages, and pie charts) and the inferential statistics of multiple regression. The findings of this study revealed that exposure to online sports events is a significant predictor of the increase in sports betting among Nigerian youth. The media and television, as well as globalization and the internet, coupled with social media and various online platforms, have all contributed to the immense increase in sports betting. The increase in advertisements for betting platforms during live matches, especially football, is becoming more alarming.
In most organized international events, media attention as well as sponsorship rights are now being given to one or two betting platforms. There is a need for all stakeholders to put in place school-based intervention programs to reorient our youth about the consequences of addiction to betting. Such programs must include meta-analyses and emotional control towards sports betting.
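The multiple-regression step of such an analysis can be sketched in a few lines. The questionnaire scores below are simulated purely for illustration; they are not the survey data, and the coefficient values are arbitrary:

```python
import numpy as np

# Simulated scores: does exposure to online sports events predict
# sports-betting scores, controlling for age? (Illustrative only.)
rng = np.random.default_rng(1)
n = 200
exposure = rng.uniform(1, 5, n)     # Section B: online sports events
age = rng.uniform(18, 35, n)        # Section A: demographic control
betting = 0.8 * exposure + 0.05 * age + rng.normal(0.0, 0.5, n)

# Multiple regression via ordinary least squares:
# columns are intercept, exposure, age.
X = np.column_stack([np.ones(n), exposure, age])
coef, *_ = np.linalg.lstsq(X, betting, rcond=None)
# coef[1] estimates the effect of online-event exposure on betting.
```

In practice one would also report standard errors and p-values for each coefficient (e.g., via a statistics package) rather than the point estimates alone.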

Keywords: betting platform, Nigerian fans, Nigerian youth, sports betting

Procedia PDF Downloads 75
2101 The Possible Double-Edged Sword Effects of Online Learning on Academic Performance: A Quantitative Study of Preclinical Medical Students

Authors: Atiwit Sinyoo, Sekh Thanprasertsuk, Sithiporn Agthong, Pasakorn Watanatada, Shaun Peter Qureshi, Saknan Bongsebandhu-Phubhakdi

Abstract:

Background: Since the SARS-CoV-2 virus became extensively disseminated throughout the world, online learning has become one of the most hotly debated topics in educational reform. While some studies have already shown the advantages of online learning, questions remain concerning how online learning affects students’ learning behavior and academic achievement when each student learns in a different way. Hence, we aimed to develop a guide for preclinical medical students to avoid the drawbacks and gain the benefits of online learning, which is possibly a double-edged sword. Methods: We used a multiple-choice questionnaire to evaluate the learning behavior of second-year Thai medical students in the neuroscience course. All traditional face-to-face lecture classes were video-recorded and promptly posted to the online learning platform throughout this course. Students could pick and choose whichever classes they wanted to attend and could use online learning as often as they wished. Academic performance was evaluated as summative score, spot exam score, and pre-post-test improvement. Results: The more frequently students used the online learning platform, the less they attended lecture classes (P = 0.035). High proactive online learners (High PO) who were irregular attendees (IrA) had significantly lower summative scores (P = 0.026), spot exam scores (P = 0.012), and pre-post-test improvement (P = 0.036). Meanwhile, conditional attendees (CoA), who only attended classes with an attendance check, had significantly higher summative scores (P = 0.025) and spot exam scores (P = 0.001) if they were in the High PO group. Conclusions: Both the benefit and drawback edges of using an online learning platform were demonstrated in our research. Based on this double-edged sword effect, we believe that online learning is a valuable learning strategy, but students must carefully plan their study schedules to gain the “benefit edge” while avoiding the “drawback edge”.
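A group comparison of the kind reported above (e.g., summative scores of High PO irregular attendees versus other students) can be sketched with Welch's two-sample t statistic, which does not assume equal variances. The scores below are hypothetical, and the sketch computes only the t statistic, not the P-value:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    # Welch's two-sample t statistic (unequal variances):
    # (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b).
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical summative scores for the two groups (illustrative only).
high_po_irregular = [62, 58, 65, 60, 57, 63]
other_students    = [70, 68, 74, 69, 72, 71]
t = welch_t(high_po_irregular, other_students)   # negative: first group lower
```

A full analysis would convert t to a P-value using the Welch-Satterthwaite degrees of freedom, which is what a standard statistics package does internally.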

Keywords: academic performance, assessment, attendance, online learning, preclinical medical students

Procedia PDF Downloads 161