Search results for: available line transfer capability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6441


171 Varieties of Capitalism and Small Business CSR: A Comparative Overview

Authors: Stéphanie Looser, Walter Wehrmeyer

Abstract:

Given the limited research on Small and Medium-sized Enterprises’ (SMEs) contribution to Corporate Social Responsibility (CSR), and even scarcer research on Swiss SMEs, this paper helps to fill these gaps by enabling the identification of supranational SME parameters and by contributing to the evolving field of these topics. Thus, the paper investigates the current state of SME practices in Switzerland and across 15 other countries. Combining the degree to which SMEs demonstrate an explicit (or business case) approach or see CSR as an implicit moral activity with the assessment of their attributes for “variety of capitalism” defines the framework of this comparative analysis. According to previous studies, liberal market economies, e.g. in the United States (US) or United Kingdom (UK), are aligned with extrinsic CSR, while coordinated market systems (in Central European or Asian countries) evolve implicit CSR agendas. To outline Swiss small business CSR patterns in particular, 40 SME owner-managers were interviewed. The transcribed interviews were coded utilising MAXQDA for qualitative content analysis. A secondary data analysis of results from different countries (i.e., Australia, Austria, Chile, Cameroon, Catalonia (notably a part of Spain that seeks autonomy), China, Finland, Germany, Hong Kong (a special administrative region of China), Italy, Netherlands, Singapore, Spain, Taiwan, UK, US) lays the groundwork for this comparative study on small business CSR. Applying the same coding categories (in MAXQDA) to the interview analysis as well as to the secondary data research, while following grounded theory rules to refine and keep track of ideas, generated testable hypotheses and comparative power on implicit (and the lower likelihood of explicit) CSR in SMEs retrospectively. The paper identifies Swiss small business CSR as deep, profound, “soul”, and an implicit part of the day-to-day business.
Similar to most Central European, Mediterranean, Nordic, and Asian countries, explicit CSR is still very rare in Swiss SMEs. Astonishingly, UK and US SMEs also follow this pattern in spite of their strong and distinct liberal market economies. Though other findings show that nationality matters, this research concludes that SME culture and its informal CSR agenda are strongly formative, superseding even the forces of market economies, national cultural patterns, and language. In a world of “big business”, explicit “business case” CSR, and the mantra that “CSR must pay”, this study points to a distinctly implicit small business CSR model built on trust, physical closeness, and virtues that is largely detached from the bottom line. This pattern holds across different cultural contexts, and it is concluded that SME culture is stronger than nationality, leading to a supra-national, monolithic SME CSR approach. Hence, classifications of countries by their market system or capitalism, as found in the comparative capitalism literature, do not match the CSR practices in SMEs, as they do not mirror the peculiarities of their business. This raises questions on the universality and generalisability of management concepts.

Keywords: CSR, comparative study, cultures of capitalism, small, medium-sized enterprises

Procedia PDF Downloads 411
170 Health Risk Assessment from Potable Water Containing Tritium and Heavy Metals

Authors: Olga A. Momot, Boris I. Synzynys, Alla A. Oudalova

Abstract:

Obninsk is situated in the Kaluga region, 100 km southwest of Moscow, on the left bank of the Protva River. Several enterprises utilizing nuclear energy operate in the town. In regions where radiation-hazardous facilities are located, special attention has traditionally been paid to radioactive gas and aerosol releases into the atmosphere, liquid waste discharges into the Protva river, and groundwater pollution. Municipal intakes involve 34 wells arranged 15 km apart in a north-south sequence along the foot of the left slope of the Protva river valley. Northern and southern water intakes are upstream and downstream of the town, respectively. They belong to river valley intakes with mixed feeding, i.e. precipitation infiltration is responsible for a smaller part of the groundwater, and a greater amount is formed by overflow from the Protva. Water intakes are maintained by the Protva river runoff, the volume of which depends on the precipitation and the watershed area. Groundwater contamination with tritium was first detected in the sanitary-protective zone of the Institute of Physics and Power Engineering (SRC-IPPE) by Roshydromet researchers implementing the “Program of radiological monitoring in the territory of nuclear industry enterprises”. A comprehensive survey of the SRC-IPPE’s industrial site and adjacent territories revealed that research nuclear reactors and accelerators, where tritium targets are applied, as well as radioactive waste storages, could be considered potential sources of technogenic tritium. All the above sources are located within the sanitary controlled area of the intakes. Tritium activity in water of springs and wells near the SRC-IPPE is about 17.4 – 3200 Bq/l. The observed values of tritium activity are below the intervention levels (7600 Bq/l for inorganic compounds and 3300 Bq/l for organically bound tritium). A risk assessment was carried out to estimate the possible effect of the considered tritium concentrations on human health.
Data on tritium concentrations in pipe-line drinking water were used for the calculations. The activity of ³H amounted to 10.6 Bq/l and corresponded to a risk from such water consumption of ~3·10⁻⁷ year⁻¹. This risk value is close in magnitude to the individual annual death risk for the population living near an NPP (1.6·10⁻⁸ year⁻¹) and at the same time corresponds to the level of tolerable risk (10⁻⁶), falling within the “risk optimization” region, i.e. the sphere for planning economically sound measures on exposure risk reduction. To estimate the chemical risk, physical and chemical analyses were made of waters from all springs and wells near the SRC-IPPE. Chemical risk from groundwater contamination was estimated according to the US EPA guidance. The risk of carcinogenic diseases from drinking water consumption amounts to 5·10⁻⁵. According to the accepted classification, the health risk in case of spring water consumption is inadmissible. The compared assessments of risk associated with tritium exposure, on the one hand, and the dangerous chemical (e.g. heavy metal) contamination of Obninsk drinking water, on the other hand, have confirmed that the chemical pollutants are the ones responsible for the health risk.
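The risk figure above follows a simple chain: activity concentration → annual intake → annual dose → annual risk. A minimal sketch of that arithmetic; the dose coefficient, intake volume, and risk factor below are illustrative ICRP-style assumptions, not the values used by the authors, so the resulting number is only an order-of-magnitude illustration:

```python
# Illustrative ICRP-style coefficients; assumptions, not the authors' values.
DOSE_COEFF_SV_PER_BQ = 1.8e-11   # ingestion dose coefficient for tritiated water
ANNUAL_INTAKE_L = 730            # ~2 L/day drinking-water intake
RISK_PER_SV = 5.5e-2             # nominal cancer risk per sievert

def annual_risk(activity_bq_per_l):
    """Annual individual risk from drinking water with the given 3H activity."""
    dose_sv = activity_bq_per_l * ANNUAL_INTAKE_L * DOSE_COEFF_SV_PER_BQ
    return dose_sv * RISK_PER_SV

risk = annual_risk(10.6)   # 3H activity reported for pipe-line water, Bq/l
```

With any reasonable coefficients the result stays several orders of magnitude below the tolerable-risk level of 10⁻⁶, which is the point the abstract makes.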

Keywords: radiation-hazardous facilities, water intakes, tritium, heavy metal, health risk

Procedia PDF Downloads 223
169 The Porcine Reproductive and Respiratory Syndrome Virus Genotype 2 (PRRSV-2)-derived Oncolytic Protein Reprograms Tumor-Associated Macrophages

Authors: Farrah Putri Salmanida, Mei-Li Wu, Rika Wahyuningtyas, Wen-Bin Chung, Hso-Chi Chaung, Ko-Tung Chang

Abstract:

Within the field of immunotherapy, oncolytic virotherapy (OVT) employs dual approaches that directly eliminate tumor cells while preserving healthy ones and indirectly reprogram the tumor microenvironment (TME) to elicit antitumor responses. Within the TME, tumor-associated macrophages (TAMs) manifest characteristics akin to those of anti-inflammatory M2 macrophages, thus earning the designation of M2-like TAMs. In prior research, two antigens denoted as A1 (g6Ld10T) and A3 (ORF6L5), derived from a complete sequence of ORF5 with a partial sequence of ORF6 in Porcine Reproductive and Respiratory Syndrome Virus Genotype 2 (PRRSV-2), demonstrated the capacity to repolarize M2-type porcine alveolar macrophages (PAMs) into M1 phenotypes. In this study, we sought to utilize OVT strategies by introducing A1 or A3 to TAMs to endow them with the anti-tumor traits of M1 macrophages while retaining their capacity to target cancer cells. Upon exposing human THP-1-derived M2 macrophages to a cross-species test with 2 µg/ml of either A1 or A3 for 24 hours, real-time PCR revealed that A3-treated, but not A1-treated, cells exhibited upregulated gene expression of M1 markers (CCR7, IL-1β, CCL2, Cox2, CD80). These cells reacted to the virus-derived antigen, as evidenced by increased expression of the pattern-recognition receptors TLR3, TLR7, and TLR9, subsequently providing feedback in the form of type I interferon responses such as IFNAR1, IFN-β, IRF3, IRF7, OAS1, Mx1, and ISG15. In an MTT assay, cell viability decreased only at A3 doses of 15 µg/ml and above, with a predicted IC50 of 16.96 µg/ml. Interestingly, A3 caused dose-dependent toxicity to a rat C6 glial cancer cell line even at doses as low as 2.5 µg/ml and reached its IC50 at 9.419 µg/ml. Using Annexin V/7-AAD staining and a PCR test, we deduced that a significant proportion of C6 cells were undergoing the early apoptosis phase, predominantly through the intrinsic apoptosis cascade involving Bcl-2 family proteins.
Following this stage, we tested A3’s repolarization ability, which revealed a significant rise in M1 gene expression markers, such as TNF, CD80, and IL-1β, in M2-like TAMs generated in vitro from murine RAW264.7 macrophages grown in conditioned medium of 4T1 breast cancer cells. This was corroborated by the results of transcriptome analysis, which revealed that the primary subset among the top 10 to top 30 significantly upregulated differentially expressed genes (DEGs) consisted predominantly of M1 macrophage profiles, including Ccl3, Ccl4, Csf3, TNF, Bcl6b, Stc1, and Dusp2. Our findings unveiled the remarkable potential of the PRRSV-derived antigen A3 to repolarize macrophages while also being capable of selectively inducing apoptosis in cancerous cells. While further in vivo study of A3 is needed, it holds promise as an adjuvant in cancer therapy modalities owing to its dual effects.
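An IC50 like those reported above can be read off dose-viability data by interpolating where viability crosses 50%. A minimal sketch with hypothetical dose-response pairs (illustrative numbers, not the authors' measurements; real analyses typically fit a full sigmoidal dose-response model instead):

```python
# Hypothetical dose-viability pairs; not the authors' measurements.
doses = [2.5, 5.0, 10.0, 15.0, 20.0]   # µg/ml
viability = [95, 85, 60, 45, 30]       # percent viable cells

def ic50(doses, viability):
    """Dose at 50% viability, by linear interpolation between the two
    bracketing points. Assumes viability decreases with dose and crosses
    50% exactly once."""
    pairs = list(zip(doses, viability))
    for (d0, v0), (d1, v1) in zip(pairs, pairs[1:]):
        if v0 >= 50 >= v1:
            return d0 + (v0 - 50) * (d1 - d0) / (v0 - v1)
    return None   # no crossing found in the tested range

ic50_estimate = ic50(doses, viability)
```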

Keywords: cancer cell apoptosis, interferon responses, macrophage repolarization, recombinant protein

Procedia PDF Downloads 39
168 Development of PCL/Chitosan Core-Shell Electrospun Structures

Authors: Hilal T. Sasmazel, Seda Surucu

Abstract:

Skin tissue engineering is a promising field for the treatment of skin defects using scaffolds. This approach involves the use of living cells and biomaterials to restore, maintain, or regenerate tissues and organs in the body by providing: (i) a larger surface area for cell attachment, (ii) proper porosity for cell colonization and cell-to-cell interaction, and (iii) 3-dimensionality at the macroscopic scale. Recent studies in this area mainly focus on the fabrication of scaffolds that can closely mimic the natural extracellular matrix (ECM) for the creation of a tissue-specific niche-like environment at the subcellular scale. Scaffolds designed as ECM-like architectures incorporate into the host with minimal scarring/pain and facilitate angiogenesis. This study combines the synthetic polymer PCL and the natural polymer chitosan to form 3D PCL/chitosan core-shell structures for skin tissue engineering applications. Amongst the polymers used in tissue engineering, the natural polymer chitosan and the synthetic polymer poly(ε-caprolactone) (PCL) are widely preferred in the literature. Chitosan has long attracted researchers’ attention because of its superior biocompatibility and structural resemblance to the glycosaminoglycans of bone tissue. However, its low mechanical flexibility and limited biodegradability make it necessary to use this polymer in a composite structure. On the other hand, PCL is a versatile polymer due to its low melting point (60°C), ease of processability, degradability by non-enzymatic processes (hydrolysis) and good mechanical properties. Nevertheless, PCL also has several disadvantages, such as its hydrophobic structure, limited bio-interaction and susceptibility to bacterial biodegradation. Therefore, it becomes crucial to use both of these polymers together as a hybrid material in order to overcome the disadvantages of both polymers and combine their advantages.
The scaffolds were fabricated using the electrospinning technique, and the samples were characterized by contact angle (CA) measurements, scanning electron microscopy (SEM), transmission electron microscopy (TEM) and X-ray photoelectron spectroscopy (XPS). Additionally, gas permeability tests, mechanical tests, thickness measurements and PBS absorption and shrinkage tests were performed for all types of scaffolds (PCL, chitosan and PCL/chitosan core-shell). Using the ImageJ launcher software program (USA) on SEM photographs, the average fiber diameter values were calculated as 0.717±0.198 µm for PCL, 0.660±0.070 µm for chitosan and 0.412±0.339 µm for PCL/chitosan core-shell structures. Additionally, the average inter-fiber pore size values exhibited decreases of 66.91% and 61.90% for the PCL and chitosan structures, respectively, compared to PCL/chitosan core-shell structures. TEM images proved that homogeneous and continuous bead-free core-shell fibers were obtained. XPS analysis of the PCL/chitosan core-shell structures exhibited the characteristic peaks of the PCL and chitosan polymers. The measured average gas permeability value of the produced PCL/chitosan core-shell structure was 2315±3.4 g·m⁻²·day⁻¹. In the future, cell-material interactions of these developed PCL/chitosan core-shell structures will be investigated with the L929 ATCC CCL-1 mouse fibroblast cell line. A standard MTT assay and microscopic imaging methods will be used to investigate the cell attachment, proliferation and growth capacities of the developed materials.
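The mean ± standard deviation diameter values above are straightforward to reproduce from per-fiber measurements exported from ImageJ. A minimal sketch with hypothetical measurements (the list below is illustrative, not the study's data):

```python
import statistics

# Hypothetical per-fiber diameters (µm), e.g. exported from an ImageJ results table.
diameters_um = [0.41, 0.52, 0.38, 0.70, 0.33, 0.45, 0.60, 0.29]

mean_um = statistics.mean(diameters_um)
std_um = statistics.stdev(diameters_um)   # sample standard deviation
summary = f"{mean_um:.3f}±{std_um:.3f} µm"
```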

Keywords: chitosan, coaxial electrospinning, core-shell, PCL, tissue scaffold

Procedia PDF Downloads 466
167 Tectono-Stratigraphic Architecture, Depositional Systems and Salt Tectonics to Strike-Slip Faulting in Kribi-Campo-Cameroon Atlantic Margin with an Unsupervised Machine Learning Approach (West African Margin)

Authors: Joseph Bertrand Iboum Kissaaka, Charles Fonyuy Ngum Tchioben, Paul Gustave Fowe Kwetche, Jeannette Ngo Elogan Ntem, Joseph Binyet Njebakal, Ribert Yvan Makosso-Tchapi, François Mvondo Owono, Marie Joseph Ntamak-Nida

Abstract:

Located in the Gulf of Guinea, the Kribi-Campo sub-basin belongs to the Aptian salt basins along the West African Margin. In this paper, we investigated the tectono-stratigraphic architecture of the basin, focusing on the role of salt tectonics and strike-slip faults along the Kribi Fracture Zone, with implications for reservoir prediction. Using 2D seismic data and well data interpreted through sequence stratigraphy, with integrated seismic attribute analysis using Python programming and unsupervised machine learning, at least six second-order sequences, indicating three main stages of tectono-stratigraphic evolution, were determined: pre-salt syn-rift, post-salt rift-climax and post-rift stages. The pre-salt syn-rift stage, with the KTS1 tectonosequence (Barremian-Aptian), reveals transform rifting along NE-SW transfer faults associated with N-S to NNE-SSW syn-rift longitudinal faults bounding a NW-SE half-graben filled with alluvial to lacustrine-fan delta deposits. The post-salt rift-climax stage (Lower to Upper Cretaceous) includes two second-order tectonosequences (KTS2 and KTS3) associated with salt tectonics and the Campo High uplift. During the rift-climax stage, the growth of salt diapirs developed syncline withdrawal basins filled by early forced-regression, mid transgressive and late normal-regressive systems tracts. The early rift climax underlines some fine-grained hangingwall fans or delta deposits and coarse-grained fans from the footwall of fault scarps. The post-rift stage (Paleogene to Neogene) contains at least three main tectonosequences: KTS4, KTS5 and KTS6-7. The first one developed some turbiditic lobe complexes, considered as mass-transport complexes, and feeder channel-lobe complexes cutting the unstable shelf edge of the Campo High. The last two developed submarine channel complexes associated with lobes towards the southern part and braided delta to tidal channels towards the northern part of the Kribi-Campo sub-basin.
The reservoir distribution in the Kribi-Campo sub-basin reveals some channel and fan-lobe reservoirs as well as stacked channels reaching up to the polygonal fault systems.
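The unsupervised machine learning step mentioned above typically clusters per-trace seismic attribute vectors into facies-like groups. A minimal k-means sketch on synthetic data; the attribute names (amplitude, frequency, coherence) and two-cluster setup are illustrative assumptions, and a real workflow would use a library such as scikit-learn with many more attributes:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic per-trace attribute vectors (amplitude, frequency, coherence)
# for two hypothetical seismic facies.
facies_a = rng.normal([0.2, 30.0, 0.9], 0.05, size=(50, 3))
facies_b = rng.normal([0.8, 15.0, 0.4], 0.05, size=(50, 3))
X = np.vstack([facies_a, facies_b])

def kmeans(X, init, iters=20):
    """Plain Lloyd's algorithm: assign each point to the nearest center,
    then recompute centers as cluster means."""
    centers = np.asarray(init, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels

# Seed one center in each synthetic facies to keep the toy example stable.
labels = kmeans(X, init=[X[0], X[-1]])
```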

Keywords: tectono-stratigraphic architecture, Kribi-Campo sub-basin, machine learning, pre-salt sequences, post-salt sequences

Procedia PDF Downloads 30
166 Groundwater Arsenic Contamination in Gangetic Jharkhand, India: Risk Implications for Human Health and Sustainable Agriculture

Authors: Sukalyan Chakraborty

Abstract:

Arsenic contamination in groundwater has been a matter of serious concern worldwide. Globally, arsenic-contaminated water has caused serious chronic human diseases, and in the last few decades the transfer of arsenic to human beings via the food chain has gained much attention, because food represents a further potential exposure pathway to arsenic in instances where crops are irrigated with high-arsenic groundwater, grown in contaminated fields or cooked with arsenic-laden water. In the present study, the groundwater of Sahibganj district of Jharkhand has been analysed to find the degree of contamination and the probable associated risk due to direct consumption or irrigation. The present study area, comprising three blocks, namely Sahibganj, Rajmahal and Udhwa, in Sahibganj district of Jharkhand state, India, situated on the western bank of the river Ganga, has been investigated for arsenic contamination in groundwater, soil and the crops predominantly growing in the region. Associated physicochemical parameters of groundwater, including pH, temperature, electrical conductivity (EC), total dissolved solids (TDS), dissolved oxygen (DO), oxidation-reduction potential (ORP), ammonium, nitrate and chloride, were assessed to understand the mobilisation mechanism and the chances of arsenic exposure from soil to crops and further into the food chain. Results suggested the groundwater to be dominantly of Ca-HCO₃ type, with low redox potential and a high total dissolved solids load. Major cations followed the order Ca > Na > Mg > K. Major anions followed the order HCO₃⁻ > Cl⁻ > SO₄²⁻ > NO₃⁻ > PO₄³⁻, the last varying between 0.009 and 0.20 mg L⁻¹. Fe concentrations of the groundwater samples were below the WHO permissible limit, varying between 54 and 344 µg L⁻¹. Phosphate concentration was high and showed a significant positive correlation with arsenic.
Arsenic concentrations ranged from 7 to 115 µg L⁻¹ in the premonsoon season, between 2 and 98 µg L⁻¹ in the monsoon and from 1 to 133 µg L⁻¹ in the postmonsoon season. The arsenic concentration was found to be much higher than the WHO or BIS permissible limit in the majority of the villages in the study area. Arsenic was also positively correlated with iron and phosphate. PCA results demonstrated the role of both geological conditions and anthropogenic inputs in influencing the water quality. Arsenic was also found to increase with depth up to 100 m from the surface. Calculation of the carcinogenic and non-carcinogenic effects of the arsenic concentration in the communities exposed to the groundwater for drinking and other purposes indicated high risk, with an average of more than 1 in 1,000 of the population. Health risk analysis revealed high to very high carcinogenic and non-carcinogenic risk for adults and children in the communities dependent on the groundwater of the study area. These observations suggest the groundwater to be considerably polluted with arsenic, posing a significant health risk to the exposed communities. The mobilisation mechanism of arsenic could also be identified from the results, suggesting reductive dissolution of Fe oxyhydroxides, driven by high phosphate concentrations from agricultural inputs, leading to arsenic release from the sediments along the river Ganges.
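The positive correlations reported above (arsenic with iron and phosphate) come down to Pearson's r on paired measurements. A minimal sketch with hypothetical values (the numbers below are illustrative, not the study's data):

```python
import math

# Hypothetical paired groundwater measurements (µg/L); not the study's data.
arsenic   = [7, 15, 40, 60, 98, 115]
phosphate = [120, 180, 350, 500, 760, 900]

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(arsenic, phosphate)   # strongly positive for these values
```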

Keywords: arsenic, physicochemical parameters, mobilisation, health effects

Procedia PDF Downloads 208
165 Scoping Review of the Potential to Embed Mental Health Impact in Global Challenges Research

Authors: Netalie Shloim, Brian Brown, Siobhan Hugh-Jones, Jane Plastow, Diana Setiyawati, Anna Madill

Abstract:

In June 2021, the World Health Organization launched its guidance and technical packages on community mental health services, stressing a human rights-based approach to care. This initiative stems from an increasing acknowledgment of the role mental health plays in achieving the Sustainable Development Goals. Nevertheless, mental health remains a relatively neglected research area, and the estimates for untreated mental disorders in low- and middle-income countries (LMICs) are as high as 78% for adults. Moreover, the development sector and research programs too often side-line mental health as a privilege in the face of often immediate threats to life and livelihood. As a way of addressing this problem, this study aimed to examine past or ongoing GCRF projects to see if there were opportunities where mental health impact could have been achieved without compromising a study's main aim and without overburdening a project. Projects funded by the UKRI Global Challenges Research Fund (GCRF) were analyzed. This program was initiated in 2015 to support cutting-edge research that addresses the challenges faced by developing countries. By the end of May 2020, a total of 15,279 projects had been funded, of which only 3% had an explicit mental health focus. A sample of 36 non-mental-health-focused projects was then selected for diversity across research council, challenge portfolio and world region. Each of these 36 projects was coded by two coders for opportunities to embed mental health impact. To facilitate coding, the literature was inspected for dimensions relevant to LMIC settings. Three main psychological dimensions were identified: promote a positive sense of self; promote positive emotions, safe expression and regulation of challenging emotions, coping strategies, and help-seeking; and facilitate skills development. Three main social dimensions were identified: facilitate community-building; preserve sociocultural identity; and support community mobilization.
Coding agreement was strong on missed opportunities for mental health impact on the three social dimensions: support community mobilization (92%), facilitate community building (83%), and preserve sociocultural identity (70%). Coding agreement was reasonably strong on missed opportunities for mental health impact on the three psychological dimensions: promote positive emotions (67%), facilitate skills development (61%), and promote a positive sense of self (58%). In order of frequency, the agreed perceived opportunities from highest to lowest are: support community mobilization, facilitate community building, facilitate skills development, promote a positive sense of self, promote positive emotions, and preserve sociocultural identity. All projects were considered by at least one coder to have an opportunity to support community mobilization and to facilitate skills development. The findings support the view that there were opportunities to embed mental health impact in research across the range of development sectors, and they identify which kinds of missed opportunities are most frequent. Hence, mainstreaming mental health has huge potential to tackle the lack of priority and funding it has traditionally attracted. The next steps are to understand the barriers to mainstreaming mental health and to work together to overcome them.
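The coding-agreement percentages above are simple percent agreement between the two coders over the 36 projects. A minimal sketch with hypothetical binary judgments (1 = missed opportunity perceived, 0 = not; the arrays are illustrative, not the study's data):

```python
# Hypothetical judgments from two coders, one entry per project.
coder_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1]
coder_b = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1]

def percent_agreement(a, b):
    """Share of projects on which the two coders gave the same judgment."""
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

agreement = percent_agreement(coder_a, coder_b)
```

Percent agreement is the simplest inter-coder statistic; chance-corrected measures such as Cohen's kappa are a common refinement.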

Keywords: GCRF, mental health, psychosocial wellbeing, LMIC

Procedia PDF Downloads 156
164 Examining Influence of The Ultrasonic Power and Frequency on Microbubbles Dynamics Using Real-Time Visualization of Synchrotron X-Ray Imaging: Application to Membrane Fouling Control

Authors: Masoume Ehsani, Ning Zhu, Huu Doan, Ali Lohi, Amira Abdelrasoul

Abstract:

Membrane fouling poses severe challenges in membrane-based wastewater treatment applications. Ultrasound (US) has been considered an effective fouling remediation technique in filtration processes. Bubble cavitation in the liquid medium results from the alternating rarefaction and compression cycles during US irradiation at sufficiently high acoustic pressure. Cavitation microbubbles generated under US irradiation can cause eddy currents and turbulent flow within the medium, by either oscillating or discharging energy into the system through microbubble explosion. The turbulent flow regime and shear forces created close to the membrane surface disturb the cake layer and dislodge the foulants, which in turn improves the cleaning efficiency and filtration performance. Therefore, the number, size, velocity, and oscillation pattern of the microbubbles created in the liquid medium play a crucial role in foulant detachment and permeate flux recovery. The goal of the current study is to gain an in-depth understanding of the influence of US power intensity and frequency on the dynamics and characteristics of the microbubbles generated under US irradiation. In comparison with other imaging techniques, the synchrotron in-line phase contrast imaging technique at the Canadian Light Source (CLS) allows in-situ observation and real-time visualization of microbubble dynamics. At the CLS biomedical imaging and therapy (BMIT) polychromatic beamline, the effective parameters were optimized to enhance the contrast of the gas/liquid interface for the accuracy of the qualitative and quantitative analysis of bubble cavitation within the system. With the high photon flux and the high-speed camera, a typical high projection speed was achieved, and each projection of microbubbles in water was captured in 0.5 ms. ImageJ software was used for post-processing the raw images for detailed quantitative analyses of the microbubbles.
The imaging was performed at US power intensity levels of 50 W, 60 W, and 100 W, and at US frequency levels of 20 kHz, 28 kHz, and 40 kHz. For an imaging duration of 2 seconds, the effect of US power and frequency on the average number, size, and area fraction occupied by bubbles was analyzed. The microbubbles’ dynamics in terms of their velocity in water was also investigated. As the US power increased from 50 W to 100 W, the average bubble number increased from 746 to 880 and the average bubble diameter from 36.7 µm to 48.4 µm. In terms of the influence of US frequency, fewer bubbles were created at 20 kHz (an average of 176 bubbles rather than 808 bubbles at 40 kHz), while the average bubble size was significantly larger than at 40 kHz (almost seven times). The majority of bubbles were captured close to the membrane surface in the filtration unit. According to these observations, membrane cleaning efficiency is expected to improve at higher US power and lower US frequency, due to the higher energy release into the system from the increased number of bubbles or their larger size during oscillation (the optimum condition is expected to be 20 kHz and 100 W).
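The ImageJ post-processing described above amounts to thresholding each frame and then counting bubbles and measuring the area fraction they occupy. A minimal sketch on a synthetic frame (not CLS data); the breadth-first-search component count below approximates what a particle-analysis step does:

```python
import numpy as np
from collections import deque

# Synthetic grayscale frame with two bright "bubbles" (illustrative only).
frame = np.zeros((64, 64))
frame[10:14, 10:14] = 1.0
frame[40:46, 30:36] = 1.0
binary = frame > 0.5          # threshold, as in the ImageJ post-processing

def count_blobs(mask):
    """Count 4-connected components with a breadth-first search."""
    seen = np.zeros(mask.shape, dtype=bool)
    n = 0
    for i, j in zip(*np.nonzero(mask)):
        if seen[i, j]:
            continue
        n += 1
        queue = deque([(i, j)])
        seen[i, j] = True
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
    return n

n_bubbles = count_blobs(binary)
area_fraction = binary.mean()   # fraction of the frame occupied by bubbles
```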

Keywords: bubble dynamics, cavitational bubbles, membrane fouling, ultrasonic cleaning

Procedia PDF Downloads 130
163 Temporal Variation of Surface Runoff and Interrill Erosion in Different Soil Textures of a Semi-arid Region, Iran

Authors: Ali Reza Vaezi, Naser Fakori Ivand, Fereshteh Azarifam

Abstract:

Interrill erosion is the detachment and transfer of soil particles between the rills due to the impact of raindrops and the shear stress of shallow surface runoff. This erosion can be affected by soil properties such as texture, organic matter content and the stability of soil aggregates. Information on the temporal variation of interrill erosion during a rainfall event, and on the effect soil properties have on it, can help in understanding the process of runoff production and soil loss between the rills on hillslopes. The importance of this study is especially great in semi-arid regions, where the soil is weakly aggregated and vegetation cover is mostly poor. Therefore, this research was conducted to investigate the temporal variation of surface flow and interrill erosion, and the effect of soil properties on them, in some semi-arid soils. A field experiment was conducted in eight different soil textures under simulated rainfall of uniform intensity. A total of twenty-four plots were installed for the eight study soils, with three replicates, in a randomized complete block design along the land. The plots were 1.2 m (length) × 1 m (width) in dimension and were placed 3 m from each other across the slope. Then, soil samples were poured into the plots. The plots were surrounded by a galvanized sheet, and runoff and soil erosion equipment were placed at their outlets. Rainfall simulation experiments were done using a purpose-designed portable simulator with an intensity of 60 mm per hour for 60 minutes. A plastic cover was used around the rainfall simulator frame to prevent the wind from affecting the free fall of water drops. Runoff production and soil loss were measured over 1 hour at 5-min intervals. Soil properties such as particle size distribution, aggregate stability, bulk density, exchangeable sodium percentage (ESP) and saturated hydraulic conductivity (Ks) were determined in the laboratory.
Correlation and regression analyses were done to determine the effect of soil properties on runoff and interrill erosion. Results indicated that the study soils have both low organic matter content and low aggregate stability. The soils, except for the coarse-textured ones, are calcareous and have relatively high exchangeable sodium percentages (ESP). Runoff production and soil loss did not occur in sand, which was associated with its higher infiltration and drainage rates. In the other study soils, interrill erosion occurred simultaneously with the generation of runoff. A strong relationship was found between interrill erosion and surface runoff (R² = 0.75, p < 0.01). The correlation analysis showed that surface runoff was significantly affected by soil properties including sand, silt, clay, bulk density, gravel, hydraulic conductivity (Ks), lime (calcium carbonate) and ESP. The soils with lower Ks, such as fine-textured soils, produced higher surface runoff and more interrill erosion. In these soils, surface runoff production increased over the course of the rainfall and reached a peak after about 25-35 min. Time to peak was very short (about 30 min) in fine-textured soils, especially clay, which was related to their lower infiltration rate.
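An R² value like the 0.75 reported above follows from an ordinary least-squares fit of erosion against runoff. A minimal sketch with hypothetical per-plot measurements (the arrays are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical per-plot measurements; not the study's data.
runoff = np.array([2.0, 4.5, 6.1, 8.0, 10.2, 12.5])       # surface runoff, mm
erosion = np.array([15.0, 32.0, 41.0, 60.0, 70.0, 95.0])  # soil loss, g/m^2

slope, intercept = np.polyfit(runoff, erosion, 1)   # least-squares line
predicted = slope * runoff + intercept
ss_res = ((erosion - predicted) ** 2).sum()         # residual sum of squares
ss_tot = ((erosion - erosion.mean()) ** 2).sum()    # total sum of squares
r_squared = 1 - ss_res / ss_tot                     # coefficient of determination
```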

Keywords: erosion plot, rainfall simulator, soil properties, surface flow

Procedia PDF Downloads 45
162 Agri-Food Transparency and Traceability: A Marketing Tool to Satisfy Consumer Awareness Needs

Authors: Angelo Corallo, Maria Elena Latino, Marta Menegoli

Abstract:

The link between man and food plays a central role in the social and economic system, where cultural and multidisciplinary aspects intertwine: food is not only nutrition, but also communication, culture, politics, environment, science, ethics, fashion. This multi-dimensionality has many implications in the food economy. In recent years, the consumer has become more conscious about his food choices, involving a consistent change in consumption models. This change concerns several aspects: awareness of food system issues, adoption of socially and environmentally conscious decision-making, and food choices based on characteristics other than nutritional ones, i.e. the origin of the food, how it is produced, and who produces it. In this frame, the ‘consumption choices’ and the ‘interests of the citizen’ become part of one another. The figure of the ‘Citizen Consumer’ is born: an individual responsible and ethically motivated to change his lifestyle, achieving the goal of sustainable consumption. Simultaneously, branding, which was once a guarantee of product quality, is today questioned. In order to meet these needs, Agri-Food companies are developing specific product lines that follow two main philosophies: ‘Back to basics’ and ‘Less is more’. However, the issue of ethical behavior does not seem to find an adequate offer on the market, most likely due to a lack of attention to the communication strategy used, which is very often based on market logic and rarely on an ethical one. The label in its classic concept of ‘clean labeling’ can no longer be the only instrument through which to convey product information, and its evolution towards a concept of ‘clear label’ is necessary to embrace ethical and transparent concepts, advancing the process of democratization of the Food System.
The implementation of a voluntary traceability path, relying on the technological models of the Internet of Things or Industry 4.0, would enable the Agri-Food Supply Chain to collect data that, if properly treated, could satisfy the information needs of consumers. A change of approach is therefore proposed towards Agri-Food traceability: no longer a tool used to respond to the legislator, but rather a promotional tool useful for presenting the company in a transparent manner and thereby reaching the market segment of food citizens. The use of mobile technology can also facilitate this information transfer. However, in order to guarantee maximum efficiency, an appropriate communication model based on ethical communication principles should be used, one that aims to overcome the pipeline communication model and to offer the listener a new way of narrating the food product, based on real data collected through traceability processes. The Citizen Consumer is therefore placed at the center of the new communication model, in which he has the opportunity to choose what to know and how. The new label creates a virtual access point capable of presenting the product according to different points of view, following personal interests and offering several content modalities to support different situations and usability.
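As a loose illustration of the kind of record a voluntary traceability path could expose through such a ‘clear label’, the sketch below models a product lot's event history as a plain data structure; all field names and values are hypothetical and tied to no particular standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TraceEvent:
    """One hypothetical step in a product's farm-to-shelf history."""
    actor: str      # who performed the step
    action: str     # e.g. "harvested", "processed", "shipped"
    location: str
    timestamp: str  # ISO 8601 string

@dataclass
class ProductTrace:
    """The 'clear label' access point: the full event history of one lot."""
    lot_id: str
    events: List[TraceEvent] = field(default_factory=list)

    def story_for(self, interest: str) -> List[TraceEvent]:
        # let the Citizen Consumer choose what to know: filter by action
        return [e for e in self.events if e.action == interest]

trace = ProductTrace("LOT-001")
trace.events.append(TraceEvent("Farm A", "harvested", "Puglia", "2023-06-01T08:00:00"))
trace.events.append(TraceEvent("Mill B", "processed", "Bari", "2023-06-03T10:00:00"))
```

A mobile label scan would then render `trace` through whichever "point of view" the consumer selects, e.g. only the harvesting steps.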

Keywords: agri food traceability, agri-food transparency, clear label, food system, internet of things

Procedia PDF Downloads 144
161 Challenges in Employment and Adjustment of Academic Expatriates Based in Higher Education Institutions in the KwaZulu-Natal Province, South Africa

Authors: Thulile Ndou

Abstract:

The purpose of this study was to examine the challenges encountered in attracting and recruiting academic expatriates, who in turn encounter their own obstacles in adjusting to and settling in their host country, host academic institutions and host communities. The absence of literature on the attraction, placement and management of academic expatriates in the South African context has been acknowledged. Moreover, Higher Education Institutions in South Africa have voiced concerns relating to delayed and prolonged recruitment and selection processes experienced in the employment of academic expatriates. Once employed, academic expatriates should be supported and acquainted with the surroundings and the local communities, as well as assisted in establishing working relations with colleagues, in order to facilitate their adjustment and integration. Hence, an employer should play a critical role in facilitating the adjustment of academic expatriates. This mixed methods study was located in four Higher Education Institutions based in the KwaZulu-Natal province, in South Africa. An explanatory sequential design was deployed, its chief merit being that it employed both quantitative and qualitative techniques of inquiry. The study therefore examined and interrogated its subject from a multiplicity of quantitative and qualitative vantage points, yielding a much richer illumination and a more durable understanding of the subject. A 5-point Likert scale questionnaire was used to collect quantitative data relating to interaction adjustment, general adjustment and work adjustment from academic expatriates. One hundred and forty-two (142) academic expatriates participated in the quantitative study. 
Qualitative data relating to the employment process and the support offered to academic expatriates were collected through a structured questionnaire and semi-structured interviews. A total of 48 respondents, including line managers, human resources practitioners and academic expatriates, participated in the qualitative study. Independent t-tests, ANOVA and descriptive statistics were performed to analyse and interpret the quantitative data, and thematic analysis was used to analyse the qualitative data. The qualitative results revealed that academic talent is sourced from outside the borders of the country because of the academic skills shortage in almost all academic disciplines, especially those associated with Science, Engineering and Accounting. However, delays in the work permit application process made it difficult to finalise the recruitment and selection process on time. Furthermore, the quantitative results revealed that academic expatriates experience general and interaction adjustment challenges associated with the use of the local language and understanding of the local culture. However, female academic expatriates were found to be better adjusted in these two areas than male academic expatriates. Moreover, significant mean differences were found between institutions, suggesting that academic expatriates based in rural areas experienced adjustment challenges differently from those based in urban areas. The study gestured to the need for policy revisions in the areas of immigration, human resources and academic administration.

Keywords: academic expatriates, recruitment and selection, interaction and general adjustment, work adjustment

Procedia PDF Downloads 281
160 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion

Authors: Ali Kazemi

Abstract:

In today's volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning techniques have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study offers a novel method for financial market prediction that leverages the synergistic capability of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This period, marked by considerable volatility and transformation in financial markets, provides a solid basis for training and testing our predictive model. Our algorithm integrates diverse data to construct a dynamic financial graph that accurately reflects market intricacies. We collect daily opening, closing, high and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into the market's buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate critical macroeconomic indicators, including interest rates, inflation rates, GDP growth and unemployment rates, into our model. Our GCN algorithm is adept at learning the relational patterns among different financial instruments, represented as nodes in a comprehensive market graph. 
Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling our model to grasp the complex network of influences governing market moves. Complementing this, our LSTM algorithm is trained on sequences of the spatial-temporal representations learned by the GCN, enriched with historical price and volume data. This lets the LSTM capture and predict temporal market trends accurately. In a comprehensive evaluation of our GCN-LSTM algorithm across the stock market and cryptocurrency datasets, the model demonstrated superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting daily price movements. The Root Mean Square Error (RMSE) was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing the model's predictive performance on directional market movements, it achieved an accuracy rate of 78%, significantly outperforming the benchmark models, which averaged an accuracy of 65%. This high degree of accuracy is instrumental for strategies that depend on predicting the direction of price moves. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework. Our findings promise to revolutionize investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
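The pipeline described above, graph convolution over the cross-asset structure followed by an LSTM over time, can be sketched in miniature. The sketch below is a minimal NumPy illustration under assumed shapes and random weights, not the authors' implementation; the four-asset adjacency matrix and all dimensions are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gcn_layer(A, H, W):
    """One graph-convolution step: H' = tanh(D^-1/2 (A+I) D^-1/2 H W)."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.tanh(A_hat @ H @ W)

def lstm_step(x, h, c, Wx, Wh, b):
    """One LSTM cell step; gates packed as [input | forget | cell | output]."""
    z = x @ Wx + h @ Wh + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c + i * np.tanh(g)
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
n_assets, n_feat, hid, T = 4, 5, 8, 30

# hypothetical co-movement graph over 4 instruments (e.g. two indices, two coins)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
Wg = rng.normal(scale=0.1, size=(n_feat, hid))
Wx = rng.normal(scale=0.1, size=(hid, 4 * hid))
Wh = rng.normal(scale=0.1, size=(hid, 4 * hid))
b = np.zeros(4 * hid)
w_out = rng.normal(scale=0.1, size=hid)

h = np.zeros(hid)
c = np.zeros(hid)
for t in range(T):
    X_t = rng.normal(size=(n_assets, n_feat))            # per-asset daily features
    emb = gcn_layer(A, X_t, Wg)                          # spatial (graph) encoding
    h, c = lstm_step(emb.mean(axis=0), h, c, Wx, Wh, b)  # temporal encoding

prediction = h @ w_out  # scalar next-day forecast from the final hidden state
```

In a trained model the weights would of course be learned end to end; the point here is only the data flow, per-day graph embeddings feeding a recurrent state.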

Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting

Procedia PDF Downloads 33
159 Framework Proposal on How to Use Game-Based Learning, Collaboration and Design Challenges to Teach Mechatronics

Authors: Michael Wendland

Abstract:

This paper presents a framework to teach a methodical design approach with the help of a mixture of game-based learning, design challenges and competitions as forms of direct assessment. In today’s world, developing products is more complex than ever. Conflicting goals of product cost and quality under limited time, as well as post-pandemic part shortages, increase the difficulty. Common design approaches for mechatronic products mitigate some of these effects by supporting users with a methodical framework. Due to the inherent complexity of these products, the number of involved resources and the comprehensive design processes, students very rarely have enough time or motivation to experience a complete approach in a one-semester course. But for students to be successful in the industrial world, it is crucial to know these methodical frameworks and to gain first-hand experience. Therefore, it is necessary to teach these design approaches in a real-world setting, keeping motivation high while learning to manage upcoming problems. This is achieved by using a game-based approach and a set of design challenges that are given to the students. In order to mimic industrial collaboration, they work in teams of up to six participants and are given the main development target: to design a remote-controlled robot that can manipulate a specified object. By setting this clear goal without a given solution path, a constrained time frame and a limited maximum cost, the students are subjected to boundary conditions similar to those of the real world. They must follow the methodical approach by specifying requirements, conceptualizing their ideas, drafting, designing, manufacturing and building a prototype using rapid prototyping. At the end of the course, the prototypes are entered into a contest against the other teams. 
The complete design process is accompanied by theoretical input via lectures, which the students immediately transfer to their own design problem in practical sessions. To increase motivation in these sessions, a playful learning approach has been chosen, i.e. designing the first concepts is supported by using Lego construction kits. After each challenge, mandatory online quizzes help to deepen the students’ acquired knowledge, and badges are awarded to those who complete a quiz, resulting in higher motivation and a level-up on a fictional leaderboard. The final contest is held in presence and involves all teams with their functional prototypes, which now compete against each other. Prizes are awarded for the best mechanical design, the most innovative approach and the winner of the robotic contest. Each robot design is evaluated with regard to the specified requirements, and partial grades are derived from the results. This paper concludes with a critical review of the proposed framework, the game-based approach for the designed prototypes, the realism of the boundary conditions, the problems that occurred during the design and manufacturing process, the experiences and feedback of the students and the effectiveness of their collaboration, as well as a discussion of the potential transfer to other educational areas.

Keywords: design challenges, game-based learning, playful learning, methodical framework, mechatronics, student assessment, constructive alignment

Procedia PDF Downloads 54
158 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts

Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig

Abstract:

This study focuses on the evaluation of snow avalanche simulations, based on a survey that has been carried out among avalanche experts. In recent decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety applications such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that previously could be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing the prediction of certain quantities (e.g. pressure, velocities, flow heights, runout lengths, etc.) of the avalanche flow. Because of the highly variable regimes of the flowing snow, no uniform rheological law describing the motion of an avalanche is known. Therefore, analogies are drawn to the fluid dynamical laws of other materials. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Besides these limitations, there exist high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations in an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility. Hence, model development is compelled to introduce further simplifications and the related uncertainties. In the light of these issues, many questions arise about avalanche simulations: their assets and drawbacks, potentials for improvement, as well as their application in practice. 
To address these questions, a survey among experts in the field of avalanche science (e.g. researchers, practitioners, engineers) from various countries has been conducted. In the questionnaire, special attention is drawn to the experts’ opinions regarding the influence of certain variables on the simulation result, their uncertainty and the reliability of the results. Furthermore, it was tested to which degree a simulation result influences decision making in a hazard assessment. A discrepancy could be found between the large uncertainty of the simulation input parameters and the comparatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations. The credibility of the simulations is the result of a rather thorough simulation study, in which different assumptions are tested and the results of different flow models are compared, along with the use of supplemental data such as chronicles, field observations and silent witnesses, among others, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on modeling practice could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.

Keywords: expert interview, hazard management, modeling, simulation, snow avalanche

Procedia PDF Downloads 311
157 An Evaluation of a Prototype System for Harvesting Energy from Pressurized Pipeline Networks

Authors: Nicholas Aerne, John P. Parmigiani

Abstract:

There is an increasing desire for renewable and sustainable energy sources to replace fossil fuels. This desire is the result of several factors. First is the role of fossil fuels in climate change. Scientific data clearly show that global warming is occurring. It has also been concluded that it is highly likely that human activity, specifically the combustion of fossil fuels, is a major cause of this warming. Second, despite the current surplus of petroleum, fossil fuels are a finite resource and will eventually become scarce, and alternatives such as clean or renewable energy will be needed. Third, operations to obtain fossil fuels, such as fracking, off-shore oil drilling, and strip mining, are expensive and harmful to the environment. Given these environmental impacts, there is a need to replace fossil fuels with renewable energy sources as a primary energy source. Various sources of renewable energy exist. Many familiar sources obtain renewable energy from the sun and the natural environments of the earth. Common examples include solar, hydropower, geothermal heat, ocean waves and tides, and wind energy. Often, obtaining significant energy from these sources requires physically large, sophisticated, and expensive equipment (e.g., wind turbines, dams, solar panels, etc.). Other sources of renewable energy are found in the man-made environment. An example is municipal water distribution systems. The movement of water through the pipelines of these systems typically requires the reduction of hydraulic pressure through the use of pressure reducing valves. These valves are needed to reduce upstream supply-line pressures to levels suitable for downstream users. The energy associated with this reduction of pressure is significant but is currently not harvested and is simply lost. While the integrity of municipal water supplies is of paramount importance, one can certainly envision means by which this lost energy source could be safely accessed. 
This paper provides a technical description and analysis of one such means, developed by the technology company InPipe Energy, to generate hydroelectricity by harvesting energy from municipal water distribution pressure reducing valve stations. Specifically, InPipe Energy proposes to install hydropower turbines in parallel with existing pressure reducing valves in municipal water distribution systems. InPipe Energy, in partnership with Oregon State University, has evaluated this approach and built a prototype system at the O. H. Hinsdale Wave Research Lab. The Oregon State University evaluation showed that the prototype system rapidly and safely initiates, maintains, and ceases power production as directed. The outgoing water pressure remained constant at the specified set point throughout all testing. The system replicates the functionality of the pressure reducing valve and ensures accurate control of downstream pressure. At a typical water-distribution-system pressure drop of 60 psi, the prototype, operating at an efficiency of 64%, produced approximately 5 kW of electricity. Based on the results of this study, the proposed method appears to offer a viable means of producing significant amounts of clean renewable energy from existing pressure reducing valves.
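As a rough consistency check on the reported figures, hydraulic power obeys P = η·Q·Δp, so the flow rate implied by a 5 kW output at 64% efficiency across a 60 psi drop can be back-calculated (the abstract itself does not report the flow rate):

```python
# Back-of-envelope: implied flow rate from the reported figures, P = eta * Q * dp
PSI_TO_PA = 6894.757   # pascals per psi

eta = 0.64             # reported turbine-generator efficiency
dp = 60 * PSI_TO_PA    # reported pressure drop, Pa
P = 5000.0             # reported electrical output, W

Q = P / (eta * dp)     # volumetric flow rate, m^3/s
print(f"implied flow: {Q * 1000:.1f} L/s")  # ~18.9 L/s
```

About 19 liters per second is a plausible magnitude for a municipal distribution line, which supports the internal consistency of the reported numbers.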

Keywords: pressure reducing valve, renewable energy, sustainable energy, water supply

Procedia PDF Downloads 182
156 Bio-Hub Ecosystems: Investment Risk Analysis Using Monte Carlo Techno-Economic Analysis

Authors: Kimberly Samaha

Abstract:

In order to attract new types of investors into the emerging Bio-Economy, new methodologies to analyze investment risk are needed. The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding the use of biomass as a feedstock for power plants. This study looked at repurposing existing biomass-energy plants into Circular Zero-Waste Bio-Hub Ecosystems: a Bio-Hub model that first takes a ‘whole-tree’ approach and then looks at the circular economics of co-hosting diverse industries (wood processing, aquaculture, agriculture) in the vicinity of biomass power plant facilities. This study modeled the economics and risk strategies of cradle-to-cradle linkages to incorporate the value-chain effects on capital/operational expenditures and investment risk reductions, using a proprietary techno-economic model that incorporates investment risk scenarios utilizing the Monte Carlo methodology. The study calculated the sequential increases in profitability for each additional co-host on an operating forestry-based biomass energy plant in West Enfield, Maine. Phase I starts with the baseline of forestry biomass to electricity only and was built up in stages to include co-hosts of a greenhouse and a land-based shrimp farm. Phase I incorporates CO2 and heat waste streams from the operating power plant in an analysis of lowering and stabilizing the operating costs of the agriculture and aquaculture co-hosts. The Phase II analysis incorporated a jet-fuel biorefinery and its secondary slip-stream of biochar, which would be developed into two additional bio-products: 1) a soil amendment compost for agriculture and 2) a biochar effluent filter for the aquaculture. The second part of the study applied the Monte Carlo risk methodology to illustrate how co-location derisks investment in an integrated Bio-Hub versus individual investments in stand-alone projects of energy, agriculture or aquaculture. 
The analyzed scenarios compared reductions in both capital and operating expenditures, which stabilize profits and reduce the investment risk associated with projects in energy, agriculture, and aquaculture. The major findings of this techno-economic modeling using the Monte Carlo technique resulted in the masterplan for the first Bio-Hub, to be built in West Enfield, Maine. In 2018, the site was designated as an economic opportunity zone as part of a Federal Program, which allows for capital gains tax benefits for investments on the site. Bioenergy facilities are currently at a critical juncture where they have an opportunity to be repurposed into efficient, profitable and socially responsible investments, or be idled and scrapped. The Bio-Hub Ecosystems techno-economic analysis model is a critical model to expedite new standards for investments in circular zero-waste projects. Profitable projects will expedite adoption and advance the critical transition from the current ‘take-make-dispose’ paradigm inherent in the energy, forestry and food industries to a more sustainable Bio-Economy paradigm that supports local and rural communities.
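The derisking effect of co-location that the Monte Carlo analysis demonstrates can be sketched generically: pooling several imperfectly correlated cash-flow streams lowers risk per unit of expected return. The figures below are invented for illustration only and are not from the study's proprietary model.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo draws

# hypothetical annual net cash flows (USD millions), independent shocks
energy = rng.normal(2.0, 0.8, N)  # stand-alone biomass power plant
agri   = rng.normal(1.0, 0.5, N)  # stand-alone greenhouse
aqua   = rng.normal(1.5, 0.6, N)  # stand-alone shrimp farm

# co-hosted Bio-Hub: pooled cash flows, plus an assumed 10% uplift from
# shared heat and CO2 streams (purely illustrative figure)
hub = 1.10 * (energy + agri + aqua)

def cv(x):
    """Coefficient of variation: risk per unit of expected cash flow."""
    return x.std() / x.mean()

print(f"CV energy={cv(energy):.2f}, agri={cv(agri):.2f}, "
      f"aqua={cv(aqua):.2f}, hub={cv(hub):.2f}")
```

Because the shocks only partly coincide, the hub's coefficient of variation comes out well below that of any stand-alone project, which is the statistical core of the co-location derisking argument.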

Keywords: bio-economy, investment risk, circular design, economic modelling

Procedia PDF Downloads 91
155 A Systematic Review Regarding Caregiving Relationships of Adolescents Orphaned by Aids and Primary Caregivers

Authors: M. Petunia Tsweleng

Abstract:

Statement of the Problem: Research and aid organisations report that children and adolescents orphaned due to HIV and AIDS are particularly vulnerable, as they are often exposed to the negative effects of both HIV and AIDS and orphanhood. Without much-needed parental love, care, and support, these children and adolescents are at risk of poor developmental outcomes. A cursory look at the available literature on AIDS-orphaned adolescents and the quality of their caregiving relationships with caregivers shows that this is a relatively under-researched terrain. This article is a review of the literature on the caregiving relationships of adolescents orphaned due to AIDS and their current primary caregivers. It aims to inform community programmes and policymakers by providing insight into the qualities of these relationships. Methodology: A comprehensive search of both peer-reviewed and non-peer-reviewed literature was conducted through EBSCOhost, SpringLINK, PsycINFO, SAGE, PubMed, Elsevier ScienceDirect, JSTOR, Wiley Online Library databases, and Google Scholar. The combinations of keywords used for the search were: (caregiving relationships); (orphans OR AIDS orphaned children OR AIDS orphaned adolescents); (primary caregivers); (quality caregiving); (orphans); and (HIV and AIDS). The search took place between 24 January and 28 February 2022. Both qualitative and quantitative research studies published between 2010 and 2020 were reviewed. However, only qualitative studies were selected in the end, as they presented more profound findings concerning orphan-caregiver relationships. The following three stages of meta-synthesis analysis were used to analyse the data: refutational synthesis, reciprocal synthesis, and line of argument. Results: The search returned a total of 2090 titles, of which 750 were duplicates and were therefore removed. The researcher reviewed all the titles and abstracts of the remaining 1340 articles. 
329 articles were identified as relevant, and their full texts were reviewed. Following the review of the full texts, 313 studies were excluded on relevance and 4 on methodology. Twelve articles representing 11 studies fulfilled the inclusion criteria and were selected. These studies, representing different countries across the globe, reported similar forms of hardship experienced by caregivers economically, psychosocially, and health-wise. However, the studies also show that the majority of caregivers, particularly grandmother carers, found contentment in caring for orphans and were thus enabled to provide love, care, and support despite hardships. This resulted in positive caregiving relationships, as orphans fared well emotionally and psychosocially. Some relationships, however, were found to be negative due to unhealed emotional wounds suffered by both caregivers and orphans, and others due to the caregiver’s lack of interest in providing care. These findings were based on self-report data from both orphans and caregivers. Conclusion: Findings suggest that intervention efforts need to be intensified to alleviate poverty in households affected by the HIV and AIDS pandemic, strengthen community psychosocial support programmes for orphans and their caregivers, and integrate clinical services with community programmes for the healing of emotional and psychological wounds. Contributions: The findings inform community programmes and policymakers by providing insight into the qualities of these relationships, as well as identifying factors commonly associated with high-quality and poor-quality caregiving.
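The screening counts reported above are internally consistent, as a quick bookkeeping check shows:

```python
# PRISMA-style screening bookkeeping from the review's reported counts
titles_found = 2090
duplicates = 750
screened = titles_found - duplicates            # titles/abstracts reviewed

full_text = 329                                 # full texts assessed
excluded_relevance = 313
excluded_methodology = 4
included_articles = full_text - excluded_relevance - excluded_methodology

print(screened, included_articles)  # 1340 screened, 12 articles included
```

The 12 included articles correspond to 11 studies, since one study was reported across two articles.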

Keywords: systematic review, caregiving relationships, orphans and primary caregivers, AIDS

Procedia PDF Downloads 143
154 Numerical Study of Homogeneous Nanodroplet Growth

Authors: S. B. Q. Tran

Abstract:

Drop condensation is the phenomenon in which tiny drops form when oversaturated vapour present in the environment condenses on a substrate, driving droplet growth. Recently, this subject has received much attention due to its applications in many fields such as thin film growth, heat transfer, recovery of atmospheric water and polymer templating. In the literature, many papers have investigated, theoretically and experimentally, macroscale droplet growth with radii on the millimeter scale. However, few papers, especially theoretical ones, address nanodroplet condensation. In order to understand droplet growth at the nanoscale, we perform numerical simulations of nanodroplet growth. We investigate and discuss the roles of droplet shape and monomer diffusion in drop growth and their effect on the growth law. The effect of droplet shape is studied through parametric studies of contact angle and disjoining pressure magnitude. Besides, the effect of pinning and de-pinning behaviours is also studied. We investigate the axisymmetric homogeneous growth of a 10–100 nm single water nanodroplet on a substrate surface. The main mechanism of droplet growth is attributed to the accumulation of laterally diffusing water monomers, formed by the absorption of water vapour from the environment onto the substrate. Under the assumption of quasi-steady thermodynamic equilibrium, the nanodroplet evolves according to the augmented Young–Laplace equation. Using continuum theory, we model the dynamics of nanodroplet growth, including the coupled effects of disjoining pressure, contact angle and monomer diffusion, with the assumption of a constant flux of water monomers at the far field. The simulation results are validated by comparison with published experimental results. For the case of nanodroplet growth with constant contact angle, our numerical results show that the initial droplet growth is transient, driven by monomer diffusion. 
When the flux at the far field is small, the droplet grows at first by the diffusion of initially available water monomers on the substrate and thereafter by the flux at the far field. In the late steady stage, the growth of droplet radius and droplet height follows a power law with exponent 1/3, which is unaffected by the substrate disjoining pressure and contact angle. However, it is found that the droplet grows faster in the radial direction than in height when disjoining pressure and contact angle increase. The simulation also shows the effect of the computational domain size during the transient growth period: when the computational domain is larger, more monomer mass enters through the free substrate domain, so more mass enters the droplet, and the droplet grows and reaches the steady state faster. For the case of pinning and de-pinning droplet growth, the simulation shows that the disjoining pressure does not affect the 1/3 power law for droplet radius growth in the steady state. However, the disjoining pressure modifies the growth rate of the droplet height, which then follows a power law with exponent 1/4. We demonstrate how spatial depletion of monomers can lead to a growth arrest of the nanodroplet, as observed experimentally.
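The 1/3 exponent for constant-contact-angle growth follows from volume conservation: a spherical cap at fixed contact angle has V ∝ R³, so a constant far-field monomer flux gives R ∝ t^(1/3) at late times. A minimal numerical sketch of this scaling argument (illustrative units, not the paper's simulation):

```python
import numpy as np

# constant-contact-angle spherical cap: V = k * R^3 for some shape factor k,
# so a constant far-field monomer influx J gives R(t) ~ t^(1/3) at late times
k, J, V0 = 1.0, 1.0, 1.0          # illustrative shape factor, flux, initial volume
t = np.logspace(4, 6, 200)        # late times, where V0 is negligible
V = V0 + J * t                    # volume grows linearly with constant flux
R = (V / k) ** (1.0 / 3.0)

# late-time slope of log R versus log t recovers the growth exponent
slope = np.polyfit(np.log(t), np.log(R), 1)[0]
print(f"growth exponent = {slope:.3f}")  # approaches 1/3
```

The same argument explains why the exponent is insensitive to contact angle and disjoining pressure: both only change the shape factor k, not the time dependence.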

Keywords: augmented young-laplace equation, contact angle, disjoining pressure, nanodroplet growth

Procedia PDF Downloads 254
153 Automatic Content Curation of Visual Heritage

Authors: Delphine Ribes Lemay, Valentine Bernasconi, André Andrade, Lara DéFayes, Mathieu Salzmann, FréDéRic Kaplan, Nicolas Henchoz

Abstract:

Digitization and preservation of large heritage collections induce high maintenance costs to keep up with technical standards and ensure sustainable access. Creating impactful usage is instrumental to justify the resources for long-term preservation. The Museum für Gestaltung of Zurich holds one of the biggest poster collections in the world, from which 52’000 posters were digitised. In the process of building a digital installation to valorize the collection, one objective was to develop an algorithm capable of predicting the next poster to show according to the ones already displayed. The work presented here describes the steps to build an algorithm able to automatically create sequences of posters reflecting associations performed by curators and professional designers. The challenge has similarities with the domain of song playlist algorithms. Recently, artificial intelligence techniques, and more specifically deep-learning algorithms, have been used to facilitate their generation. Promising results were found thanks to Recurrent Neural Networks (RNNs) trained on manually generated playlists and paired with clusters of features extracted from songs. We used the same principles to create the proposed algorithm, but applied to a challenging medium: posters. First, a convolutional autoencoder was trained to extract features of the posters, with the 52’000 digital posters used as the training set. The poster features were then clustered. Next, an RNN learned to predict the next cluster according to the previous ones. The RNN training set was composed of poster sequences extracted from a collection of books from the Museum für Gestaltung of Zurich dedicated to displaying posters. Finally, within the predicted cluster, the poster with the closest proximity to the previous poster is selected, with the mean square distance between poster features used to compute the proximity. 
To validate the predictive model, we compared sequences of 15 posters produced by our model to randomly and manually generated sequences. The manual sequences were created by a professional graphic designer. We asked 21 participants working as professional graphic designers to sort the sequences from the one with the strongest graphic line to the one with the weakest and to motivate their answer with a short description. The sequences produced by the designer were ranked first 60%, second 25%, and third 15% of the time. The sequences produced by our predictive model were ranked first 25%, second 45%, and third 30% of the time. The sequences produced randomly were ranked first 15%, second 29%, and third 55% of the time. Compared to the designer's sequences, and as reported by participants, the model and random sequences lacked thematic continuity. According to these results, the proposed model generates better poster sequences than random sampling, and is occasionally able to outperform a professional designer. As a next step, the proposed algorithm should include the possibility to create sequences according to a selected theme. To conclude, this work shows the potential of artificial intelligence techniques to learn from existing content and provide a tool to curate large sets of data, with a permanent renewal of the presented content.
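The pipeline described above (cluster the extracted features, predict the next cluster, then pick the in-cluster poster with the smallest mean squared feature distance) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the feature vectors and cluster centroids are assumed given, and a simple first-order transition-count model stands in for the trained RNN.

```python
from collections import defaultdict

def msd(a, b):
    """Mean squared distance between two feature vectors (the proximity measure)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def assign_cluster(feat, centroids):
    """Index of the nearest cluster centroid for a feature vector."""
    return min(range(len(centroids)), key=lambda c: msd(feat, centroids[c]))

class TransitionModel:
    """First-order stand-in for the RNN: predicts the next cluster from the
    previous one using transition counts learned from curated sequences."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
    def fit(self, cluster_sequences):
        for seq in cluster_sequences:
            for prev, nxt in zip(seq, seq[1:]):
                self.counts[prev][nxt] += 1
    def predict(self, prev):
        nxt = self.counts[prev]
        return max(nxt, key=nxt.get)

def next_poster(prev_feat, centroids, model, posters):
    """Pick, within the predicted cluster, the poster closest to the previous one."""
    cluster = model.predict(assign_cluster(prev_feat, centroids))
    candidates = [f for f in posters if assign_cluster(f, centroids) == cluster]
    return min(candidates, key=lambda f: msd(prev_feat, f))
```

In the described system the transition model would be replaced by the trained RNN and the features would come from the convolutional autoencoder.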

Keywords: artificial intelligence, digital humanities, serendipity, design research

Procedia PDF Downloads 161
152 The Pore–Scale Darcy–Brinkman–Stokes Model for the Description of Advection–Diffusion–Precipitation Using Level Set Method

Authors: Jiahui You, Kyung Jae Lee

Abstract:

Hydraulic fracturing fluid (HFF) is widely used in shale reservoir production. HFF contains diverse chemical additives, which result in the dissolution and precipitation of minerals through multiple chemical reactions. In this study, a new pore-scale Darcy–Brinkman–Stokes (DBS) model coupled with the Level Set Method (LSM) is developed to address the microscopic phenomena occurring during the iron–HFF interaction, by numerically describing mass transport, chemical reactions, and pore structure evolution. The new model is developed on OpenFOAM, an open-source platform for computational fluid dynamics. Here, the DBS momentum equation is used to solve for velocity by accounting for fluid–solid mass transfer, while an advection–diffusion equation is used to compute the distribution of injected HFF and iron. The reaction-induced pore evolution is captured by applying the LSM, where the solid–liquid interface is updated by solving the level set distance function and reinitializing it to a signed distance function. A smoothed Heaviside function then gives a smooth solid–liquid interface over a narrow band of fixed thickness. The stated equations are discretized by the finite volume method, while the reinitialization equation is discretized by the central difference method. A Gauss linear upwind scheme is used to solve the level set distance function, and the Pressure-Implicit with Splitting of Operators (PISO) method is used to solve the momentum equation. The numerical result is compared with the 1-D analytical solution of the fluid–solid interface for reaction–diffusion problems. A sensitivity analysis is conducted over various Damköhler numbers (DaII) and Peclet numbers (Pe). We categorize the Fe(III) precipitation into three patterns as a function of DaII and Pe: symmetrical smooth growth, unsymmetrical growth, and dendritic growth.
Pe and DaII significantly affect the location of precipitation, which is critical in determining the injection parameters of hydraulic fracturing. When DaII < 1, precipitation occurs uniformly on the solid surface in both upstream and downstream directions. When DaII > 1, precipitation occurs mainly on the solid surface in the upstream direction. When Pe > 1, Fe(II) is transported deep into the pores and precipitates inside them. When Pe < 1, Fe(III) precipitates mainly on the solid surface in the upstream direction and is easily deposited inside small pore structures. The porosity–permeability relationship is subsequently presented. This pore-scale model allows high confidence in the description of Fe(II) dissolution, transport, and Fe(III) precipitation. The model shows fast convergence and requires a low computational load. The results can provide reliable guidance for injecting HFF in shale reservoirs to avoid clogging and wellbore pollution. Understanding Fe(III) precipitation and Fe(II) release and transport behaviors supports highly efficient hydraulic fracturing projects.
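The smoothed Heaviside function used to regularize the solid–liquid interface is, in the standard level-set formulation, a sine-smoothed step over a narrow band of half-width eps. The sketch below assumes that common form; the paper may use a variant.

```python
import math

def smoothed_heaviside(phi, eps):
    """Smoothed Heaviside of the signed-distance value phi, transitioning
    from 0 (solid) to 1 (fluid) over a band of half-width eps; used to
    blend solid and fluid properties across the interface."""
    if phi < -eps:
        return 0.0
    if phi > eps:
        return 1.0
    # Standard sine-regularized ramp inside the narrow band.
    return 0.5 * (1.0 + phi / eps + math.sin(math.pi * phi / eps) / math.pi)
```

Outside the band the function is exactly 0 or 1, so the DBS equations recover the pure solid and pure fluid limits away from the interface.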

Keywords: reactive transport, shale, kerogen, precipitation

Procedia PDF Downloads 151
151 Study of Formation and Evolution of Disturbance Waves in Annular Flow Using Brightness-Based Laser-Induced Fluorescence (BBLIF) Technique

Authors: Andrey Cherdantsev, Mikhail Cherdantsev, Sergey Isaenkov, Dmitriy Markovich

Abstract:

In annular gas–liquid flow, liquid flows as a film along the pipe walls, sheared by a high-velocity gas stream. The film surface is covered by large-scale disturbance waves, which affect pressure drop and heat transfer in the system and are necessary for the entrainment of liquid droplets from the film surface into the core of the gas stream. Disturbance waves are highly complex, and their properties are affected by numerous parameters. One such aspect is flow development, i.e., the change of flow properties with distance from the inlet. In the present work, this question is studied using the brightness-based laser-induced fluorescence (BBLIF) technique. This method enables simultaneous measurements of local film thickness at a large number of points with a high sampling frequency. In the present experiments, the first 50 cm of upward and downward annular flow in a vertical pipe of 11.7 mm i.d. is studied with a temporal resolution of 10 kHz and a spatial resolution of 0.5 mm. Thus, the spatiotemporal evolution of the film surface can be investigated, including scenarios of formation, acceleration, and coalescence of disturbance waves. The behaviour of the disturbance waves' velocity depending on the phase flow rates and downstream distance was investigated. Besides measuring the wave properties, the goal of the work was to investigate the interrelation between disturbance wave properties and integral characteristics of the flow, such as interfacial shear stress and the flow rate of the dispersed phase. In particular, it was shown that the initial acceleration of disturbance waves, defined by the value of shear stress, decays linearly with downstream distance. This lack of acceleration, which may even turn into deceleration, is related to liquid entrainment. The flow rate of the dispersed phase grows linearly with downstream distance. During entrainment events, liquid is extracted directly from disturbance waves, reducing their mass, their area of interaction with the gas shear and, hence, their velocity.
The passing frequency of disturbance waves at each downstream position was measured automatically with a new algorithm that identifies the characteristic lines of individual disturbance waves. Scenarios of coalescence of individual disturbance waves were identified. The transition from the initial high-frequency Kelvin–Helmholtz waves appearing at the inlet to highly nonlinear disturbance waves with a lower frequency was studied near the inlet using a 3D realisation of the BBLIF method in the same cylindrical channel and in a rectangular duct with a cross-section of 5 mm by 50 mm. It was shown that the initial waves are generally two-dimensional but are promptly broken into localised three-dimensional wavelets. Coalescence of these wavelets leads to the formation of quasi-two-dimensional disturbance waves. Using cross-correlation analysis, the loss and restoration of the two-dimensionality of the film surface with downstream distance were studied quantitatively. It was shown that all these processes occur closer to the inlet at higher gas velocities.
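The wave-velocity measurement underlying these observations can be illustrated with a simple cross-correlation of film-thickness records taken at two downstream positions: the lag that maximises the correlation gives the wave transit time. This is a generic sketch of the technique with hypothetical signal names, not the authors' processing code.

```python
def best_lag(sig_up, sig_down, max_lag):
    """Lag (in samples) at which the downstream record best matches a
    delayed copy of the upstream one, by maximising the unnormalised
    cross-correlation."""
    def xcorr(lag):
        n = len(sig_up) - lag
        return sum(sig_up[i] * sig_down[i + lag] for i in range(n))
    return max(range(max_lag + 1), key=xcorr)

def wave_velocity(sig_up, sig_down, dx, dt, max_lag):
    """Disturbance-wave velocity from film-thickness time series recorded
    dx metres apart, sampled every dt seconds (10 kHz -> dt = 1e-4 s)."""
    lag = best_lag(sig_up, sig_down, max_lag)
    return dx / (lag * dt)
```

For example, a wave signature appearing 3 samples later at a position 10 mm downstream, sampled at 10 kHz, corresponds to roughly 33 m/s.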

Keywords: annular flow, disturbance waves, entrainment, flow development

Procedia PDF Downloads 237
150 A Digital Environment for Developing Mathematical Abilities in Children with Autism Spectrum Disorder

Authors: M. Isabel Santos, Ana Breda, Ana Margarida Almeida

Abstract:

Research on the academic abilities of individuals with autism spectrum disorder (ASD) underlines the importance of mathematics interventions. Yet the development of digital applications for children and youth with ASD continues to attract little attention, namely regarding the development of mathematical reasoning, even though digital technologies are an area of great interest for individuals with this disorder and their use is certainly a facilitative strategy in the development of mathematical abilities. Digital technologies can be an effective way to create innovative learning opportunities for these students and to develop creative, personalized, and constructive environments where they can develop differentiated abilities. Children with ASD often respond well to learning activities involving information presented visually. In this context, we present the digital Learning Environment on Mathematics for Autistic children (LEMA), a research project conducive to a PhD in Multimedia in Education, developed by the Thematic Line Geometrix, located in the Department of Mathematics, in collaboration with the DigiMedia Research Center of the Department of Communication and Art (University of Aveiro, Portugal). LEMA is a digital mathematical learning environment whose activities are dynamically adapted to the user’s profile, towards the development of the mathematical abilities of children aged 6–12 years diagnosed with ASD. LEMA has already been evaluated with end-users (both students and expert teachers), and based on the analysis of the collected data, readjustments were made, enabling the continuous improvement of the prototype, namely considering the integration of universal design for learning (UDL) approaches, which are of utmost importance in ASD due to its heterogeneity.
The learning strategies incorporated in LEMA are: (i) providing options for the custom choice of math activities according to the user’s profile; (ii) integrating simple interfaces with few elements, presenting only the features and content needed for the ongoing task; (iii) using simple visual and textual language; (iv) using different types of feedback (auditory, visual, positive/negative reinforcement, hints with helpful instructions including math concept definitions, solved math activities split into easier tasks, and videos/animations that show a solution to the proposed activity); (v) providing information in multiple representations, such as text, video, audio, and image, for better content and vocabulary understanding, in order to stimulate, motivate, and engage users in mathematical learning, also helping users to focus on content; (vi) avoiding elements that distract or interfere with focus and attention; (vii) providing clear instructions and orientation about tasks to ease the user’s understanding of the content and its language; and (viii) using buttons, familiar icons, and contrast between font and background. Since these children may have little sensory tolerance and impaired motor skills, in addition to interacting with LEMA through the mouse (point and click with a single button), the user can interact with LEMA through a Kinect device (using simple gesture moves).

Keywords: autism spectrum disorder, digital technologies, inclusion, mathematical abilities, mathematical learning activities

Procedia PDF Downloads 100
149 Exploring Fluoroquinolone-Resistance Dynamics Using a Distinct in Vitro Fermentation Chicken Caeca Model

Authors: Bello Gonzalez T. D. J., Setten Van M., Essen Van A., Brouwer M., Veldman K. T.

Abstract:

Resistance to fluoroquinolones (FQ) has increased over the years, posing a significant challenge for the treatment of human infections, particularly gastrointestinal tract infections caused by zoonotic bacteria transmitted through the food chain and the environment. In broiler chickens, a relatively high proportion of FQ resistance has been observed in indicator Escherichia coli, Salmonella, and Campylobacter isolates. We hypothesize that flumequine (Flu), used as a secondary choice for the treatment of poultry infections, could potentially be associated with this high proportion of FQ resistance. To evaluate this hypothesis, we used an in vitro fermentation chicken caeca model. Two continuous single-stage fermenters were used to simulate in real time the physiological conditions of the chicken caecal microbial content (temperature, pH, caecal content mixing, and an anoxic environment). A pool of chicken caecal content containing FQ-resistant E. coli obtained from chickens at slaughter age was used as inoculum, along with a spiked FQ-susceptible Campylobacter jejuni strain isolated from broilers. Flu was added to one of the fermenters (Flu-fermenter) every 24 hours for two days to evaluate the selection and maintenance of FQ resistance over time, while the other served as a control (C-fermenter). The experiment lasted 5 days. Samples were collected at three time points: before, during, and after Flu administration. Serial dilutions were plated on Butzler culture media with and without Flu (8 mg/L) and enrofloxacin (4 mg/L) and on MacConkey culture media with and without Flu (4 mg/L) and enrofloxacin (1 mg/L) to determine the proportion of resistant strains over time. Positive cultures were identified by matrix-assisted laser desorption/ionization (MALDI) mass spectrometry. A subset of the obtained isolates was used for whole genome sequencing analysis. Over time, E. coli exhibited positive growth in both fermenters, while C.
jejuni growth was detected up to day 3. The proportion of Flu-resistant E. coli strains recovered remained consistent over time under antibiotic selective pressure, while in the C-fermenter a decrease was observed at day 5; a similar pattern was observed for the enrofloxacin-resistant E. coli strains. This suggests that Flu might play a role in the selection and persistence of enrofloxacin resistance, as in the C-fermenter enrofloxacin-resistant E. coli strains appeared at a later time. Furthermore, positive growth was detected from both fermenters only on Butzler plates without antibiotics. A subset of C. jejuni strains from the Flu-fermenter revealed that those strains were susceptible to ciprofloxacin (MIC < 0.12 μg/mL). A selection of E. coli strains from both fermenters revealed the presence of a plasmid-mediated quinolone resistance (PMQR) gene (qnrB19) in only one strain from the C-fermenter, belonging to sequence type (ST) 48, and in all strains from the Flu-fermenter, which belonged to ST189. Our results showed a selective impact of Flu on PMQR-positive E. coli strains, while no effect was observed in C. jejuni. The maintenance of Flu resistance was correlated with antibiotic selective pressure. Further studies of antibiotic resistance gene transfer among commensal and zoonotic bacteria in the chicken caecal content may help to elucidate the mechanisms of resistance spread.

Keywords: fluoroquinolone resistance, Escherichia coli, Campylobacter jejuni, in vitro model

Procedia PDF Downloads 40
148 Supply Chain Improvement of the Halal Goat Industry in the Autonomous Region in Muslim Mindanao

Authors: Josephine R. Migalbin

Abstract:

Halal is an Arabic word meaning "lawful" or "permitted". When it comes to food and consumables, halal is the dietary standard of Muslims. The Autonomous Region in Muslim Mindanao (ARMM) has a comparative advantage in the halal industry because it is the only Muslim region in the Philippines and the natural starting point for the establishment of a halal industry in the country. The region has identified goat production not only for domestic consumption but also for the export market. Goat production is one of its strengths due to cultural compatibility, and there is high demand for goats during Ramadhan and Eid ul-Adha. The study aimed to provide an overview of the ARMM halal goat industry; to map out the specific supply chain of halal goat; and to analyze the performance of the halal goat supply chain in terms of efficiency, flexibility, and overall responsiveness. It also aimed to identify areas for improvement in the supply chain, such as behavioural, institutional, and process issues, in order to provide recommendations for more efficient and effective production and marketing of halal goats, subsequently improving the plight of the actors in the supply chain. Generally, the raising of goats is characterized by backyard production (92.02%). Four interrelated factors significantly affect the production of goats: breeding prolificacy, prevalence of diseases, feed abundance, and pre-weaning mortality rate. The institutional buyers are mostly traders, restaurants/eateries, supermarkets, and meat shops, among others. The major goat sources are the municipalities of Midsayap and Pikit, in a neighbouring region, and Parang, among the ARMM municipalities. In addition to these major supply centers, Siquijor, an island province in the Visayas, is becoming a key source of goats. Goats are usually gathered by traders/middlemen and brought to the public markets.
Meat vendors purchase goats directly from raisers; the animals are slaughtered and sold fresh in wet markets. It was observed that demand is increasing at 2%/year and that supply is not enough to meet it. The farm gate price is 2.04 to 2.11 USD/kg liveweight. Industry information is shared by three key participants: raisers, traders, and buyers. All respondents reported that information is shared personally, built upon past experiences, and that there is no full disclosure of information among the key participants in the chain. The information flow in the industry is fragmented, such that no total industry picture exists. In the last five years, numerous local and foreign agencies have undertaken several initiatives for the development of the halal goat industry in ARMM. The major issues include productivity, which is the greatest challenge, difficulties in accessing technical support channels, and a lack of market linkage and consolidation. To address the various issues and concerns of the industry players, there is a need to intensify appropriate technology transfer through extension activities, improve marketing channels by grouping producers, strengthen veterinary services, and provide capital windows to improve facilities and reduce logistics and transaction costs in the entire supply chain.

Keywords: autonomous region in Muslim Mindanao, halal, halal goat industry, supply chain improvement

Procedia PDF Downloads 316
147 Designing Agile Product Development Processes by Transferring Mechanisms of Action Used in Agile Software Development

Authors: Guenther Schuh, Michael Riesener, Jan Kantelberg

Abstract:

Due to the volatility of markets and the shortening of product lifecycles, manufacturing companies from high-wage countries are nowadays faced with the challenge of placing more innovative products on the market within ever shorter development times. At the same time, volatile customer requirements have to be satisfied in order to differentiate successfully from market competitors. One potential approach to address these challenges is provided by agile values and principles. These agile values and principles have already proved their success within software development projects, in the form of management frameworks like Scrum or concrete procedure models such as Extreme Programming or Crystal Clear. These models lead to significant improvements regarding quality, costs, and development time and are therefore used within most software development projects. Motivated by this success within the software industry, manufacturing companies have tried to transfer agile mechanisms of action to the development of hardware products. Though first empirical studies show similar effects in the agile development of hardware products, no comprehensive procedure model for the design of development iterations has been developed for hardware development yet, due to the differing constraints of the two domains. For this reason, this paper focuses on the design of agile product development processes by transferring mechanisms of action used in agile software development to product development. This is conducted by decomposing the individual systems 'product development' and 'agile software development' into relevant elements and then symbiotically composing the elements of both systems with respect to the design of agile product development processes. In a first step, existing product development processes are described following existing approaches of systems theory.
By analyzing existing case studies from industrial companies as well as academic approaches, characteristic objectives, activities, and artefacts are identified within a target, action, and object system. In partial model two, mechanisms of action are derived from existing procedure models of agile software development. These mechanisms of action are classified into a superior strategy level; a system level comprising characteristic, domain-independent activities and their cause-effect relationships; and an activity-based element level. Within partial model three, the influence of the identified agile mechanisms of action on the characteristic system elements of product development processes is analyzed. For this purpose, the target, action, and object systems of product development are compared with the strategy, system, and element levels of the agile mechanisms of action using graph theory. Furthermore, the necessity of activities within an iteration can be determined by defining activity-specific degrees of freedom. Based on this analysis, agile product development processes are designed in the form of different types of iterations in a last step. By defining iteration-differentiating characteristics and their interdependencies, a logic for the configuration of activities, their form of execution, and the relevant artefacts for a specific iteration is developed. Furthermore, characteristic types of iteration for agile product development are identified.

Keywords: activity-based process model, agile mechanisms of action, agile product development, degrees of freedom

Procedia PDF Downloads 185
146 Impedimetric Phage-Based Sensor for the Rapid Detection of Staphylococcus aureus from Nasal Swab

Authors: Z. Yousefniayejahr, S. Bolognini, A. Bonini, C. Campobasso, N. Poma, F. Vivaldi, M. Di Luca, A. Tavanti, F. Di Francesco

Abstract:

Pathogenic bacteria represent a threat to healthcare systems and the food industry because their rapid detection remains challenging. Electrochemical biosensors are gaining prominence as a novel technology for the detection of pathogens due to intrinsic features such as low cost, rapid response time, and portability, which make them a valuable alternative to traditional methodologies. These sensors use biorecognition elements that are crucial for the identification of specific bacteria. In this context, bacteriophages are promising tools due to their inherently high selectivity towards bacterial hosts, which is of fundamental importance when detecting bacterial pathogens in complex biological samples. In this study, we present the development of a low-cost and portable sensor based on the Zeno phage for the rapid detection of Staphylococcus aureus. Screen-printed gold electrodes functionalized with the Zeno phage were used, and electrochemical impedance spectroscopy was applied to evaluate the change in charge transfer resistance (Rct) resulting from the interaction with S. aureus MRSA ATCC 43300. The phage-based biosensor showed a linear range from 10¹ to 10⁴ CFU/mL with a 20-minute response time and a limit of detection (LOD) of 1.2 CFU/mL under physiological conditions. The biosensor’s ability to recognize various staphylococcal strains was also successfully demonstrated in the presence of clinical isolates collected from different geographic areas. Assays using S. epidermidis were also carried out to verify the species-specificity of the phage sensor. We observed a remarkable change in Rct only in the presence of the target S. aureus bacteria, while no substantial binding to S. epidermidis occurred. This confirmed that the Zeno phage sensor targets only the S. aureus species within the genus Staphylococcus.
In addition, the biosensor's specificity with respect to other bacterial species, including gram-positive bacteria like Enterococcus faecium and the gram-negative bacterium Pseudomonas aeruginosa, was evaluated, and a non-significant impedimetric signal was observed. Notably, the biosensor successfully identified S. aureus cells in a complex matrix such as a nasal swab, opening the possibility of its use in a real-case scenario. We diluted different concentrations of S. aureus, from 10⁸ to 10⁰ CFU/mL, at a ratio of 1:10 in nasal swab matrices collected from healthy donors. Three different sensors were applied to measure the various concentrations of bacteria. Our sensor showed high selectivity for detecting S. aureus in biological matrices compared to time-consuming traditional methods such as enzyme-linked immunosorbent assay (ELISA), polymerase chain reaction (PCR), and radioimmunoassay (RIA). With the aim of using this biosensor to address the challenge associated with pathogen detection, ongoing research is focused on the assessment of the biosensor’s analytical performance in different biological samples and the discovery of new phage bioreceptors.
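The abstract does not state how the calibration curve or the LOD was computed; a common approach for impedimetric sensors is a least-squares fit of the Rct change against log10 concentration, with the LOD taken from a 3-sigma-of-blank criterion. The sketch below illustrates that generic calculation, not the authors' actual procedure.

```python
import statistics

def fit_line(xs, ys):
    """Least-squares slope and intercept for a linear calibration curve,
    e.g. delta-Rct (ys) versus log10 CFU/mL (xs)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def limit_of_detection(blank_signals, slope):
    """LOD in the calibration's x units, from the common 3*sigma criterion:
    three standard deviations of the blank signal divided by the slope."""
    return 3 * statistics.stdev(blank_signals) / slope
```

A steeper calibration slope (stronger Rct response per decade of concentration) directly lowers the LOD under this criterion.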

Keywords: electrochemical impedance spectroscopy, bacteriophage, biosensor, Staphylococcus aureus

Procedia PDF Downloads 50
145 Functional Traits and Agroecosystem Multifunctionality in Summer Cover Crop Mixtures and Monocultures

Authors: Etienne Herrick

Abstract:

As an economically and ecologically feasible method for farmers to introduce greater diversity into their crop rotations, cover cropping presents a valuable opportunity for improving the sustainability of food production. Planted between cash crop growing seasons, cover crops serve to enhance agroecosystem functioning, rather than being destined for sale or consumption. In fact, cover crops may hold the capacity to deliver multiple ecosystem functions or services simultaneously (multifunctionality). Building upon this line of research will not only benefit society at present but also support its continued survival, through the potential of cover crops for restoring depleted soils and reducing the need for energy-intensive and harmful external inputs like fertilizers and pesticides. This study utilizes a trait-based approach to explore the influence of inter- and intra-specific interactions in summer cover crop mixtures and monocultures on functional trait expression and ecosystem services. Functional traits that enhance ecosystem services related to agricultural production include height, specific leaf area (SLA), root:shoot ratio, leaf C and N concentrations, and flowering phenology. Ecosystem services include biomass production, weed suppression, reduced N leaching, N recycling, and support of pollinators. Employing a trait-based approach may allow for the elucidation of mechanistic links between plant structure and the resulting ecosystem service delivery. While relationships between some functional traits and the delivery of particular ecosystem services may be readily apparent through existing ecological knowledge (e.g.
height positively correlating with weed suppression), this study will begin to quantify those relationships so as to gain further understanding of whether and how measurable variation in functional trait expression across cover crop mixtures and monocultures can serve as a reliable predictor of variation in the types and abundances of ecosystem services delivered. Six cover crop species, including legume, grass, and broadleaf functional types, were selected for growth in six mixtures and their component monocultures based on the principle of trait complementarity. The tricultures (three-way mixtures) are composed of a legume, a grass, and a broadleaf species, and include cowpea/sudex/buckwheat, sunnhemp/sudex/buckwheat, and chickling vetch/oat/buckwheat combinations; the dicultures contain the same legume and grass combinations as above, without the buckwheat broadleaf. By combining species with expectedly complementary traits (for example, legumes are N suppliers and grasses are N acquirers, creating a nutrient cycling loop), the cover crop mixtures may elicit a broader range of ecosystem services than that provided by a monoculture, though trade-offs could exist. Collecting functional trait data will enable the investigation of the types of interactions driving these ecosystem service outcomes. It also allows for generalizability across a broader range of species than just those selected for this study, which may aid in informing further research efforts exploring species and ecosystem functioning, as well as on-farm management decisions.

Keywords: agroecology, cover crops, functional traits, multifunctionality, trait complementarity

Procedia PDF Downloads 240
144 Towards Automatic Calibration of In-Line Machine Processes

Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales

Abstract:

In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used 'black-box' linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a 'white-box' rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound, in the first case, and the friction of the material passing through the die, in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, goes over a traction reel, and is rewound on a third reel. The objectives are (i) to train a model to predict the web tension and (ii) to calibrate the model by finding the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, two winding units wind an outer layer around the core, and the material makes a final pass through a second die. The objectives are (i) to train a model to predict the friction on die 2 and (ii) to calibrate the model by finding the input values which result in a given friction on die 2. Different machine learning approaches are tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel), and MPART (rule induction with a continuous-valued output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between expected and real friction on die 2). Modeling the error behavior using explicative rules helps improve the overall process model.
Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that high precision is obtained for both the trained models and the calibration process. The learning step is the slowest part of the process (max. 5 minutes for this data), but it can be done offline just once. The calibration step is much faster and, in under one minute, obtained a precision error of less than 1×10⁻³ for both outputs. To summarize, in the present work two processes have been modeled and calibrated. A fast processing time and high precision have been achieved, which can be further improved by using heuristics to guide the Gaussian calibration. The error behavior has been modeled to help improve the overall process understanding. This has relevance for the quick, optimal set-up of many different industrial processes which use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements: the Openmind project is funded by the Horizon 2020 European Union funding programme for Research & Innovation, Grant Agreement number 680820.
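The Gaussian calibration step described above amounts to a random search over the input space: sample each input from a Gaussian with its observed mean and standard deviation, and keep the candidate whose predicted output is closest to the target. A minimal sketch, assuming a generic scalar-output model rather than the actual winding-process model, is:

```python
import random

def calibrate(model, input_stats, target, n_trials=10000, seed=0):
    """Random-search calibration: draw each input from a Gaussian with its
    (mean, std), evaluate the trained model, and keep the candidate whose
    output is closest to the target value."""
    rng = random.Random(seed)
    best_inputs, best_err = None, float("inf")
    for _ in range(n_trials):
        cand = [rng.gauss(mu, sigma) for mu, sigma in input_stats]
        err = abs(model(cand) - target)
        if err < best_err:
            best_inputs, best_err = cand, err
    return best_inputs, best_err
```

Because the trials are independent, this loop parallelises trivially, and the heuristics mentioned in the text could narrow the sampling distributions around the current best candidate.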

Keywords: data model, machine learning, industrial winding, calibration

Procedia PDF Downloads 224
143 Characterizing the Rectification Process for Designing Scoliosis Braces: Towards Digital Brace Design

Authors: Inigo Sanz-Pena, Shanika Arachchi, Dilani Dhammika, Sanjaya Mallikarachchi, Jeewantha S. Bandula, Alison H. McGregor, Nicolas Newell

Abstract:

The use of orthotic braces for adolescent idiopathic scoliosis (AIS) patients is the most common non-surgical treatment to prevent deformity progression. The traditional method to create an orthotic brace involves casting the patient’s torso to obtain a representative geometry, which is then rectified by an orthotist to the desired geometry of the brace. Recent improvements in 3D scanning technologies, rectification software, CNC, and additive manufacturing processes have made it possible to complement, or in some cases replace, manual methods with digital approaches. However, the rectification process remains dependent on the orthotist’s skills. Therefore, the rectification process needs to be carefully characterized to ensure that braces designed through a digital workflow are as efficient as those created using a manual process. The aim of this study is to compare 3D scans of patients with AIS against 3D scans of both pre- and post-rectified casts that have been manually shaped by an orthotist. Six AIS patients were recruited from the Ragama Rehabilitation Clinic, Colombo, Sri Lanka. All patients were between 10 and 15 years old, were skeletally immature (Risser grade 0-3), and had Cobb angles between 20-45°. Seven spherical markers were placed at key anatomical locations on each patient’s torso and on the pre- and post-rectified molds so that distances could be reliably measured. 3D scans were obtained of 1) the patient’s torso and pelvis, 2) the patient’s pre-rectification plaster mold, and 3) the patient’s post-rectification plaster mold using a Structure Sensor Mark II 3D scanner (Occipital Inc., USA). 3D stick body models were created for each scan to represent the distances between anatomical landmarks. The 3D stick models were used to analyze the changes in position and orientation of the anatomical landmarks between scans using Blender open-source software.
3D surface deviation maps represented volume differences between the scans using CloudCompare open-source software. The 3D stick body models showed changes in the position and orientation of thorax anatomical landmarks between the patient and the post-rectification scans for all patients. Anatomical landmark position and volume differences were seen between 3D scans of the patients’ torsos and the pre-rectified molds. Between the pre- and post-rectified molds, material removal was consistently seen on the anterior side of the thorax and the lateral areas below the ribcage. Volume differences were seen in areas where the orthotist planned to place pressure pads (usually at the trochanter on the side to which the lumbar curve was tilted (trochanter pad), at the lumbar apical vertebra (lumbar pad), on the rib connected to the apical vertebrae at the mid-axillary line (thoracic pad), and on the ribs corresponding to the upper thoracic vertebra (axillary extension pad)). The rectification process requires the skill and experience of an orthotist; however, this study demonstrates that the brace shape, location, and volume of material removed from the pre-rectification mold can be characterized and quantified. Results from this study can be fed into software that can accelerate the brace design process, taking steps toward an automated digital rectification process.
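The landmark-comparison step the abstract describes (measuring how marker positions and inter-marker distances change between scans) can be sketched numerically. The marker coordinates below are hypothetical placeholders, not the study's data, and the single displaced marker simulates material removal at a pad site:

```python
import numpy as np

# Hypothetical 3D coordinates (mm) of the seven spherical markers, as might
# be extracted from the patient scan and the post-rectification mold scan.
patient = np.array([
    [0.0,   0.0,   0.0],
    [100.0, 0.0,   0.0],
    [100.0, 150.0, 0.0],
    [0.0,   150.0, 0.0],
    [50.0,  75.0,  80.0],
    [50.0,  0.0,   60.0],
    [50.0,  150.0, 60.0],
])
post_mold = patient.copy()
post_mold[4] -= [0.0, 0.0, 10.0]   # simulated 10 mm of material removed at a pad site

def pairwise_distances(landmarks):
    """Matrix of Euclidean distances between every pair of landmarks."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    return np.linalg.norm(diffs, axis=-1)

# The change in inter-landmark distances quantifies how rectification altered
# the geometry; nonzero entries involve the displaced landmark only.
delta = pairwise_distances(post_mold) - pairwise_distances(patient)
print(np.round(delta, 2))
```

The same pairwise-distance comparison extends directly to the stick-body models built in Blender, with surface deviation maps (as from CloudCompare) capturing the volumetric side of the analysis.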

Keywords: additive manufacturing, orthotics, scoliosis brace design, sculpting software, spinal deformity

Procedia PDF Downloads 131
142 Real and Symbolic in Poetics of Multiplied Screens and Images

Authors: Kristina Horvat Blazinovic

Abstract:

In the context of a work of art, one can talk about the idea-concept-term-intention expressed by the artist by using various forms of repetition (external, material, visible repetition). Such repetitions of elements (images in space or moving visual and sound images in time) suggest a "covert", "latent" ("dressed") repetition, i.e., a "hidden", "latent" term-intention-idea. Repeating in this way reveals a "deeper truth" that the viewer needs to decode and which is hidden "under" the technical manifestation of the multiplied images. It is not only images, sounds, and screens that are repeated; something else is repeated through them as well, even if, in some cases, the very idea of repetition is repeated. This paper examines serial images and single-channel or multi-channel artwork in the field of video/film art and video installations, which in a way implies the concept of repetition and multiplication. Moving or static images and screens (as multi-screens) are repeated in time and space. The categories of the real and the symbolic partly refer to the Lacanian registers, i.e., the Imaginary - Symbolic - Real trinity that represents the orders within which human subjectivity is established. Authors such as Bruce Nauman, VALIE EXPORT, Ragnar Kjartansson, Wolf Vostell, Shirin Neshat, Paul Sharits, Harun Farocki, Dalibor Martinis, Andy Warhol, Douglas Gordon, Bill Viola, Frank Gillette and Ira Schneider, and Marina Abramovic problematize, in different ways, the concept and procedures of multiplication-repetition, not in the sense of "copying" or "repetition" of reality or the original, but of repeated repetitions of the simulacrum. Referential works of art are often connected by the theme of the traumatic. Repetitions of images and situations are a response to the traumatic (experience); repetition itself is a symptom of trauma. On the other hand, repeating and multiplying traumatic images either produces a new traumatic effect or cancels it.
Reflections on repetition as a temporal and spatial phenomenon are in line with the chapters that link philosophical considerations of space and time and of experienced temporality with their manifestation in works of art. The observations about time and the relation of perception and memory follow Henri Bergson and his conception of duration (durée) as "quality of quantity." Video works intended to be displayed as a loop express the idea of infinite duration ("pure time," according to Bergson). The loop wants to be always present, to fixate itself in time. Wholeness is unrecognizable because the intention is to make the effect infinitely cyclic. Reflections on time and space end with considerations about the occurrence and effects of time and space intervals as places and moments "between" – the points of connection and separation, of continuity and stopping – with reference to the "interval theory" of Soviet filmmaker Dziga Vertov. The range of possibilities that can be explored through intervals is wide. Intervals represent the perception of time and space in the form of pauses, interruptions, and breaks (e.g., emotional, dramatic, or rhythmic); they denote emptiness or silence, distance, proximity, interstitial space, or a gap between various states.

Keywords: video installation, performance, repetition, multi-screen, real and symbolic, loop, video art, interval, video time

Procedia PDF Downloads 152