Search results for: personality and values
127 Effect of the Incorporation of Modified Starch on the Physicochemical Properties and Consumer Acceptance of Puff Pastry
Authors: Alejandra Castillo-Arias, Santiago Amézquita-Murcia, Golber Carvajal-Lavi, Carlos M. Zuluaga-Domínguez
Abstract:
The intricate relationship between health and nutrition has driven the food industry to seek healthier and more sustainable alternatives. A key strategy currently employed is the reduction of saturated fats and the incorporation of ingredients that align with new consumer trends. Modified starch, a polysaccharide widely used in baking, also serves as a functional ingredient to boost dietary fiber content. However, its use in puff pastry remains challenging due to the technological difficulties in achieving a buttery pastry with the necessary strength to create thin, flaky layers. This study explored the potential of incorporating modified starch into puff pastry formulations. To evaluate the physicochemical properties of wheat flour mixed with modified starch, five different flour samples were prepared: T1, T2, T3, and T4, containing 10 g, 20 g, 30 g, and 40 g of modified starch per 100 g of mixture, respectively, alongside a control sample (C) with no added starch. The analysis focused on various physicochemical indices, including the Water Absorption Index (WAI), Water Solubility Index (WSI), Swelling Power (SP), and Water Retention Capacity (WRC). The puff pastry was further characterized by color measurement and sensory analysis. For the preparation of the puff pastry dough, the flour, modified starch, and salt were mixed, followed by the addition of water until a homogeneous dough was achieved. The margarine was later incorporated into the dough, which was folded and rolled multiple times to create the characteristic layers of puff pastry. The dough was then cut into equal pieces, baked at 170°C, and allowed to cool. The results indicated that the addition of modified starch did not significantly alter the specific volume or texture of the puff pastries, as reflected by the stable WAI and SP values across the samples. However, the WRC increased with higher starch content, highlighting the hydrophilic nature of the modified starch, which necessitated additional water during dough preparation. Color analysis revealed significant variations in the L* (lightness) and a* (red-green) parameters, with no consistent relationship between the modified starch treatments and the control. However, the b* (yellow-blue) parameter showed a strong correlation across most samples, except for treatment T3. Thus, modified starch affected the a* component of the CIELAB color space, influencing the reddish hue of the puff pastries. Variations in baking time due to increased water content in the dough likely contributed to differences in lightness among the samples. Sensory analysis revealed that consumers preferred the sample with a 20% starch substitution (T2), which was rated similarly to the control in terms of texture. However, treatment T3 exhibited unusual behavior in texture analysis, and the color analysis showed that treatment T1 most closely resembled the control, indicating that starch addition is most noticeable to consumers in the visual aspect of the product. In conclusion, while the modified starch successfully maintained the desired texture and internal structure of puff pastry, its impact on water retention and color requires careful consideration in product formulation. This study underscores the importance of balancing product quality with consumer expectations when incorporating modified starches in baked goods.
Keywords: consumer preferences, modified starch, physicochemical properties, puff pastry
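The water-interaction indices named above have standard gravimetric definitions from centrifugation assays. The following is a minimal sketch assuming Anderson-type definitions; the input masses and the exact SP denominator convention are assumptions, not this study's protocol:

```python
def water_indices(dry_mass_g, gel_mass_g, supernatant_solids_g):
    """WAI, WSI and SP from a centrifugation assay (Anderson-type definitions).

    dry_mass_g: mass of the dry flour/starch sample
    gel_mass_g: mass of the hydrated sediment (gel) after centrifugation
    supernatant_solids_g: mass of solids left after evaporating the supernatant
    """
    wai = gel_mass_g / dry_mass_g                           # g gel per g dry sample
    wsi = 100.0 * supernatant_solids_g / dry_mass_g         # % soluble solids
    sp = gel_mass_g / (dry_mass_g - supernatant_solids_g)   # swelling power
    return wai, wsi, sp

# Illustrative values only:
wai, wsi, sp = water_indices(dry_mass_g=2.5, gel_mass_g=14.8, supernatant_solids_g=0.21)
print(f"WAI={wai:.2f}, WSI={wsi:.1f}%, SP={sp:.2f}")
```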
Procedia PDF Downloads 27
126 Defining a Framework for Holistic Life Cycle Assessment of Building Components by Considering Parameters Such as Circularity, Material Health, Biodiversity, Pollution Control, Cost, Social Impacts, and Uncertainty
Authors: Naomi Grigoryan, Alexandros Loutsioli Daskalakis, Anna Elisse Uy, Yihe Huang, Aude Laurent (Webanck)
Abstract:
In response to the building and construction sectors accounting for a third of all energy demand and emissions, the European Union has placed new laws and regulations in the construction sector that emphasize material circularity, energy efficiency, biodiversity, and social impact. Existing design tools assess sustainability in early-stage design for products or buildings; however, there is no standardized methodology for measuring the circularity performance of building components. Existing assessment methods for building components focus primarily on carbon footprint but lack the comprehensive analysis required to design for circularity. The research conducted in this paper covers the parameters needed to assess sustainability in the design process of architectural products such as doors, windows, and facades. It maps a framework for a tool that assists designers with real-time sustainability metrics. Considering the life cycle of building components such as façades, windows, and doors involves the life cycle stages applied to product design and many of the methods used in the life cycle analysis of buildings. The current industry standards of sustainability assessment for metal building components follow cradle-to-grave life cycle assessment (LCA), track Global Warming Potential (GWP), and document the parameters used for an Environmental Product Declaration (EPD). Developed by the Ellen MacArthur Foundation, the Material Circularity Indicator (MCI) is a methodology utilizing the data from LCA and EPDs to rate circularity, with a "value between 0 and 1, where higher values indicate higher circularity". Expanding on the MCI with additional indicators such as the Water Circularity Index (WCI), the Energy Circularity Index (ECI), the Social Circularity Index (SCI), and Life Cycle Economic Value (EV), and calculating biodiversity risk and uncertainty, the assessment methodology of an architectural product's impact can be targeted more specifically based on product requirements, performance, and lifespan. Broadening the scope of LCA calculation for products to incorporate aspects of building design allows product designers to account for the disassembly of architectural components. For example, the Material Circularity Indicator for architectural products such as windows and facades is typically low due to the impact of glass, as 70% of glass ends up in landfills due to damage in the disassembly process. The low MCI can be combatted by expanding beyond cradle-to-grave assessment and focusing the design process on disassembly, recycling, and repurposing with the help of real-time assessment tools. Design for Disassembly and Urban Mining have been integrated within the construction field on small scales as project-based exercises, not addressing the entire supply chain of architectural products. By adopting more comprehensive sustainability metrics and incorporating uncertainty calculations, the sustainability of building components can be more accurately assessed with decarbonization and disassembly in mind, addressing the large-scale commercial markets within construction, some of the most significant contributors to climate change.
Keywords: architectural products, early-stage design, life cycle assessment, material circularity indicator
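The headline structure of the MCI can be written down compactly. The sketch below is a simplified reading of the Ellen MacArthur Foundation formula: it keeps the core MCI = 1 − LFI · F(X) with utility factor F(X) = 0.9/X, but omits the recycling-process waste corrections in the full Linear Flow Index, and all parameter values are illustrative assumptions:

```python
def material_circularity_indicator(mass, frac_recycled_feedstock,
                                   frac_reused_feedstock,
                                   frac_collected_for_recycling,
                                   frac_collected_for_reuse,
                                   lifetime_ratio=1.0, intensity_ratio=1.0):
    """Simplified MCI: 1 - LFI * F(X), clipped to [0, 1]."""
    virgin = mass * (1 - frac_recycled_feedstock - frac_reused_feedstock)
    waste = mass * (1 - frac_collected_for_recycling - frac_collected_for_reuse)
    lfi = (virgin + waste) / (2 * mass)    # linear flow index (simplified)
    x = lifetime_ratio * intensity_ratio   # utility relative to industry average
    f_x = 0.9 / x                          # utility factor
    return max(0.0, min(1.0, 1 - lfi * f_x))

# A window with 30% recycled glass input and 20% of its mass recycled at end of life:
print(material_circularity_indicator(1.0, 0.3, 0.0, 0.2, 0.0))  # ~0.33, i.e. low MCI
```

A low result such as this illustrates the point made above: improving collection at disassembly (raising the collected fractions) is the main lever for raising a glazed component's MCI.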
Procedia PDF Downloads 88
125 Common Space Production as a Solution to the Affordable Housing Problem: Its Relationship with the Squatting Process in Turkey
Authors: Gözde Arzu Sarıcan
Abstract:
Contemporary urbanization processes and spatial transformations are intensely debated across various fields of social sciences. One prominent concept in these discussions is "common spaces." Common spaces offer a critical theoretical framework, particularly for addressing the social and economic inequalities brought about by urbanization. This study examines the processes of commoning and their impacts through the lens of squatter neighborhoods in Turkey, emphasizing the importance of affordable housing. It focuses on the role and significance of these neighborhoods in the formation of common spaces, analyzing the collective actions and resistance strategies of residents. This process, which began with the construction of shelters to meet the housing needs of low-income households migrating from rural to urban areas, has turned into low-quality squatter settlements over time. For low-income households lacking the economic power to rent or buy homes in the city, these areas provided an affordable housing solution. Squatter neighborhoods reflect the efforts of local communities to protect and develop their communal living spaces through collective actions and resistance strategies. This collective creation process involves the appropriation of occupied land as a common resource through the rules established by the commons. These lands are subdivided through organized occupations and shaped by collective creation processes. For the squatter communities striving for economic and social adaptation, these areas serve as buffer zones for urban integration. In squatter neighborhoods, bonds of friendship, kinship, and compatriotism are strong, playing a significant role in the creation and dissemination of collective knowledge. Squatter areas can be described as common spaces that emerge out of necessity for low-income and marginalized groups. The design and construction of housing in squatter neighborhoods are shaped by the collective participation and skills of the residents. Streets are formed through collective decision-making and labor. Over time, the demands for housing are communicated to local authorities, enhancing the potential for commoning. Common spaces are shaped by collective needs and demands, appropriated, and transformed into potential new spaces. Common spaces are continually redefined and recreated. In this context, affordable housing becomes an essential aspect of these common spaces, providing a foundation for social and economic stability. This study evaluates the processes of commoning and their effects through the lens of squatter neighborhoods in Turkey. Communities living in squatter neighborhoods have managed to create and protect communal living spaces, especially in situations where official authorities have been inadequate. Common spaces are built on values such as solidarity, cooperation, and collective resistance. In urban planning and policy development processes, it is crucial to consider the concept of common spaces. Policies that support the collective efforts and resistance strategies of communities can contribute to more just and sustainable living conditions in urban areas. In this context, the concept of common spaces is considered an important tool in the fight against urban inequalities and in the expression and defense mechanisms of communities.
By emphasizing the importance of affordable housing within these spaces, this study highlights the critical role of common spaces in addressing urban social and economic challenges.
Keywords: affordable housing, common space, squatting process, Turkey
Procedia PDF Downloads 32
124 High School Gain Analytics from National Assessment Program – Literacy and Numeracy and Australian Tertiary Admission Rank Linkage
Authors: Andrew Laming, John Hattie, Mark Wilson
Abstract:
Nine Queensland Independent high schools provided deidentified student-matched ATAR and NAPLAN data for all 1217 ATAR graduates since 2020 who also sat NAPLAN at the school. Graduating cohorts from the nine schools contained a mean of 100 ATAR graduates with previous NAPLAN data from their school. Excluded were vocational students (mean=27) and any ATAR graduates without NAPLAN data (mean=20). Based on Index of Community Socio-Educational Access (ICSEA) prediction, all schools had larger than predicted proportions of their students graduating with ATARs. There were an additional 173 students not releasing their ATARs to their school (14%), requiring this data to be inferred by schools. Gain was established by first converting each student's strongest NAPLAN domain to a statewide percentile, then subtracting this result from final ATAR. The resulting 'percentile shift' was corrected for plausible ATAR participation at each NAPLAN level. The strongest NAPLAN domain had the highest correlation with ATAR (R2=0.58). RESULTS: School mean NAPLAN scores fitted ICSEA closely (R2=0.97). Schools achieved a mean cohort gain of two ATAR rankings, but only 66% of students gained. This ranged from 46% of top-NAPLAN-decile students gaining, rising to 75% achieving gains outside the top decile. The 54% of top-decile students whose ATAR fell short of prediction lost a mean 4.0 percentiles (or 6.2 percentiles prior to correction for regression to the mean). 71% of students in smaller schools gained, compared to 63% in larger schools. NAPLAN variability in each of the 13 ICSEA1100 cohorts was 17%, with both intra-school and inter-school variation of these values extremely low (0.3% to 1.8%). Mean ATAR change between years in each school was just 1.1 ATAR ranks. This suggests that consecutive school cohorts and ICSEA-similar schools share very similar distributions and outcomes over time. Quantile analysis of the NAPLAN/ATAR relationship revealed heteroscedasticity, but splines offered little additional benefit over simple linear regression. The NAPLAN/ATAR R2 was 0.33. DISCUSSION: Standardised data like NAPLAN and ATAR offer educators a simple no-cost progression metric to analyse performance in conjunction with their internal test results. Change is expressed in percentiles, or ATAR shift per student, which is intuitive to laypeople. Findings may also reduce ATAR/vocational stream mismatch, reveal the proportions of cohorts meeting or falling short of expectation, and demonstrate by how much. Finally, 'crashed' ATARs well below expectation are revealed, which schools can reasonably work to minimise. The percentile shift method is neither value-added nor a growth percentile. In the absence of exit NAPLAN testing, this metric is unable to discriminate academic gain from legitimate ATAR-maximising strategies. But by controlling for ICSEA, ATAR proportion variation and student mobility, it uncovers progression-to-ATAR metrics which are not currently publicly available. However achieved, ATAR maximisation is a sought-after private good. So long as standardised nationwide data is available, this analysis offers useful analytics for educators and reasonable predictivity when counselling subsequent cohorts about their ATAR prospects.
Keywords: NAPLAN, ATAR, analytics, measurement, gain, performance, data, percentile, value-added, high school, numeracy, reading comprehension, variability, regression to the mean
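The 'percentile shift' computation described above reduces to a few lines of pandas. This is a minimal sketch assuming a statewide distribution of strongest-domain NAPLAN scores and a per-student cohort file (both file layouts are assumptions), and it omits the paper's correction for plausible ATAR participation at each NAPLAN level:

```python
import numpy as np
import pandas as pd
from scipy.stats import percentileofscore

# Assumed inputs: statewide strongest-domain NAPLAN scores, and one school's
# matched records with columns 'naplan_best' and 'atar'.
statewide = np.loadtxt("statewide_naplan.txt")
students = pd.read_csv("school_cohort.csv")

students["naplan_pct"] = students["naplan_best"].apply(
    lambda s: percentileofscore(statewide, s))               # statewide percentile
students["shift"] = students["atar"] - students["naplan_pct"]  # percentile shift

print("mean cohort gain:", students["shift"].mean())
print("share of students gaining:", (students["shift"] > 0).mean())
```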
Procedia PDF Downloads 68
123 Coastal Cliff Protection in Beit Yanai, Israel: Examination of Alternatives and Public Preference Analysis
Authors: Tzipi Eshet
Abstract:
The primary objectives of this work are the examination of public preferences and the importance attributed to different characteristics of coastal cliff protection alternatives, and drawing conclusions about the applicable alternative in Beit-Yanai beach. Erosion of coastal cliffs is a natural phenomenon that occurs in many places in the world. This creates problems along the coastlines, which are densely populated areas with highly developed economic activity. In recent years, various aspects of the aeolianite cliffs along the Israeli coast have been studied extensively. There is a consensus among researchers regarding a general trend of cliff retreat. This affects civilian infrastructure, wildlife habitats and heritage values, and increases the risk to human life. The Israeli government, committed to the integrated coastal zones management approach, decided on a policy and guidelines to deal with cliff erosion, which includes establishing physical protection on land and in the sea, sand nourishment and runoff drainage. Physical protection solutions to reduce the rate of retreat of the cliffs are considerably important both for planning authorities and visitors to the beach. Direct costs of different protection alternatives, as well as external costs and benefits, may vary, thus affecting consumer preferences. Planning and execution of sustainable coastal cliff protection alternatives must take into account the different characteristics and their impact on aspects of economics, environment and leisure. The rocky shore of Beit-Yanai beach was chosen as a case study to examine the nature of the influence of various protective solutions on consumer preferences. This beach is located in the center of Israel's coastline and acts as a focus of attraction for recreation, land and sea sports, and educational activities as well. If no action is taken, cliff retreat will continue. A survey was conducted to reveal the importance of the characteristics of coastal protection alternatives and the visual preferences of visitors at Beit-Yanai beach and residents living on the cliff (N=287). Preferences and willingness-to-pay were explored using Contingent-Ranking and Choice-Experiments techniques. Results show that visitors' and residents' willingness-to-pay for coastal cliff protection alternatives is affected by financial and environmental aspects, as well as leisure. They prefer coastal cliff protection alternatives that are not visible and do not need constant maintenance, do not affect the quality of seawater or the habitats of wildlife, and do not lower the safety level of swimmers. No significant difference was found comparing willingness-to-pay among local and non-local users. Additionally, they mostly prefer a protection solution which is integrated in the coastal landscape and maintains the natural appearance of the beach. Among the possible alternatives proposed for the protection of the cliff at Beit-Yanai beach, two techniques meet public preferences: rock revetments and submerged detached breakwaters. Results indicate that the visiting public prefers the implementation of these protection alternatives and will be willing to pay for them. Future actions to reduce the retreat rate in Beit-Yanai have to consider implications for the economic, environmental and social conditions, along with weighing the public interest against the interest of the individual.
Keywords: contingent-ranking, choice-experiments, coastal cliff protection, erosion of coastal cliffs, environment
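In choice experiments of this kind, marginal willingness-to-pay is conventionally read off a conditional logit model as the negative ratio of an attribute coefficient to the cost coefficient. The sketch below is a minimal, self-contained illustration on synthetic data; the attribute set, data layout, and estimation details are assumptions, not the study's actual design:

```python
import numpy as np
from scipy.optimize import minimize

# X: (choice_sets, alternatives, attributes); chosen: index of the picked alternative.
# Assumed attribute order: [cost, visibility, maintenance, swimmer_safety]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3, 4))
chosen = rng.integers(0, 3, size=200)

def neg_log_likelihood(beta, X, chosen):
    v = X @ beta                              # systematic utilities per alternative
    v = v - v.max(axis=1, keepdims=True)      # numerical stability
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(chosen)), chosen]).sum()

beta = minimize(neg_log_likelihood, np.zeros(4), args=(X, chosen)).x
wtp_visibility = -beta[1] / beta[0]           # marginal WTP = -beta_attr / beta_cost
print("illustrative WTP for low-visibility protection:", wtp_visibility)
```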
Procedia PDF Downloads 306
122 Wideband Performance Analysis of C-FDTD Based Algorithms in the Discretization Impoverishment of a Curved Surface
Authors: Lucas L. L. Fortes, Sandro T. M. Gonçalves
Abstract:
In this work, the wideband performance of the Conformal Finite Difference Time-Domain (C-FDTD) approaches developed by Raj Mittra, Supriyo Dey and Wenhua Yu for the Finite Difference Time-Domain (FDTD) method is analyzed under mesh discretization impoverishment. These approaches are a simple and efficient way to optimize the scattering simulation of curved surfaces for Dielectric and Perfect Electric Conducting (PEC) structures in the FDTD method, since curved surfaces require dense meshes to reduce the error introduced due to surface staircasing. Defined, in this work, as D-FDTD-Diel and D-FDTD-PEC, these approaches are well-known in the literature, but the improvement upon their application is not quantified broadly regarding wide frequency bands and poorly discretized meshes. Both approaches improve the accuracy of the simulation without requiring dense meshes, also making it possible to explore poorly discretized meshes, which brings a reduction in simulation time and computational expense while retaining a desired accuracy. However, their applications present limitations regarding mesh impoverishment and the frequency range desired. Therefore, the goal of this work is to explore both the wideband and mesh impoverishment performance of the approaches, to bring a wider insight over these aspects in FDTD applications. The D-FDTD-Diel approach consists of modifying the electric field update in the cells intersected by the dielectric surface, taking into account the amount of dielectric material within the edges of the mesh cells. By taking the intersections into account, the D-FDTD-Diel provides accuracy improvement at the cost of computational preprocessing, which is a fair trade-off, since the update modification is quite simple. Likewise, the D-FDTD-PEC approach consists of modifying the magnetic field update, taking into account the PEC curved surface intersections within the mesh cells and, considering a PEC structure in vacuum, the air portion that fills the intersected cells when updating the magnetic field values. As with D-FDTD-Diel, the D-FDTD-PEC provides better accuracy at the cost of computational preprocessing, although with the drawback of having to meet stability criterion requirements. The algorithms are formulated and applied to a PEC and a dielectric spherical scattering surface with meshes presenting different levels of discretization, with Polytetrafluoroethylene (PTFE) as the dielectric, a very common material in coaxial cables and connectors for radiofrequency (RF) and wideband applications. The accuracy of the algorithms is quantified, showing the approaches' wideband performance drop along with the mesh impoverishment. The benefits in computational efficiency, simulation time and accuracy are also shown and discussed, according to the frequency range desired, showing that poorly discretized mesh FDTD simulations can be exploited more efficiently while retaining the desired accuracy. The results obtained provide a broader insight into the limitations of applying the C-FDTD approaches in poorly discretized and wide frequency band simulations for dielectric and PEC curved surfaces, which are not clearly defined or detailed in the literature and are, therefore, a novelty. These approaches are also expected to be applied in the modeling of curved RF components for wideband and high-speed communication devices in future works.
Keywords: accuracy, computational efficiency, finite difference time-domain, mesh impoverishment
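The D-FDTD-Diel idea, replacing the permittivity in the standard Yee electric-field update with an effective value in cells cut by the dielectric surface, can be shown in a 2-D TMz sketch. This is a minimal illustration assuming simple linear weighting of an edge fill fraction; the published scheme weights the actual material lengths along each field edge, and all grid parameters here are placeholders:

```python
import numpy as np

eps0, mu0, c0 = 8.854e-12, 4e-7 * np.pi, 3e8
nx, ny, dx = 100, 100, 1e-3
dt = dx / (c0 * np.sqrt(2))          # 2-D Courant limit

# fill[i, j]: fraction of the Ez edge at node (i, j) inside the dielectric,
# produced by the curved-surface/mesh intersection preprocessing step.
fill = np.zeros((nx, ny))            # placeholder geometry
eps_r_ptfe = 2.1
eps_eff = eps0 * (1.0 + (eps_r_ptfe - 1.0) * fill)   # linear-weighting assumption

ez = np.zeros((nx, ny))
hx = np.zeros((nx, ny - 1))
hy = np.zeros((nx - 1, ny))

for n in range(500):
    hx -= dt / (mu0 * dx) * np.diff(ez, axis=1)      # standard H updates
    hy += dt / (mu0 * dx) * np.diff(ez, axis=0)
    curl_h = (hy[1:, 1:-1] - hy[:-1, 1:-1]) - (hx[1:-1, 1:] - hx[1:-1, :-1])
    ez[1:-1, 1:-1] += dt / (eps_eff[1:-1, 1:-1] * dx) * curl_h  # modified E update
    ez[nx // 2, ny // 2] += np.sin(2 * np.pi * 10e9 * n * dt)   # point source
```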
Procedia PDF Downloads 134
121 Optical Imaging Based Detection of Solder Paste in Printed Circuit Board Jet-Printing Inspection
Authors: D. Heinemann, S. Schramm, S. Knabner, D. Baumgarten
Abstract:
Purpose: Applying solder paste to printed circuit boards (PCB) with stencils has been the method of choice over the past years. A new method uses a jet printer to deposit tiny droplets of solder paste through an ejector mechanism onto the board. This allows for more flexible PCB layouts with smaller components. Due to the viscosity of the solder paste, air blisters can be trapped in the cartridge. This can lead to missing solder joints or deviations in the applied solder volume. Therefore, a built-in and real-time inspection of the printing process is needed to minimize uncertainties and increase the efficiency of the process by immediate correction. The objective of the current study is the design of an optimal imaging system and the development of an automatic algorithm for the detection of applied solder joints from the captured optical images. Methods: In a first approach, a camera module connected to a microcomputer, together with LED strips, was employed to capture images of the printed circuit board under four different illuminations (white, red, green and blue). Subsequently, an improved system including a ring light, an objective lens, and a monochromatic camera was set up to acquire higher quality images. The obtained images can be divided into three main components: the PCB itself (i.e., the background), the reflections induced by unsoldered positions or screw holes, and the solder joints. Non-uniform illumination is corrected by estimating the background using a morphological opening and subtraction from the input image. Image sharpening is applied in order to prevent error pixels in the subsequent segmentation. The intensity thresholds which divide the main components are obtained from the multimodal histogram using three probability density functions. Determining the intersections delivers proper thresholds for the segmentation. Remaining edge gradients produce small error areas which are removed by another morphological opening. For quantitative analysis of the segmentation results, the Dice coefficient is used. Results: The obtained PCB images show a significant gradient in all RGB channels, resulting from ambient light. Using different lightings and color channels, 12 images of a single PCB are available. A visual inspection and the investigation of 27 specific points show the best differentiation between those points using a red lighting and a green color channel. Estimating two thresholds from analyzing the multimodal histogram of the corrected images and using them for segmentation precisely extracts the solder joints. The comparison of the results to manually segmented images yields high sensitivity and specificity values. Analyzing the overall result delivers a Dice coefficient of 0.89, which varies for single object segmentations between 0.96 for well-segmented solder joints and 0.25 for single negative outliers. Conclusion: Our results demonstrate that the presented optical imaging system and the developed algorithm can robustly detect solder joints on printed circuit boards. Future work will comprise a modified lighting system which allows for more precise segmentation results using structure analysis.
Keywords: printed circuit board jet-printing, inspection, segmentation, solder paste detection
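The pipeline described — background estimation by morphological opening, subtraction, sharpening, histogram-derived thresholding, clean-up, and Dice scoring — maps directly onto OpenCV primitives. A minimal sketch, with kernel sizes and a fixed threshold as placeholder assumptions (the study derives its thresholds from fitted probability density functions):

```python
import cv2
import numpy as np

img = cv2.imread("pcb_green_channel.png", cv2.IMREAD_GRAYSCALE)  # assumed input

# Estimate the non-uniform background with a large morphological opening, then subtract.
bg_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (51, 51))
background = cv2.morphologyEx(img, cv2.MORPH_OPEN, bg_kernel)
corrected = cv2.subtract(img, background)

# Unsharp masking to reduce error pixels at segment borders.
blurred = cv2.GaussianBlur(corrected, (5, 5), 0)
sharp = cv2.addWeighted(corrected, 1.5, blurred, -0.5, 0)

# Placeholder threshold; the study obtains it from the multimodal histogram.
t_solder = 140
_, mask = cv2.threshold(sharp, t_solder, 255, cv2.THRESH_BINARY)

# Remove the small error areas left by residual edge gradients.
clean_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, clean_kernel)

def dice(pred, truth):
    """Dice coefficient between two binary masks."""
    pred, truth = pred > 0, truth > 0
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())
```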
Procedia PDF Downloads 336
120 Plasma Levels of Collagen Triple Helix Repeat Containing 1 (CTHRC1) as a Potential Biomarker in Interstitial Lung Disease
Authors: Rijnbout-St.James Willem, Lindner Volkhard, Scholand Mary Beth, Ashton M. Tillett, Di Gennaro Michael Jude, Smith Silvia Enrica
Abstract:
Introduction: Fibrosing lung diseases are characterized by changes in the lung interstitium and are classified based on etiology: 1) environmental/exposure-related, 2) autoimmune-related, 3) sarcoidosis, 4) interstitial pneumonia, and 5) idiopathic. Among the idiopathic forms of interstitial lung disease (ILD), idiopathic pulmonary fibrosis (IPF) is the most severe. The pathogenesis of IPF is characterized by an increased presence of proinflammatory mediators, resulting in alveolar injury, where injury to the alveolar epithelium precipitates an increase in collagen deposition, subsequently thickening the alveolar septum and decreasing gas exchange. Identifying biomarkers implicated in the pathogenesis of lung fibrosis is key to developing new therapies and improving the efficacy of existing therapies. Transforming growth factor-beta (TGF-B1), a mediator of tissue repair associated with WNT5A signaling, is partially responsible for fibroblast proliferation in ILD and is the target of Pirfenidone, one of the antifibrotic therapies used for patients with IPF. Canonical TGF-B signaling is mediated by the proteins SMAD 2/3, which are, in turn, indirectly regulated by Collagen Triple Helix Repeat Containing 1 (CTHRC1). In this study, we tested the following hypotheses: 1) CTHRC1 is elevated in the ILD cohort compared to unaffected controls, and 2) CTHRC1 is differentially expressed among ILD types. Material and Methods: CTHRC1 levels were measured by ELISA in 171 plasma samples from the deidentified University of Utah ILD cohort. Data represent a cohort of 131 ILD-affected participants and 40 unaffected controls. CTHRC1 samples were categorized by a pulmonologist based on affectation status and disease subtype: IPF (n = 45), sarcoidosis (n = 4), nonspecific interstitial pneumonia (n = 16), hypersensitivity pneumonitis (n = 7), interstitial pneumonia (n = 13), autoimmune (n = 15), other ILD, a category that includes undifferentiated ILD diagnoses (n = 31), and unaffected controls (n = 40). We conducted a single-factor ANOVA of plasma CTHRC1 levels to test whether CTHRC1 differs significantly between affected and non-affected participants. In-silico analysis was performed with Ingenuity Pathway Analysis® to characterize the role of CTHRC1 in the pathway of lung fibrosis. Results: Statistical analyses of the plasma samples indicate that the average CTHRC1 level is significantly higher in ILD-affected participants than in controls, with autoimmune ILD being higher than other ILD types, thus supporting our hypotheses. In-silico analyses show that CTHRC1 indirectly activates and phosphorylates SMAD3, which in turn cross-regulates TGF-B1. CTHRC1 may also regulate the expression and transcription of TGF-B1 via WNT5A and its regulatory relationship with CTNNB1. Conclusion: In-silico pathway analyses demonstrate that CTHRC1 may be an important biomarker in ILD. Analysis of plasma samples indicates that CTHRC1 expression is positively associated with ILD affectation, with autoimmune ILD having the highest average CTHRC1 values. While characterizing CTHRC1 levels in plasma can help to differentiate among ILD types and predict response to Pirfenidone, the extent to which plasma CTHRC1 level is a function of ILD severity or chronicity is unknown.
Keywords: interstitial lung disease, CTHRC1, idiopathic pulmonary fibrosis, pathway analyses
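The single-factor ANOVA step described above is a one-liner in scipy. A sketch with simulated group vectors: the group sizes follow the cohort above, but the values, units, and distributions are purely illustrative, not measured data:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# Placeholder plasma CTHRC1 levels per group (sizes follow the cohort: 45 IPF,
# 15 autoimmune, 40 controls); magnitudes and spreads are invented for illustration.
ipf = rng.normal(30, 8, 45)
autoimmune = rng.normal(35, 8, 15)
controls = rng.normal(22, 6, 40)

f_stat, p_value = f_oneway(ipf, autoimmune, controls)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```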
Procedia PDF Downloads 191
119 Chemical, Biochemical and Sensory Evaluation of a Quadrimix Complementary Food Developed from Sorghum, Groundnut, Crayfish and Pawpaw Blends
Authors: Ogechi Nzeagwu, Assumpta Osuagwu, Charlse Nkwoala
Abstract:
Malnutrition in infants due to poverty, poor feeding practices, and the high cost of commercial complementary foods, among others, is a concern in developing countries. The study evaluated the proximate, vitamin and mineral compositions, antinutrient contents and functional properties, as well as the biochemical, haematological and sensory characteristics of complementary food made from sorghum, groundnut, crayfish and paw-paw flour blends using standard procedures. The blends were formulated based on the protein requirement of infants (18 g/day) using the Nutrisurvey linear programming software, with sorghum (S), groundnut (G), crayfish (C) and pawpaw (P) flours in ratios of 50:25:10:15 (SGCP1), 60:20:10:10 (SGCP2), 60:15:15:10 (SGCP3) and 60:10:20:10 (SGCP4). Plain pap (fermented maize flour) (TCF) and cerelac (a commercial complementary food) served as the basal and control diets. Thirty weanling male albino rats aged 28-35 days and weighing 33-60 g were purchased and used for the study. After acclimatization, the rats were fed gruel produced from the experimental diets and the control, with water ad libitum, daily for 35 days. The effects of the blends on lipid profile, blood glucose, haematological indices (RBC, HB, PCV, MCV), liver and kidney function and weight gain of the rats were assessed. Acceptability of the gruel was evaluated at the end of the rat feeding by forty mothers of infants aged ≥ 6 months, who gave their informed consent to participate, using a 9-point hedonic scale. Data were analyzed for means and standard deviations; analysis of variance was performed, means were separated using Duncan's multiple range test, and significance was judged at 0.05, all using SPSS version 22.0. The results indicated that the crude protein, fibre, ash and carbohydrate of the formulated diets were either comparable to or higher than the values in cerelac. The formulated diets (SGCP1-SGCP4) were significantly (P<0.05) higher in vitamin A and thiamin compared to cerelac. The iron content of the formulated diets SGCP1-SGCP4 (4.23-6.36 mg/100 g) was within the recommended iron intake of infants (0.55 mg/day). The phytate (1.56-2.55 mg/100 g) and oxalate (0.23-0.35 mg/100 g) contents of the formulated diets were within the permissible limits of 0-5%. Among the functional properties, bulk density, swelling index, % dispersibility and water absorption capacity significantly (P<0.05) increased and compared favourably with cerelac. The essential amino acids of the formulated blends were within the amino acid profile of the FAO/WHO/UNU reference protein for children 0.5-2 years of age. The urea concentrations of rats fed SGCP1-SGCP4 (19.48 mmol/L, 23.76 mmol/L, 24.07 mmol/L and 23.65 mmol/L, respectively) were significantly higher than that of rats fed cerelac (16.98 mmol/L); plain pap had the lowest value (9.15 mmol/L). Rats fed SGCP1-SGCP4 (116 mg/dl, 119 mg/dl, 115 mg/dl and 117 mg/dl, respectively) had significantly higher glucose levels than those fed cerelac (108 mg/dl). Liver function parameters (AST, ALP and ALT), lipid profile (triglyceride, HDL, LDL, VLDL) and haematological parameters of rats fed the formulated diets were within normal ranges. Rats fed SGCP1 gained more weight (90.45 g) than rats fed SGCP2-SGCP4 (71.65 g, 79.76 g, 75.68 g), TCF (20.13 g) and cerelac (59.06 g). In all the sensory attributes, the control was preferred over the formulated diets. The formulated diets were generally adequate and may have the potential to meet the nutrient requirements of infants as complementary food.
Keywords: biochemical, chemical evaluation, complementary food, quadrimix
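Blend formulation against a protein target, as done here with Nutrisurvey's linear programming, can be reproduced with scipy.optimize.linprog. A minimal sketch; the protein coefficients, cost vector, and the 60% cap are illustrative assumptions, not the study's actual inputs:

```python
import numpy as np
from scipy.optimize import linprog

# Ingredient order: sorghum, groundnut, crayfish, pawpaw.
protein = np.array([10.5, 25.0, 60.0, 0.6])   # g protein per 100 g (assumed values)
cost = np.array([1.0, 3.0, 8.0, 1.5])         # relative cost per 100 g (assumed)

# Minimise cost subject to: blend fractions sum to 1, protein >= 18 g/100 g,
# and each ingredient capped at 60% of the blend.
res = linprog(
    c=cost,
    A_ub=[-protein], b_ub=[-18.0],            # protein target rewritten as <= form
    A_eq=[[1, 1, 1, 1]], b_eq=[1.0],
    bounds=[(0.0, 0.6)] * 4,
)
print("optimal blend fractions:", np.round(res.x, 3))
```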
Procedia PDF Downloads 170
118 Start with the Art: Early Results from a Study of Arts-Integrated Instruction for Young Children
Authors: Juliane Toce, Steven Holochwost
Abstract:
A substantial and growing literature has demonstrated that arts education benefits young children's socioemotional and cognitive development. Less is known about the capacity of arts-integrated instruction to yield benefits in similar domains, particularly among demographically and socioeconomically diverse groups of young children. However, the small literature on this topic suggests that arts-integrated instruction may foster young children's socioemotional and cognitive development by presenting opportunities to 1) engage in instructional content in diverse ways, 2) experience and regulate strong emotions, 3) experience growth-oriented feedback, and 4) engage in collaborative work with peers. Start with the Art is a new program of arts-integrated instruction currently being implemented in four schools in a school district that serves students from a diverse range of backgrounds. The program employs a co-teaching model in which teaching artists and classroom teachers engage in collaborative lesson planning and instruction over the course of the academic year and is currently the focus of an impact study featuring a randomized-control design, as well as an implementation study, both of which are funded through an Educational Innovation and Research grant from the United States Department of Education. The paper will present the early results from the Start with the Art implementation study. These results will provide an overview of the extent to which the program was implemented in accordance with its design, with a particular emphasis on the degree to which the four opportunities enumerated above (e.g., opportunities to engage in instructional content in diverse ways) were presented to students. There will be a review of key factors that may influence the fidelity of implementation, including classroom teachers' reception of the program and the extent to which extant conditions in the classroom (e.g., the overall level of classroom organization) may have impacted implementation fidelity. With the explicit purpose of creating a program that values and meets the needs of the teachers and students, Start with the Art incorporates feedback from individuals participating in the intervention. Tracing its trajectory from inception to ongoing development and examining the adaptive changes made in response to teachers' transformative experiences in the post-pandemic classroom, Start with the Art continues to solicit input from experts in integrating artistic content into core curricula within educational settings catering to students from backgrounds under-represented in the arts. Leveraging the input from this rich consortium of experts has allowed for a comprehensive evaluation of the program's implementation. The early findings derived from the implementation study emphasize the potential of arts-integrated instruction to incorporate restorative practices. Such practices serve as a crucial support system for both students and educators, providing avenues for children to express themselves, heal emotionally, and foster social development, while empowering teachers to create more empathetic, inclusive, and supportive learning environments.
This all-encompassing analysis spotlights Start with the Art's adaptability to any learning environment through the program's effectiveness, its resilience, and its capacity to transform the classroom experience through art within the ever-evolving landscape of education.
Keywords: arts-integration, social emotional learning, diverse learners, co-teaching, teaching artists, post-pandemic teaching
Procedia PDF Downloads 62
117 Environmental Impacts of Point and Non-Point Source Pollution in Krishnagiri Reservoir: A Case Study in South India
Authors: N. K. Ambujam, V. Sudha
Abstract:
Reservoirs are being contaminated all around the world by point source and Non-Point Source (NPS) pollution. The most common NPS pollutants are sediments and nutrients. Krishnagiri Reservoir (KR) was chosen for the present case study; it is located in the tropical semi-arid climatic zone of Tamil Nadu, South India. It is the main source of surface water in Krishnagiri district for meeting freshwater demands. The reservoir has lost about 40% of its water holding capacity due to sedimentation over a period of 50 years. Hence, from the research and management perspective, there is a need for sound knowledge of the spatial and seasonal variations of KR water quality. The specific objectives of the present study are (i) to investigate the longitudinal heterogeneity and seasonal variations of the physicochemical parameters, nutrients and biological characteristics of KR water and (ii) to examine the extent of degradation of water quality in KR. Fifteen sampling points were identified by a uniform stratified method, and a systematic monthly sampling strategy was adopted due to the highly dynamic nature of the reservoir's hydrological characteristics. The physicochemical parameters, major ions, nutrients and Chlorophyll a (Chl a) were analysed. The trophic status of KR was classified using Carlson's Trophic State Index (TSI). All statistical analyses were performed using the Statistical Package for the Social Sciences, version 16.0. Spatial maps were prepared for Chl a using ArcGIS. Observations in KR showed that electrical conductivity and major ions are highly variable factors, as the reservoir receives inflow from a catchment with different land use activities. The study of major ions in KR exhibited different trends in their values, and it could be concluded that as the monsoon progresses, the major ion concentrations in the water decrease or the water quality stabilizes. The inflow point of KR showed comparatively higher concentrations of nutrients, including nitrate, soluble reactive phosphorus (SRP), total phosphorus (TP), total suspended phosphorus (TSP) and total dissolved phosphorus (TDP), during the monsoon seasons. This evidently showed the input of significant amounts of nutrients from the catchment through agricultural runoff. High concentrations of TDP and TSP in the lacustrine zone of the reservoir during the summer season revealed that there was a significant release of phosphorus from the bottom sediments. Carlson's TSI of KR ranged between 81 and 92 during the northeast monsoon and summer seasons. The high and permanent cyanobacterial bloom in KR could be mainly due to the internal loading of phosphorus from the bottom sediments. According to Carlson's TSI classification, Krishnagiri Reservoir is ranked in the hyper-eutrophic category. This study provides the necessary basic data on the spatio-temporal variations of water quality in KR and also demonstrates the impact of point and NPS pollution from the catchment area. The high TSI indicates a serious threat from internal P loading and the hyper-eutrophic condition of KR. Several expensive internal measures for the reduction of internal P loading have been introduced by scientists. However, the outcome of the present research suggests an innovative algae harvesting technique for the removal of sediment nutrients.
Keywords: NPS pollution, nutrients, hyper-eutrophication, krishnagiri reservoir
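Carlson's (1977) TSI maps chlorophyll a, Secchi depth, and total phosphorus onto a common 0-100 trophic scale, with values above roughly 70 usually read as hypereutrophic. Below is a direct transcription of the standard formulas (input units noted in the code); the example value is illustrative, chosen only to land in the reported TSI 81-92 range:

```python
import math

def tsi_chlorophyll(chl_ug_per_l):
    """Carlson TSI from chlorophyll a in micrograms per litre."""
    return 9.81 * math.log(chl_ug_per_l) + 30.6

def tsi_secchi(depth_m):
    """Carlson TSI from Secchi disk depth in metres."""
    return 60.0 - 14.41 * math.log(depth_m)

def tsi_total_phosphorus(tp_ug_per_l):
    """Carlson TSI from total phosphorus in micrograms per litre."""
    return 14.42 * math.log(tp_ug_per_l) + 4.15

print(round(tsi_chlorophyll(250.0)))   # ~85: squarely hyper-eutrophic
```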
Procedia PDF Downloads 324
116 Optical-Based Lane-Assist System for Rowing Boats
Authors: Stephen Tullis, M. David DiDonato, Hong Sung Park
Abstract:
Rowing boats (shells) are often steered by a small rudder operated by one of the backward-facing rowers; the attention required of that athlete then slightly decreases the power the athlete can provide. Reducing the steering distraction would therefore increase the overall boat speed. Races are straight 2000 m courses with each boat in a 13.5 m wide lane marked by small (~15 cm) widely-spaced (~10 m) buoys, and the boat trajectory is affected by both cross-currents and winds. An optical buoy recognition and tracking system has been developed that provides the boat's location and orientation with respect to the lane edges. This information is provided to the steering athlete as either a simple overlay on a video display, or fed to a simplified autopilot system giving steering directions to the athlete or directly controlling the rudder. The system is then effectively a "lane-assist" device, but with small, widely-spaced lane markers viewed from a very shallow angle due to constraints on camera height. The image is captured with a lightweight 1080p webcam, and most of the image analysis is done in OpenCV. The colour RGB image is converted to grayscale using the difference of the red and blue channels, which provides good contrast between the red/yellow buoys and the water, sky and land background, white reflections, and noise. Buoy detection is done with thresholding within a tight mask applied to the image. Robust linear regression using Tukey's biweight estimator on the previously detected buoy locations is used to develop the mask; this avoids the false detection of noise such as waves (reflections) and, in particular, buoys in other lanes. The robust regression also provides the current lane edges in the camera frame, which are used to calculate the displacement of the boat from the lane centre (lane location) and its yaw angle. The intersection of the detected lane edges provides a lane vanishing point, and the yaw angle can be calculated simply from the displacement of this vanishing point from the camera axis and the image plane distance. Lane location is based on the lateral displacement of the vanishing point from any horizontal cut through the lane edges. The boat's lane position and yaw are currently fed to what is essentially a stripped-down marine autopilot system. Currently, only the lane location is used in a PID controller of a rudder actuator, with integrator anti-windup to deal with saturation of the rudder angle. Low Kp and Kd values avoid unnecessarily fast returns to the lane centreline and responses to noise, and limiters can be used to avoid lane departure and disqualification. Yaw is not used as a control input, as cross-winds and currents can cause a straight course with considerable yaw or crab angle. Mapping of the controller against the "overall effectiveness" of the rudder angle has not been finalized: very large rudder angles stall and have decreased turning moments, but at less extreme angles the increased rudder drag slows the boat and upsets boat balance. The full system has many features similar to automotive lane-assist systems, but the lane markers, camera positioning, control response and noise add constraints that increase the challenge.
Keywords: auto-pilot, lane-assist, marine, optical, rowing
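The control element described is a textbook PID on the lateral lane offset with clamping anti-windup, and yaw is recoverable from the vanishing-point offset under a pinhole-camera assumption. A sketch, where the gains, rudder limit, and focal length are placeholder assumptions:

```python
import math

class RudderPID:
    """PID on lateral lane offset with a clamped integrator (anti-windup)."""
    def __init__(self, kp, ki, kd, max_rudder_deg=20.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.max_out = max_rudder_deg
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, lane_offset_m, dt):
        derivative = (lane_offset_m - self.prev_error) / dt
        self.prev_error = lane_offset_m
        out = self.kp * lane_offset_m + self.ki * self.integral + self.kd * derivative
        # Anti-windup: only accumulate the integral while the rudder is unsaturated.
        if abs(out) < self.max_out:
            self.integral += lane_offset_m * dt
        return max(-self.max_out, min(self.max_out, out))

def yaw_from_vanishing_point(vp_x_px, image_cx_px, focal_px):
    """Yaw angle from the lateral displacement of the lane vanishing point."""
    return math.atan2(vp_x_px - image_cx_px, focal_px)

pid = RudderPID(kp=0.8, ki=0.05, kd=0.3)           # illustrative low gains
print(pid.update(lane_offset_m=1.2, dt=0.1))       # commanded rudder angle (deg)
```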
Procedia PDF Downloads 132
115 Distribution System Modelling: A Holistic Approach for Harmonic Studies
Authors: Stanislav Babaev, Vladimir Cuk, Sjef Cobben, Jan Desmet
Abstract:
The procedures for performing harmonic studies for medium-voltage distribution feeders have been relatively mature topics since the early 1980s. The efforts of various electric power engineers and researchers were mainly focused on handling large harmonic non-linear loads connected sparsely at several buses of medium-voltage feeders. In order to assess the impact of these loads on the voltage quality of the distribution system, specific modeling and simulation strategies were proposed. These methodologies could deliver reasonable estimation accuracy given the requirements of minimal computational effort and reduced complexity. To uphold these requirements, certain analysis assumptions were made, which became de facto standards for establishing guidelines for harmonic analysis. Among others, typical assumptions include balanced conditions of the study and a negligible impact of the impedance frequency characteristics of various power system components. In the latter, skin and proximity effects are usually omitted, and resistance and reactance values are modeled based on the theoretical equations. Further, the simplifications of the modelling routine have led to the commonly accepted practice of neglecting phase angle diversity effects. This is mainly associated with the developed load models, which only in a handful of cases represent the complete harmonic behavior of a certain device or account for the harmonic interaction between grid harmonic voltages and harmonic currents. While these modelling practices were proven to be reasonably effective for medium-voltage levels, similar approaches have been adopted for low-voltage distribution systems. Given modern conditions, the massive increase in the usage of residential electronic devices, the recent and ongoing boom of electric vehicles, and the large-scale installation of distributed solar power, the harmonics in current low-voltage grids are characterized by a high degree of variability and demonstrate sufficient diversity to lead to a certain level of cancellation effects. It is obvious that new modelling algorithms overcoming the previously made assumptions have to be adopted. In this work, a simulation approach aimed at dealing with some of the typical assumptions is proposed. A practical low-voltage feeder is modeled in PowerFactory. In order to demonstrate the importance of the diversity effect and harmonic interaction, previously developed measurement-based models of a photovoltaic inverter and a battery charger are used as loads. A Python-based script supplying a varying background voltage distortion profile and the associated harmonic current response of the loads is used as the core of the unbalanced simulation. Furthermore, the impact of the uncertainty of feeder frequency-impedance characteristics on total harmonic distortion levels is shown, along with scenarios involving linear resistive loads, which further alter the impedance of the system. The comparative analysis demonstrates sufficient differences from cases where all the assumptions are in place, and the results indicate that new modelling and simulation procedures need to be adopted for low-voltage distribution systems with high penetration of non-linear loads and renewable generation.
Keywords: electric power system, harmonic distortion, power quality, public low-voltage network, harmonic modelling
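The cancellation effect from phase-angle diversity follows directly from summing harmonic currents as complex phasors rather than as magnitudes, and total harmonic distortion is the usual RMS ratio. A small numpy sketch with invented magnitudes and phases:

```python
import numpy as np

# Fifth-harmonic currents of four loads as phasors: magnitude (A), phase (deg).
mags = np.array([1.2, 0.9, 1.1, 0.8])
phases_deg = np.array([10.0, 160.0, -120.0, 75.0])   # diverse angles (assumed)

phasors = mags * np.exp(1j * np.radians(phases_deg))
print("arithmetic sum :", mags.sum())                # no-diversity assumption
print("phasor sum     :", abs(phasors.sum()))        # aggregate with cancellation

def thd(rms_by_order):
    """THD = sqrt(sum of squared harmonic magnitudes) / fundamental."""
    fundamental, harmonics = rms_by_order[0], rms_by_order[1:]
    return np.sqrt(np.sum(np.square(harmonics))) / fundamental

print("THD:", thd(np.array([230.0, 5.2, 3.1, 1.4])))  # V1, V5, V7, V11 (illustrative)
```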
Procedia PDF Downloads 159
114 Influence of Dryer Autumn Conditions on Weed Control Based on Soil Active Herbicides
Authors: Juergen Junk, Franz Ronellenfitsch, Michael Eickermann
Abstract:
Appropriate weed management in autumn is a prerequisite for an economically successful harvest in the following year. In Luxembourg, oilseed rape, wheat and barley are sown from August until October, accompanied by chemical weed control with soil-active herbicides, depending on the state of the weeds and the meteorological conditions. Based on regular ground- and surface-water analyses, high levels of contamination by transformation products of the respective herbicide compounds have been found in Luxembourg. The ideal conditions for incorporating soil-active herbicides are single rain events. Weed control may be reduced if application is made when weeds are under drought stress, or if repeated light rain events are followed by dry spells, because the herbicides then tend to bind tightly to the soil particles. These effects have been frequently reported for Luxembourg throughout the last years. In the framework of a multisite long-term field experiment (EFFO), weed monitoring, plant observations and corresponding meteorological measurements were conducted. Long-term time series (1947-2016) from the SYNOP station Findel-Airport (WMO ID = 06590) showed a decrease in the number of days with precipitation. As the total precipitation amount has not significantly changed, this indicates a trend towards rain events with higher intensity. All analyses are based on decades (10-day periods) for September and October of each individual year. To assess the future meteorological conditions for Luxembourg, two different approaches were applied. First, multi-model ensembles from the CORDEX experiments (spatial resolution ~12.5 km; transient projections until 2100) were analysed for two different Representative Concentration Pathways (RCP8.5 and RCP4.5), covering the time span from 2005 until 2100. The multi-model ensemble approach allows for the quantification of the uncertainties and also for an assessment of the differences between the two emission scenarios. Second, to assess smaller-scale differences within the country, a high-resolution model projection using the COSMO-CLM model was used (spatial resolution 1.3 km). To account for the higher computational demands caused by the increased spatial resolution, only 10-year time slices were simulated (reference period 1991-2000; near future 2041-2050; far future 2091-2100). Statistically significant trends towards higher air temperatures, +1.6 K for September (+5.3 K in the far future) and +1.3 K for October (+4.3 K), were predicted for the near future compared to the reference period. Precipitation simultaneously decreased by 9.4 mm (September) and 5.0 mm (October) for the near future, and by 49 mm (September) and 10 mm (October) in the far future. Besides the monthly values, the decades were also analyzed for the two future time periods of the CLM model. For all decades of September and October, the number of days with precipitation decreased in the projected near and far future. Changes in meteorological variables such as air temperature and precipitation have already induced transformations in the weed communities (composition, late emergence, etc.) of arable ecosystems in Europe. Therefore, adaptations of agronomic practices as well as effective weed control strategies must be developed to maintain crop yield.
Keywords: CORDEX projections, dry spells, ensembles, weed management
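Counting precipitation days per 10-day decade of September and October and testing for a temporal trend reduces to a short pandas/scipy routine. A sketch, with the file name, column names, and the 0.1 mm wet-day threshold as assumptions:

```python
import pandas as pd
from scipy.stats import linregress

# Daily station series; file and column names ('date', 'precip_mm') are assumed.
df = pd.read_csv("findel_daily.csv", parse_dates=["date"])
df = df[df["date"].dt.month.isin([9, 10])].copy()
df["year"] = df["date"].dt.year
# 10-day periods ("decades") within each month: days 1-10, 11-20, 21-end.
df["decade"] = ((df["date"].dt.day - 1) // 10).clip(upper=2)

wet = df["precip_mm"] >= 0.1               # wet-day threshold (assumption)
per_year = wet.groupby(df["year"]).sum()   # Sep-Oct wet days per year
trend = linregress(per_year.index, per_year.values)
print(f"trend: {trend.slope:.3f} wet days/year, p = {trend.pvalue:.3f}")

# The same count grouped per decade gives the 10-day resolution used in the study:
per_decade = wet.groupby([df["year"], df["date"].dt.month, df["decade"]]).sum()
```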
Procedia PDF Downloads 235
113 Illness-Related PTSD Among Type 1 Diabetes Patients
Authors: Omer Zvi Shaked, Amir Tirosh
Abstract:
Type 1 Diabetes (T1DM) is an incurable chronic illness with no known preventive measures. Excess insulin therapy can lead to hypoglycemia, with neuroglycopenic symptoms such as shakiness, nausea, sweating, irritability, fatigue, excessive thirst or hunger, weakness, seizure, and coma. Severe Hypoglycemia (SH) is also considered a most aversive event, since it may put patients at risk for injury and death, which matches the criteria of a traumatic event. SH has a prevalence of around 20%, which makes it a primary medical issue. One of the results of SH is an intense emotional fear reaction resembling post-traumatic stress symptoms (PTS), causing many patients to avoid insulin therapy and social activities in order to avoid the possibility of hypoglycemia. As a result, they are at risk for irreversible health deterioration and medical complications. Fear of Hypoglycemia (FOH) is, therefore, a major disturbance for T1DM patients. FOH differs from prevalent post-traumatic stress reactions to other forms of traumatic events, since the threat to life continuously exists in the patient's body. That is, it is highly probable that orthodox interventions may not be sufficient for helping patients after SH to regain healthy social function and proper medical treatment. Accordingly, the current presentation will demonstrate the results of a study conducted among T1DM patients after SH. The study was designed in two stages. First, a preliminary qualitative phenomenological study among ten patients after SH was conducted. Analysis revealed that after SH, patients confuse stress symptoms with hypoglycemia symptoms, divide their lives into before and after the event, report a constant sense of fear, a loss of freedom, a significant decrease in social functioning, a catastrophic thinking pattern, a dichotomous split between the self and the body, an internalization of illness identity, a loss of internal locus of control, a damaged self-representation, and severe loneliness from never being understood by others. The second stage was a two-step intervention study among five patients after SH. The first part of the intervention included three months of third-wave CBT. The contents of the therapeutic process were: acceptance of fear and tolerance to stress; cognitive defusion combined with emotional self-regulation; the adoption of an active position relying on personal values; and self-compassion. Then, the intervention included one week of practical, real-time 24/7 support by trained medical personnel, alongside a gradual exposure to increased insulin therapy in a protected environment. The results of the intervention are a decrease in stress symptoms, increased social functioning, increased well-being, and decreased avoidance of medical treatment. The presentation will discuss the unique emotional state of T1DM patients after SH and the effectiveness of the intervention for patients with chronic conditions after a traumatic event. The presentation will make evident the unique situation of illness-related PTSD and will also demonstrate the requirement for multi-professional collaboration between social work and medical care for populations with chronic medical conditions. Limitations of the study and recommendations for further research will be discussed.
Keywords: type 1 diabetes, chronic illness, post-traumatic stress, illness-related PTSD
Procedia PDF Downloads 177
112 An Investigation about the Health-Promoting Lifestyle of 1389 Emergency Nurses in China
Authors: Lei Ye, Min Liu, Yong-Li Gao, Jun Zhang
Abstract:
Purpose: The aims of the study are to investigate the status of the health-promoting lifestyle and to compare the healthy lifestyles of emergency nurses in hospitals of different levels in Sichuan province, China. The investigation mainly concerns the health-promoting lifestyle, including spiritual growth, health responsibility, physical activity, nutrition, interpersonal relations, and stress management. The factors influencing the health-promoting lifestyle of emergency nurses in hospitals of Sichuan province were then analyzed in order to find relevant models and provide reference evidence for intervention. Study Design: A cross-sectional research method was adopted. Stratified cluster sampling, based on geographical location, was used to select 1389 emergency nurses in 54 hospitals from Sichuan province in China. Method: The 52-item, six-factor Health-Promoting Lifestyle Profile II (HPLP-II) instrument was used to explore participants' self-reported health-promoting behaviors and measure the dimensions of health responsibility, physical activity, nutrition, interpersonal relations, spiritual growth, and stress management. Demographic characteristics, education, work duration, emergency nursing work duration and self-rated health status were documented. Analysis: Data were analyzed with SPSS software ver. 17.0. Frequency, percentage, and mean ± standard deviation were used to describe the general information, while nonparametric tests were used to compare the constituent ratios of the general data across hospitals. One-way ANOVA was used to compare the health-promoting lifestyle scores in hospitals of different levels. A multiple linear regression model was established. P values less than 0.05 were considered statistically significant in all analyses. Result: The survey showed that the total health-promoting lifestyle score of nurses in emergency departments in Sichuan Province was 120.49 ± 21.280. The relevant dimensions, ranked by score in descending order, are: interpersonal relations, nutrition, health responsibility, physical activity, stress management, spiritual growth. The total scores of the three-A hospitals were the highest (121.63 ± 0.724), followed by the senior class hospitals (119.7 ± 1.362) and three-B hospitals (117.80 ± 1.255); the difference was statistically significant (P=0.024). The general data of the nurses were used as independent variables, including age, gender, marital status, living conditions, nursing income, hospital level, length of service in nursing, length of service in emergency, professional title, education background, and the average number of night shifts. The total health-promoting lifestyle score was used as the dependent variable, and multiple linear regression analysis was adopted to establish the regression model. For the regression equation, F = 20.728, R2 = 0.061, P < 0.05; age, gender, nursing income, turnover intention and the status of coping with stress affect the health-promoting lifestyle of nurses in the emergency department, and the result was statistically significant (P < 0.05). Conclusion: The results of the investigation indicate that, through further research, it will be possible to develop health-promoting interventions for emergency nurses in all levels of hospitals in Sichuan Province.
Managers need to pay more attention to emergency nurses’ exercise, stress management, self-realization, and conduct intervention in nurse training programs.Keywords: emergency nurse, health-promoting lifestyle profile II, health behaviors, lifestyle
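The regression step described in the abstract can be sketched as follows. This is a minimal illustration on synthetic data: the predictor values and coefficients are assumptions for demonstration, not the study's data.

```python
# Hedged sketch of the abstract's multiple linear regression: predicting the
# HPLP-II total score (52 items, four-point scale, range 52-208) from nurse
# characteristics. All data below are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1389  # sample size reported in the abstract

age = rng.uniform(22, 55, n)            # hypothetical predictor values
income = rng.uniform(2000, 8000, n)     # hypothetical monthly income
night_shifts = rng.integers(0, 10, n)   # hypothetical shifts per month

hplp_total = (90 + 0.3 * age + 0.004 * income
              - 1.2 * night_shifts + rng.normal(0, 20, n))

X = sm.add_constant(np.column_stack([age, income, night_shifts]))
model = sm.OLS(hplp_total, X).fit()
print(model.fvalue, model.rsquared)  # compare with the reported F = 20.728, R² = 0.061
print(model.pvalues)                 # predictors with p < 0.05 are retained
```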
Procedia PDF Downloads 282
111 South African Breast Cancer Mutation Spectrum: Pitfalls to Copy Number Variation Detection Using Internationally Designed Multiplex Ligation-Dependent Probe Amplification and Next Generation Sequencing Panels
Authors: Jaco Oosthuizen, Nerina C. Van Der Merwe
Abstract:
The National Health Laboratory Services in Bloemfontein has been the diagnostic testing facility for familial breast cancer for 1830 patients since 1997. From this cohort, 540 were comprehensively screened using High-Resolution Melting Analysis or Next Generation Sequencing for the presence of point mutations and/or indels. Approximately 90% of these patients still remain undiagnosed, as they are BRCA1/2 negative. Multiplex ligation-dependent probe amplification was initially added to screen for copy number variation but, with the introduction of next generation sequencing in 2017, was superseded and is currently used as a confirmation assay. The aim was to investigate the viability of utilizing internationally designed copy number variation detection assays, based mostly on European/Caucasian genomic data, within a South African context. The multiplex ligation-dependent probe amplification technique is based on the hybridization and subsequent ligation of multiple probes to a targeted exon. The ligated probes are amplified using conventional polymerase chain reaction, followed by fragment analysis by means of capillary electrophoresis. The experimental design of the assay was performed according to the guidelines of MRC-Holland. For BRCA1 (P002-D1) and BRCA2 (P045-B3), both multiplex assays were validated, and results were confirmed using a secondary probe set for each gene. The next generation sequencing technique is based on target amplification via multiplex polymerase chain reaction, after which the amplicons are sequenced in parallel on a semiconductor chip. Amplified read counts are visualized as relative copy numbers to determine the median of the absolute values of all pairwise differences. Various experimental parameters, such as DNA quality, quantity, and signal intensity or read depth, were verified using positive and negative patients previously tested internationally. DNA quality and quantity proved to be the critical factors during the verification of both assays. The quantity influenced the relative copy number frequency directly, whereas the quality of the DNA and its salt concentration influenced denaturation consistency in both assays. Multiplex ligation-dependent probe amplification produced false positives due to ligation failure when a variant present within the ligation site inhibited ligation. Next generation sequencing produced false positives due to read dropout when primer sequences did not meet optimal multiplex binding kinetics because of population variants in the primer binding site. The analytical sensitivity and specificity for the South African population have been proven, and verification resulted in repeatable reactions with regard to the detection of relative copy number differences. Both multiplex ligation-dependent probe amplification and next generation sequencing multiplex panels need to be optimized to accommodate South African polymorphisms present within the genetically diverse ethnic groups, in order to reduce the false copy number variation positive rate and increase performance efficiency.Keywords: familial breast cancer, multiplex ligation-dependent probe amplification, next generation sequencing, South Africa
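The relative copy-number metric described above, the median of the absolute values of all pairwise differences between adjacent amplicons (commonly abbreviated MAPD), can be sketched as follows; the read counts are illustrative placeholders, not patient data.

```python
# Hedged sketch: log2 ratios of sample vs. control read counts per amplicon,
# with MAPD as the noise/QC estimate. Illustrative numbers only.
import numpy as np

sample_reads = np.array([512, 498, 530, 250, 244, 515, 505, 490.0])
control_reads = np.array([500, 505, 520, 498, 490, 508, 500, 495.0])

log2_ratio = np.log2(sample_reads / control_reads)  # ~0 for 2 copies, ~-1 for 1 copy
mapd = np.median(np.abs(np.diff(log2_ratio)))       # adjacent pairwise differences

# A sustained shift (here amplicons 4-5 near -1) suggests a single-copy
# deletion, but only if MAPD is low enough for the shift to be trusted.
print(f"MAPD = {mapd:.3f}")
print(np.round(log2_ratio, 2))
```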
Procedia PDF Downloads 231
110 An Interdisciplinary Maturity Model for Accompanying Sustainable Digital Transformation Processes in a Smart Residential Quarter
Authors: Wesley Preßler, Lucie Schmidt
Abstract:
Digital transformation is playing an increasingly important role in the development of smart residential quarters. In order to accompany and steer this process, and ultimately make the success of the transformation efforts measurable, it is helpful to use an appropriate maturity model. However, conventional maturity models for digital transformation focus primarily on the evaluation of processes and neglect the information and power imbalances between the stakeholders, which affects the validity of the results. The Multi-Generation Smart Community (mGeSCo) research project is developing an interdisciplinary maturity model that integrates the dimensions of digital literacy, interpretive patterns, and technology acceptance to address this gap. As part of the mGeSCo project, the technological development of selected dimensions in the Smart Quarter Jena-Lobeda (Germany) is being investigated. A specific maturity model, based on Cohen's Smart Cities Wheel, evaluates the central dimensions Working, Living, Housing, and Caring. To improve the reliability and relevance of the maturity assessment, the factors digital literacy, interpretive patterns, and technology acceptance are integrated into the developed model. The digital literacy dimension examines stakeholders' skills in using digital technologies, which influence their perception and assessment of technological maturity. Digital literacy is measured by means of surveys, interviews, and participant observation, using the European Commission's Digital Competence Framework (DigComp) as a basis. Interpretive patterns of digital technologies provide information about how individuals perceive technologies and ascribe meaning to them. These are not mere assessments, prejudices, or stereotyped perceptions, but collective patterns, rules, attributions of meaning, and the cultural repertoire that leads to these opinions and attitudes. Understanding these interpretations helps in assessing the overarching readiness of stakeholders to digitally transform their neighborhood. This involves examining people's attitudes, beliefs, and values regarding technology adoption, as well as their perceptions of the benefits and risks associated with digital tools. These insights provide important data for a holistic view and inform the steps needed to prepare individuals in the neighborhood for a digital transformation. Technology acceptance, which examines the willingness of individuals to adopt and use new technologies, is another crucial factor for successful digital transformation. Surveys or questionnaires based on Davis' Technology Acceptance Model can be used to complement interpretive patterns and measure neighborhood acceptance of digital technologies. Integrating the dimensions of digital literacy, interpretive patterns, and technology acceptance enables the development of a roadmap with clear prerequisites for initiating a digital transformation process in the neighborhood. During the process, maturity is measured at different points in time and compared with changes in the aforementioned dimensions to ensure a sustainable transformation. Participation, co-creation, and co-production are essential concepts for a successful and inclusive digital transformation in the neighborhood context. This interdisciplinary maturity model helps to improve the assessment and monitoring of sustainable digital transformation processes in smart residential quarters.
It enables a more comprehensive recording of the factors that influence the success of such processes and supports the development of targeted measures to promote digital transformation in the neighborhood context.Keywords: digital transformation, interdisciplinary, maturity model, neighborhood
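One way the three moderating factors could be combined with a dimension's technological maturity score is sketched below. The 0-1 scales and the equal-weight averaging are illustrative assumptions for demonstration, not the mGeSCo project's actual weighting scheme.

```python
# Hedged sketch: a dimension's raw maturity score moderated by stakeholder
# readiness (digital literacy, interpretive patterns, technology acceptance).
# Scales and weights are assumed for illustration only.
from dataclasses import dataclass

@dataclass
class DimensionAssessment:
    raw_maturity: float           # e.g., from a Smart Cities Wheel rubric, 0..1
    digital_literacy: float       # survey-based (DigComp), 0..1
    interpretive_openness: float  # coded from interviews, 0..1
    tech_acceptance: float        # TAM-based questionnaire, 0..1

    def adjusted_maturity(self) -> float:
        # Average stakeholder readiness moderates the technical score, so high
        # technical readiness with low stakeholder readiness scores lower.
        readiness = (self.digital_literacy
                     + self.interpretive_openness
                     + self.tech_acceptance) / 3
        return self.raw_maturity * readiness

living = DimensionAssessment(0.8, 0.55, 0.6, 0.7)
print(f"Adjusted maturity for 'Living': {living.adjusted_maturity():.2f}")
```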
Procedia PDF Downloads 77
109 Crack Size and Moisture Issues in Thermally Modified vs. Native Norway Spruce Window Frames: A Hygrothermal Simulation Study
Authors: Gregor Vidmar, Rožle Repič, Boštjan Lesar, Miha Humar
Abstract:
The study investigates the impact of cracks in surface coatings on moisture content (MC) and related fungal growth in window frames made of thermally modified (TM) and native Norway spruce, using hygrothermal simulations for Ljubljana, Slovenia. Comprehensive validation against field test data confirmed the numerical model's predictions, showing similar trends in MC changes over the four years investigated. Various established mould growth models (isopleth, VTT, bio-hygrothermal) did not appropriately reflect the differences between the spruce types because they do not consider material moisture content; the main conclusion is that TM spruce is more resistant to moisture-related issues. Wood's MC influences fungal decomposition, which typically occurs above 25%–30% MC, with some fungi growing at lower MC under conducive conditions. Surface coatings cannot wholly prevent water penetration, which becomes significant when the coating is damaged. This study investigates the detrimental effects of surface coating cracks on wood moisture absorption, comparing TM spruce and native spruce window frames. Simulations were conducted for undamaged and damaged coatings (with cracks from 1 mm to 9 mm wide) on window profiles, as well as for uncoated profiles. Sorption curves were also measured up to 95% relative humidity. MC was measured in frames exposed to actual climatic conditions and compared to simulated data for model validation. The study uses a simplified model of the bottom frame part due to convergence issues with simulations of the whole frame. TM spruce showed about 4% lower MC than native spruce. Simulations showed that a 3 mm wide crack in native spruce coatings poses significant moisture risks for the north orientation, while a 9 mm wide crack in TM spruce coatings remains acceptable; furthermore, even uncoated TM spruce could be acceptable. In addition, large enough cracks may cause even worse moisture dynamics than uncoated native spruce profiles. The sorption curve turns out to be by far the most influential parameter, followed by density. Existing mould growth models need to be upgraded to reflect wood material differences accurately. Due to the lower sorption curve of TM spruce, higher RH values are in reality obtained under the same boundary conditions, which implies a more critical situation according to these mould growth models; still, this does not reflect the difference between the materials, especially under external exposure conditions. Even if different substrate categories in the isopleth and bio-hygrothermal models, or different material sensitivity classes for standard and TM wood, are used, the expected trends do not necessarily change; thus, models with MC as an inherent component should be introduced. Orientation plays a crucial role in moisture dynamics. Results show that, for similar moisture dynamics, the crack in Norway spruce could be about 2 mm wider on the south side than on the north side. In contrast, for TM spruce, orientation is not as important compared to other material properties. The study confirms the enhanced suitability of TM spruce for window frames in terms of moisture resistance and crack tolerance in surface coatings.Keywords: hygrothermal simulations, mould growth, surface coating, thermally modified wood, window frame
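The role of the sorption curve described above can be illustrated with a simple interpolation of equilibrium moisture content (EMC) at a given relative humidity. The isotherm points below are illustrative placeholders, not the study's measured curves.

```python
# Hedged sketch: EMC from measured sorption isotherms by linear interpolation,
# comparing native and thermally modified (TM) spruce at the same RH.
# Isotherm values are illustrative assumptions only.
import numpy as np

rh_points = np.array([0, 20, 40, 60, 80, 95])   # relative humidity, %
mc_native = np.array([0, 4, 6.5, 9, 14, 22])    # EMC of native spruce, %
mc_tm = np.array([0, 2.5, 4, 6, 10, 18])        # EMC of TM spruce, %

def emc(rh: float, rh_pts, mc_pts) -> float:
    """Linear interpolation on the sorption isotherm (valid for 0-95% RH)."""
    return float(np.interp(rh, rh_pts, mc_pts))

for rh in (60, 80, 95):
    print(f"RH {rh:>2}%: native {emc(rh, rh_points, mc_native):.1f}% MC, "
          f"TM {emc(rh, rh_points, mc_tm):.1f}% MC")
# The lower MC of TM spruce keeps it further from the 25-30% decay range.
```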
Procedia PDF Downloads 35
108 Azolla Pinnata as Promising Source for Animal Feed in India: An Experimental Study to Evaluate the Nutrient Enhancement Result of Feed
Authors: Roshni Raha, Karthikeyan S.
Abstract:
The world's largest livestock population resides in India. Existing strategies must be modified to increase the production of livestock and their by-products in order to meet the demands of the growing human population. Even though India leads the world in both milk production and the number of cows, average productivity per animal is low. This may be due to the animals' poor nutrition, caused by a chronic under-availability of high-quality fodder and feed. This article explores Azolla pinnata as a promising source of high-quality unconventional feed and fodder for effective livestock production and good-quality breeding in India. The article is an exploratory study using a literature survey and experimental analysis. In the realm of agri-biotechnology, Azolla sp. has gained attention for helping farmers achieve sustainability, having minimal land requirements, and serving as a feed element that does not compete with human food sources. It has a high methionine content, making it a good source of protein, and it is easily digested because its lignin content is low. It is also rich in antioxidants and in vitamins such as beta-carotene, vitamin A, and vitamin B12. Using this concept, the paper aims to investigate and develop a model for using azolla as a novel, high-potential feed source to combat the problems of low production and poor quality of animals in India. A representative sample of animal feed to which azolla has been added is collected. The sample is ground into a fine powder using a mortar. PITC (phenylisothiocyanate) is added to derivatize the amino acids. The sample is analyzed using HPLC (High-Performance Liquid Chromatography) to measure the amino acids and monitor the protein content of the sample feed. The amino acid measurements from HPLC are converted to milligrams per gram of protein through amino acid profiling via a set of calculations. The amino acid profile data are then used to validate the proximate results of nutrient enhancement from the azolla composition in the sample. Based on the proximate composition of the azolla meal, the enhancement results were higher than the standard values of normal fodder supplements, indicating a feed much richer and denser in nutrient supply. The azolla-fed sample thus proved to be a promising basis for animal fodder. This would, in turn, lead to higher production and better-quality animals, helping to meet the economic demands of the growing Indian population. Azolla has no side effects and can be considered safe and effective for incorporation into animal feed. One area of future research could begin with an upstream scaling strategy for azolla in India, introducing several bioreactor types for its commercial production. Since Azolla sp. has been shown here to be a promising source of high-quality animal feed and fodder, large-scale production will make the process quicker, more efficient, and more accessible; labor expenses will also be reduced by employing bioreactors for large-scale manufacturing.Keywords: azolla, fodder, nutrient, protein
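The unit conversion described above, from HPLC amino acid measurements to milligrams per gram of protein, can be sketched as follows; all numbers are illustrative assumptions, not the study's measurements.

```python
# Hedged sketch: normalize HPLC amino acid results to mg per g protein so the
# azolla-enriched feed can be compared against fodder reference values.
# Sample mass, protein fraction, and AA amounts are placeholders.

sample_mass_g = 0.5      # ground feed sample weighed for hydrolysis (assumed)
protein_fraction = 0.22  # crude protein per g sample, e.g., from Kjeldahl (assumed)

# HPLC results: mg of each amino acid recovered from the whole sample
aa_mg_in_sample = {"methionine": 1.8, "lysine": 6.1, "threonine": 4.0}

protein_g = sample_mass_g * protein_fraction
aa_profile = {aa: mg / protein_g for aa, mg in aa_mg_in_sample.items()}

for aa, mg_per_g in sorted(aa_profile.items()):
    print(f"{aa:>10}: {mg_per_g:.1f} mg/g protein")
# A profile richer in methionine than standard fodder would support the
# abstract's claim that azolla densifies the feed's protein quality.
```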
Procedia PDF Downloads 55
107 Predictive Analytics for Theory Building
Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim
Abstract:
Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) for a single person or unit. It applies empirical methods from statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized predictors of metabolic syndrome (theory testing). A proposed theoretical model is formed from causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., in scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big-data predictive analytics platform based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge numbers of transactions can be represented and processed efficiently. For the demonstration, a total of 13,254 metabolic syndrome training records were fed into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors, for example, those associated with sociodemographics, habits, and activities. Some, such as cancer examination, house type, and vaccination, were intentionally included to gain predictive analytics insights on variable selection. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules were then validated with an external testing dataset of 4,090 observations. The results, as a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. A set of rules (many estimated equations, from a statistical perspective), on the other hand, may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the generated hypotheses are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods that utilize a subset of the observations, such as bootstrap resampling with an appropriate sample size.Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building
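The cluster-ranking idea described above can be sketched with a toy co-occurrence count: pairs of items are scored by "surprise", the observed co-occurrence relative to the frequency expected under independence. The transactions below are invented for illustration, not drawn from the metabolic syndrome dataset.

```python
# Hedged sketch: co-occurrence counting and surprise scoring on toy data.
from itertools import combinations
from collections import Counter

transactions = [
    {"smoker", "low_activity", "high_bmi"},
    {"smoker", "high_bmi"},
    {"low_activity", "high_bmi", "vaccinated"},
    {"vaccinated", "low_activity"},
    {"smoker", "low_activity", "high_bmi"},
]
n = len(transactions)

item_count = Counter(i for t in transactions for i in t)
pair_count = Counter(p for t in transactions for p in combinations(sorted(t), 2))

for (a, b), observed in pair_count.most_common():
    expected = item_count[a] * item_count[b] / n  # independence baseline
    print(f"{a} & {b}: observed={observed}, surprise={observed / expected:.2f}")
```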
Procedia PDF Downloads 276
106 Best Practices and Recommendations for CFD Simulation of Hydraulic Spool Valves
Authors: Jérémy Philippe, Lucien Baldas, Batoul Attar, Jean-Charles Mare
Abstract:
The proposed communication deals with the research and development of a rotary direct-drive servovalve for aerospace applications. A key challenge of the project is to downsize the electromagnetic torque motor by reducing the torque required to drive the rotary spool. It is intended to optimize the spool and sleeve geometries by combining a Computational Fluid Dynamics (CFD) approach with commercial optimization software. The present communication addresses an important phase of the project, which consists firstly of gaining confidence in the simulation results. It is well known that the force needed to pilot a sliding spool valve comes from several physical effects: hydraulic forces, friction, and the inertia/mass of the moving assembly. Among them, the flow force is usually a major contributor to the steady-state (or root mean square) driving torque. In recent decades, CFD has gradually become a standard simulation tool for studying fluid-structure interactions. However, in the particular case of high-pressure valve design, the authors have found that the calculated overall hydraulic force depends on the parameterization and options used to build and run the CFD model. To address this issue, the authors selected the standard case of the linear spool valve, which is addressed in detail in numerous scientific references (analytical models, experiments, CFD simulations). The first CFD simulations run by the authors showed that the evolution of the equivalent discharge coefficient vs. Reynolds number at the metering orifice corresponds well to the values predicted by the classical analytical models. Conversely, the simulated flow force was found to be quite different from the value calculated analytically. This drove the authors to investigate in detail the influence of the studied domain and the settings of the CFD simulation. It was first shown that the flow recirculates in the inlet and outlet channels if their length is not sufficient relative to their hydraulic diameter. The dead volume on the uncontrolled-orifice side also plays a significant role. These examples highlight the influence of the geometry of the fluid domain considered. The second action was to investigate the influence of the type of mesh, the turbulence models and near-wall approaches, and the numerical solver and discretization scheme order. Two approaches were used to determine the overall hydraulic force acting on the moving spool. First, the force was deduced from the momentum balance on a control domain delimited by the valve inlet and outlet and the spool walls. Second, the overall hydraulic force was calculated from the integral of the pressure and shear forces acting at the boundaries of the fluid domain. This underlined the significant contribution of the viscous forces acting on the spool between the inlet and outlet orifices, which are generally not considered in the literature. It also emphasized the influence of the choices made in the implementation of the CFD calculation and the analysis of the results. With the step-by-step process adopted to increase confidence in the CFD simulations, the authors propose a set of best practices and recommendations for the efficient use of CFD to design high-pressure spool valves.Keywords: computational fluid dynamics, hydraulic forces, servovalve, rotary servovalve
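The classical analytical estimate that CFD flow-force results are typically compared against follows from the momentum balance at the metering orifice. A minimal sketch is given below; the discharge coefficient, velocity coefficient, jet angle, and valve geometry are textbook-style assumptions, not the studied valve's parameters.

```python
# Hedged sketch: steady flow force on a spool from the momentum balance.
# Derivation: Q = Cd*A*sqrt(2*dp/rho) and v_jet = Cv*sqrt(2*dp/rho), so
# F = rho*Q*v_jet*cos(theta) = 2*Cd*Cv*A*dp*cos(theta).
import math

def steady_flow_force(cd: float, cv: float, area_m2: float,
                      dp_pa: float, jet_angle_deg: float) -> float:
    """Axial flow force (N) from the momentum flux through a metering orifice."""
    return 2.0 * cd * cv * area_m2 * dp_pa * math.cos(math.radians(jet_angle_deg))

# Typical textbook values: Cd ~ 0.61, Cv ~ 0.98, jet angle ~ 69 degrees
f = steady_flow_force(cd=0.61, cv=0.98, area_m2=2e-6,
                      dp_pa=10e6, jet_angle_deg=69)
print(f"Steady flow force ~ {f:.1f} N")
```

Note that this estimate omits the viscous shear on the spool lands between the orifices, which is precisely the contribution the momentum-balance and boundary-integral comparison in the abstract brings out.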
Procedia PDF Downloads 43
105 Prospective Analytical Cohort Study to Investigate a Physically Active Classroom-Based Wellness Programme to Propose a Mechanism to Meet Societal Need for Increased Physical Activity Participation and Positive Subjective Well-Being amongst Adolescent
Authors: Aileen O'loughlin
Abstract:
‘Is Everybody Going WeLL?’ (IEGW?) is a 33-hour classroom-based initiative created to a) explore values and how they impact on well-being, b) encourage adolescents to connect with their community, and c) provide them with the education to encourage and maintain a lifetime love of physical activity (PA) to ensure beneficial effects on their personal well-being. The initiative is also aimed at achieving sustainable education and aligning with the United Nations' Sustainable Development Goals 3 and 4. The classroom is a unique setting in which adolescents' PA participation can be positively influenced through fun PA policies and initiatives. The primary purpose of this research is to evaluate a range of psychosocial and PA outcomes following the 33-hour education programme. The research examined the impact of a PA and well-being programme consisting of either a 60-minute or an 80-minute class, depending on the timetable structure of the school, delivered once a week. Participant outcomes were measured using validated questionnaires on self-esteem, Mental Health Literacy (MHL), and daily physical activity participation. These questionnaires were administered at three separate time points: baseline, mid-intervention, and post-intervention. Semi-structured interviews with participating teachers regarding adherence and participants' attitudes were completed post-intervention; these teachers were randomly selected for interview. This prospective analytical cohort study included 235 post-primary school students between 11 and 13 years of age (100 boys and 135 girls) from five public Irish post-primary schools. Three schools received the intervention only, a 33-hour interactive well-being learning unit; one school formed a control group; and one school had participants in both the intervention and control groups. Participating schools were a convenience sample. The data presented outline baseline data collected pre-participation (0 hours completed). N = 18 junior certificate students returned all three questionnaires fully completed, a 56.3% return rate, from one school (Intervention School #3). 94.4% (n = 17) of participants enjoy taking part in some form of PA; however, only 5.5% (n = 1) of the participants took part in PA every day of the previous 7 days, and only 5.5% (n = 1) of those surveyed participated in PA every day during a normal week. 55% (n = 11) had a low level of self-esteem, 50% (n = 9) fell within the normal range of self-esteem, and n = 0 demonstrated a high level of self-esteem. Female participants' mean MHL score was higher than that of their male counterparts. Correlation analyses revealed an association between self-esteem and happiness (r = 0.549). Positive correlations were also revealed between MHL and happiness, MHL and self-esteem, and self-esteem and 60+ minutes of daily PA. IEGW? is classroom-based, with simple methods that are easy to implement and replicate and financially viable for both public and private schools. Its unique dataset will allow for the evaluation of a societal approach to the psychosocial well-being and PA participation levels of adolescents. This research is a work in progress, and future work is required to learn how to best support the implementation of ‘Is Everybody Going WeLL?’ as part of the school curriculum.Keywords: education, life-long learning, physical activity, psychosocial well-being
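The correlation step reported above (Pearson's r between self-esteem and happiness, r = 0.549) can be sketched as follows; the scores are synthetic placeholders, not participant data.

```python
# Hedged sketch: Pearson correlation between two questionnaire scores.
# Synthetic data sized to the n = 18 completed baseline questionnaires.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 18

self_esteem = rng.normal(15, 4, n)                    # e.g., Rosenberg-style scale
happiness = 0.5 * self_esteem + rng.normal(10, 3, n)  # assumed positive link

r, p = pearsonr(self_esteem, happiness)
print(f"r = {r:.3f}, p = {p:.3f}")  # compare with the reported r = 0.549
```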
Procedia PDF Downloads 115
104 A Longitudinal Exploration into Computer-Mediated Communication Use (CMC) and Relationship Change between 2005-2018
Authors: Laurie Dempsey
Abstract:
Relationships are considered to be beneficial for emotional wellbeing, happiness, and physical health. However, they are also complicated: individuals engage in a multitude of complex and volatile relationships during their lifetime, and the change to or ending of these dynamics can be deeply disruptive. As the internet is further integrated into everyday life and relationships are increasingly mediated, Media Studies' and Sociology's research interests intersect and converge. This study longitudinally explores how relationship change over time corresponds with the developing UK technological landscape between 2005 and 2018. Since the early 2000s, the use of computer-mediated communication (CMC) in the UK has dramatically reshaped interaction. Its use has compelled individuals to renegotiate how they consider their relationships: some argue it has allowed vast networks to be accumulated and strengthened; others contend that it has eradicated the core values and norms associated with communication, damaging relationships. This research collaborated with the UK media regulator Ofcom, utilising the longitudinal dataset from its Adult Media Lives study to explore how relationships and CMC use developed over time. This is a unique qualitative dataset covering 2005-2018, in which the same 18 participants partook in annual in-home filmed depth interviews. The raw video footage of the interviews was examined year on year to consider how the same people changed their reported behaviour and outlooks towards their relationships, and how this coincided with CMC featuring more prominently in their everyday lives. Each interview was transcribed, thematically analysed, and coded using NVivo 11 software. This study allowed for a comprehensive exploration of these individuals' changing relationships over time, as participants grew older, experienced marriages or divorces, conceived and raised children, or lost loved ones. It found that as technology developed between 2005 and 2018, everyday CMC use was increasingly normalised and incorporated into relationship maintenance. It played a crucial role in altering relationship dynamics, even factoring in the breakdown of several ties. Three key relationships were identified as being shaped by CMC use: parent-child; extended family; and friendships. Over the years there were substantial instances of relationship conflict: for parents renegotiating their dynamic with their child as they tried both to restrict and to encourage their child's technology use; for estranged family members 'forced' together in the online sphere; and for friendships compelled to publicly display their relationship on social media for fear of social exclusion. However, it was also evident that CMC acted as a crucial lifeline for these participants, providing opportunities to strengthen and maintain their bonds via previously unachievable means, both over time and distance. A longitudinal study of this length and nature utilising the same participants does not currently exist, and it thus provides crucial insight into how and why relationship dynamics alter over time. This unique and topical piece of research draws together Sociology and Media Studies, illustrating how the UK's changing technological landscape can reshape one of the most basic human compulsions.
This collaboration with Ofcom allows for insight that can be utilised in academia and policymaking alike, making this research relevant and impactful across a range of academic fields and industries.Keywords: computer mediated communication, longitudinal research, personal relationships, qualitative data
Procedia PDF Downloads 121
103 Wear Resistance in Dry and Lubricated Conditions of Hard-anodized EN AW-4006 Aluminum Alloy
Authors: C. Soffritti, A. Fortini, E. Baroni, M. Merlin, G. L. Garagnani
Abstract:
Aluminum alloys are widely used in many engineering applications due to advantages such as high electrical and thermal conductivities, low density, a high strength-to-weight ratio, and good corrosion resistance. However, their low hardness and poor tribological properties still limit their use in industrial fields requiring sliding contacts. Hard anodizing is one of the most common solutions for overcoming the insufficient friction resistance of aluminum alloys. In this work, the tribological behavior of hard-anodized EN AW-4006 aluminum alloys in dry and lubricated conditions was evaluated. Three different hard-anodizing treatments were selected: a conventional one (HA) and two innovative golden hard-anodizing treatments (named G and GP, respectively), which involve sealing the porosity of the anodic aluminum oxides (AAO) with silver ions at different temperatures. Before the wear tests, all AAO layers were characterized by scanning electron microscopy (VPSEM/EDS), X-ray diffractometry, roughness (Ra and Rz), microhardness (HV0.01), nanoindentation, and scratch tests. Wear tests were carried out according to the ASTM G99-17 standard using a ball-on-disc tribometer. The tests were performed in triplicate under a 2 Hz constant-frequency oscillatory motion, a maximum linear speed of 0.1 m/s, normal loads of 5, 10, and 15 N, and a sliding distance of 200 m. A 100Cr6 steel ball 10 mm in diameter was used as the counterpart material. All tests were conducted at room temperature, in dry and lubricated conditions. In view of recent regulations on environmental hazards, four bio-lubricants were considered after assessing their chemical composition (in terms of Unsaturation Number, UN) and viscosity: olive, peanut, sunflower, and soybean oils. The friction coefficient was provided by the equipment. The wear rate of the anodized surfaces was evaluated by measuring the cross-section area of the wear track with a non-contact 3D profilometer. Each area value, obtained as an average of four measurements of cross-section areas along the track, was used to determine the wear volume. The worn surfaces were analyzed by VPSEM/EDS. Finally, in agreement with DoE methodology, a statistical analysis was carried out to identify the factors most influencing the friction coefficients and wear rates. In all conditions, results show that the friction coefficient increased with the normal load. In the wear tests under dry sliding conditions, irrespective of the type of anodizing treatment, metal transfer between the mating materials was observed over the anodic aluminum oxides. During sliding at higher loads, the detachment of the metallic film also caused delamination of some regions of the wear track. In the wear tests under lubricated conditions, the natural oils with high percentages of oleic acid (i.e., olive and peanut oils) maintained high friction coefficients and low wear rates. Irrespective of the type of oil, small microcracks were visible over the AAO layers. Based on the statistical analysis, the type of anodizing treatment and the magnitude of the applied load were the main factors influencing the friction coefficient and wear rate values. Nevertheless, an interaction between bio-lubricants and load magnitude could occur during the tests.Keywords: hard anodizing treatment, silver ions, bio-lubricants, sliding wear, statistical analysis
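The wear-rate evaluation described above (wear volume from the mean cross-section area of the track, then an Archard-type normalization by load and sliding distance) can be sketched as follows; the cross-section areas and track radius are illustrative assumptions, not the study's measurements.

```python
# Hedged sketch: specific wear rate k = V / (F * s) from profilometer scans.
import numpy as np

cross_sections_mm2 = np.array([0.0021, 0.0024, 0.0019, 0.0022])  # 4 scans (assumed)
track_radius_mm = 5.0        # assumed ball-on-disc track radius
normal_load_n = 10.0         # one of the tested loads (5, 10, 15 N)
sliding_distance_m = 200.0   # per the test protocol

mean_area_mm2 = cross_sections_mm2.mean()
track_length_mm = 2 * np.pi * track_radius_mm
wear_volume_mm3 = mean_area_mm2 * track_length_mm

k = wear_volume_mm3 / (normal_load_n * sliding_distance_m)  # mm^3 / (N*m)
print(f"Wear volume = {wear_volume_mm3:.4f} mm^3, k = {k:.2e} mm^3/(N*m)")
```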
Procedia PDF Downloads 151
102 Celebrity Culture and Social Role of Celebrities in Türkiye during the 1990s: The Case of Türkiye Newspaper, Radio, Television (TGRT) Channel
Authors: Yelda Yenel, Orkut Acele
Abstract:
In a media-saturated world, celebrities have become ubiquitous figures, encountered both in public spaces and within the privacy of our homes, seamlessly integrating into daily life. From Alexander the Great to contemporary media personalities, the image of the celebrity has persisted throughout history, manifesting in various forms and contexts. Over time, as the relationship between society and the market evolved, so too did the roles and behaviors of celebrities. These transformations offer insights into the cultural climate, revealing shifts in habits and worldviews. In Türkiye, the emergence of private television channels brought an influx of celebrities into everyday life, making them a pervasive part of daily routines. To understand modern celebrity culture, it is essential to examine the ideological functions of media within political, economic, and social contexts. Within this framework, celebrities serve as both reflections and creators of cultural values and, at times, act as intermediaries, offering insights into the society of their era. Starting its broadcasting life in 1992 with religious films and religious conversation, the Türkiye Newspaper, Radio, Television (TGRT) channel later changed its appearance, slogan, and the celebrities it featured in response to the political atmosphere. Celebrities played a critical role in the transformation from the existing slogan 'Peace has come to the screen' to 'Watch and see what will happen'. Celebrities hold significant roles in society, and their images are produced and circulated by various actors, including media organizations and public relations teams; understanding these dynamics is crucial for analyzing their influence and impact. This study aims to explore Turkish society in the 1990s, focusing on TGRT and its visual and discursive characteristics with regard to celebrity figures such as Seda Sayan. The first section examines the historical development of celebrity culture and its transformations, guided by the conceptual framework of celebrity studies. The complex and interconnected image of the celebrity, as introduced by post-structuralist approaches, plays a fundamental role in making sense of existing relationships. This section traces the existence and functions of celebrities from antiquity to the present day. The second section explores the economic, social, and cultural contexts of 1990s Türkiye, focusing on the media landscape and the visibility that became prominent in the neoliberal era following the 1980s. This section also discusses the political factors underlying TGRT's transformation, such as the 1997 military memorandum. The third section analyzes TGRT as a case study, focusing on its significance as an Islamic television channel and the shifts in its public image, categorized into two distinct periods. The channel's programming, which aligned with Islamic teachings, and the celebrities who featured prominently during these periods became the public face of both TGRT and the broader society. In particular, the transition to a more 'secular' format during TGRT's second phase is analyzed, focusing on changes in celebrity attire and program formats. This study reveals that celebrities are used as indicators of ideology, benefiting from this instrumentalization by enhancing their own fame while reflecting the prevailing cultural hegemony in society.Keywords: celebrity culture, media, neoliberalism, TGRT
Procedia PDF Downloads 30
101 Effect of Velocity-Slip in Nanoscale Electroosmotic Flows: Molecular and Continuum Transport Perspectives
Authors: Alper T. Celebi, Ali Beskok
Abstract:
Electroosmotic (EO) slip flows in nanochannels are investigated using non-equilibrium molecular dynamics (MD) simulations, and the results are compared with the analytical solution of the Poisson-Boltzmann and Stokes (PB-S) equations with a slip contribution. The ultimate objective of this study is to show that the well-known continuum flow model can accurately predict EO velocity profiles in nanochannels using the slip lengths and apparent viscosities obtained from force-driven flow simulations performed at various liquid-wall interaction strengths. EO flows of aqueous NaCl solution in silicon nanochannels are simulated under realistic electrochemical conditions within the validity region of the Poisson-Boltzmann theory. A physical surface charge density is determined for the nanochannels based on the dissociation of silanol functional groups on the channel surfaces at known salt concentration, temperature, and local pH. First, we present density profiles and ion distributions from equilibrium MD simulations, ensuring that the desired thermodynamic state and ionic conditions are satisfied. Next, force-driven nanochannel flow simulations are performed to predict the apparent viscosity of the ionic solution between charged surfaces and the slip lengths. Parabolic velocity profiles obtained from the force-driven flow simulations are fitted to a second-order polynomial equation, and the viscosity and slip lengths are quantified by comparing the coefficients of the fitted equation with the continuum flow model. The presence of a charged surface increases the viscosity of the ionic solution while decreasing the velocity slip at the wall. Afterwards, EO flow simulations are carried out under a uniform electric field for different liquid-wall interaction strengths. Velocity profiles exhibit finite slip near the walls, followed by a conventional viscous flow profile in the electrical double layer that reaches a bulk flow region in the center of the channel. The EO flow is enhanced with increased slip at the walls, which depends on the wall-liquid interaction strength and the surface charge. The MD velocity profiles are compared with predictions from analytical solutions of the slip-modified PB-S equation, where the slip length and apparent viscosity values are obtained from the force-driven flow simulations in charged silicon nanochannels. Our MD results show good agreement with the analytical solutions at various slip conditions, verifying the validity of the PB-S equation in nanochannels as small as 3.5 nm. In addition, the continuum model normalizes the slip length with the Debye length instead of the channel height, which implies that the enhancement in EO flows is independent of the channel height. Further MD simulations performed at different channel heights also show that the flow enhancement due to slip is independent of the channel height. This is important because slip-enhanced EO flow is observable even in microchannel experiments, using a hydrophobic channel with large slip and high-conductivity solutions with a small Debye length. The present study provides an advanced understanding of EO flows in nanochannels. Correct characterization of nanoscale EO slip flow is crucial to establish the extent of the well-known continuum models, which is required for various applications spanning from ion separation to drug delivery and bio-fluidic analysis.Keywords: electroosmotic flow, molecular dynamics, slip length, velocity-slip
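The fitting step described above can be sketched as follows: a force-driven (Poiseuille-type) velocity profile u(z) is fitted with a second-order polynomial, from which the apparent viscosity (from the curvature) and the Navier slip length (wall velocity divided by wall shear rate) are extracted. The synthetic profile below stands in for MD-averaged data; the body-force and material values are illustrative assumptions.

```python
# Hedged sketch: extract apparent viscosity and slip length from a fitted
# parabolic profile u(z) = a*z^2 + b*z + c between plates at z = +-h/2.
import numpy as np

h = 3.5e-9                        # channel height (m), as in the abstract
rho_f = 1.2e16                    # body-force density (N/m^3), illustrative
mu_true, ls_true = 1.0e-3, 5e-10  # "true" viscosity (Pa*s) and slip length (m)

z = np.linspace(-h / 2, h / 2, 41)
# Poiseuille profile with Navier slip: u = (f/2mu)*(h^2/4 - z^2) + f*h*Ls/(2mu)
u = rho_f / (2 * mu_true) * (h**2 / 4 - z**2) + rho_f * h * ls_true / (2 * mu_true)

a, b, c = np.polyfit(z, u, 2)          # second-order polynomial fit
mu_fit = -rho_f / (2 * a)              # curvature gives apparent viscosity
u_wall = a * (h / 2) ** 2 + b * (h / 2) + c
shear_wall = abs(2 * a * (h / 2) + b)
ls_fit = u_wall / shear_wall           # Navier slip length at the wall

print(f"mu = {mu_fit:.2e} Pa*s, slip length = {ls_fit:.2e} m")
```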
Procedia PDF Downloads 158
100 Foucault and Governmentality: International Organizations and State Power
Authors: Sara Dragisic
Abstract:
Using the theoretical analysis of the birth of biopolitics that Foucault performed through the history of liberalism and neoliberalism, in this paper we will try to show how, precisely through problematizing the role of international institutions, the model of governance differs from previous ways of objectifying body and life. Are the state and its mechanisms still a Leviathan to fight against, or can the state even be the driver of resistance against the proponents of modern governance and biopolitical power? Do paradigmatic examples of biopolitics still appear through sovereignty and (international) law, or does this sphere show a significant degree of incompetence and powerlessness in relation not only to the economic sphere (Foucault's critique of neoliberalism) but also to the new politics of freedom? Have the struggle for freedom and human rights, as well as the war on terrorism, opened a new spectrum of biopolitical processes, manifested precisely through new international institutions and humanitarian discourse? We will try to answer these questions in the following way. On the one hand, we will show that the views of authors such as Agamben and Hardt and Negri, for whom the state and sovereignty are enemies to be defeated or overcome, fail to see how such attempts can translate into the politicization of life, as in many examples from the doctrine of liberal interventionism and humanitarianism. On the other hand, we will point out that it is precisely humanitarian discourse and the defense of the right to intervention that can be the incentive and basis for the politicization of the category of life, leading to the selective application of human rights. Zizek's example of the killing of United Nations workers and doctors in a village during the Vietnam War, who were targeted even before police or soldiers because they were seen precisely as a powerful instrument of American imperialism (as they were sincerely trying to help the population), will be the focus of this part of the analysis. We will ask whether such an interpretation amounts to a kind of liquidation of the extreme left of the political (Laclau), or whether it can at least partly explain the need to review the functioning of international organizations, ranging from those dealing with humanitarian aid (and humanitarian military interventions) to those dealing with the protection and security of the population, primarily from growing terrorism. Based on the above examples, we will also explain how the discourse of terrorism itself plays a dual role: it can appear as a tool of liberal biopolitics, although, more superficially, it mostly appears as an enemy that wants to destroy the liberal system and its values. This brings us to the basic problem this paper will tackle: do the mechanisms of the institutional struggle for human rights and freedoms, often seen as opposed to the security mechanisms of the state, serve the governance of citizens in such a way that the latter themselves participate in producing biopolitical governmental practices? Is freedom today "nothing but the correlative development of apparatuses of security" (Foucault)? Or can we continue this line of Foucault's argumentation and ask the important question of what precisely reflects today's change in the rationality of governance, in which society is transformed from a passive object into a subject of its own production?
Finally, in order to understand the techniques of biopolitical governance in modern civil society, it is necessary to pay attention to the status of international organizations, which appear to have become a significant site for the implementation of global governance. In this sense, the power of sovereignty may turn out to be an insufficiently strong power of security policy, which can go hand in hand with freedom policies through neoliberal governmental techniques.Keywords: neoliberalism, Foucault, sovereignty, biopolitics, international organizations, NGOs, Agamben, Hardt&Negri, Zizek, security, state power
Procedia PDF Downloads 206
99 Assessment and Forecasting of the Impact of Negative Environmental Factors on Public Health
Authors: Nurlan Smagulov, Aiman Konkabayeva, Akerke Sadykova, Arailym Serik
Abstract:
Introduction. Adverse environmental factors do not immediately lead to pathological changes in the body. They can promote the growth of pre-pathology, characterized by shifts in the physiological, biochemical, immunological, and other indicators of the body's state. These disorders are unstable, reversible, and indicative of the body's reactions, offering an opportunity to judge objectively the internal structure of adaptive reactions at the level of individual organs and systems. In order to obtain a stable response of the body to the chronic effects of unfavorable environmental factors of low intensity (compared to factors of the production environment), a period called the «lag time» is needed. Results obtained without considering this factor distort reality and, for the most part, cannot reliably support the main conclusions of any such work. A technique is needed that reduces methodological errors and combines mathematical logic, statistical methods, and a medical point of view, which ultimately affects the results obtained and avoids false correlations. Objective: To develop a methodology for assessing and predicting the impact of environmental factors on population health, taking the «lag time» into account. Methods. Research objects: environmental indicators and population morbidity indicators. The database on the state of the environment was compiled from the monthly newsletters of Kazhydromet; data on population morbidity were obtained from regional statistical yearbooks. When processing the statistical data, a time interval (lag) was determined for each «argument-function» pair, i.e., the interval after which the effect of the harmful factor (argument) fully manifests itself in the indicators of the organism's state (function). The lag value was determined from the cross-correlation functions of the arguments (environmental indicators) with the functions (morbidity). Correlation coefficients (r) and their reliability (t), Fisher's criterion (F), and the influence share (R²) of the main factor (argument) per indicator (function), expressed as a percentage, were calculated. Results. The ecological situation of an industrially developed region has an impact on health indicators, but with some nuances. Fundamentally opposite results were obtained when the data were processed with the «lag time» taken into account: namely, an expressed correlation was revealed after the two databases (ecology-morbidity) were shifted. For example, the lag period was 4 years between dust concentration and general morbidity, and 3 years for childhood morbidity; these lags yielded the maximum correlation coefficients and the largest percentage share of the influencing factor. Similar results were observed for the concentrations of soot, dioxide, etc. Comprehensive statistical processing using multiple correlation-regression and variance analysis confirms the correctness of the above statement. This method provided an integrated approach to predicting the degree of pollution of the main environmental components and to identifying the most dangerous combinations of concentrations of the leading negative environmental factors. Conclusion. The method of assessing the «environment-public health» system that considers the «lag time» differs qualitatively from the traditional method, which does not. The results differ significantly and are more amenable to a logical explanation of the dependencies obtained.
The method allows the quantitative and qualitative dependencies within the «environment-public health» system to be presented in a different way.Keywords: ecology, morbidity, population, lag time
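The «lag time» determination described above can be sketched as a lagged cross-correlation: the morbidity series is shifted against the exposure series, and the lag with the strongest correlation is taken as the lag time. The annual series below are synthetic placeholders, not the Kazhydromet or yearbook data.

```python
# Hedged sketch: find the lag maximizing the exposure-morbidity correlation.
import numpy as np

rng = np.random.default_rng(2)
years = 20
dust = rng.normal(100, 15, years)   # exposure indicator (synthetic)
true_lag = 4                        # built into the toy data for illustration
morbidity = np.roll(dust, true_lag) * 0.8 + rng.normal(0, 5, years)

def lagged_r(x, y, lag):
    """Correlation of x with y shifted 'lag' years later."""
    return np.corrcoef(x[:-lag or None], y[lag:])[0, 1]

lags = range(0, 7)
rs = [lagged_r(dust, morbidity, k) for k in lags]
best = max(lags, key=lambda k: abs(rs[k]))
print({k: round(rs[k], 2) for k in lags})
print(f"Estimated lag time: {best} years")  # should recover ~4 years
```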
Procedia PDF Downloads 81
98 Degradation of Diclofenac in Water Using FeO-Based Catalytic Ozonation in a Modified Flotation Cell
Authors: Miguel A. Figueroa, José A. Lara-Ramos, Miguel A. Mueses
Abstract:
Pharmaceutical residues are a class of emerging contaminants of anthropogenic origin that are present in a myriad of waters with which human beings interact daily, and they are starting to affect the ecosystem directly. Conventional wastewater treatment systems are not capable of degrading these pharmaceutical effluents, because their designs cannot handle the intermediate products and biological effects occurring during treatment. That is why it is necessary to hybridize conventional wastewater systems with non-conventional processes. In the specific case of an ozonation process, its efficiency depends strongly on a perfect dispersion of ozone, long gas-liquid interaction times, and the size of the ozone bubbles formed throughout the reaction system. In order to improve these parameters, the use of a modified flotation cell has recently been proposed as a reactive system; flotation cells are used at an industrial level to facilitate the suspension of particles and to spread gas bubbles through the reactor volume at a high rate. The objective of the present work is the development of a mathematical model that can closely predict the kinetic rates of the reactions taking place in the flotation cell at an experimental scale, by identifying proper reaction mechanisms that take into account the modified chemical and hydrodynamic factors in the FeO-catalyzed ozonation of aqueous diclofenac solutions in a flotation cell. The methodology comprises three steps: an experimental phase, in which a modified flotation cell reactor is used to analyze the effects of ozone concentration and catalyst loading on the degradation of aqueous diclofenac solutions, with performance evaluated through an index of utilized ozone that relates the amount of ozone supplied to the system per milligram of degraded pollutant; a theoretical phase, in which the reaction mechanisms taking place during the experiments are identified and proposed, detailing the multiple direct and indirect reactions the system goes through; and, finally, a kinetic model that mathematically represents the reaction mechanisms, with adjustable parameters that can be fitted to the experimental results to give the model a proper physical meaning. The expected result is a robust reaction rate law that can simulate the improved mineralization of diclofenac in water using the modified flotation cell reactor. By means of this methodology, the following results were obtained: a robust reaction pathway mechanism showcasing the intermediates, free radicals, and products of the reaction; optimal values of the reaction rate constants, with simulated Hatta numbers lower than 3 for the system modeled; degradation percentages of 100%; and a TOC (total organic carbon) removal of 69.9%, requiring only an optimal FeO catalyst loading of 0.3 g/L. These results showed that a flotation cell can be used as a reactor in ozonation, catalytic ozonation, and photocatalytic ozonation processes, since it produces high reaction rate constants and reduces mass transfer limitations (keeping Ha below 3) by producing microbubbles and maintaining a good catalyst distribution.Keywords: advanced oxidation technologies, iron oxide, emergent contaminants, AOTS intensification
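Two of the quantities used above can be sketched in a few lines: a pseudo-first-order fit of the diclofenac decay, ln(C0/C) = k·t, and the ozone utilization index (ozone supplied per milligram of pollutant degraded). The concentration series and ozone dose rate are illustrative assumptions, not the study's measurements.

```python
# Hedged sketch: rate-constant fit and ozone utilization index on toy data.
import numpy as np

t_min = np.array([0, 5, 10, 15, 20, 30])
c_mg_l = np.array([30.0, 18.2, 11.1, 6.7, 4.1, 1.5])  # diclofenac over time

# Pseudo-first-order rate constant from a linear fit of ln(C0/C) vs. t
k, _ = np.polyfit(t_min, np.log(c_mg_l[0] / c_mg_l), 1)
print(f"k = {k:.3f} 1/min")

# Ozone utilization index: O3 supplied per pollutant removed (per liter basis)
o3_dose_mg_l_min = 2.0                      # assumed constant O3 supply rate
o3_supplied = o3_dose_mg_l_min * t_min[-1]  # mg/L over the whole run
degraded = c_mg_l[0] - c_mg_l[-1]           # mg/L of diclofenac removed
print(f"Utilization index = {o3_supplied / degraded:.2f} mg O3 / mg pollutant")
```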
Procedia PDF Downloads 112