Search results for: content based image retrieval (CBIR)

18399 Elasto-Plastic Analysis of Structures Using Adaptive Gaussian Springs Based Applied Element Method

Authors: Mai Abdul Latif, Yuntian Feng

Abstract:

The Applied Element Method (AEM) was developed to aid in the analysis of structural collapse. Currently available methods cannot deal with structural collapse accurately; AEM, however, can simulate the behaviour of a structure from an initial unloaded state until collapse. The elements in AEM are connected by sets of normal and shear springs along the element edges, which represent the stresses and strains of the element in that region. The elements themselves are rigid, and the material properties are introduced through the spring stiffness. Nonlinear dynamic analysis of progressive collapse has been widely modelled with the finite element method; however, that approach runs into difficulties in the presence of excessively deformed elements with cracking or crushing, carries a high computational cost, and makes the choice of appropriate material models difficult. The Applied Element Method is developed and coded here to significantly improve accuracy while reducing computational cost. The scheme works for both linear elastic and nonlinear cases, including elasto-plastic materials. This paper focuses on elastic and elasto-plastic material behaviour, testing the number of springs required for an accurate analysis. A steel cantilever beam is used as the structural element. The first modification of the method distributes the springs according to Gaussian quadrature. Springs are usually distributed equally along the face of the element, but it was found that with Gaussian springs only 2 springs were required for perfectly elastic cases, whereas equally spaced springs required at least 5. The method runs on a Newton-Raphson iteration scheme, and quadratic convergence was obtained. The second modification adapts the number of springs to the elasticity of the material. After the first Newton-Raphson iteration, Von Mises stress conditions are used to calculate the stresses in the springs, and each spring is classified as elastic or plastic. Transition springs, located exactly between the elastic and plastic regions, are then interpolated to strictly identify the elastic and plastic regions of the cross-section. Since a rectangular cross-section was analyzed, there were two plastic regions (top and bottom) and one elastic region (middle). The results of the present study show that elasto-plastic cases require only 2 springs for the elastic region and 2 springs for each plastic region, reducing the minimum number of springs in elasto-plastic cases to only 6 and thereby improving the computational cost. All the work is done in MATLAB, and the results will be compared to finite element models of structural elements in ANSYS.
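
For illustration, here is a minimal sketch of the Gaussian spring placement described above, assuming NumPy; the stiffness expressions (kn = E·d·t/a, ks = G·d·t/a) are the schematic AEM interface-spring forms with illustrative dimensions, not the authors' MATLAB implementation.

```python
import numpy as np

def gauss_spring_positions(n_springs, face_length):
    """Map Gauss-Legendre nodes from [-1, 1] onto a face of given length.

    Returns spring positions (measured from the face centre) and the
    tributary lengths used to scale each spring's represented area.
    """
    nodes, weights = np.polynomial.legendre.leggauss(n_springs)
    positions = nodes * face_length / 2.0      # scale nodes to [-L/2, L/2]
    tributary = weights * face_length / 2.0    # tributary length per spring
    return positions, tributary

def spring_stiffness(E, G, tributary, thickness, element_width):
    """Normal and shear spring stiffness for one rigid-element interface."""
    area = tributary * thickness               # face area carried by the spring
    kn = E * area / element_width              # normal spring stiffness
    ks = G * area / element_width              # shear spring stiffness
    return kn, ks

# Two Gaussian springs on a 20 mm face, nominal steel properties
pos, trib = gauss_spring_positions(2, 0.020)
kn, ks = spring_stiffness(E=210e9, G=81e9, tributary=trib,
                          thickness=0.004, element_width=0.020)
print(pos, kn, ks)
```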

Keywords: applied element method, elasto-plastic, Gaussian springs, nonlinear

Procedia PDF Downloads 208
18398 Role of Civil Society Institutions in Promoting Peace and Pluralism in the Rural, Mountainous Region of Pakistan

Authors: Mir Afzal

Abstract:

Introduction: Pakistan is a country with an ever-increasing population of largely diverse ethnic, cultural, religious, and sectarian divisions. Whereas diversity is seen as a strength in many societies, in Pakistan it has become a source of conflict, and more a weakness than a strength, due to a lack of understanding and divisions based on ethnic, cultural, political, religious, and sectarian branding. However, amid conflicts and militancy across the country, the rural, mountainous communities in the Northern Areas of Pakistan enjoy not only peace and harmony but also a continuous process of social and economic transformation supported by strong civil society institutions. These community-based institutions have organized the rural, mountainous people of diverse ethnic and religious backgrounds into village organizations, women's organizations, and local support organizations engaged in self-help development and peace building in the region. The Study and Its Methodology: A qualitative study was conducted in one district of Northern Pakistan to explore the contributions of civil society institutions (CSIs) and community-based organizations to uplifting the educational and socio-economic conditions of the people, with the ultimate aim of developing a thriving, peaceful, and pluralistic society in this mountainous region. The study employed an eclectic set of tools, including interviews, focused group discussions, observations of CSIs' interventions, and analysis of documents, to generate rich data on the overall role and contributions of CSIs in promoting peace and pluralism in the region. Significance of the Study: Common experience and empirical studies reveal that such interventions by CSIs have not only contributed to the socio-economic, educational, health, and cultural development of these regions but have genuinely transformed the rural, mountainous people into organized and forward-looking communities. How such interventions have contributed to promoting pluralism and appreciation for diversity in these regions, however, had remained an unexplored but significant area. Therefore, this qualitative research study, funded by the Higher Education Commission of Pakistan, was carried out by the Aga Khan University Institute for Educational Development to explore the role and contributions of CSIs in promoting peace, pluralism, and appreciation for diversity in one district of Northern Pakistan, which is home to people of different ethnic, religious, cultural, and social backgrounds. Findings and Conclusions: The study produced a comprehensive list of findings and conclusions covering various aspects of CSIs and their contributions to the transformation and peaceful co-existence of rural communities in the region. This paper, however, discusses only four major contributions of CSIs: enhancing economic capacity, community mobilization and organization, increasing access to and quality of education, and building partnerships. It also discusses the factors influencing the role of CSIs, along with the issues, implications, and recommendations for CSIs, policy makers, donors and development agencies, and researchers. The paper concludes that by strengthening networks of CSIs and community-based organizations, Pakistan will not only improve its socio-economic attainments but will also be able to address the critical challenges of terrorism, sectarianism, and other divisions and conflicts in its various regions.

Keywords: civil society, Pakistan, peace, rural

Procedia PDF Downloads 494
18397 The Analysis of Personalized Low-Dose Computed Tomography Protocol Based on Cumulative Effective Radiation Dose and Cumulative Organ Dose for Patients with Breast Cancer with Regular Chest Computed Tomography Follow-Up

Authors: Okhee Woo

Abstract:

Purpose: The aim of this study is to evaluate the 2-year cumulative effective radiation dose and cumulative organ dose from regular follow-up computed tomography (CT) scans in patients with breast cancer, and to establish a personalized low-dose CT protocol. Methods and Materials: A retrospective study was performed on patients with breast cancer who were diagnosed and managed consistently on the basis of the routine breast cancer follow-up protocol between January 2012 and June 2016. Based on ICRP (International Commission on Radiological Protection) Publication 103, the cumulative effective radiation dose of each patient over the 2-year follow-up was analyzed using commercial radiation management software (Radimetrics, Bayer HealthCare). The personalized doses to each organ were analyzed in detail with the software's Monte Carlo simulation. Results: A total of 3822 CT scans in 490 patients was evaluated (mean age: 52.32±10.69 years). The mean number of scans per patient was 7.8±4.54. Each patient was exposed to 95.54±63.24 mSv of radiation over 2 years. The cumulative CT radiation dose was significantly higher in patients with lymph node metastasis (p = 0.00). HER-2-positive patients were exposed to more radiation than estrogen or progesterone receptor-positive patients (p = 0.00). There was no difference in cumulative effective radiation dose between age groups. Conclusion: Knowing how much radiation a patient has been exposed to is the starting point for managing radiation exposure in patients with long-term CT follow-up. A precise and personalized protocol, as well as iterative reconstruction, may reduce the hazard of unnecessary radiation exposure.
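
As a sketch of the dose-aggregation step, assuming a per-scan export from a dose-management system such as the one used in the study, the cumulative effective dose per patient can be tallied with pandas; the column names and values below are hypothetical:

```python
import pandas as pd

# Hypothetical per-scan export: one row per CT scan, effective dose in mSv
scans = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "scan_date": pd.to_datetime(
        ["2014-01-10", "2014-07-02", "2015-01-15", "2014-03-01", "2015-09-20"]),
    "effective_dose_mSv": [12.4, 9.8, 11.1, 14.0, 10.2],
})

# Cumulative effective dose per patient over a 2-year follow-up window
window = scans[scans["scan_date"].between("2014-01-01", "2015-12-31")]
summary = window.groupby("patient_id")["effective_dose_mSv"].agg(
    scans="count", cumulative_mSv="sum")
print(summary)
```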

Keywords: computed tomography, breast cancer, effective radiation dose, cumulative organ dose

Procedia PDF Downloads 170
18396 Conservation Agriculture under Mediterranean Climate: Effects on Below- and Above-Ground Processes during Wheat Cultivation

Authors: Vasiliki Kolake, Christos Kavalaris, Sofia Megoudi, Maria Maxouri, Panagiotis A. Karas, Aris Kyparissis, Efi Levizou

Abstract:

Conservation agriculture (CA) is a production-system approach that can tackle the challenges of climate change, mainly by facilitating carbon storage in the soil and increasing crop resilience. This is extremely important for the vulnerable Mediterranean agroecosystems, which already face adverse environmental conditions. The agronomic practices used in CA, i.e., permanent soil cover and no-tillage, result in reduced soil erosion, increased soil organic matter, preservation of water, and long-term improvement of soil quality and fertility. The functional characteristics and processes of the soil are thus considerably affected by the implementation of CA. The aim of the present work was to assess the effects of CA on soil nitrification potential and mycorrhizal colonization in relation to above-ground production in a wheat field. Two adjacent but independent field sites of 1.5 ha each were used (Thessaly plain, Central Greece), comprising the no-till and conventional tillage treatments. The no-till site was covered by residues of the previous crop (cotton). Potential nitrification and the nitrate and ammonium content of the soil were measured at two soil depths (3 and 15 cm) at 20-day intervals throughout the growth period. The leaf area index (LAI) was monitored over the same time-course. Mycorrhizal colonization was measured at the beginning and end of the experiment, and total yield and plant biomass were recorded at the final harvest. The results indicate that wheat yield was considerably favored by CA practices, exhibiting a 42% increase compared to the conventional tillage treatment. The superior performance of the CA crop was also reflected in the above-ground plant biomass, where a 26% increase was recorded. LAI, which is considered a reliable growth index, did not show statistically significant differences between treatments throughout the growth period. On the contrary, significant differences were recorded in endomycorrhizal colonization one day before the final harvest, with CA plants exhibiting 20% colonization while the conventional tillage plants hardly reached 1%. The ongoing analyses of potential nitrification, as well as nitrate and ammonium determination, will shed light on the effects of CA on key soil processes. These results will complete the assessment of CA's impact on below- and above-ground processes during wheat cultivation under the Mediterranean climate.

Keywords: conservation agriculture, LAI, mycorrhizal colonization, potential nitrification, wheat, yield

Procedia PDF Downloads 105
18395 Defining the Limits of No-Load Test Parameters at Over-Excitation to Ensure No Over-Fluxing of the Core, Based on a Case Study: A Perspective from Utilities

Authors: Pranjal Johri, Misbah Ul-Islam

Abstract:

Power transformers are among the most critical and failure-prone entities in an electrical power system. It is established practice that each design of power transformer undergoes numerous type tests for design validation, and routine tests are performed on every power transformer before dispatch from the manufacturer's works. Different countries follow different standards for testing transformers; the most common and widely followed standard for power transformers is the IEC 60076 series. Though these standards impose strict testing requirements on power transformers, a few aspects of transformer characteristics and guaranteed parameters can only be ensured by additional tests. Based on observations made during the routine testing of a transformer and on analysis of data from a large fleet of transformers, three propositions are discussed and put forward for inclusion in test schedules and standards. The observations in the routine test raised questions about the design flux density of the transformer. In order to ensure that the flux density in any part of the core and yoke does not exceed 1.9 tesla even at 1.1 pu, the following propositions need to be followed during testing:
- From the data studied, the no-load current (NLC) at 1.1 pu is generally approximately 3 times the no-load current at 1.0 pu voltage.
- The power factor measured at 1.1 pu excitation must be comparable to the values calculated from the cold-rolled grain-oriented steel material curves, including the building factor.
- A limit of 3% should be applied to the difference between Vavg and Vrms during no-load testing at voltages above rated.
- An extended over-excitation test should be performed in case the above propositions are found to be violated during testing.
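
A minimal sketch of how these acceptance limits could be screened in software, assuming Python; the function, its inputs, and the example readings are illustrative, not part of any standard:

```python
def check_no_load_limits(nlc_1pu, nlc_1p1pu, v_avg, v_rms,
                         ratio_limit=3.0, voltage_diff_limit=0.03):
    """Screen no-load test readings against the proposed acceptance limits.

    NLC at 1.1 pu should stay around 3x the NLC at 1.0 pu, and the relative
    difference between Vavg and Vrms should stay within 3 %.
    """
    ratio = nlc_1p1pu / nlc_1pu
    v_diff = abs(v_avg - v_rms) / v_rms
    return {
        "nlc_ratio": ratio,
        "nlc_ratio_ok": ratio <= ratio_limit,
        "voltage_diff": v_diff,
        "voltage_diff_ok": v_diff <= voltage_diff_limit,
    }

# Illustrative readings: currents in A, voltages in V
print(check_no_load_limits(nlc_1pu=0.35, nlc_1p1pu=1.02,
                           v_avg=241.5, v_rms=239.0))
```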

Keywords: power transformers, no load current, DGA, power factor

Procedia PDF Downloads 74
18394 Biosignal Recognition for Personal Identification

Authors: Hadri Hussain, M. Nasir Ibrahim, Chee-Ming Ting, Mariani Idroas, Fuad Numan, Alias Mohd Noor

Abstract:

A biometric security system has become an important application for client identification and verification. A conventional biometric system is normally based on a unimodal biometric that depends on either behavioural or physiological information for authentication purposes. Behavioural biometrics depend on human signals such as speech, while physiological biosignals include the electrocardiogram (ECG) and the phonocardiogram or heart sound (HS). The speech signal is commonly used in biometric recognition systems, while the ECG and the HS have mostly been used to identify a person's diseases in relation to their cluster. However, a conventional biometric system is liable to spoofing attacks that affect its performance. Therefore, a multimodal biometric security system was developed, based on the biometric signals of ECG, HS, and speech. The biosignal data involved in the biometric system are initially segmented, and the Mel Frequency Cepstral Coefficients (MFCC) method is exploited to extract features from each segment. A Hidden Markov Model (HMM) is used to model each client and to classify the unknown input with respect to the model. The recognition system involves training and testing sessions, known as client identification (CID). In this project, twenty clients were tested with the developed system. For twenty clients, the best overall performance at 44 kHz was 93.92% for ECG, while the worst ECG performance was 88.47%. When the number of clients was increased, the best overall performance at 44 kHz was 90.00% for HS, and the worst, for ECG, fell to 79.91%. It can be concluded that the choice of biometric modality has a substantial effect on the performance of the system and that, with more data, even at a higher sampling frequency, performance still decreased slightly, as predicted.
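
A minimal sketch of the segment-level MFCC-plus-HMM pipeline described above, assuming librosa and hmmlearn; the segment length, number of HMM states, and the synthetic enrolment signal are illustrative stand-ins for real ECG/HS/speech data:

```python
import numpy as np
import librosa
from hmmlearn import hmm

def train_client_model(signal, sr, n_states=5, n_mfcc=13):
    """Train one client's HMM on MFCC features from an enrolment biosignal."""
    feats = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc).T  # (frames, coeffs)
    model = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=100)
    model.fit(feats)
    return model

def identify(signal, sr, client_models, n_mfcc=13):
    """Score an unknown segment against every client model; best log-likelihood wins."""
    feats = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc).T
    scores = {cid: m.score(feats) for cid, m in client_models.items()}
    return max(scores, key=scores.get)

# Synthetic stand-in for a 3-second enrolment recording at 44 kHz
sr = 44100
enrol = np.random.randn(sr * 3).astype(np.float32)
models = {"client_01": train_client_model(enrol, sr)}
print(identify(enrol, sr, models))
```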

Keywords: electrocardiogram, phonocardiogram, hidden Markov model, mel frequency cepstral coefficients, client identification

Procedia PDF Downloads 262
18393 Modeling Soil Erosion and Sediment Yield in Geba Catchment, Ethiopia

Authors: Gebremedhin Kiros, Amba Shetty, Lakshman Nandagiri

Abstract:

Soil erosion is a major threat to the sustainability of land and water resources in the catchment, and there is a need to identify critical areas of erosion so that suitable conservation measures may be adopted. The present study was taken up to understand the temporal and spatial distribution of soil erosion and daily sediment yield in the Geba catchment (5137 km2) located in the Northern Highlands of Ethiopia. The Soil and Water Assessment Tool (SWAT) was applied to the Geba catchment using data on rainfall, climate, soils, topography, and land use/land cover (LU/LC) for the historical period 2000-2013. The LU/LC distribution in the catchment was characterized using LANDSAT satellite imagery and the GIS-based ArcSWAT version of the model. The model was calibrated and validated using sediment concentration measurements made at the catchment outlet. The catchment was divided into 13 sub-basins which, based on estimated soil erosion, were prioritized by susceptibility to erosion. Model results indicated that the estimated average sediment yield of the catchment was 12.23 tons/ha/yr. The generated soil loss map indicated that a large portion of the catchment has high erosion rates, resulting in a significantly large sediment yield at the outlet. Steep and unstable terrain, the occurrence of highly erodible soils, and low vegetation cover appeared to favor high soil erosion. The results obtained from this study should prove useful for adopting targeted soil and water conservation measures and promoting sustainable management of natural resources in the Geba and similar catchments in the region.
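
For reference, SWAT estimates sediment yield at the hydrologic response unit (HRU) level with the Modified Universal Soil Loss Equation (MUSLE) cited in the keywords; its standard form, per the SWAT theoretical documentation, is:

```latex
sed = 11.8 \left( Q_{surf} \cdot q_{peak} \cdot area_{hru} \right)^{0.56}
      \cdot K_{USLE} \cdot C_{USLE} \cdot P_{USLE} \cdot LS_{USLE} \cdot CFRG
```

where sed is the sediment yield (metric tons), Q_surf the surface runoff volume (mm), q_peak the peak runoff rate (m3/s), area_hru the HRU area (ha), and the remaining terms the USLE soil erodibility, cover, support practice, and topographic factors plus the coarse fragment factor.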

Keywords: Ethiopia, Geba catchment, MUSLE, sediment yield, SWAT Model

Procedia PDF Downloads 297
18392 Coming Closer to Communities of Practice through Situated Learning: The Case Study of a Polish-English, English-Polish Undergraduate BA-Level Language for Specific Purposes (LSP) Translation Class

Authors: Marta Lisowska

Abstract:

The growing trend of market specialization imposes upon translators the need for proficiency in the working knowledge of specialist discourse. The notion of specialization ranges from a broad general category to a highly specialized narrow field. Specialized discourse is used as a channel of communication built upon distinctive features typical of communities of practice, whose co-existence is codified and hermetically locked against outsiders. Consequently, any translator deprived of professional discourse competence and social skills is incapable of providing a competent translation product from source language into target language. In this paper, we report on research that explores pedagogical practices aiming to bridge the dichotomy between professionals and specialist translators, while accounting for the reality of the world of professional communities entered by undergraduates on two levels: the text-based generic level and the social one. Drawing on a functional social-constructivist approach, seen here as situated learning, this paper reports on the case of an English-Polish, Polish-English undergraduate BA-level LSP translation class in law, run as a combination of a simulated classroom-based approach and a reality-based (apprenticeship) approach. This blended method serves the purpose of introducing young trainees to the professional world. The research provides new insights into how LSP translation undergraduates become legitimized through discursive and social participation and engagement. The undergraduates, situated peripherally at the outset, experience their own transformation towards becoming members of these professional groups. Through subjective evaluation, the trainees take a stance on this dual-mode class and the development of their skills. Comparing and contrasting their own work done under the two models of translation teaching, authentic and near-authentic, the undergraduates answered research questions devised in a questionnaire survey. The responses take us closer to how students feel about the development of their LSP translation competence. The major findings show how the trainees perceive the benefits and hardships of their functional translation class. In terms of skills, they cited communication as the most enhanced one; they highly valued being 'exposed' to a variety of texts (cf. multi-literalism), team work, learning how to schedule work, the boost to their IT skills, and learning how to work individually. Another finding indicates that students struggled most with specialized language and with co-working with other students. This short-term research captures the moment when undergraduate LSP translation trainees entered the path of transformation, i.e., gained consciousness of 'how it is' to be a participant-translator in real-life communities of practice, acquiring a pragmatic command of the social and linguistic skills understood here as discursive competence (text > genre > discourse > professional practice). The undergraduates need to be aware of the work they have to do and the challenges they will face before arriving at the expert level of professional translation competence.

Keywords: communities of practice in LSP translation teaching, learning LSP translation as situated experience, peripheral participation, professional discourse for LSP translation teaching, professional translation competence

Procedia PDF Downloads 82
18391 Expanding Access and Deepening Engagement: Building an Open Source Digital Platform for Restoration-Based STEM Education in the Largest Public-School System in the United States

Authors: Lauren B. Birney

Abstract:

This project focuses on the expansion of the existing "Curriculum and Community Enterprise for the Restoration of New York Harbor in New York City Public Schools" (NSF EHR DRL 1440869, NSF EHR DRL 1839656, and NSF EHR DRL 1759006), recognized locally as the "Curriculum and Community Enterprise for Restoration Science," or CCERS. CCERS is a comprehensive model of ecological restoration-based STEM education for urban public-school students. Following an accelerated rollout, CCERS is now being implemented in 120+ Title 1-funded NYC Department of Education middle schools, led by two cohorts of 250 teachers and serving more than 11,000 students in total. Initial results and baseline data suggest that the CCERS model, with the Billion Oyster Project (BOP) as its local restoration-ecology-based STEM curriculum, is having profound impacts on students, teachers, school leaders, and the broader community of CCERS participants and stakeholders. Students and teachers report being receptive to the CCERS model and deeply engaged in the initial phase of curriculum development, citizen-science data collection, and student-centered, problem-based STEM learning. The BOP CCERS Digital Platform will serve as the central technology hub for all research, data, data analysis, resources, materials, and student data, promoting global interactions between communities. The research conducted included qualitative and quantitative data analysis. We continue to work internally on edits and changes to accommodate a dynamic society. The STEM Collaboratory NYC® at Pace University New York City has acted as the prime institution for the BOP CCERS project since the project's inception in 2014. The project continues to strive to provide opportunities in STEM for underrepresented and underserved populations in New York City. The replicable model serves as an opportunity for other entities to create this type of collaboration within their own communities and to bring a community together to address a notable issue. Providing opportunities for young students to engage in community initiatives creates a more cohesive set of stakeholders, enables young people to network, and provides additional resources for students in need of support and structure. The project has planted more than 47 million oysters across 12 acres and 15 reef sites, with the help of more than 8,000 students and 10,000 volunteers. Additional enhancements and features of the BOP CCERS Digital Platform will continue over the next three years through funding provided by the National Science Foundation (NSF DRL EHR 1759006/1839656, Principal Investigator Dr. Lauren Birney, Professor, Pace University). Early results indicate that the new version of the Platform is gaining traction both nationally and internationally among community stakeholders and constituents. The project continues to focus on new collaborative partners to support underrepresented students in STEM education, and the advanced Digital Platform will allow connection with other countries and networks on a larger global scale.

Keywords: STEM education, environmental restoration science, technology, citizen science

Procedia PDF Downloads 68
18390 Finite Element Analysis of the Drive Shaft and Jacking Frame Interaction in Micro-Tunneling Method: Case Study of Tehran Sewerage

Authors: B. Mohammadi, A. Riazati, P. Soltan Sanjari, S. Azimbeik

Abstract:

The ever-increasing development of civic demands, on the one hand, and urban constraints on establishing new infrastructure, on the other, force engineering committees to apply non-conflicting methods in order to optimize results. One of these optimized procedures for establishing main sewerage networks is the pipe-jacking and micro-tunneling method. The raw information and research are based on the slurry micro-tunneling project of the Tehran main sewerage network executed by the KAYSON Co. The 4985-meter route of the project, located near Azadi Square and the most vital arteries of Tehran, had reached 45% physical progress at the time of writing. The boring machine is made by Herrenknecht, and the diameters of the concrete-polymer pipes in use are 1600 and 1800 millimeters. Placing and excavating several shafts in the ground and boring the tunnel directly between the axes of successive shafts is one of the requirements of micro-tunneling. Locating the shafts must take account of hydraulic circumstances, civic conditions, site geography, traffic considerations, etc. The profile length has to be converted into many short segment lines so that the angles generated between segments fall at the manhole centers. Each segment line between two consecutive drive and reception shafts determines the jack location, driving angle, and path alignment; the diversity of the resulting angles causes a variety of jack positions in the shaft. The fixing conditions of the jacking frame and the direction of its associated dynamic load produce various patterns of stress and strain distribution, creating fatigue in the shaft wall and in the soil surrounding the shaft. This pattern diversification deforms the shaft wall and causes unbalanced subsidence and alteration of the pipe-jacking stress contour. This research is based on the experiments of Tehran's west sewerage plan and on numerical analysis of the interaction of the soil around the shaft, the shaft walls, and the jacking frame direction; finally, the suitable or unsuitable location of the pipe-jacking shaft will be determined.

Keywords: underground structure, micro-tunneling, fatigue analysis, dynamic-soil–structure interaction, underground water, finite element analysis

Procedia PDF Downloads 302
18389 Ecosystem Model for Environmental Applications

Authors: Cristina Schreiner, Romeo Ciobanu, Marius Pislaru

Abstract:

This paper aims to build a system based on fuzzy models that can be implemented in the assessment of ecological systems, in order to determine appropriate methods of action for reducing adverse effects on the environment and, implicitly, on the population. The proposed model provides a new perspective for environmental assessment and can be used as a practical instrument for decision-making.
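
A minimal sketch of the kind of fuzzy inference the paper proposes, assuming the scikit-fuzzy package; the indicator names, membership functions, and rules are illustrative placeholders, not the authors' model:

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Hypothetical indicators on 0-10 scales
water = ctrl.Antecedent(np.arange(0, 11, 1), "water_quality")
air = ctrl.Antecedent(np.arange(0, 11, 1), "air_quality")
risk = ctrl.Consequent(np.arange(0, 11, 1), "ecological_risk")

water.automf(3)   # generates 'poor', 'average', 'good' memberships
air.automf(3)
risk["low"] = fuzz.trimf(risk.universe, [0, 0, 5])
risk["medium"] = fuzz.trimf(risk.universe, [2, 5, 8])
risk["high"] = fuzz.trimf(risk.universe, [5, 10, 10])

rules = [
    ctrl.Rule(water["poor"] | air["poor"], risk["high"]),
    ctrl.Rule(water["average"] & air["average"], risk["medium"]),
    ctrl.Rule(water["good"] & air["good"], risk["low"]),
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["water_quality"] = 3.5
sim.input["air_quality"] = 6.0
sim.compute()
print(sim.output["ecological_risk"])  # crisp risk score for decision-making
```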

Keywords: ecosystem model, environmental security, fuzzy logic, sustainability of habitable regions

Procedia PDF Downloads 399
18388 Disruptions to Medical Education during COVID-19: Perceptions and Recommendations from Students at the University of the West Indies, Jamaica

Authors: Charléa M. Smith, Raiden L. Schodowski, Arletty Pinel

Abstract:

Due to the COVID-19 pandemic, the Faculty of Medical Sciences of The University of the West Indies (UWI) Mona in Kingston, Jamaica, had to migrate rapidly to digital and blended learning. Students in the preclinical stage of the program transitioned to full-time online learning, while students in the clinical stage experienced decreased daily patient contact and a blend of online lectures and virtual clinical practice. These sudden changes were coupled with the institutional pressure of introducing a novel approach to education with little time for preparation, as well as the additional strain endured by faculty serving as frontline workers. Between July 20 and August 23, 2021, this study surveyed preclinical and clinical students to capture their experiences of these changes and their recommendations for the future use of digital modalities to enhance medical education. It was conducted with a fellow student of the 2021 cohort of the MultiPod mentoring program. A questionnaire was developed and distributed digitally via WhatsApp to all medical students on the UWI Mona campus to assess students' experiences and perceptions of the advantages, challenges, and impact on individual knowledge proficiency brought about by the transition to predominantly digital learning environments. 108 students replied, 53.7% preclinical and 46.3% clinical; 67.6% were female and 30.6% male, while 1.8% did not identify themselves by gender. 67.2% of preclinical students preferred blended learning, and 60.3% considered that the content presented did not prepare them for clinical work. Only 31% considered the online classes interactive and encouraging of student participation. 84.5% missed socialization with classmates and friends, and 79.3% missed a focused learning environment. 80% of the clinical students felt that they had not learned all that they expected, and only 34% had virtual interaction with patients, mostly by telephone and video calls. Observing direct consultations was considered the most useful modality, yet it was the least used. 96% of the preclinical students and 100% of the clinical students supplemented their learning with additional online tools. The main recommendations from the survey are the use of interactive teaching strategies, more discussion time with lecturers, and increased virtual interactions with patients. Universities are returning to face-to-face learning, yet it is unlikely that blended education will disappear. This study demonstrates that students' perceptions of their experience during mobility restrictions must be taken into consideration in creating more effective, inclusive, and efficient blended learning opportunities.

Keywords: blended learning, digital learning, medical education, student perceptions

Procedia PDF Downloads 140
18387 The Diversity of Contexts within Which Adolescents Engage with Digital Media: Contributing to More Challenging Tasks for Parents and a Need for Third Party Mediation

Authors: Ifeanyi Adigwe, Thomas Van der Walt

Abstract:

Digital media has been integrated into the social and entertainment life of young children, and its impact appears to affect young people of all ages; it is believed that this will continue to shape the world of young children. Since the technological advancement of digital media presents adolescents with diverse contexts, platforms, and avenues for engaging with digital media outside the home environment and away from parents' supervision, a wide range of new challenges has further complicated the already difficult tasks of parents and altered the landscape of parenting. Although adolescents now have access to a wide range of digital media technologies both at home and in the learning environment, parental mediation practices such as active, restrictive, co-use, participatory, and technical mediation are important in mitigating the online risks adolescents may encounter as a result of digital media use. However, these mediation practices focus only on the home environment, including the digital media present in the home, and do not necessarily extend to other learning environments where adolescents use digital media for school work and other activities. This poses the question of who mediates adolescents' digital media use outside the home. The learning environment can be a 'loose platform' where an adolescent can maximize digital media use, given that there is no restriction on content or on the time allotted to digital media during school hours. That is to say, an adolescent can play the 'bad boy' online at school, where there is little or no restriction of digital media use, and be exposed to online risks, yet play the 'good boy' at home because of 'heavy' parental mediation. This is the reason parental mediation practices have been ineffective: a parent may not be able to track an adolescent's digital media use given the diversity of contexts, platforms, and avenues involved. This study argues that, due to the diverse nature of digital media technology, parents may not be able to monitor the 'whereabouts' of their children in the digital space, because adolescent digital media usage is not confined to the home environment but extends to learning environments such as schools. This calls for urgent attention on the part of teachers to understand the intricacies of how digital media continues to shape the world in which young children are developing and learning. It is therefore imperative for parents to liaise with their children's schools to mediate digital media use during school hours. The implications of parent-teacher mediation practices are discussed. The article concludes by suggesting that third-party mediation by teachers in schools and other learning environments should be encouraged, and that future research needs to consider the emergent strategy of a teacher-children mediation approach and its policy implications for both the home and learning environments.

Keywords: digital media, digital age, parent mediation, third party mediation

Procedia PDF Downloads 139
18386 Temporal Focus Scale: Examination of the Reliability and Validity in Japanese Adolescents and Young Adults

Authors: Yuta Chishima, Tatsuya Murakami, Michael McKay

Abstract:

Temporal focus is described as one component of an individual's time perspective and is defined as the attention individuals devote to thinking about the past, present, and future. It affects how people incorporate perceptions of past experiences, current situations, and future expectations into their attitudes, cognitions, and behavior. The 12-item Temporal Focus Scale (TFS) comprises three factors (past, current, and future focus). The purpose of this study was to examine the reliability and validity of TFS scores in Japanese adolescents and young adults. The TFS was translated into Japanese by a professional translator, and the original author confirmed the back-translated items. Study 1 involved 979 Japanese university students aged 18-25 years in a questionnaire-based study. The hypothesized three-factor structure (with reliability) was confirmed, although there were problems with item 10. Internal consistency estimates for scores without item 10 were over .70, and test-retest reliability was also adequate. To verify concurrent and convergent validity, we tested the relationship between TFS scores and life satisfaction, time perspective, self-esteem, and career efficacy. The results of correlational analyses supported our hypotheses: specifically, future focus was strongly correlated with career efficacy, while past and current focus were not. Study 2 involved 1030 Japanese junior high and high school students aged 12-18 years in a questionnaire-based study, and the results of multigroup analyses supported the age invariance of the TFS.
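
As a sketch of the internal consistency estimate mentioned above (values over .70), Cronbach's alpha can be computed directly from an item-score matrix; the subscale data below are hypothetical:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Four hypothetical 5-point items of one TFS subscale, five respondents
past_focus = [[4, 5, 4, 4],
              [2, 2, 3, 2],
              [5, 4, 5, 5],
              [3, 3, 2, 3],
              [4, 4, 4, 5]]
print(round(cronbach_alpha(past_focus), 3))  # > .70 suggests acceptable consistency
```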

Keywords: Japanese, reliability, scale, temporal focus, validity

Procedia PDF Downloads 329
18385 Dynamic Simulation of Disintegration of Wood Chips Caused by Impact and Collisions during the Steam Explosion Pre-Treatment

Authors: Muhammad Muzamal, Anders Rasmuson

Abstract:

Wood is extensively considered as a raw material for the production of bio-polymers, bio-fuels, and value-added chemicals. However, the shortcoming of using wood as a raw material is that its enzymatic hydrolysis is difficult, because the accessibility of enzymes to hemicelluloses and cellulose is hindered by the complex chemical and physical structure of the wood. Steam explosion (SE) pre-treatment improves the digestion of wood material by creating both chemical and physical modifications in the wood. In this process, wood chips are first treated with steam at high pressure and temperature for a certain time in a steam treatment vessel. During this time, the chemical linkages between lignin and polysaccharides are cleaved and the stiffness of the material decreases. The steam discharge valve is then rapidly opened, and the steam and wood chips exit the vessel at very high speed. These fast-moving wood chips collide with each other and with the walls of the equipment and disintegrate into small pieces. More damaged and disintegrated wood has a larger surface area and increased accessibility to hemicelluloses and cellulose. The energy required for the same increase in specific surface area is 70% higher with a conventional mechanical technique, i.e., an attrition mill, than with the steam explosion process. The mechanism of wood disintegration during SE pre-treatment has received very little study. In this study, we have simulated the collision and impact of wood chips (dimensions 20 mm x 20 mm x 4 mm) with each other and with the walls of the vessel. The wood chips are simulated as a 3D orthotropic material. Damage and fracture in the wood material are modelled using Hashin's 3D damage model, accomplished by developing a user-defined subroutine and implementing it in the FE software ABAQUS. The elastic and strength properties used for the simulation are those of spruce wood at 12% and 30% moisture content and at 20 and 160 °C, because the impacted wood chips are pre-treated with steam at high temperature and pressure. We have simulated several cases to study the effects of the elastic and strength properties of the wood, the velocity of the moving chip, and the orientation of the wood chip at the moment of impact on the damage in the chips. The disintegration patterns captured by the simulations are very similar to those observed in experimentally obtained steam-exploded wood. Simulation results show that wood chips moving at higher velocity disintegrate more. Moisture content and temperature decrease the elastic properties and increase damage. Impact and collision in specific directions cause easy disintegration. This model can be used to design steam explosion equipment efficiently.
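
A simplified sketch of the mode-by-mode checks behind Hashin-type damage initiation, assuming Python; this plane-stress form with illustrative spruce strengths only gestures at the full 3D criteria the authors implement in an ABAQUS user subroutine:

```python
def hashin_2d(s11, s22, t12, XT, XC, YT, YC, S):
    """Simplified 2D Hashin failure indices (>= 1.0 means damage initiation).

    s11/s22/t12 are in-plane stresses; XT/XC, YT/YC are longitudinal and
    transverse tensile/compressive strengths; S is the shear strength.
    The matrix-compression mode here is a simplified quadratic form.
    """
    modes = {}
    if s11 >= 0:                      # tension along the grain
        modes["fiber_tension"] = (s11 / XT) ** 2 + (t12 / S) ** 2
    else:                             # compression along the grain
        modes["fiber_compression"] = (s11 / XC) ** 2
    if s22 >= 0:
        modes["matrix_tension"] = (s22 / YT) ** 2 + (t12 / S) ** 2
    else:
        modes["matrix_compression"] = (s22 / YC) ** 2 + (t12 / S) ** 2
    return modes

# Illustrative stresses (MPa) against nominal spruce strengths
print(hashin_2d(s11=70.0, s22=3.0, t12=5.0,
                XT=90.0, XC=40.0, YT=4.0, YC=7.0, S=7.5))
```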

Keywords: dynamic simulation, disintegration of wood, impact, steam explosion pretreatment

Procedia PDF Downloads 385
18384 Making Unorganized Social Groups Responsible for Climate Change: Structural Analysis

Authors: Vojtěch Svěrák

Abstract:

Climate change ethics have recently shifted away from individualistic paradigms towards concepts of shared or collective responsibility. Despite this evolving trend, a noticeable gap remains: a lack of research exclusively addressing the moral responsibility of specific unorganized social groups. The primary objective of this article is to fill that gap. The article employs the structuralist methodological approach proposed by some feminist philosophers, utilizing structural analysis to explain the existence of social groups. An argument is made for integrating this framework with the forward-looking Social Connection Model (SCM) of responsibility, which ascribes responsibilities to individuals based on their participation in social structures; the article offers an extension of this model to justify the responsibility of unorganized social groups. The major finding of the study is that although members of unorganized groups are only loosely connected, collectively they instantiate specific external social structures and share social positioning, and the notion of responsibility can be based on that. Specifically, if the structure produces harm or perpetuates injustice, and the group both benefits from and possesses the capacity to significantly influence the structure, a greater degree of responsibility should be attributed to the group as a whole. This thesis is applied and justified in the context of climate change, based on the asymmetrical positioning of different social groups. Climate change creates a triple inequality: in contribution, vulnerability, and mitigation. The study posits that different degrees of group responsibility can be drawn from these inequalities. Two social groups serve as case studies for the article: first, the Pakistani lower class, consisting of people living below the national poverty line, with a low greenhouse gas emission rate, severe climate-change-related vulnerability due to the lack of adaptation measures, and very limited options to participate in the mitigation of climate change; second, the so-called polluter elite, defined by its members' investments in polluting companies and high-carbon lifestyles, and thus with an interest in the continuation of the structures leading to climate change. The first group cannot be held responsible for climate change, but its group interest lies in structural change and should be collectively maintained. The responsibility of the second group, on the other hand, is significant and can be fulfilled through a justified demand for political change. The proposed approach to group responsibility is suggested as a means to navigate climate justice discourse and environmental policies, thus assisting the sustainability transition.

Keywords: collective responsibility, climate justice, climate change ethics, group responsibility, social ontology, structural analysis

Procedia PDF Downloads 40
18383 AI-Enabled Smart Contracts for Reliable Traceability in Industry 4.0

Authors: Harris Niavis, Dimitra Politaki

Abstract:

The manufacturing industry has been collecting vast amounts of data for monitoring product quality, thanks to advances in the ICT sector, and dedicated IoT infrastructure is deployed to track and trace the production line. However, industries have not yet managed to unleash the full potential of these data, due to defective data collection methods and untrusted data storage and sharing. Blockchain is gaining increasing ground as a key technology enabler for Industry 4.0 and the smart manufacturing domain, as it enables secure storage and exchange of data between stakeholders. At the same time, AI techniques are increasingly used to detect anomalies in batch and time-series data, enabling the identification of unusual behaviors. The proposed scheme is based on smart contracts, to enable automation and transparency in data exchange, coupled with anomaly detection algorithms, to enable reliable data ingestion into the system. Before sensor measurements are fed to the blockchain component and the smart contracts, the anomaly detection mechanism combines artificial intelligence models to effectively detect unusual values such as outliers and extreme deviations in the incoming data. Specifically, autoregressive integrated moving average (ARIMA), long short-term memory (LSTM) and dense autoencoders, as well as generative adversarial network (GAN) models, are used to detect both point and collective anomalies. Towards the goal of preserving the privacy of industries' information, the smart contracts employ techniques ensuring that only anonymized pointers to the actual data are stored on the ledger, while sensitive information remains off-chain. In the same spirit, the blockchain technology guarantees the security of data storage through strong cryptography, as well as the integrity of the data through the decentralization of the network and the execution of the smart contracts by the majority of the blockchain network actors. The blockchain component of the Data Traceability Software is based on the Hyperledger Fabric framework, which lays the ground for the deployment of smart contracts and of APIs exposing the functionality to end-users. The results of this work demonstrate that such a system can increase the quality of end-products and the trustworthiness of the monitoring process in the smart manufacturing domain. The proposed AI-enabled data traceability software can be employed by industries to accurately trace and verify quality records through the entire production chain and to take advantage of the multitude of monitoring records in their databases.
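
A minimal sketch of one of the listed detectors, an LSTM autoencoder flagging windows with high reconstruction error, assuming TensorFlow/Keras; the synthetic data, layer sizes, and 3-sigma threshold are illustrative choices:

```python
import numpy as np
from tensorflow.keras import layers, models

# Synthetic stand-in for windowed sensor readings: (windows, timesteps, features)
rng = np.random.default_rng(0)
timesteps, n_features = 30, 1
X_train = rng.normal(0.0, 0.1, size=(1000, timesteps, n_features))  # "normal" data only

model = models.Sequential([
    layers.Input(shape=(timesteps, n_features)),
    layers.LSTM(64),                           # encoder compresses the window
    layers.RepeatVector(timesteps),            # repeat latent vector per timestep
    layers.LSTM(64, return_sequences=True),
    layers.TimeDistributed(layers.Dense(n_features)),  # decoder reconstructs
])
model.compile(optimizer="adam", loss="mae")
model.fit(X_train, X_train, epochs=5, batch_size=64, verbose=0)

# Flag windows whose reconstruction error exceeds a simple 3-sigma threshold
errors = np.mean(np.abs(model.predict(X_train, verbose=0) - X_train), axis=(1, 2))
threshold = errors.mean() + 3 * errors.std()
print("anomalous windows:", int((errors > threshold).sum()))
```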

Keywords: blockchain, data quality, Industry 4.0, product quality

Procedia PDF Downloads 164
18382 Modelling and Simulating CO2 Electro-Reduction to Formic Acid Using Microfluidic Electrolytic Cells: The Influence of Bi-Sn Catalyst and 1-Ethyl-3-Methyl Imidazolium Tetra-Fluoroborate Electrolyte on Cell Performance

Authors: Akan C. Offong, E. J. Anthony, Vasilije Manovic

Abstract:

A modified steady-state numerical model is developed for the electrochemical reduction of CO2 to formic acid. The numerical model achieves a current density (CD) of ~60 mA/cm2, a faradaic efficiency (FE) of ~98%, and a conversion of ~80% for CO2 electro-reduction to formic acid in a microfluidic cell. The model integrates charge and species transport, mass conservation, and momentum with electrochemistry. Specifically, the influences of a Bi-Sn nanoparticle catalyst (on the cathode surface) at different mole fractions, and of a 1-ethyl-3-methyl imidazolium tetra-fluoroborate ([EMIM][BF4]) electrolyte, on CD, FE, and CO2 conversion to formic acid are studied. The reaction is carried out at a constant electrolyte concentration (85% v/v [EMIM][BF4]). Based on the mass-transfer analysis (concentration contours), the 0.5:0.5 Bi-Sn mole-ratio catalyst displays the highest CO2 consumption in the cathode gas channel. After validation against experimental data (polarisation curves) from the literature, extensive simulations reveal the performance measures: CD, FE, and CO2 conversion. Increasing the negative cathode potential increases the current densities of both formic acid and H2 formation. However, H2 formation is minimal as a result of the scarcity of hydrogen ions in the ionic liquid electrolyte; moreover, the limited supply of hydrogen ions has a negative effect on the formic acid CD. As the CO2 flow rate increases, CD, FE, and CO2 conversion all increase.
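
As a quick sanity check on the reported figures, faradaic efficiency relates the charge embodied in the product to the total charge passed; the product amount below is hypothetical but chosen so the numbers land near the ~98% FE quoted above:

```python
F = 96485.0  # Faraday constant, C/mol

def faradaic_efficiency(n_product_mol, z, current_A, time_s):
    """Fraction of the total charge that went into the desired product.

    For CO2 -> HCOOH, z = 2 electrons are transferred per formic acid molecule.
    """
    charge_to_product = z * F * n_product_mol
    total_charge = current_A * time_s
    return charge_to_product / total_charge

# Illustrative numbers: 60 mA/cm2 over 1 cm2 for one hour
i, t = 0.060, 3600.0
n_formic = 1.10e-3  # mol of formic acid collected (hypothetical)
print(f"FE = {faradaic_efficiency(n_formic, z=2, current_A=i, time_s=t):.1%}")
```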

Keywords: carbon dioxide, electro-chemical reduction, ionic liquids, microfluidics, modelling

Procedia PDF Downloads 128
18381 Preliminary Roadway Alignment Design: A Spatial-Data Optimization Approach

Authors: Yassir Abdelrazig, Ren Moses

Abstract:

Roadway planning and design is a very complex process involving five key phases before a project is completed: planning, project development, final design, right-of-way, and construction. The planning phase for a new roadway transportation project is critical, as it greatly affects all later phases of the project. A location study is usually performed during the preliminary planning phase of a new roadway project. The objective of the location study is to develop alignment alternatives that are cost-efficient with respect to land acquisition and construction costs. This paper describes a methodology to develop optimal preliminary roadway alignments utilizing spatial data. Four optimization criteria are taken into consideration: roadway length, land cost, land slope, and environmental impacts. The basic concept of the methodology is to convert the proposed project area into a grid, which represents the search space for an optimal alignment, with the optimization criteria represented in each of the grid's cells. A spatial-data optimization technique is utilized to find the optimal alignment in the search space based on the four criteria. Two case studies for new roadway projects in Duval County in the State of Florida are presented to illustrate the methodology. The optimized alignments are compared to the alignments proposed by the Florida Department of Transportation (FDOT) on the basis of right-of-way costs. For both case studies, the right-of-way costs of the developed optimal alignments were found to be significantly lower than those of the FDOT alignments.
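
A minimal sketch of the grid-search idea, assuming only Python's standard library; here Dijkstra's algorithm finds a least-cost path over composite cell costs, whereas the paper's specific spatial-data optimization technique is not reproduced:

```python
import heapq

def optimal_alignment(cost, start, end):
    """Least-cost path across a grid of composite cell costs (Dijkstra).

    Each cell's cost is assumed to be a precomputed weighted blend of the
    four criteria (length, land cost, slope, environmental impact).
    """
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev, pq = {}, [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], end           # walk predecessors back to the start
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1], dist[end]

grid = [[1, 4, 4, 1],   # toy composite costs; high values = expensive cells
        [1, 9, 9, 1],
        [1, 1, 1, 1]]
print(optimal_alignment(grid, (0, 0), (0, 3)))
```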

Keywords: geometric design, optimization, planning, roadway planning, roadway design

Procedia PDF Downloads 319
18380 Dynamic Programming Based Algorithm for the Unit Commitment of the Transmission-Constrained Multi-Site Combined Heat and Power System

Authors: A. Rong, P. B. Luh, R. Lahdelma

Abstract:

High penetration of intermittent renewable energy sources (RES), such as solar power and wind power, into the energy system has caused temporal and spatial imbalance between electric power supply and demand in some countries and regions. This brings about a critical need to coordinate power production and power exchange across regions. Compared with power-only systems, combined heat and power (CHP) systems can provide additional flexibility for utilizing RES by exploiting the interdependence of power and heat production in the CHP plant. In a CHP system, power production can be influenced by adjusting the heat production level, and electric power can be used to satisfy heat demand via an electric boiler or heat pump in conjunction with heat storage, which is much cheaper than electric storage. This paper addresses multi-site CHP systems without considering RES, which lays the foundation for handling the penetration of RES. The problem under study is the unit commitment (UC) of transmission-constrained multi-site CHP systems. We solve the problem by combining linear relaxation of ON/OFF states with sequential dynamic programming (DP) techniques, where the relaxed states are used to reduce the dimension of the UC problem and DP is used to improve solution quality. Numerical results for daily scheduling with realistic models and data show that the DP-based algorithm is from a few to a few hundred times faster than CPLEX (standard commercial optimization software), with good solution accuracy (less than 1% relative gap from the optimal solution on average).
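
A toy sketch of the DP recursion over ON/OFF states for a single unit, assuming Python; the real algorithm handles multiple sites, heat-power interdependence, relaxed states, and transmission constraints, none of which are modelled here:

```python
def single_unit_uc(state_cost, startup_cost, horizon):
    """Tiny DP over ON/OFF states for one unit.

    state_cost[t][s] is the operating cost of state s (0=OFF, 1=ON) in hour t;
    infeasible states can be set to float('inf'). A start-up cost is paid on
    every OFF -> ON transition.
    """
    best = [state_cost[0][0], state_cost[0][1] + startup_cost]
    for t in range(1, horizon):
        new_best = []
        for s in (0, 1):
            stay = best[s]                                       # keep state
            switch = best[1 - s] + (startup_cost if s == 1 else 0.0)
            new_best.append(state_cost[t][s] + min(stay, switch))
        best = new_best
    return min(best)

# 4-hour toy schedule: running is cheap in the high-demand hours 1-2
costs = [[0, 50], [120, 40], [130, 45], [0, 60]]  # [OFF, ON] cost per hour
print(single_unit_uc(costs, startup_cost=30.0, horizon=4))  # -> 115.0
```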

Keywords: dynamic programming, multi-site combined heat and power system, relaxed states, transmission-constrained generation unit commitment

Procedia PDF Downloads 346
18379 Precoding-Assisted Frequency Division Multiple Access Transmission Scheme: A Cyclic-Prefix-Available Modulation-Based Filter Bank Multi-Carrier Technique

Authors: Ying Wang, Jianhong Xiang, Yu Zhong

Abstract:

The offset Quadrature Amplitude Modulation-based Filter Bank Multi-Carrier (FBMC) system provides superior spectral properties compared to Orthogonal Frequency Division Multiplexing. However, because it is seriously affected by imaginary interference, its performance is hampered in many areas. In this paper, we propose a Precoding-Assisted Frequency Division Multiple Access (PA-FDMA) modulation scheme. By spreading FBMC symbols into the frequency domain and transmitting them with a precoding matrix, the impact of imaginary interference can be eliminated. Specifically, we first generate the precoding pre-solution matrix with a non-uniform Fast Fourier Transform and pick the best columns by introducing auxiliary factors. Secondly, according to the column indexes, we obtain the precoding matrix for one symbol and impose scaling factors to ensure that the power is approximately constant throughout the transmission time. Finally, we map the precoding matrix of one symbol to multiple symbols and transmit multiple data frames, thus achieving frequency-division multiple access. Additionally, observing the interference between adjacent frames, we mitigate it by adding frequency-domain cyclic prefixes (CP) and evaluating the result with a signal-to-interference ratio. Note that PA-FDMA can be considered a CP-available FBMC technique, because the underlying strategy is FBMC. Simulation results show that the proposed scheme performs better than Single Carrier Frequency Division Multiple Access (SC-FDMA), etc.
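
For orientation, a minimal sketch of frequency-domain spreading with a DFT precoding matrix, in the SC-FDMA style that PA-FDMA is benchmarked against, assuming NumPy; the paper's NUFFT-derived precoding matrix, column selection, scaling factors, and frequency-domain CP are not reproduced (a conventional time-domain CP is shown instead):

```python
import numpy as np

def dft_spread_precode(symbols, fft_size, cp_len, offset=0):
    """DFT-spread a block of data symbols onto contiguous subcarriers.

    The M-point DFT acts as the precoding matrix; the cyclic prefix is
    prepended in the time domain after the IFFT.
    """
    M = len(symbols)
    spread = np.fft.fft(symbols) / np.sqrt(M)      # precoding (DFT spreading)
    grid = np.zeros(fft_size, dtype=complex)
    grid[offset:offset + M] = spread               # localized subcarrier mapping
    time_sig = np.fft.ifft(grid) * np.sqrt(fft_size)
    return np.concatenate([time_sig[-cp_len:], time_sig])  # prepend CP

# 16 QPSK symbols on a 64-point grid with a 16-sample cyclic prefix
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=(16, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
tx = dft_spread_precode(qpsk, fft_size=64, cp_len=16)
print(tx.shape)  # (80,)
```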

Keywords: PA-FDMA, SC-FDMA, FBMC, non-uniform fast Fourier transform

Procedia PDF Downloads 37
18378 “Lightyear” – The Battle for LGBTQIA+ Representation Behind Disney/Pixar’s Failed Blockbuster

Authors: Ema Vitória Fonseca Lavrador

Abstract:

In this work, we explore the impact that the film "Lightyear" (2022) had on the social context of its production, distribution, and reception. The film, produced by Walt Disney Animation Studios and Pixar Animation Studios, tells the story of Buzz Lightyear, the Space Ranger on whom the character of the same name in the "Toy Story" film franchise is based. This prequel was predicted to be the blockbuster of the year, but it was a financial fiasco and the subject of numerous controversies, which also caused it to be overshadowed by the film "Minions: The Rise of Gru" (2022). The reason for its failure lies not in the film's narrative or quality but in its controversial context: it commits to LGBTQIA+ representation in an unexpected way, featuring a same-sex couple and showing a kiss shared by them. This representation cost Disney distribution in countries opposed to LGBTQIA+ representation in the media and involved Disney in major disagreements with fans and politicians, especially because it stood in direct opposition to Florida House Bill 1557, also called the "Don't Say Gay" bill. Many major companies have taken a stand against this law because it jeopardizes the safety of the LGBTQIA+ community, and, although Disney initially cut the kiss from the film, pressure from staff and audience resulted in unprecedented progress. For featuring a brief homosexual kiss, the film's exhibition was banned in several countries and discouraged by the same public that had previously been the focus of Disney's attention, this being a conservative, "family-friendly"-branded company. We believe the case of "Lightyear" is relevant to study because it is a work that raises awareness and promotes the representation of affected communities at a time when ever less legislation is being approved to protect the rights and safety of queer people.

Keywords: "Don't Say Gay" bill, gender stereotypes, LGBTQIA+ representation, Lightyear, Disney/Pixar

Procedia PDF Downloads 61
18377 In Vitro Evaluation of a Chitosan-Based Adhesive to Treat Bone Fractures

Authors: Francisco J. Cedano, Laura M. Pinzón, Camila I. Castro, Felipe Salcedo, Juan P. Casas, Juan C. Briceño

Abstract:

Complex fractures located in articular surfaces are challenging to treat, and their reduction with conventional treatments can compromise the functionality of the affected limb. An adhesive material for treating such fractures is therefore desirable to orthopedic surgeons. The adhesive must be biocompatible and exhibit high adhesion to the bone surface in an aqueous environment. The proposed adhesive is based on chitosan, given its adhesive and biocompatibility properties. Chitosan is mixed with calcium carbonate and hydroxyapatite, which contribute structural support and gel-like behavior, and glutaraldehyde is used as a cross-linking agent to maintain the adhesive's mechanical performance in an aqueous environment. This work evaluates the rheological, adhesion-strength, and biocompatibility properties of the proposed adhesive using in vitro tests. The gelification process of the adhesive was monitored by oscillatory rheometry in a TA Instruments AR-G2 rheometer, using a 22 mm parallel-plate geometry and a 1 mm gap. Time-sweep experiments were conducted at a frequency of 1 Hz, 1% strain, and 37°C, from 0 to 2400 s. Adhesion strength was measured using a butt-joint test with bovine cancellous bone fragments as substrates, conducted 5 minutes, 20 minutes, and 24 hours after curing the adhesive under water at 37°C. Biocompatibility was evaluated by a cytotoxicity test in a fibroblast cell culture using the MTT assay and SEM. The rheological results showed that the average gelification time of the adhesive is 820±107 s and that it reaches storage modulus magnitudes of up to 10^6 Pa; the adhesive shows solid-like behavior. The butt-joint test gave a tensile bond strength of 28.6 ± 9.2 kPa for the adhesive cured for 24 hours, with no significant difference in adhesion strength between 20 minutes and 24 hours. The MTT assay showed 70 ± 23% active cells on the sixth day of culture, a percentage estimated with respect to a positive control (cells with culture medium and bovine serum only). High-vacuum SEM observation made it possible to localize and study the morphology of the fibroblasts present in the adhesive; all fibroblasts captured by SEM presented the typical flattened structure, with filopodia growing attached to the adhesive surface. This project reports a chitosan-based adhesive that is biocompatible, as shown by the high proportion of active cells in the MTT test and corroborated by SEM. It also has adhesive properties under conditions that model the clinical application, and its adhesion strength does not decrease between 5 minutes and 24 hours.

Keywords: bioadhesive, bone adhesive, calcium carbonate, chitosan, hydroxyapatite, glutaraldehyde

Procedia PDF Downloads 310
18376 Factors Affecting the Adoption of Cloud Business Intelligence among Healthcare Sector: A Case Study of Saudi Arabia

Authors: Raed Alsufyani, Hissam Tawfik, Victor Chang, Muthu Ramachandran

Abstract:

This study investigates the factors that influence the decision by players in the healthcare sector to embrace Cloud Business Intelligence technology, with a focus on healthcare organizations in Saudi Arabia. To put this matter into perspective, the study primarily draws on the Technology-Organization-Environment (TOE) framework and the Human-Organization-Technology (HOT) fit model. A survey was designed around hypotheses derived from the literature review and was administered online. The quantitative data obtained were processed using descriptive and one-way frequency statistics as well as inferential and regression analysis, in order to establish the factors that influence the decision to adopt Cloud Business Intelligence technology in the healthcare sector. The influence of the identified factors was measured, and all hypotheses were tested. 66.70% of participants in healthcare organizations supported the intention to adopt a cloud business intelligence system, and 99.4% of these participants considered security concerns and privacy risks the most significant factors in the adoption of a cloud Business Intelligence (CBI) system. Regression-based hypothesis testing indicates that usefulness, service quality, relative advantage, IT infrastructure preparedness, organization structure, vendor support, perceived technical competence, government support, and top management support positively and significantly influence the adoption of a CBI system. The paper presents the quantitative phase of an ongoing project; the project will build on the lessons learned from this study.
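
As a sketch of the regression step described above (the file name, column names, and use of ordinary least squares on Likert-scale scores are assumptions, not the authors' instrument), the hypothesis test could look like this in Python:

import pandas as pd
import statsmodels.api as sm

# Hypothetical survey export: one row per respondent, Likert-scale scores
df = pd.read_csv("cbi_survey.csv")   # file name is an assumption

predictors = [
    "usefulness", "service_quality", "relative_advantage",
    "it_infrastructure", "org_structure", "vendor_support",
    "technical_competence", "government_support", "top_mgmt_support",
]

X = sm.add_constant(df[predictors])   # add the intercept term
y = df["adoption_intention"]          # dependent variable

model = sm.OLS(y, X).fit()
print(model.summary())                       # coefficients, t-statistics, p-values
print(model.pvalues[model.pvalues < 0.05])   # predictors significant at the 5% level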

Keywords: cloud computing, business intelligence, HOT-fit model, TOE, healthcare and innovation adoption

Procedia PDF Downloads 147
18375 Adaptative Metabolism of Lactic Acid Bacteria during Brewers' Spent Grain Fermentation

Authors: M. Acin-Albiac, P. Filannino, R. Coda, Carlo G. Rizzello, M. Gobbetti, R. Di Cagno

Abstract:

The smart management of large amounts of agro-food by-products has become an area of major environmental and economic importance worldwide. Brewers' spent grain (BSG), the most abundant by-product generated in the beer-brewing process, is an example of a valuable raw material and a source of health-promoting compounds. To date, the valorization of BSG as a food ingredient has been limited by poor technological and sensory properties. Tailored bioprocessing through lactic acid bacteria (LAB) fermentation is a versatile and sustainable means for exploiting food industry by-products. Indigestible carbohydrates (e.g., hemicelluloses and celluloses), a high phenolic content, and above all lignin make BSG a hostile environment for microbial survival; hence, the selection of tailored starters is required for successful fermentation. Our study investigated the metabolic strategies of Leuconostoc pseudomesenteroides and Lactobacillus plantarum strains to exploit BSG as a food ingredient. Two distinctive BSG samples from different breweries (Italian IT-BSG and Finnish FL-BSG) were microbially and chemically characterized. Growth kinetics, organic acid profiles, and the evolution of phenolic profiles during fermentation in two BSG model media were determined. The results were further complemented with gene expression analysis targeting genes involved in the degradation of cellulose and hemicellulose building blocks and in the metabolism of anti-nutritional factors. Overall, the results were LAB genus-dependent, showing distinctive metabolic capabilities. Leuc. pseudomesenteroides DSM 20193 may degrade BSG xylans, while its sucrose metabolism could be further exploited for the production of extracellular polymeric substances (EPS) to enhance the pro-technological properties of BSG. Although L. plantarum strains may follow the same metabolic strategies during BSG fermentation, the mode of action used to pursue those strategies was strain-dependent: L. plantarum PU1 showed a much greater preference for β-galactans than strain WCFS1, while the preference for arabinose occurred at different metabolic phases. Phenolic compound profiling highlighted a novel metabolic route for lignin metabolism. These findings improve our understanding of how lactic acid bacteria transform BSG into economically valuable food ingredients.
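
The abstract reports growth kinetics without naming a growth model; a common choice for fitting bacterial growth curves, assumed here purely for illustration, is the modified Gompertz equation. A minimal Python sketch with hypothetical data:

import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu_m, lag):
    # Modified Gompertz growth model (Zwietering et al., 1990):
    # A = asymptotic log-increase, mu_m = max specific growth rate, lag = lag time
    return A * np.exp(-np.exp(mu_m * np.e / A * (lag - t) + 1.0))

# Hypothetical sampling times [h] and log-increase measurements, e.g. ln(N/N0)
t = np.array([0, 2, 4, 6, 8, 10, 12, 16, 20, 24], dtype=float)
y = np.array([0.0, 0.05, 0.2, 0.8, 1.6, 2.3, 2.7, 3.0, 3.1, 3.1])

popt, _ = curve_fit(gompertz, t, y, p0=[3.0, 0.5, 3.0])
A, mu_m, lag = popt
print(f"A = {A:.2f}, mu_max = {mu_m:.2f} 1/h, lag = {lag:.1f} h")

Fitted parameters of this kind allow the strain-dependent growth behavior described above to be compared quantitatively across substrates and strains.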

Keywords: brewery by-product valorization, metabolism of plant phenolics, metabolism of lactic acid bacteria, gene expression

Procedia PDF Downloads 112
18374 An Energy Integration Study While Utilizing Heat of Flue Gas: Sponge Iron Process

Authors: Venkata Ramanaiah, Shabina Khanam

Abstract:

Enormous potential for saving energy is available in coal-based sponge iron plants, as these are associated with a high percentage of energy wastage per unit of sponge iron produced. In the present paper, an energy integration option is proposed for a coal-based sponge iron plant with a production capacity of 100 tonnes per day, operated in India using the SL/RN (Stelco-Lurgi/Republic Steel-National Lead) process. The plant's main equipment consists of a rotary kiln, rotary cooler, dust settling chamber, after-burning chamber, evaporating cooler, electrostatic precipitator (ESP), wet scraper, and chimney. Principles of process integration are used in the proposed option, which preheats kiln inlet streams such as the kiln feed and slinger coal up to 170°C using the waste gas exiting the ESP; further, the kiln outlet stream is cooled from 1020°C to 110°C using kiln air. The working areas in the plant where energy is being lost and can be conserved are identified. Detailed material and energy balances are carried out around the sponge iron plant, and a modified model is developed to find the coal requirement of the proposed option, based on the hot utility, heats of reaction, kiln feed and air preheating, radiation losses, dolomite decomposition, the heat required to vaporize the coal volatiles, etc. As coal is used both as utility and as process stream, an iterative approach is used in the solution methodology to compute the coal consumption. Further, the water consumption, operating cost, capital investment, waste gas generation, profit, and payback period of the modification are computed, and the operational aspects of the proposed design are also discussed. To recover and integrate the waste heat available in the plant, three gas-solid heat exchangers and four insulated ducts, each with its own FD fan, are installed additionally. The proposed option thus requires a total capital investment of $0.84 million. Preheating the kiln feed, slinger coal, and kiln air streams reduces coal consumption by 24.63%, which in turn reduces waste gas generation by 25.2% in comparison with the existing process. Moreover, a 96% reduction in water consumption is also observed, an added advantage of the modification. Consequently, the total profit is found to be $2.06 million/year, with a payback period of only 4.97 months. The energy efficient factor (EEF), the percentage of the maximum energy that can be saved through the design, is found to be 56.7%. Results of the proposed option are also compared with the literature and found to be in good agreement.
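
As a quick consistency check on the reported economics (an illustrative calculation, not taken from the paper), a simple undiscounted payback estimate divides the capital investment by the annual profit; with the rounded figures quoted above, it gives roughly five months, consistent with the reported 4.97 months:

def simple_payback_months(capital_investment, annual_profit):
    # Simple (undiscounted) payback period, in months
    return capital_investment / annual_profit * 12.0

capital = 0.84e6   # total capital investment [$], from the abstract
profit = 2.06e6    # total profit [$/year], from the abstract
print(f"payback = {simple_payback_months(capital, profit):.1f} months")
# about 4.9 months with these rounded inputs; the paper reports 4.97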

Keywords: coal consumption, energy conservation, process integration, sponge iron plant

Procedia PDF Downloads 128
18373 Multiple Pen and Touch Interaction on Interactive LCDs

Authors: Andreas Kunz, Ali Alavi

Abstract:

In this paper, we present a simple active stylus for interactive IR-based tabletop systems. Such tables offer a set of tags for realizing tangible user interfaces, but these tags can only be applied to objects that have a relatively large contact area with the interactive surface. The stylus has a unique address and can thus be clearly distinguished from other styli, objects, or finger touches that might occur simultaneously on the interactive surface.

Keywords: interactive screens, pen, tangibles, user interfaces

Procedia PDF Downloads 382
18372 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOA was first introduced in 2014, when it performed better than the best known classical algorithm of the time for Max-Cut. Whilst classical algorithms have since improved and returned to being faster and more efficient, this was a huge milestone for quantum computing, and the original work is often used as a benchmarking tool and a foundation for exploring variants of QAOA. Alongside other famous algorithms like Grover's or Shor's, it highlights the potential that quantum computing holds and the prospect of a real quantum advantage which, if the hardware continues to improve, could usher in a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate when producing solutions, the barren plateaus that effectively hinder the optimization search in the parameter space, and the limited number of available qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and motivate the use of EAs in this work. Firstly, EAs do not rely on gradient-based or linear optimization methods to search the parameter space, and because of this freedom from gradients they should suffer less from barren plateaus. Secondly, because the algorithm searches through a population of candidate solutions, it can be parallelized to speed up the optimization. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOA with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with traditional QAOA using the COBYLA optimizer, a linear approximation based method, and in some instances it can even find a better Max-Cut. Whilst the final objective of the work is an algorithm that consistently beats the original QAOA or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs; the parallelization aspect of the work commences in October 2023, and tests on real hardware are scheduled for early 2024.
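
To make the hybrid concrete, the following is a minimal, self-contained Python sketch (not the author's code) of a statevector QAOA simulation for Max-Cut whose 2p angles are optimized by a simple (mu + lambda) evolutionary loop with Gaussian mutation; the graph, population size, depth, and mutation scale are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)

# Max-Cut instance: a hypothetical 5-node ring graph
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]

# Cut value C(z) of every computational basis state z (diagonal cost operator)
z = np.arange(2 ** n)
bits = (z[:, None] >> np.arange(n)) & 1
cut = np.zeros(2 ** n)
for i, j in edges:
    cut += bits[:, i] ^ bits[:, j]

def qaoa_expectation(params, p):
    # Statevector simulation of depth-p QAOA; returns the expectation <C>
    gammas, betas = params[:p], params[p:]
    state = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)   # uniform |+...+>
    for gamma, beta in zip(gammas, betas):
        state = np.exp(-1j * gamma * cut) * state           # cost unitary
        for q in range(n):                                  # mixer exp(-i*beta*X_q)
            s = state.reshape(2 ** (n - q - 1), 2, 2 ** q)
            a, b = s[:, 0, :].copy(), s[:, 1, :].copy()
            s[:, 0, :] = np.cos(beta) * a - 1j * np.sin(beta) * b
            s[:, 1, :] = np.cos(beta) * b - 1j * np.sin(beta) * a
            state = s.reshape(-1)
    return float(np.real(np.sum(np.abs(state) ** 2 * cut)))

# (mu + lambda) evolutionary search over the 2p angles, no gradients needed
p, pop_size, n_gen, sigma = 2, 20, 60, 0.1
pop = rng.uniform(0.0, np.pi, size=(pop_size, 2 * p))
for _ in range(n_gen):
    fitness = np.array([qaoa_expectation(ind, p) for ind in pop])  # parallelizable
    parents = pop[np.argsort(fitness)[-pop_size // 2:]]            # keep the best half
    children = parents + rng.normal(0.0, sigma, parents.shape)     # Gaussian mutation
    pop = np.vstack([parents, children])

best = max(pop, key=lambda ind: qaoa_expectation(ind, p))
print("best <C> =", round(qaoa_expectation(best, p), 3), "| max cut =", int(cut.max()))

Note that the fitness evaluations within each generation are mutually independent, which is exactly where the parallelization discussed above would apply; replacing the evolutionary loop with SciPy's COBYLA optimizer recovers the gradient-free baseline mentioned in the preliminary results.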

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 42
18371 The Agri-Environmental Instruments in Agricultural Policy to Reduce Nitrogen Pollution

Authors: Flavio Gazzani

Abstract:

Nitrogen is an important agricultural input that is critical for production. However, the introduction of large amounts of nitrogen into the environment has a number of undesirable impacts, such as loss of biodiversity, eutrophication of waters and soils, drinking water pollution, acidification, greenhouse gas emissions, and human health risks. It is a challenge to sustain or increase food production while reducing losses of reactive nitrogen to the environment, but there are many potential benefits associated with improving nitrogen use efficiency. Reducing nutrient losses from agriculture is crucial to the successful implementation of agricultural policy. Traditional regulatory instruments applied to reduce the environmental impacts of nitrogen fertilizers have, despite some successes, failed to address many environmental challenges and have imposed high costs on society in pursuit of environmental quality objectives. As a result, economic instruments have come to be recognized for their flexibility and cost-effectiveness. The objective of the research project is to analyze the potential for increased use of market-based instruments in nitrogen control policy. The report reviews existing knowledge, bringing different studies together to assess the global nitrogen situation and the environmental management policies most relevant to reducing pollution sustainably without negatively affecting agricultural production or food prices. The analysis provides guidance on how different market-based instruments might be orchestrated within an overall policy framework for the development and assessment of sustainable nitrogen management from the economic, environmental, and food security points of view.

Keywords: nitrogen emissions, chemical fertilizers, eutrophication, non-point source pollution, dairy farm

Procedia PDF Downloads 312
18370 Towards Accurate Velocity Profile Models in Turbulent Open-Channel Flows: Improved Eddy Viscosity Formulation

Authors: W. Meron Mebrahtu, R. Absi

Abstract:

Velocity distribution in turbulent open-channel flows is organized in a complex manner, owing to the large spatial and temporal variability of fluid motion that results from the free-surface turbulent flow condition. The phenomenon is further complicated by the complex geometry of channels and the presence of transported solids. Several efforts have thus been made to understand the phenomenon and to obtain accurate mathematical models suitable for engineering applications. Predictions remain inaccurate, however, because oversimplified assumptions are involved in modeling this complex phenomenon. The aim of this work is therefore to study velocity distribution profiles and obtain simple, more accurate, and predictive mathematical models. Particular focus is placed on acceptable simplifications of the general transport equations and on an accurate representation of the eddy viscosity. A wide rectangular open channel is a suitable starting point for the study; the other assumptions are a smooth wall and sediment-free flow under steady, uniform conditions. These assumptions allow the effects of the bottom wall and the free surface alone to be examined, a necessary step before dealing with more complex flow scenarios. For this flow condition, two ordinary differential equations for the velocity profile are obtained: one from the Reynolds-averaged Navier-Stokes (RANS) equation and one from the equilibrium between turbulent kinetic energy (TKE) production and dissipation. Different analytic models for the eddy viscosity, TKE, and mixing length were then assessed. Computed velocity profiles were compared to experimental data for different flow conditions and to the well-known linear, log, and log-wake laws. Results show that the model based on the RANS equation provides more accurate velocity profiles. In the viscous sublayer and buffer layer, the method based on Prandtl's eddy viscosity model and the Van Driest mixing length gives a more precise result. For the log layer and outer region, a mixing-length equation derived from Von Karman's similarity hypothesis provides the best agreement with measured data, except near the free surface, where an additional correction based on a damping function for the eddy viscosity is used. This method yields more accurate velocity profiles with a single value of the damping coefficient that remains valid under different flow conditions. This work continues with the investigation of narrow channels, complex geometries, and the effect of solids transported in sewers.
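
As an illustration of the near-wall treatment named above (a sketch under assumed flow parameters, not the authors' code), the following Python snippet integrates the shear balance (nu + nu_t) du/dy = u_tau^2 (1 - y/h) using a Prandtl mixing-length eddy viscosity with Van Driest damping, and compares the result with the log law:

import numpy as np

# Hypothetical flow parameters (illustrative only)
u_tau = 0.01          # friction velocity [m/s]
h = 0.10              # flow depth [m]
nu = 1.0e-6           # kinematic viscosity of water [m^2/s]
kappa, A_plus = 0.41, 26.0   # Von Karman constant, Van Driest damping constant

y = np.linspace(1e-6, h, 5000)      # wall-normal coordinate [m]
y_plus = y * u_tau / nu

# Prandtl mixing length with Van Driest damping: l = kappa*y*(1 - exp(-y+/A+))
l = kappa * y * (1.0 - np.exp(-y_plus / A_plus))

# Shear balance (nu + l^2*|du/dy|)*du/dy = u_tau^2*(1 - y/h);
# solving the quadratic for du/dy in a numerically stable form:
S = u_tau ** 2 * (1.0 - y / h)
dudy = 2.0 * S / (nu + np.sqrt(nu ** 2 + 4.0 * l ** 2 * S))

# Integrate from the wall (u = 0) with the trapezoidal rule
u = np.concatenate([[0.0], np.cumsum(0.5 * (dudy[1:] + dudy[:-1]) * np.diff(y))])

# Sanity check against the log law u+ = ln(y+)/kappa + 5.2 in the log layer
mask = (y_plus > 30.0) & (y / h < 0.3)
log_law = (np.log(y_plus[mask]) / kappa + 5.2) * u_tau
print("max deviation from log law [m/s]:", float(np.abs(u[mask] - log_law).max()))

The same integration applies in the viscous sublayer and buffer layer, since the damped mixing length makes the eddy viscosity vanish smoothly at the wall.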

Keywords: accuracy, eddy viscosity, sewers, velocity profile

Procedia PDF Downloads 95