Search results for: drawing digital tool
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7940

680 Effects of Lime and N100 on the Growth and Phytoextraction Capability of a Willow Variety (S. Viminalis × S. Schwerinii × S. Dasyclados) Grown in Contaminated Soils

Authors: Mir Md. Abdus Salam, Muhammad Mohsin, Pertti Pulkkinen, Paavo Pelkonen, Ari Pappinen

Abstract:

Soil and water pollution caused by extensive mining practices can adversely affect environmental components, such as humans, animals, and plants. Despite a generally positive contribution to society, mining practices have become a serious threat to biological systems. As metals do not degrade completely, they require immobilization, toxicity reduction, or removal. A greenhouse experiment was conducted to evaluate the effects of lime and N100 (11-amino-1-hydroxyundecylidene) chelate amendment on the growth and phytoextraction potential of the willow variety Klara (S. viminalis × S. schwerinii × S. dasyclados) grown in soils heavily contaminated with copper (Cu). The plants were irrigated with tap water or processed water (mine wastewater). The sequential extraction technique and inductively coupled plasma-mass spectrometry (ICP-MS) were used to determine the extractable metals and to evaluate the fraction of metals in the soil potentially available for plant uptake. The results suggest that the combined effects of the contaminated soil and processed water inhibited growth; in contrast, the accumulation of Cu in the plant tissues increased compared to the control. When the soil was supplemented with lime and N100, growth parameters and resistance capacity were significantly higher than in the unamended soil treatments, especially for the contaminated soils. The combined lime- and N100-amended soil treatment produced higher biomass growth rates, resistance capacity, and phytoextraction efficiency than either the lime-amended or the N100-amended soil treatment alone. This study provides practical evidence of the efficient chelate-assisted phytoextraction capability of Klara and highlights its potential as a viable and inexpensive novel approach for in-situ remediation of Cu-contaminated soils and mine wastewaters. 
Abandoned agricultural, industrial, and mining sites can also be utilized by a Salix afforestation program without conflicting with food crop production. Such a program may create opportunities for bioenergy production and economic development, but contamination levels should be examined before bioenergy products are used.

Keywords: copper, Klara, lime, N100, phytoextraction

Procedia PDF Downloads 138
679 Unleashing the Potential of Green Finance in Architecture: A Promising Path for Balkan Countries

Authors: Luan Vardari, Dena Arapi Vardari

Abstract:

The Balkan countries, known for their diverse landscapes and cultural heritage, face the dual challenge of promoting economic growth while addressing pressing environmental concerns. In recent years, the concept of green finance has emerged as a powerful tool to achieve sustainable development and mitigate the environmental impact of various sectors, including architecture. This extended abstract explores the untapped potential of green finance in architecture within the Balkan region and highlights its role in driving sustainable construction practices and fostering a greener future. The abstract begins by defining green finance and emphasizing its relevance to the architectural sector in Balkan countries. It underlines the benefits of green finance, such as economic growth, environmental conservation, and social well-being, and the importance of integrating green finance into architectural projects as a means to achieve sustainable development goals while maintaining financial viability. It also delves into the current state of green building practices in the Balkan countries and identifies the need for financial support to further drive adoption. It explores the existing regulatory frameworks and policies that promote sustainable architecture and discusses how green finance can complement these initiatives. Unique challenges faced by Balkan countries are highlighted, along with the opportunities that green finance presents for overcoming them. We highlight successful sustainable architectural projects in the region to showcase the practical application of green finance in the Balkans. These projects exemplify the effective utilization of green finance mechanisms, resulting in tangible economic and environmental impacts, including job creation, energy efficiency, and reduced carbon emissions. 
The abstract concludes by identifying replicable models and lessons learned from these projects that can serve as a blueprint for future sustainable architecture initiatives in the Balkans. The importance of collaboration and knowledge sharing among stakeholders is emphasized. Engaging architects, financial institutions, governments, and local communities is crucial to promoting green finance in architecture. The abstract suggests the establishment of knowledge exchange platforms and regional/international networks to foster collaboration and facilitate the sharing of expertise among Balkan countries.

Keywords: sustainable finance, renewable energy, Balkan region, investment opportunities, green infrastructure, ESG criteria, architecture

Procedia PDF Downloads 55
678 Information and Communication Technology Skills of Finnish Students in Particular by Gender

Authors: Antero J. S. Kivinen, Suvi-Sadetta Kaarakainen

Abstract:

Digitalization touches every aspect of contemporary society, changing the way we live our everyday lives. Contemporary society is sometimes described as a knowledge society, involving an unprecedented amount of information that people face daily. The tools for managing this information flow are ICT skills: both the technical skills and the reflective skills needed to handle incoming information. Schools are therefore under constant pressure of revision. In the latest Programme for International Student Assessment (PISA), girls outperformed boys in all Organization for Economic Co-operation and Development (OECD) member countries, and the gender gap between girls and boys is widest in Finland. This paper presents results of the Comprehensive Schools in the Digital Age project of RUSE, University of Turku. The project is connected with the Finnish Government's Analysis, Assessment and Research Activities. This paper first examines gender differences in the ICT skills of Finnish upper comprehensive school students, and then explores how these differences change when students proceed to upper secondary and vocational education. ICT skills are measured using a performance-based ICT skill test. Data are collected in three phases: January-March 2017 (upper comprehensive schools, n=5455), September-December 2017 (upper secondary and vocational schools, n~3500), and January-March 2018 (upper comprehensive schools). Upper comprehensive school students are aged 15-16, and upper secondary and vocational school students 16-18. The test is divided into six categories: basic operations, productivity software, social networking and communication, content creation and publishing, applications, and requirements for the ICT study programs. Students also completed a survey about their ICT usage and the study materials they use at school and at home. Cronbach's alpha was used to estimate the reliability of the ICT skill test. 
Statistical differences between genders were examined using a two-tailed independent samples t-test. Results of the first dataset from upper comprehensive schools show no statistically significant difference in total ICT skill test scores between genders (boys 10.24 and girls 10.64, the maximum being 36). Although there was no gender difference in total test scores, there are differences in the six categories mentioned above: girls score better on school-related and social networking test subjects, while boys perform better on more technically oriented subjects. Test scores on basic operations are quite low for both groups. These results can perhaps partly be explained by the fact that the test was taken on computers, while the majority of students' ICT usage consists of smartphones and tablets. Against this background, it is important to analyze further the reasons for these differences. In the context of the ongoing digitalization of everyday life, and especially working life, the purpose of these analyses is to find out how to guarantee adequate ICT skills for all students.
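The gender comparison described above can be sketched with a two-tailed independent-samples t-test. The score vectors below are simulated to mirror the reported group means (boys 10.24, girls 10.64, maximum 36); the sample sizes and spread are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical total test scores for two groups; means echo the reported
# averages, but the data themselves are simulated, not the study's.
boys = rng.normal(loc=10.24, scale=4.0, size=300)
girls = rng.normal(loc=10.64, scale=4.0, size=300)

# Two-tailed independent-samples t-test, as used in the study.
t_stat, p_value = stats.ttest_ind(boys, girls)

print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A p-value above 0.05 would be consistent with the reported finding of
# no statistically significant difference in total scores.
```

The same comparison would be repeated per test category to reproduce the category-level differences the abstract describes.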

Keywords: basic education, digitalization, gender differences, ICT-skills, upper comprehensive education, upper secondary education, vocational education

Procedia PDF Downloads 126
677 Students' Experience Enhancement through Simulation: A Process Flow in the Logistics and Transportation Field

Authors: Nizamuddin Zainuddin, Adam Mohd Saifudin, Ahmad Yusni Bahaudin, Mohd Hanizan Zalazilah, Roslan Jamaluddin

Abstract:

Students’ enhanced experience through simulation is a crucial factor that brings reality to the classroom. The enhanced experience is all about developing, enriching, and applying a generic process flow in the field of logistics and transportation. As educational technology has improved, the effective use of simulations has greatly increased, to the point where simulations should be considered a valuable, mainstream pedagogical tool. Additionally, in this era of ongoing (some say never-ending) assessment, simulations offer a rich resource for objective measurement and comparison. Simulation is not just another in the long line of passing fads (or short-term opportunities) in educational technology; rather, it is a real key to helping our students understand the world. It is a way for students to acquire experience of how things and systems in the world behave and react without actually touching them. In short, it is about interactive pretending. Simulation is all about representing the real world, which includes grasping complex issues and solving intricate problems. Therefore, before simulating the real process of inbound and outbound logistics and transportation, a generic process flow must be developed. The paper focuses on the validation of the process flow by looking at the inputs gained from the sample. The sampling of the study covers multinational and local manufacturing companies, third-party logistics companies (3PL), and a government agency, selected in Peninsular Malaysia. A simulation flow chart was proposed in the study as the generic flow in logistics and transportation. A qualitative approach was mainly used to gather data. The study found that the systems used in the outbound and inbound processes are System Application Products (SAP) and Material Requirement Planning (MRP). 
Furthermore, some companies used Enterprise Resource Planning (ERP) and Electronic Data Interchange (EDI) as part of Supplier Owned Inventory (SOI) networking, a result of globalized business between countries. Computerized documentation and transactions were mandatory requirements of the Royal Customs and Excise Department. The generic process flow will be the basis for developing a simulation program to be used in the classroom, with the objective of further enhancing the students' learning experience. Thus it will contribute to the body of knowledge on enriching students' employability and shall also be one way to train new workers in the logistics and transportation field.
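A generic process flow of the kind proposed can be prototyped as an ordered sequence of stages through which a shipment passes. The stage names below are illustrative placeholders, not the flow validated in the study.

```python
from dataclasses import dataclass, field

# Illustrative stages of a generic outbound logistics flow; the actual
# stages would come from the study's validated process flow chart.
STAGES = [
    "order_received",         # e.g. captured in SAP/MRP
    "pick_and_pack",
    "customs_documentation",  # computerized, per customs requirements
    "transport",
    "delivered",
]

@dataclass
class Shipment:
    shipment_id: str
    history: list = field(default_factory=list)

def run_flow(shipment: Shipment) -> Shipment:
    """Step a shipment through every stage in order, recording each one."""
    for stage in STAGES:
        shipment.history.append(stage)
    return shipment

s = run_flow(Shipment("SHP-001"))
print(s.history)  # the full ordered trace of the generic flow
```

In a classroom simulation, each stage would be given timing, resources, and failure modes so students can observe how delays propagate through the flow.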

Keywords: enhancement, simulation, process flow, logistics, transportation

Procedia PDF Downloads 321
676 Flood Risk Assessment and Mapping: Finding the Flood Vulnerability Level of the Study Area and Prioritizing the Study Area of Khinch District Using a Multi-Criteria Decision-Making Model

Authors: Muhammad Karim Ahmadzai

Abstract:

Floods are natural phenomena and an integral part of the water cycle. The majority of them are the result of climatic conditions, but they are also affected by the geology and geomorphology of the area, its topography and hydrology, the water permeability of the soil and the vegetation cover, as well as by all kinds of human activities and structures. However, from the moment that human lives are at risk and significant economic impact is recorded, this natural phenomenon becomes a natural disaster. Flood management is now a key issue at regional and local levels around the world, affecting human lives and activities. The majority of floods cannot be fully predicted, but it is feasible to reduce their risks through appropriate management plans and constructions. The aim of this case study is to identify and map areas of flood risk in the Khinch District of Panjshir Province, Afghanistan, specifically in the area of Peshghore, where floods have caused numerous damages. The main purpose of this study is to evaluate the contribution of remote sensing technology and Geographic Information Systems (GIS) in assessing the susceptibility of this region to flood events. Panjshir faces seasonal floods, and human interventions on streams have aggravated them: stream beds have been encroached upon to build houses and hotels or converted into roads, causing flooding after every heavy rainfall. The streams crossing settlements and areas with high touristic development have been intensively modified by humans as the pressure for real estate development land grows. In particular, several areas in Khinch face a high risk of extensive flood occurrence. This study concentrates on the construction of a flood susceptibility map of the study area by combining vulnerability elements using the Analytic Hierarchy Process (AHP). The Analytic Hierarchy Process, normally called AHP, is a powerful yet simple method for making decisions. 
It is commonly used for project prioritization and selection. AHP captures strategic goals as a set of weighted criteria that are then used to score alternatives; here it is used to derive weights for each criterion that contributes to flood events. After processing a digital elevation model (DEM), important secondary data were extracted, such as the slope map, the flow direction, and the flow accumulation. Together with additional thematic information (land use and land cover, topographic wetness index, precipitation, Normalized Difference Vegetation Index, elevation, river density, distance from river, distance to road, and slope), these led to the final flood risk map. Finally, according to this map, priority protection areas and villages were identified, and structural and non-structural measures were proposed to minimize the impacts of floods on residential and agricultural areas.
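The AHP weighting step described above can be sketched as follows: a pairwise-comparison matrix on Saaty's 1-9 scale yields criterion weights via its principal eigenvector, with a consistency ratio as a sanity check. The matrix below covers only four of the study's criteria and contains illustrative judgments, not the study's actual comparisons.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for four flood criteria
# (slope, distance from river, land use, precipitation). A[i][j] states
# how much more important criterion i is than j on Saaty's 1-9 scale;
# the judgments here are illustrative, not the study's.
A = np.array([
    [1.0, 1/3, 2.0, 1/2],
    [3.0, 1.0, 4.0, 2.0],
    [1/2, 1/4, 1.0, 1/3],
    [2.0, 1/2, 3.0, 1.0],
])

# AHP weights: principal eigenvector of A, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights = weights / weights.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI,
# with RI = 0.90 the random index for n = 4. CR < 0.10 is acceptable.
n = A.shape[0]
lam_max = eigvals.real[k]
ci = (lam_max - n) / (n - 1)
cr = ci / 0.90

print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
```

In the GIS workflow, each thematic raster would be reclassified to a common susceptibility scale and the weighted layers summed to produce the flood risk map.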

Keywords: flood hazard, flood risk map, flood mitigation measures, AHP analysis

Procedia PDF Downloads 102
675 Histological Grade Concordance between Core Needle Biopsy and Corresponding Surgical Specimen in Breast Carcinoma

Authors: J. Szpor, K. Witczak, M. Storman, A. Orchel, D. Hodorowicz-Zaniewska, K. Okoń, A. Klimkowska

Abstract:

Core needle biopsy (CNB) is well established as an important diagnostic tool in diagnosing breast cancer and is now considered the initial method of choice for diagnosing breast disease. In comparison to fine needle aspiration (FNA), CNB provides more architectural information, allowing for the evaluation of prognostic and predictive factors for breast cancer, including histological grade, one of the three prognostic factors used to calculate the Nottingham Prognostic Index. Several studies have previously described the concordance rate between CNB and surgical excision specimens in the determination of histological grade (HG); the concordance rates ascribed to overall grade vary widely across the literature, ranging from 59-91%. The aim of this study is to assess the concordance in material from the authors' institution and compare the results with those described in previous literature. The study population included 157 women with a breast tumor who underwent a core needle biopsy for breast carcinoma and a subsequent surgical excision of the tumor. Both materials were evaluated for the determination of histological grade (on a scale from 1 to 3). HG was assessed only in core needle biopsies containing at least 10 well-preserved high-power fields (HPF) with invasive tumor. The degree of concordance between CNB and surgical excision specimen for the determination of tumor grade was assessed by Cohen's kappa coefficient. The level of agreement between core needle biopsy and surgical resection specimen for overall histologic grading was 73% (113 of 155 cases). CNB correctly predicted the grade of the surgical excision specimen in 21 cases for grade 1 tumors (kappa coefficient κ = 0.525, 95% CI (0.3634; 0.6818)), in 52 cases for grade 2 tumors (κ = 0.5652, 95% CI (0.458; 0.667)), and in 40 cases for grade 3 tumors (κ = 0.6154, 95% CI (0.4862; 0.7309)). The highest level of agreement was observed in grade 3 malignancies. 
In 9 of 42 (21%) discordant cases, the grade was higher in the CNB than in the surgical excision; this comprised 6% of all cases. These results correspond to those noted in the literature, showing that underestimation occurs more frequently than overestimation. This study shows that the authors' institution's histologic grading of CNBs and surgical excisions has a fairly good correlation and is consistent with findings in previous reports. Despite the inevitable limitations of CNB, it is an effective method for diagnosing breast cancer and managing treatment options. Assessment of tumour grade by CNB is useful for the planning of treatment, so in the authors' opinion it is worth implementing in daily practice.
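The Cohen's kappa used above can be computed from a grade cross-tabulation: observed agreement on the diagonal, chance agreement from the marginals. The diagonal of the table below echoes the reported concordant counts (21, 52, 40; 113 of 155 cases), but the off-diagonal entries are illustrative, not the study's data.

```python
import numpy as np

def cohens_kappa(confusion: np.ndarray) -> float:
    """Cohen's kappa for an R x R agreement matrix in which rows are
    CNB grades and columns are surgical-specimen grades."""
    total = confusion.sum()
    p_observed = np.trace(confusion) / total
    # Expected agreement by chance, from the marginal distributions.
    row_marg = confusion.sum(axis=1) / total
    col_marg = confusion.sum(axis=0) / total
    p_expected = np.dot(row_marg, col_marg)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical 3x3 grade cross-tabulation (grades 1-3): the diagonal
# matches the reported concordant cases, the off-diagonals are made up.
table = np.array([
    [21,  8,  0],
    [ 6, 52,  9],
    [ 0, 19, 40],
])
print("kappa =", round(cohens_kappa(table), 3))
```

Per-grade kappas, as reported in the abstract, would be obtained by collapsing the table to "this grade vs. any other" for each grade in turn.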

Keywords: breast cancer, concordance, core needle biopsy, histological grade

Procedia PDF Downloads 219
674 Synthetic Classicism: A Machine Learning Approach to the Recognition and Design of Circular Pavilions

Authors: Federico Garrido, Mostafa El Hayani, Ahmed Shams

Abstract:

The exploration of the potential of artificial intelligence (AI) in architecture is still embryonic; however, its latent capacity to change design disciplines is significant. 'Synthetic Classicism' is a research project that questions the underlying aspects of classically organized architecture, not just in aesthetic terms but also from a geometrical and morphological point of view, intending to generate new architectural information using historical examples as source material. The main aim of this paper is to explore the uses of artificial intelligence and machine learning algorithms in architectural design while creating a coherent narrative to be contained within a design process. The purpose is twofold: on one hand, to develop and train machine learning algorithms to produce architectural information about small pavilions, and on the other, to synthesize new information from previous architectural drawings. These algorithms are intended to 'interpret' graphical information from each pavilion and then generate new information from it. The procedure, once these algorithms are trained, is the following: starting from a line profile, a synthetic 'front view' of a pavilion is generated; using it as source material, an isometric view is created from it; and finally, a top view is produced. Thanks to GAN algorithms, it is also possible to generate front and isometric views without any graphical input. The final intention of the research is to produce isometric views out of historical information, such as the pavilions of Sebastiano Serlio, James Gibbs, or John Soane. The idea is to create and interpret new information not just in terms of historical reconstruction but also to explore AI as a novel tool in the narrative of a creative design process. 
This research also challenges the idea that algorithmic design is associated only with efficiency or fitness, embracing instead the possibility of a creative collaboration between artificial intelligence and a human designer. Hence the double feature of this research, both analytical and creative: first synthesizing images based on a given dataset, and then generating new architectural information from historical references. We find that the possibility of creatively understanding and manipulating historic (and synthetic) information will be a key feature in future innovative design processes. Finally, the main question that we propose is whether an AI could be used not just to create an original and innovative group of simple buildings, but also to explore the possibility of fostering a novel architectural sensibility grounded in the specificities of the architectural dataset, whether historic, human-made, or synthetic.
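The staged chain described in the abstract (line profile to front view to isometric view to top view) can be sketched as function composition. The functions `front_gan`, `iso_gan`, and `top_gan` below are hypothetical stand-ins for trained image-to-image GAN generators; only the pipeline structure, not any model, is shown here.

```python
import numpy as np

Image = np.ndarray  # grayscale raster, shape (H, W)

# Placeholder "generators": in the real project each of these would be a
# trained image-to-image GAN generator. Here they are stand-ins that just
# return an image of the right shape, so the staged chain can be shown.
def front_gan(profile: Image) -> Image:
    return np.zeros_like(profile)

def iso_gan(front: Image) -> Image:
    return np.zeros_like(front)

def top_gan(iso: Image) -> Image:
    return np.zeros_like(iso)

def synthesize_pavilion(profile: Image) -> dict:
    """Run the staged chain from the paper: line profile ->
    synthetic front view -> isometric view -> top view."""
    front = front_gan(profile)
    iso = iso_gan(front)
    top = top_gan(iso)
    return {"front": front, "isometric": iso, "top": top}

views = synthesize_pavilion(np.zeros((256, 256)))
print(sorted(views))  # ['front', 'isometric', 'top']
```

Chaining the stages this way means errors compound downstream, which is one reason the intermediate front view matters as a checkpoint for the designer.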

Keywords: architecture, central pavilions, classicism, machine learning

Procedia PDF Downloads 131
673 Effect of the Orifice Plate Specifications on Coefficient of Discharge

Authors: Abulbasit G. Abdulsayid, Zinab F. Abdulla, Asma A. Omer

Abstract:

Because the orifice plate is relatively inexpensive, requires very little maintenance, and is only calibrated during plant turnarounds, it has come into prevalent use in the gas industry. Inaccuracy of measurement in fiscal metering stations is likely the most important factor behind mischarges in the natural gas industry in Libya. Even a trivial error in measurement can add up to a rapidly escalating financial burden in custody transfer transactions. The unaccounted gas quantity transferred annually via orifice plates in Libya could be estimated at several million dollars. As oil and gas wealth is the sole source of income for Libya, every effort is now being exerted to improve the accuracy of existing orifice metering facilities. The discharge coefficient has become pivotal in current research undertaken in this regard; hence, increasing knowledge of the flow field in a typical orifice meter is indispensable. Recently, and at a drastic pace, CFD has become the most time- and cost-efficient versatile tool for in-depth analysis of fluid mechanics and heat and mass transfer in various industrial applications. Its greatest strengths are insight into the underlying physical phenomena and the prediction of all relevant parameters and variables with high spatial and temporal resolution. In this paper, flow phenomena for air passing through an orifice meter were numerically analyzed with CFD-based modeling, giving important information about the effect of orifice plate specifications on the discharge coefficient for three different tapping locations, i.e., flange tappings and D and D/2 tappings, compared with vena contracta tappings. The computed discharge coefficients were compared with discharge coefficients estimated by ISO 5167. 
The influences of orifice plate bore thickness, orifice plate thickness, bevel angle, and perpendicularity and buckling of the orifice plate were all duly investigated. An orifice meter with a pipe diameter of 2 in, a beta ratio of 0.5, and a Reynolds number of 91,100 was taken as a model. The results highlighted that the discharge coefficients were highly responsive to the variation of plate specifications, and in all cases the discharge coefficients for D and D/2 tappings were very close to those of the vena contracta tappings, which are considered the ideal arrangement. Also, in a general sense, the standard equation in ISO 5167, by which the discharge coefficient is calculated, cannot capture the variation of the plate specifications, and thus further thorough consideration is still needed.
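The discharge coefficient evaluated above can be backed out of the standard orifice mass-flow relation used in ISO 5167. The sketch below assumes the familiar form q_m = C / sqrt(1 - beta^4) * eps * (pi/4) * d^2 * sqrt(2 * dp * rho); the bore and beta loosely match the modelled meter (2 in pipe, beta = 0.5), but the flow rate, pressure drop, and density are made-up example values, not the paper's CFD results.

```python
import math

def discharge_coefficient(q_m, d, beta, dp, rho, eps=1.0):
    """Back out C from the orifice mass-flow relation
        q_m = C / sqrt(1 - beta^4) * eps * (pi/4) * d^2 * sqrt(2 * dp * rho)
    q_m  mass flow rate [kg/s]
    d    orifice bore diameter [m]
    beta d / D diameter ratio [-]
    dp   differential pressure across the tappings [Pa]
    rho  upstream fluid density [kg/m^3]
    eps  expansibility factor (1.0 for incompressible flow)
    """
    ideal = eps * (math.pi / 4) * d**2 * math.sqrt(2 * dp * rho)
    return q_m * math.sqrt(1 - beta**4) / ideal

# Illustrative numbers: 2 in pipe (D ~ 50.8 mm) with beta = 0.5 gives
# d ~ 25.4 mm; q_m, dp, and rho are example values, not from the paper.
C = discharge_coefficient(q_m=0.0247, d=0.0254, beta=0.5, dp=2.5e3, rho=1.2)
print(f"C = {C:.3f}")
```

In a CFD study, q_m is imposed and dp is read from the chosen tapping pair, so repeating this calculation per tapping location gives the tapping-dependent coefficients the abstract compares.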

Keywords: CFD, discharge coefficients, orifice meter, orifice plate specifications

Procedia PDF Downloads 110
672 Relocating Migration for Higher Education: Analytical Account of Students' Perspective

Authors: Sumit Kumar

Abstract:

The present study aims to identify factors responsible for the internal migration of students other than the push and pull factors associated with the source region and destination region, respectively, as classified in classical geography. In this classification of factors responsible for student migration, the agency of the individual and of the family he or she belongs to has not been recognized, although it later became the centre of the argument for describing and analyzing migration in the New Economic theory of migration and the New Economics of Labour Migration, respectively. Against this backdrop, the present study aims to understand the agency of the individual and the family members regarding one's migration for higher education, and therefore draws upon the New Economic theory of migration and the New Economics of Labour Migration to identify the agency of the individual or family in the context of migration. Further, migration for higher education involves not only the decision to migrate but also where to migrate (location) and which university, college, and course to pursue. In order to understand the role of various individuals at the various stages of student migration, the present study draws on the social network approach to migration, which identifies the individuals who facilitate the process of migration by reducing its negative externalities through sharing information and various other sorts of help with the migrant. Furthermore, this study also aims to rank the individuals who have helped migrants at various stages of migration for higher education in taking decisions, along with the factors responsible for their migration, on the basis of migrants' perceptions. To fulfill these objectives, quantification of qualitative data (respondents' perceptions) has been done through frequency distribution analysis. 
Data were collected at two levels, with a questionnaire survey as the tool for data collection on both occasions. Twenty-five students who had migrated to another state for higher education were approached for a pre-questionnaire survey consisting of open-ended questions, while one hundred students from the same population were approached for a questionnaire survey consisting of close-ended questions. This study has identified social pressure, peer group pressure, and parental pressure, variables not constituting push and pull factors, as very important for student migration; respondents even ranked them higher than push factors. Further, respondents ranked themselves (the migrants) first, followed by parents, when it came to taking the various decisions attached to the process of migration. Therefore, it can be said that factors other than push and pull factors facilitate the process of migration for higher education, not only in the decision to migrate but also at other levels intrinsic to the process.
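The frequency distribution analysis used above can be sketched as a simple tabulation of ranked responses. The counts below are simulated to echo the study's finding that "self" was ranked first, followed by parents; they are not the survey's actual data.

```python
from collections import Counter

# Hypothetical first-rank responses on who most influenced the migration
# decision; simulated counts, not the survey's data.
first_ranked = (
    ["self"] * 46 + ["parents"] * 31 + ["peer group"] * 14 + ["others"] * 9
)

# Frequency distribution analysis: tabulate how often each actor is
# ranked first and express it as a share of respondents.
freq = Counter(first_ranked)
n = len(first_ranked)
for actor, count in freq.most_common():
    print(f"{actor:11s} {count:3d} ({100 * count / n:.0f}%)")
```

The same tabulation, repeated per decision stage (whether to migrate, where, which course), yields the stage-wise rankings the study reports.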

Keywords: agency, migration for higher education, perception, push and pull factors

Procedia PDF Downloads 227
671 Virtual Reality in COVID-19 Stroke Rehabilitation: Preliminary Outcomes

Authors: Kasra Afsahi, Maryam Soheilifar, S. Hossein Hosseini

Abstract:

Background: There is growing evidence that a cerebral vascular accident (CVA) can be a consequence of COVID-19 infection, so understanding novel treatment approaches is important in optimizing patient outcomes. Case: This case explores the use of virtual reality (VR) in the treatment of a 23-year-old COVID-positive female presenting with left hemiparesis in August 2020. Imaging showed ischemic stroke in the right globus pallidus, thalamus, and internal capsule. Conventional rehabilitation was started two weeks later, with VR included. The game-based VR technology used was developed for stroke patients and is based on upper extremity exercises and functions. Physical examination showed left hemiparesis with muscle strength 3/5 in the upper extremity and 4/5 in the lower extremity. The range of motion of the shoulder was 90-100 degrees. The speech exam showed a mild decrease in fluency, and mild lower lip dynamic asymmetry was seen. Babinski was positive on the left. Gait speed was decreased (75 steps per minute). Intervention: Our game-based VR system was developed on the basis of upper extremity physiotherapy exercises for post-stroke patients, to increase active, voluntary movement of the upper extremity joints and improve function. The conventional program was initiated with active exercises: shoulder sanding for joint range of motion, walking shoulder, the shoulder wheel, combination movements of the shoulder, elbow, and wrist joints, alternating flexion-extension and pronation-supination movements, and pegboard and Purdue Pegboard exercises. Fine-motor work included smart gloves, biofeedback, the finger ladder, and writing. The difficulty of the game increased at each stage of practice as patient performance progressed. Outcome: After 6 weeks of treatment, gait and speech were normal and upper extremity strength had improved to near-normal status. No adverse effects were noted. 
Conclusion: This case suggests that VR is a useful tool in the treatment of a patient with COVID-19-related CVA. The safety of newly developed instruments for such cases opens new approaches to improve therapeutic outcomes and prognosis, as well as to increase satisfaction among patients.

Keywords: covid-19, stroke, virtual reality, rehabilitation

Procedia PDF Downloads 133
670 Human Resource Management Functions and the Employee Performance of Professional Health Workers in Public District Hospitals

Authors: Benjamin Mugisha Bugingo

Abstract:

Healthcare staff have been considered a significant pillar of the health care system. However, the challenge of human resources for health, in terms of the turnover of health workers in Uganda, has become more distinct in recent years. The objective of the paper, therefore, was to investigate the influence of human resource management functions on the employee performance of professional health workers in public district hospitals in Kampala. The study objectives were to establish the effects of the performance management function, financial incentives, non-financial incentives, and participation and involvement in decision-making on the employee performance of professional health workers in public district hospitals in Kampala. The study was grounded in social exchange theory and equity theory. It adopted a descriptive, cross-sectional research design with a mixed-methods approach. From a population of 402 individuals, the study drew a sample of 252 respondents, including doctors, nurses, midwives, pharmacists, and dentists from 3 district hospitals. The study instruments comprised a questionnaire as the quantitative data collection tool, and interviews and focus group discussions as qualitative data-gathering tools. To analyze the quantitative data, descriptive statistics were used to assess the perceived status of human resource management functions and the magnitude of intentions to stay, and inferential statistics were used to show the effect of the predictors on the outcome variable by fitting a multiple linear regression. Qualitative data were analyzed in themes, reported in narrative and verbatim quotes, and used to complement the descriptive findings for a better understanding of the magnitude of the study variables. 
The findings of this study showed a significant and positive effect of the performance management function, financial incentives, non-financial incentives, and participation and involvement in decision-making on the employee performance of professional health workers in public district hospitals in Kampala. This study is expected to be a major contributor to the improvement of the health system in the country and in similar settings, as it provides insights for strategic orientation in the area of human resources for health, especially for enhanced employee performance in line with the integrated human resource management approach.
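The multiple linear regression described above regresses employee performance on the four HRM-function predictors. The sketch below uses ordinary least squares on simulated Likert-style data with the study's sample size (n = 252); the predictor values and coefficients are illustrative assumptions, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 252  # the study's sample size

# Simulated predictor scores (e.g. 1-5 Likert means) for the four HRM
# functions; both the data and the coefficients are illustrative only.
X = rng.uniform(1, 5, size=(n, 4))  # columns: performance management,
                                    # financial incentives, non-financial
                                    # incentives, participation in
                                    # decision-making
true_beta = np.array([0.4, 0.3, 0.2, 0.25])
y = 1.0 + X @ true_beta + rng.normal(0, 0.3, size=n)  # employee performance

# Ordinary least squares: add an intercept column and solve.
X1 = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

print("intercept:", round(coef[0], 2))
print("slopes:   ", np.round(coef[1:], 2))
```

Positive fitted slopes on all four predictors would mirror the reported finding of significant positive effects of each HRM function on performance.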

Keywords: human resource functions, employee performance, employee wellness, professional workers

Procedia PDF Downloads 81
669 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances

Authors: P. Mounnarath, U. Schmitz, Ch. Zhang

Abstract:

Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints according to the various bridge design codes is largely inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm, and 350 mm) are designed following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference. This model uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analyses are performed. Artificial ground motion sets with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g, in increments of 0.05 g, are taken as input. The soil-structure interaction and the P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits a severe vulnerability compared to the other, more sophisticated bridge models for all damage states.
In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states but a higher fragility than the other curves at larger PGA levels. In the fourth damage state, the reference curve shows the smallest vulnerability. In both the component and the system fragility analyses, the same trend is found: bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
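The component-to-system combination described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' code: it assumes lognormal component fragility curves (the usual form in bridge fragility studies) with hypothetical median capacities and dispersions, and combines the pier and bearing fragilities as a series system.

```python
import math

def fragility(pga, median, beta):
    """Lognormal fragility curve: P(demand reaches the damage state | PGA)."""
    return 0.5 * (1.0 + math.erf(math.log(pga / median) / (beta * math.sqrt(2.0))))

def system_fragility(p_pier, p_bearing):
    """Series-system combination: the bridge is damaged if either component is."""
    return 1.0 - (1.0 - p_pier) * (1.0 - p_bearing)

# Hypothetical median capacities (g) and dispersions for one damage state
pgas = [0.1 + 0.05 * i for i in range(19)]        # 0.10 g .. 1.00 g
pier = [fragility(a, 0.6, 0.5) for a in pgas]     # pier ductility demand/capacity
bearing = [fragility(a, 0.8, 0.6) for a in pgas]  # bearing displacement ratio
system = [system_fragility(p, b) for p, b in zip(pier, bearing)]
```

Under the series-system assumption, the system curve bounds each component curve from above, which is why combining components generally increases the estimated damage probability at a given PGA.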

Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis

Procedia PDF Downloads 424
668 Opportunities and Challenges: Tracing the Evolution of India's First State-led Curriculum-based Media Literacy Intervention

Authors: Ayush Aditya

Abstract:

In today's digitised world, the extent of an individual’s social involvement is largely determined by their interaction over the internet. The internet has emerged as a primary source of information consumption and a reliable medium for receiving updates on everyday activities. Owing to this change in information consumption patterns, the internet has also emerged as a hotbed of misinformation. Experts are of the view that media literacy has emerged as one of the most effective strategies for addressing the issue of misinformation. This paper aims to study the evolution of the Kerala government's media literacy policy, its implementation strategy, and its challenges and opportunities. The objective of this paper is to create a conceptual framework containing details of the implementation strategy based on the Kerala model. Extensive secondary research of the literature, newspaper articles, and other online sources was carried out to establish the timeline of this policy. This was followed by semi-structured interviews with government officials from Kerala to trace the origin and evolution of the policy. Preliminary findings based on the collected data suggest that this policy is a case of policy by chance, as the officer who headed it during the state-level implementation had already piloted a media literacy program in the district of Kannur as the district collector. Through this paper, an attempt is made to trace the history of the media literacy policy starting from the Kannur intervention in 2018, which was launched to address vaccine hesitancy around the measles-rubella (MR) vaccination; if not for that vaccine hesitancy, the program would not have been rolled out in Kannur. Interviews with government officials suggest that when authorities decided to take up the initiative at the state level in 2020, the trigger was the huge amount of misinformation emerging during the COVID-19 pandemic.
There was misinformation regarding government orders, healthcare facilities, vaccination, and lockdown regulations, which affected everyone, unlike the case of Kannur, where only children of a certain age group were affected. As a solution to this problem, the state government decided to create a media literacy curriculum to be taught in all government schools of the state, from standard 8 up to graduation. This was a tricky task, as a new course had to be introduced immediately into the school curriculum amid all the disruptions to the education system caused by the pandemic. It was revealed during the interviews that in the state-wide implementation, every step involved multiple checks and balances, unlike the earlier program, where stakeholders were roped in as and when the need emerged. On pedagogy, while the training during the pilot could be managed through PowerPoint presentations, designing a state-wide curriculum involved multiple iterations and expert approvals, in part because COVID-19-related misinformation had lost its salience. In the next phase of the research, an attempt will be made to compare other aspects of the pilot implementation with the state-wide implementation.

Keywords: media literacy, digital media literacy, curriculum based media literacy intervention, misinformation

Procedia PDF Downloads 76
667 Part Variation Simulations: An Industrial Case Study with an Experimental Validation

Authors: Narendra Akhadkar, Silvestre Cano, Christophe Gourru

Abstract:

Injection-molded parts are widely used in power system protection products. One of the biggest challenges in an injection molding process is the shrinkage and warpage of the molded parts. These geometrical variations may have an adverse effect on product quality, functionality, cost, and time-to-market. The situation becomes more challenging in the case of intricate shapes and in mass production using multi-cavity tools. To control the effects of shrinkage and warpage, it is very important to correctly identify the input parameters that could affect product performance. With the advances in computer-aided engineering (CAE), different tools are available to simulate the injection molding process; for our case study, we used the Moldflow Insight tool. Our aim is to predict the spread of the functional dimensions and geometrical variations of the part due to variations in input parameters such as material viscosity, packing pressure, mold temperature, melt temperature, and injection speed. The input parameters may vary during batch production or due to variations in the machine process settings. To perform an accurate product assembly variation simulation, the first step is to perform an individual part variation simulation to render realistic tolerance ranges. In this article, we present a method to simulate part variations arising from input parameter variation during batch production. The method is based on computer simulations and experimental validation using a full factorial design of experiments (DoE). The robustness of the simulation model is verified through a parameter-wise sensitivity analysis performed using simulations and experiments; all the results show a very good correlation in the material flow direction. There exists a non-linear interaction between the material and the input process variables.
It is observed that parameters such as packing pressure, material, and mold temperature play an important role in the spread of the functional dimensions and geometrical variations. This method will allow us in the future to develop accurate, realistic virtual prototypes based on trusted simulated process variation and, therefore, to increase product quality and potentially decrease the time to market.
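The full factorial DoE mentioned above enumerates every combination of the input parameter levels, with each combination defining one simulation run. A minimal sketch in Python; the parameter names and two-level values below are hypothetical stand-ins, since the study's actual settings are not given in the abstract:

```python
from itertools import product

# Hypothetical two-level settings for the five input parameters named above
factors = {
    "viscosity_scale":      (0.9, 1.1),
    "packing_pressure_MPa": (60, 80),
    "mold_temp_C":          (40, 80),
    "melt_temp_C":          (220, 260),
    "injection_speed_mm_s": (50, 100),
}

# Full factorial design: every combination of levels -> 2**5 = 32 runs,
# each run being one set of process inputs for a molding simulation
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(runs))  # 32
```

Each entry in `runs` would be fed to the molding simulation, and the resulting functional dimensions collected to estimate their spread and parameter sensitivities.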

Keywords: correlation, molding process, tolerance, sensitivity analysis, variation simulation

Procedia PDF Downloads 164
666 Social Skills as a Significant Aspect of a Successful Start of Compulsory Education

Authors: Eva Šmelová, Alena Berčíková

Abstract:

The issue of school maturity and a child’s readiness for a successful start of compulsory education is one of the long-term monitored areas, especially in the context of education and psychology. In the context of the curricular reform in the Czech Republic, the issue has recently gained importance. Analyses of research in this area suggest a lack of a broader overview of indicators of the current level of children’s school maturity and school readiness; instead, various studies address partial issues. Between 2009 and 2013, a research study was performed at the Faculty of Education, Palacký University Olomouc (Czech Republic) focusing on children’s maturity and readiness for compulsory education. In that study, social skills were of marginal interest; the main focus was on the mental area. The previous research links smoothly with the present study, the objective of which is to identify the level of school maturity and school readiness in selected characteristics of social skills as part of the adaptation process after enrolment in compulsory education. In this context, the following research question has been formulated: During the process of adaptation to the school environment, which social skills are weakened? The method applied was observation, for the purposes of which the authors developed a research tool – a record sheet with 11 items, the social skills that a child should have acquired by the end of preschool education. The items were assessed by first-grade teachers at the beginning of the school year. The degree of achievement and the intensity of each skill were assessed for each child using an assessment scale. The authors monitored a total of three independent variables (gender, postponement of school attendance, participation in inclusive education). The effect of these independent variables was monitored using 11 dependent variables, represented by the results achieved in the selected social skills.
Statistical data processing was assisted by the Computer Centre of Palacký University Olomouc. Statistical calculations were performed using SPSS v. 12.0 for Windows and STATISTICA: StatSoft STATISTICA CR, Cz (software system for data analysis). The research sample comprised 115 children. In their paper, the authors present the results of the research and at the same time point to possible areas of further investigation. They also highlight possible risks associated with weakened social skills.

Keywords: compulsory education, curricular reform, educational diagnostics, pupil, school curriculum, school maturity, school readiness, social skills

Procedia PDF Downloads 238
665 Miniaturized PVC Sensors for Determination of Fe2+, Mn2+ and Zn2+ in Buffalo-Cows’ Cervical Mucus Samples

Authors: Ahmed S. Fayed, Umima M. Mansour

Abstract:

Three polyvinyl chloride (PVC) membrane sensors were developed for the electrochemical evaluation of ferrous, manganese, and zinc ions. The sensors were used for assaying these metal ions in the cervical mucus (CM) of Egyptian river buffalo-cows (Bubalus bubalis), as their levels vary with the cyclical hormone variation during the different phases of the estrus cycle. The presented sensors are based on the ionophores β-cyclodextrin (β-CD), hydroxypropyl β-cyclodextrin (HP-β-CD), and sulfocalix-4-arene (SCAL) for sensors 1, 2, and 3, for Fe2+, Mn2+, and Zn2+, respectively. Dioctyl phthalate (DOP) was used as the plasticizer in the PVC polymeric matrix. To increase the selectivity and sensitivity of the sensors, each sensor was enriched with a suitable complexing agent, which enhanced the sensor’s response. For sensor 1, β-CD was mixed with bathophenanthroline; for sensor 2, porphyrin was incorporated with HP-β-CD; while for sensor 3, oxine was the complexing agent used with SCAL. Linear responses over 10⁻⁷-10⁻² M, with cationic slopes of 53.46, 45.01, and 50.96 mV/decade, were obtained over a pH range of 4-8 using coated graphite sensors for ferrous, manganese, and zinc ionic solutions, respectively. The three sensors were validated according to the IUPAC guidelines. The results obtained by the presented potentiometric procedures were statistically analyzed and compared with those obtained by an atomic absorption spectrophotometric method (AAS). No significant differences in either accuracy or precision were observed between the two techniques. The three studied cations were successfully determined in CM for the purpose of identifying the proper time for artificial insemination (AI), and the results were compared with those obtained upon analyzing the samples by AAS. Proper detection of estrus and correct timing of AI are necessary to maximize the production of buffaloes.
In this experiment, 30 multiparous buffalo-cows, in their second to third lactation and weighing 415-530 kg, were synchronized with the OVSynch protocol. Samples were taken at three times around ovulation: on day 8 of the OVSynch protocol, on day 9 (20 h before AI), and on day 10 (1 h before AI). Besides the analysis of trace elements (Fe2+, Mn2+ and Zn2+) in CM using the three sensors, the three cations and also Cu2+ were analyzed by AAS in both the CM and blood samples. The results were correlated with the hormonal analysis of serum samples and with ultrasonography for the purpose of determining the optimum time of AI. The results showed significant differences and a powerful correlation with the Zn2+ composition of CM during the heat phase and at the time of ovulation, indicating that this parameter could be used as a tool to decide the optimal time of AI in buffalo-cows.
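Ion-selective electrodes of the kind described above are conventionally calibrated by regressing the measured potential against the logarithm of concentration; the slope of that line is the "cationic slope" reported for each sensor. A minimal sketch of that calibration step; the EMF values are synthetic, generated from the reported Zn2+ slope purely for illustration:

```python
import math

def calibration_slope(concentrations, potentials):
    """Least-squares slope of electrode potential vs. log10(concentration),
    i.e., the sensor slope in mV per concentration decade."""
    x = [math.log10(c) for c in concentrations]
    n = len(x)
    mx = sum(x) / n
    my = sum(potentials) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, potentials))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Hypothetical calibration over the reported 1e-7 to 1e-2 M linear range
conc = [10 ** -k for k in range(7, 1, -1)]         # 1e-7 .. 1e-2 M
emf = [50.96 * math.log10(c) + 400 for c in conc]  # ideal 50.96 mV/decade sensor
print(round(calibration_slope(conc, emf), 2))      # 50.96
```

In practice the fitted slope is compared against the Nernstian expectation to validate the electrode before assaying unknown samples.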

Keywords: PVC Sensors, buffalo-cows, cyclodextrins, atomic absorption spectrophotometry, artificial insemination, OVSynch protocol

Procedia PDF Downloads 211
664 Categorical Metadata Encoding Schemes for Arteriovenous Fistula Blood Flow Sound Classification: Scaling Numerical Representations Leads to Improved Performance

Authors: George Zhou, Yunchan Chen, Candace Chien

Abstract:

Kidney replacement therapy is the current standard of care for end-stage renal disease. In-center or home hemodialysis remains an integral component of the therapeutic regimen. Arteriovenous fistulas (AVF) make up the vascular circuit through which blood is filtered and returned. Naturally, AVF patency determines whether adequate clearance and filtration can be achieved and directly influences clinical outcomes. Our aim was to build a deep learning model for automated AVF stenosis screening based on the sound of blood flow through the AVF. A total of 311 patients with AVF were enrolled in this study. Blood flow sounds were collected using a digital stethoscope at 6 different locations along each patient’s AVF: artery, anastomosis, distal vein, middle vein, proximal vein, and venous arch. A total of 1866 sounds were collected. The blood flow sounds are labeled as “patent” (normal) or “stenotic” (abnormal), with labels validated against concurrent ultrasound. Our dataset included 1527 “patent” and 339 “stenotic” sounds. We show that blood flow sounds vary significantly along the AVF; for example, the blood flow sound is loudest at the anastomosis site and softest at the cephalic arch. Contextualizing the sound with location metadata significantly improves classification performance. How to encode and incorporate categorical metadata is an active area of research. Herein, we study ordinal (i.e., integer) encoding schemes, in which the numerical representation is concatenated to the flattened feature vector. We train a vision transformer (ViT) on spectrogram image representations of the sounds and demonstrate that using scalar multiples of our integer encodings improves classification performance. Models are evaluated using a 10-fold cross-validation procedure. The baseline ViT without any location metadata achieves an AuROC and AuPRC of 0.68 ± 0.05 and 0.28 ± 0.09, respectively.
Using the encoding Artery: 0; Arch: 1; Proximal: 2; Middle: 3; Distal: 4; Anastomosis: 5, the ViT achieves an AuROC and AuPRC of 0.69 ± 0.06 and 0.30 ± 0.10, respectively. Using the encoding Artery: 0; Arch: 10; Proximal: 20; Middle: 30; Distal: 40; Anastomosis: 50, the ViT achieves an AuROC and AuPRC of 0.74 ± 0.06 and 0.38 ± 0.10, respectively. Using the encoding Artery: 0; Arch: 100; Proximal: 200; Middle: 300; Distal: 400; Anastomosis: 500, the ViT achieves an AuROC and AuPRC of 0.78 ± 0.06 and 0.43 ± 0.11, respectively. Interestingly, using increasing scalar multiples of our integer encoding scheme (i.e., encoding “venous arch” as 1, 10, 100) results in progressively improved performance. In theory, the integer values should not matter, since we are optimizing the same loss function; the model can learn to increase or decrease the weights associated with the location encodings and converge on the same solution. However, in the setting of limited data and computation resources, increasing the importance of the encoding at initialization either leads to faster convergence or helps the model escape a local minimum.
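The encoding scheme itself is simple: the (scaled) ordinal site code is appended to the flattened feature vector before the classification head. A minimal sketch, with a hypothetical 768-dimensional vector standing in for the ViT features:

```python
import numpy as np

# Ordinal codes for the six recording sites, as described above
LOCATION_CODES = {"artery": 0, "arch": 1, "proximal": 2,
                  "middle": 3, "distal": 4, "anastomosis": 5}

def append_location(features, location, scale=100):
    """Concatenate a scaled ordinal location encoding to a flattened feature
    vector (scale=100 mirrors the best-performing scheme reported above)."""
    code = np.array([LOCATION_CODES[location] * scale], dtype=features.dtype)
    return np.concatenate([features.ravel(), code])

flat = np.zeros(768, dtype=np.float32)      # hypothetical flattened ViT features
vec = append_location(flat, "anastomosis")  # appended code = 5 * 100 = 500
print(vec.shape, vec[-1])
```

The scale factor only changes the magnitude of the appended feature at initialization; as the abstract notes, this should be learnable away in principle, but it measurably affects convergence under limited data.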

Keywords: arteriovenous fistula, blood flow sounds, metadata encoding, deep learning

Procedia PDF Downloads 72
663 Destruction of History and the Syrian Conflict: Upholding the Cultural Integrity of Dura Europos

Authors: Justine A. Lloyd

Abstract:

Since the onset of the Syrian Civil War in 2011, the ancient city of Dura-Europos has faced widespread destruction and looting. The site is one of many places in the country that the terrorist group ISIS has specifically targeted, allegedly due to its particular representations of Syrian history and culture; looted art and artifacts are, after oil, the extremist group's second-largest source of income. The protection of this site is important both to academics and to the millions who have called Syria a home, as it aids the nation's sense of identity, reveals developments in the arts, and contributes to humanity's collective history. At a time when Syria's culture is being flattened, this sense of cultural expression is especially important to maintain. Creating awareness of the magnitude of the issue at hand begins with an examination of the rich history of the ancient fortress city. Located on the western bank of the Euphrates River, Dura-Europos contains artifacts dating back to the Hellenistic, Parthian, and Roman periods. Though a great deal of the art and artifacts have remained safe in institutions such as the National Museum of Damascus and the Yale University Art Gallery, hundreds of looting pits and the use of heavy machinery on the site have severely set back the investigative progress made by archaeologists over the last century, as well as the prospect of future excavation. Further research draws on the current destruction of the site by both ISIS and opportunists involved with the black market. Because Dura-Europos is located in a war-stricken region, the acquisition of data and the possibility of immediate action are particularly challenging. Resources gained from local reports, in addition to technology such as satellite imagery, have nevertheless provided a firm starting point for evaluating the state of the site.
The Syrian Ministry of Culture, UNESCO, and numerous Syrian and global organizations provide insight into the historic city’s past, its present issues, and future plans to ensure that the cultural integrity of the site is upheld. Though over seventy percent of Dura-Europos has been completely decimated, this research challenges the notion that physically destroyed sites are lost forever. This paper assesses preventative measures that can be taken to ensure the preservation of the site’s art and architecture, including possible responses to the damage such as digital reconstruction, replication, and the distribution of information through exhibitions and other publicly accessible formats. To investigate any possible retribution, the research also includes the necessary information pertaining to the global laws and regulations dealing with cultural heritage, as these directly affect the ways in which this situation can be addressed. With the countless experts and citizens dedicated to the importance of cultural heritage, the prospect of honoring and valuing the elements of Dura-Europos is possible, whether physically preserved or otherwise.

Keywords: antiquities law, archaeological sites, restitution, Syrian Civil War

Procedia PDF Downloads 155
662 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments

Authors: David X. Dong, Qingming Zhang, Meng Lu

Abstract:

Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity to nitrites. Therefore, an accurate and economical method to monitor nitrites in the environment is desirable. We report a low-cost optical sensor used in conjunction with a machine learning (ML) approach to enable high-accuracy detection of nitrites in water sources. The sensor works on the principle of measuring the molecular absorption of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites, and low-cost light-emitting diodes (LEDs) and photodetectors are available at these wavelengths. A regression model is built, trained, and utilized to minimize the cross-sensitivities of these wavelengths to the same analyte, thus achieving precise and reliable measurements in the presence of various interfering ions. The measured absorbance data are input to the trained model, which predicts the nitrite concentration of the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains the liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, each providing a narrowband light, and iii) a photodetector with a built-in amplifier and an analog-to-digital converter placed on the other side of the test cell to measure the power of the transmitted light.
This simple optical design allows measuring the absorbance of the sample at the three wavelengths. To train the regression model, the absorbances of nitrite ions, alone and in combination with various interfering ions, are first obtained at the three UV wavelengths using a conventional spectrophotometer. Then, the spectrophotometric data are used to train and evaluate different regression algorithms for high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, much cheaper than a commercial spectrophotometer. The ML algorithm helps reduce the average relative error to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrites. The sensor has been validated by measuring nitrites at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to the rapid, reagent-free determination of nitrites with high accuracy. The integration of the low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management.
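The training step described above, mapping three absorbance readings to a nitrite concentration, can be sketched with ordinary least squares on synthetic data. The sensitivities and noise level below are made-up stand-ins for real spectrophotometric calibration data, and a simple linear model stands in for whichever regression algorithm performs best:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for spectrophotometer training data: absorbance at
# 295/310/357 nm assumed linear in nitrite concentration (made-up sensitivities)
true_sens = np.array([0.8, 1.2, 0.5])
conc = rng.uniform(0.1, 100.0, size=200)  # ppm, the reported range
absorb = np.outer(conc, true_sens) + rng.normal(0, 0.01, (200, 3))

# Ordinary least squares mapping the three absorbances (plus an intercept)
# to concentration in one linear model
X = np.column_stack([absorb, np.ones(len(conc))])
coefs, *_ = np.linalg.lstsq(X, conc, rcond=None)

# Predict a fresh sample whose true concentration is 10 ppm
sample = np.append(10.0 * true_sens, 1.0)
pred = float(sample @ coefs)
print(round(pred, 2))  # ≈ 10 ppm
```

Training on mixtures containing interfering ions, as the authors do, lets the fitted coefficients cancel cross-sensitivities that a single-wavelength reading cannot separate.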

Keywords: optical sensor, regression model, nitrites, water quality

Procedia PDF Downloads 66
661 Inbreeding Study Using Runs of Homozygosity in Nelore Beef Cattle

Authors: Priscila A. Bernardes, Marcos E. Buzanskas, Luciana C. A. Regitano, Ricardo V. Ventura, Danisio P. Munari

Abstract:

The best linear unbiased predictor (BLUP) is a method commonly used in the genetic evaluations of breeding programs. However, this approach can lead to higher inbreeding coefficients in the population due to the intensive use of a few bulls with higher genetic potential, usually presenting some degree of relatedness. High levels of inbreeding are associated with low genetic viability, fertility, and performance for some economically important traits and should therefore be constantly monitored. Unreliable pedigree data can also lead to misleading results. Genomic information (i.e., single nucleotide polymorphisms – SNPs) is a useful tool to estimate the inbreeding coefficient. Runs of homozygosity have been used to evaluate homozygous segments inherited due to direct or collateral inbreeding and allow inferring the selection history of a population. This study aimed to evaluate runs of homozygosity (ROH) and inbreeding in a population of Nelore beef cattle. A total of 814 animals were genotyped with the Illumina BovineHD BeadChip, and quality control was carried out excluding SNPs located in non-autosomal regions, with unknown position, with a p-value for Hardy-Weinberg equilibrium lower than 10⁻⁵, or with a call rate lower than 0.98, as well as samples with a call rate lower than 0.90. After quality control, 809 animals and 509,107 SNPs remained for analysis. For the ROH analysis, the PLINK software was used, considering segments with at least 50 SNPs and a minimum length of 1 Mb in each animal. The inbreeding coefficient was calculated as the ratio between the sum of all ROH sizes and the size of the whole genome (2,548,724 kb). A total of 25,711 ROH were observed, with mean, median, minimum, and maximum lengths of 3.34 Mb, 2 Mb, 1 Mb, and 80.8 Mb, respectively. The number of SNPs present in ROH segments varied from 50 to 14,954. The longest total ROH extent was observed in one animal, which presented 634 Mb in ROH (24.88% of the genome).
Four bulls were among the 10 animals with the longest total extent of ROH, presenting 11% of ROH with lengths greater than 10 Mb. Segments longer than 10 Mb indicate recent inbreeding; therefore, the results indicate an intensive use of a few sires in the studied data. The distribution of ROH along the chromosomes showed that chromosomes 5 and 6 presented a large number of segments compared to the other chromosomes. The mean, median, minimum, and maximum inbreeding coefficients were 5.84%, 5.40%, 0.00%, and 24.88%, respectively. Although the mean inbreeding was considered low, the ROH indicate a recent and intensive use of a few sires, which should be avoided for the genetic progress of the breed.
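The genomic inbreeding coefficient used in the study (often written F_ROH) is simply the summed ROH length divided by the autosomal genome length. A minimal sketch using the genome size reported above:

```python
GENOME_KB = 2_548_724  # autosomal genome length used in the study, in kb

def f_roh(roh_lengths_kb):
    """Genomic inbreeding coefficient: summed ROH length / genome length."""
    return sum(roh_lengths_kb) / GENOME_KB

# The study's most inbred animal carried a total of 634 Mb in ROH
print(round(100.0 * f_roh([634_000]), 2))  # 24.88 (%)
```

This reproduces the study's maximum inbreeding coefficient of 24.88%, confirming that the 634 Mb figure is a per-animal total rather than a single-segment length.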

Keywords: autozygosity, Bos taurus indicus, genomic information, single nucleotide polymorphism

Procedia PDF Downloads 141
660 Functional Neurocognitive Imaging (fNCI): A Diagnostic Tool for Assessing Concussion Neuromarker Abnormalities and Treating Post-Concussion Syndrome in Mild Traumatic Brain Injury Patients

Authors: Parker Murray, Marci Johnson, Tyson S. Burnham, Alina K. Fong, Mark D. Allen, Bruce McIff

Abstract:

Purpose: Pathological dysregulation of neurovascular coupling (NVC) caused by mild traumatic brain injury (mTBI) is the predominant source of chronic post-concussion syndrome (PCS) symptomatology. fNCI can localize dysregulation in NVC by measuring blood-oxygen-level-dependent (BOLD) signaling during the performance of fMRI-adapted neuropsychological evaluations. With fNCI, 57 brain areas consistently affected by concussion were identified as PCS neuromarkers, which were validated on large samples of concussion patients and healthy controls. These neuromarkers provide the basis for a computation of PCS severity referred to as the Severity Index Score (SIS). The SIS has proven valuable in making pre-treatment decisions, monitoring treatment efficacy, and assessing the long-term stability of outcomes. Methods and Materials: After being scanned while performing various cognitive tasks, 476 concussed patients received an SIS score based on the neural dysregulation of the 57 previously identified brain regions. These scans provide an objective measurement of attentional, subcortical, visual processing, language processing, and executive functioning abilities, which were used as biomarkers for post-concussive neural dysregulation. Initial SIS scores were used to develop individualized therapy incorporating cognitive, occupational, and neuromuscular modalities. These scores were also used to establish pre-treatment benchmarks and measure post-treatment improvement. Results: Changes in SIS were calculated as percent change from pre- to post-treatment. Patients showed a mean improvement of 76.5 percent (σ = 23.3), and 75.7 percent of patients showed at least 60 percent improvement. Longitudinal reassessment of 24 of the patients, performed an average of 7.6 months post-treatment, shows that the SIS improvement is maintained and extended, with an average improvement of 90.6 percent relative to the original scan.
Conclusions: fNCI provides a reliable measurement of NVC, allowing for the identification of concussion pathology. Additionally, fNCI-derived SIS scores direct tailored therapy to restore NVC, subsequently resolving the chronic PCS resulting from mTBI.

Keywords: concussion, functional magnetic resonance imaging (fMRI), neurovascular coupling (NVC), post-concussion syndrome (PCS)

Procedia PDF Downloads 333
659 Surface Modified Core–Shell Type Lipid–Polymer Hybrid Nanoparticles of Trans-Resveratrol, an Anticancer Agent, for Long Circulation and Improved Efficacy against MCF-7 Cells

Authors: M. R. Vijayakumar, K. Priyanka, Ramoji Kosuru, Lakshmi, Sanjay Singh

Abstract:

Trans-resveratrol (RES) is a non-flavonoid polyphenolic compound proven for its therapeutic and preventive effects against various types of cancer. However, the practical application of RES in cancer treatment is limited because of its high dose (up to 7.5 g/day in humans), low biological half-life, rapid metabolism, and fast elimination in mammals. PEGylated core-shell lipid-polymer hybrid nanoparticles are novel drug delivery systems for long circulation and an improved anticancer effect of their therapeutic payloads. Therefore, the main objective of this study is to extend the biological half-life (long circulation) and improve the therapeutic efficacy of RES through core-shell nanoparticles. D-α-tocopheryl polyethylene glycol 1000 succinate (vitamin E TPGS), a novel surfactant, was applied for the preparation of the PEGylated lipid-polymer hybrid nanoparticles. The prepared nanoparticles were evaluated by various state-of-the-art techniques: dynamic light scattering (DLS) for particle size and zeta potential, TEM for shape, differential scanning calorimetry (DSC) for interaction analysis, and XRD for crystalline changes of the drug. Entrapment efficiency and in vitro drug release were determined by the ultracentrifugation method and the dialysis bag method, respectively. Cancer cell viability studies were performed by MTT assay. Pharmacokinetic studies after i.v. administration were performed in Sprague Dawley rats. The prepared NPs were found to be spherical with smooth surfaces. The particle size and zeta potential of the prepared NPs were in the ranges of 179.2±7.45 to 266.8±9.61 nm and -0.63 to -48.35 mV, respectively. DSC revealed the absence of potential interactions. XRD revealed the presence of the amorphous form of the drug in the nanoparticles. The entrapment efficiency was found to be 83.7%, and drug release occurred in a controlled manner.
The MTT assay showed a low MEC, and pharmacokinetic studies showed a higher AUC for the nanoformulation than for the pristine drug. All these studies revealed that RES-loaded PEG-modified core-shell lipid-polymer hybrid nanoparticles can be an alternative tool for the chemopreventive and therapeutic application of RES in cancer.

Keywords: trans resveratrol, cancer nanotechnology, long circulating nanoparticles, bioavailability enhancement, core shell nanoparticles, lipid polymer hybrid nanoparticles

Procedia PDF Downloads 460
658 Downtime Estimation of Building Structures Using Fuzzy Logic

Authors: M. De Iuliis, O. Kammouh, G. P. Cimellaro, S. Tesfamariam

Abstract:

Community resilience has gained significant attention due to recent unexpected natural and man-made disasters. Resilience is the process of maintaining livable conditions in the event of interruptions in normally available services. Estimating the resilience of systems, ranging from individuals to communities, is a formidable task due to the complexity involved in the process. The most challenging parameter in resilience assessment is 'downtime': the time needed for a system to recover its services following a disaster event. Estimating the exact downtime of a system requires many inputs and resources that are not always obtainable. The uncertainties in downtime estimation are usually handled using probabilistic methods, which require acquiring large amounts of historical data; the estimation process also involves ignorance, imprecision, vagueness, and subjective judgment. In this paper, a fuzzy-based approach to estimate the downtime of building structures following earthquake events is proposed. Fuzzy logic can integrate descriptive (linguistic) knowledge and numerical data into the fuzzy system. This ability allows the use of walk-down surveys, which collect data in linguistic or numerical form, and permits a fast and economical estimation of parameters that involve uncertainties. The first step of the method is to determine the building's vulnerability. A rapid visual screening is designed to acquire information about the analyzed building (e.g., year of construction, structural system, site seismicity). Then, fuzzy logic is implemented using a hierarchical scheme to determine the building's damageability, which is the main ingredient in estimating the downtime. Generally, the downtime can be divided into three main components: downtime due to the actual damage (DT1); downtime caused by rational and irrational delays (DT2); and downtime due to utilities disruption (DT3).
In this work, DT1 is computed by relating the building damageability results obtained from the visual screening to component repair times already defined in the literature. DT2 and DT3 are estimated using the REDi™ guidelines. The downtime of the building is finally obtained by combining the three components. The proposed method also identifies the downtime corresponding to each of the three recovery states: re-occupancy, functional recovery, and full recovery. Future work is aimed at extending the current methodology to move from downtime estimation to resilience estimation for buildings. This will provide a simple tool that can be used by the authorities for decision making.
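As a rough illustration of the fuzzy step and the final combination described above, the sketch below maps a damageability score to DT1 and then adds the delay and utility components. The membership functions, linguistic terms, and repair times here are assumptions of this sketch, not values from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def dt1_from_damageability(d):
    """Defuzzify DT1 (days) from a damageability score in [0, 1]
    via a weighted centroid over assumed damage states."""
    rules = [
        (tri(d, -0.5, 0.0, 0.4), 10),    # slight damage  -> ~10 days repair
        (tri(d, 0.1, 0.5, 0.9), 90),     # moderate damage -> ~90 days repair
        (tri(d, 0.6, 1.0, 1.5), 360),    # severe damage  -> ~360 days repair
    ]
    num = sum(w * t for w, t in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

def total_downtime(damageability, dt2_delays, dt3_utilities):
    """DT = DT1 (repair) + DT2 (delays) + DT3 (utility disruption), in days."""
    return dt1_from_damageability(damageability) + dt2_delays + dt3_utilities
```

For example, a severely damaged building (damageability 1.0) with 30 days of mobilization delays and 15 days of utility disruption would yield a downtime of 405 days under these assumed rules.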

Keywords: resilience, restoration, downtime, community resilience, fuzzy logic, recovery, damage, built environment

Procedia PDF Downloads 153
657 Fracture Behaviour of Functionally Graded Materials Using Graded Finite Elements

Authors: Mohamad Molavi Nojumi, Xiaodong Wang

Abstract:

In this research, the fracture behaviour of linear elastic, isotropic functionally graded materials (FGMs) is investigated using a modified finite element method (FEM). FGMs are advantageous because they enhance the bonding strength of two incompatible materials and reduce residual and thermal stresses. Ceramic/metal composites are a main type of FGM. Ceramic materials are brittle, so cracks are likely to arise during fabrication or in-service loading; in addition, damage analysis is necessary for a safe and efficient design. FEM is a powerful numerical tool for analyzing complicated problems and is therefore used to investigate the fracture behaviour of FGMs. Here, an accurate 9-node biquadratic quadrilateral graded element is proposed in which the variation of material properties is considered at the element level. The stiffness matrix of the graded elements is obtained using the principle of minimum potential energy. The use of graded elements avoids the artificial sudden jumps in material properties that arise when traditional finite elements are used to model FGMs. Numerical results are verified against existing solutions. Different numerical simulations are carried out to model stationary crack problems in nonhomogeneous plates. In these simulations, the material variation is assumed to occur in directions perpendicular and parallel to the crack line. Linear and exponential functions, the two forms most discussed in the literature, are used to model the material gradient, and various crack lengths are considered. A major difference between the fracture behaviour of FGMs and that of homogeneous materials relates to the loss of material symmetry. For example, when the material gradation direction is normal to the crack line, even mode I loading produces coupled mode I and mode II fracture, which originates from the shear induced in the model.
Therefore, proper modelling of the material variation must be considered in capturing the fracture behaviour of FGMs, especially when the material gradient index is high. Fracture properties such as mode I and mode II stress intensity factors (SIFs), energy release rates, and field variables near the crack tip are investigated and compared with results obtained using conventional homogeneous elements. It is revealed that graded elements provide higher accuracy with less effort than conventional homogeneous elements.
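A one-dimensional analogue conveys the core idea of a graded element: instead of assigning one modulus per element, E is evaluated at each Gauss point of the stiffness integral, so the material gradient enters at the element level. This is a simplified two-node bar sketch of the principle, not the authors' 9-node biquadratic formulation:

```python
import math

def graded_bar_stiffness(x1, x2, E_of_x, area=1.0):
    """2x2 stiffness matrix of a 2-node bar element, with the modulus
    E(x) sampled at the Gauss points so the gradient is captured
    inside the element (a graded element in 1D)."""
    L = x2 - x1
    # Two-point Gauss rule on [-1, 1] (exact for linear E(x)).
    points = (-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0))
    k = 0.0
    for xi in points:
        x = 0.5 * (1.0 - xi) * x1 + 0.5 * (1.0 + xi) * x2  # map to physical x
        # Integrand: E(x) * A * B^T B with B^T B = 1/L^2, Jacobian = L/2,
        # Gauss weight = 1 for each point.
        k += E_of_x(x) * area / (L * L) * (L / 2.0)
    return [[k, -k], [-k, k]]
```

For a constant modulus this reduces to the familiar EA/L stiffness, while a linear or exponential E(x), as in the paper's gradation models, is integrated without forcing a jump between neighbouring elements.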

Keywords: finite element, fracture mechanics, functionally graded materials, graded element

Procedia PDF Downloads 162
656 A Quantitative Study on the “Unbalanced Phenomenon” of Mixed-Use Development in the Central Area of Nanjing Inner City Based on the Meta-Dimensional Model

Authors: Yang Chen, Lili Fu

Abstract:

Promoting urban regeneration in existing areas has been elevated to a national strategy in China. In this context, because of the multidimensional sustainability benefits of intensive land use, mixed-use development has become an important objective for high-quality urban regeneration in the inner city. However, over the long period since China's reform and opening up, the "unbalanced phenomenon" of mixed-use development in China's inner cities has been very serious. On the one hand, excessive focus on certain individual spaces has pushed the level of mixed-use development in some areas substantially ahead of others, resulting in a growing gap between different parts of the inner city. On the other hand, excessive focus on a single dimension of the spatial organization of mixed-use development, such as enhancing the functional mix or spatial capacity, has left the construction of other dimensions, such as pedestrian permeability, green environmental quality, and social inclusion, lagging or neglected. This phenomenon is particularly evident in the central area of the inner city, and it clearly runs counter to the needs of sustainable development in China's new era. A rational qualitative and quantitative analysis of the "unbalanced phenomenon" will therefore help to identify the problem and provide a basis for formulating optimization plans in the future. This paper builds a dynamic evaluation method for mixed-use development based on a meta-dimensional model and then uses spatial evolution analysis and spatial consistency analysis in ArcGIS to reveal the "unbalanced phenomenon" over the past 40 years in the central area of Nanjing, a typical Chinese city facing regeneration.
The study finds that, compared with the increases in functional mix and capacity, the dimensions of residential space mix, public service facility mix, pedestrian permeability, and greenness in Nanjing's central area showed lagging improvement to differing degrees, and the unbalanced development problems differ across parts of the city center, so future governance and planning for mixed-use development need to address these problems fully. The research methodology of this paper provides a tool for the comprehensive, dynamic identification of changes in mixed-use development levels, and the results deepen knowledge of the evolution of mixed-use development patterns in China's inner cities and provide a reference for future regeneration practices.

Keywords: mixed-use development, unbalanced phenomenon, the meta-dimensional model, over the past 40 years of Nanjing, China

Procedia PDF Downloads 91
655 Low Cost Webcam Camera and GNSS Integration for Updating Home Data Using AI Principles

Authors: Mohkammad Nur Cahyadi, Hepi Hapsari Handayani, Agus Budi Raharjo, Ronny Mardianto, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan

Abstract:

PDAM (the local water company) determines customer charges by considering the customer's building or house. Charge determination significantly affects PDAM income and customer costs because PDAM applies a subsidy policy for customers classified as small households. Periodic updates are needed so that pricing stays in line with the target. A thorough customer survey in Surabaya is needed to update customer building data. However, the surveys carried out so far have deployed officers to visit each PDAM customer one by one, which requires a great deal of effort and cost. For this reason, this research offers a technology called mobile mapping, a mapping method that is more efficient in terms of time and cost. The device is also quite simple to use: it is installed on a car so that it can record the surrounding buildings while the car is moving. Mobile mapping technology generally uses lidar sensors equipped with GNSS, but this technology is expensive. To overcome this problem, this research develops a low-cost mobile mapping technology using webcam camera sensors combined with GNSS and IMU sensors. The cameras used have 3 MP sensors with 720p resolution and a 78° diagonal field of view. The principle of this invention is to integrate four webcam camera sensors with GNSS and an IMU to acquire photo data tagged with location (latitude, longitude) and orientation (roll, pitch, yaw). The device is also equipped with a tripod and a vacuum suction mount to secure it to the car's roof so that it does not fall off while the car is moving. The output data from this technology are analyzed with artificial intelligence to remove near-duplicate images (cosine similarity) and then classify building types. Data reduction eliminates similar images while retaining the images that show each house completely, so that they can be processed for subsequent building classification.
The AI method used is transfer learning, utilizing the pre-trained VGG-16 model. The similarity analysis found that data reduction reached 50%. Georeferencing is then performed using the Google Maps API to obtain address information for the coordinates in the data. Finally, a spatial join links the survey data with the customer data already held by PDAM Surya Sembada Surabaya.
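The near-duplicate reduction step can be sketched as a greedy cosine-similarity filter over image feature vectors (in the study these would come from a network such as VGG-16); the threshold below is an assumed value for illustration, not one reported by the authors:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def reduce_similar(features, threshold=0.9):
    """Greedy pass: keep a frame only if it is not too similar to
    any frame already kept; similar frames are discarded."""
    kept = []
    for f in features:
        if all(cosine_similarity(f, k) < threshold for k in kept):
            kept.append(f)
    return kept
```

Run over consecutive frames from the four cameras, a pass like this discards overlapping views of the same building while keeping one representative image per house for classification.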

Keywords: mobile mapping, GNSS, IMU, similarity, classification

Procedia PDF Downloads 70
654 Implementation of Nutritional Awareness Programme on Eating Habits of Primary School Children

Authors: Gulcin Satir, Ahmet Yildirim

Abstract:

Globally, including in Turkey, health problems associated with malnutrition and nutrient deficiencies in childhood will remain major public health problems in the future. Nutrition is a major environmental influence on physical and mental growth and development in early life, and many studies support the fact that nutritional knowledge contributes to children's wellbeing and school performance. The purpose of this study was to examine the nutritional knowledge and eating habits of primary school children and to investigate differences in these variables by socioeconomic status. A quasi-experimental one-group pretest/posttest study was conducted in five primary schools with a total of 200 grade-4 children aged 9-10 years to determine the effect of a nutritional awareness programme on the eating habits of primary school children. The schools were chosen according to the parents' social and demographic characteristics. The nutritional awareness education programme focused on a healthy lifestyle, covering beneficial foods, eating habits, personal hygiene, and physical activity, and consisted of eight lessons. The teaching approaches included interactive teaching, role-playing, demonstration, small-group discussions, questioning, and feedback. The lessons were given twice a week for four weeks, for a total of eight lessons. Each lesson lasted 45-60 minutes, of which the first 5 minutes were a pre-assessment and the last 5 minutes a post-assessment. The obtained data were analyzed for normality, and the distribution of the variables was tested with the Kolmogorov-Smirnov test. A paired t-test was used to evaluate the effectiveness of the education programme and to compare the above-mentioned variables in each school separately before and after the lessons.
The paired t-tests conducted separately for each school showed that, on average, after eight lessons there was a 25-32% increase in students' nutritional knowledge regardless of the school they attended, and this increase was significant (P < 0.01). This shows that the increases in nutritional awareness in these five schools of different socio-economic status were similar to each other. The study suggests that involving children directly in lessons helps to achieve the nutritional awareness that leads to healthy eating habits. It is concluded that nutritional awareness is a valuable tool for changing eating habits. The findings will inform the development of nutrition education programmes for healthy living and obesity prevention in children.
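The pre/post comparison above rests on the paired t statistic, t = d̄ / (s_d / √n), computed on the per-pupil score differences. A minimal sketch, using hypothetical scores rather than the study's data:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic for pre/post scores of the same pupils.
    Returns (t, degrees of freedom)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
    return mean(diffs) / se, n - 1

# Hypothetical knowledge scores for four pupils before and after the lessons.
t, df = paired_t([10, 12, 11, 13], [14, 15, 13, 16])
```

The resulting t value would then be compared against the t distribution with n - 1 degrees of freedom to judge significance, as in the study's per-school tests.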

Keywords: children, nutritional awareness, obesity, socioeconomic status

Procedia PDF Downloads 127
653 Clinicians' and Nurses' Documentation Practices in Palliative and Hospice Care: A Mixed Methods Study Providing Evidence for Quality Improvement at Mobile Hospice Mbarara, Uganda

Authors: G. Natuhwera, M. Rabwoni, P. Ellis, A. Merriman

Abstract:

Aims: Health workers are likely to document patients’ care inaccurately, especially when using new and revised case tools, and this could negatively impact patient care. This study set out to (1) assess nurses’ and clinicians’ documentation practices when using a new patients’ continuation case sheet (PCCS) and (2) explore nurses’ and clinicians’ experiences of documenting patients’ information in the new PCCS. The purpose of introducing the PCCS was to improve continuity of care for patients attending clinics at which they were unlikely to see the same clinician or nurse consistently. Methods: This was a mixed methods study. The cross-sectional inquiry retrospectively reviewed 100 case notes of active patients on the hospice and palliative care programme. Data were collected using a structured questionnaire with constructs formulated from the new PCCS under study. The qualitative element consisted of face-to-face, audio-recorded, open-ended interviews with a purposive sample of one palliative care clinician and four palliative care nurse specialists; thematic analysis was used. Results: Missing patient biographic information was prevalent at 5-10%. Spiritual and psychosocial issues were not documented in 42.6% of cases, and vital signs in 49.2%. The poorest documentation practices were observed in the past-medical-history part of the PCCS, at 40-63%. Four themes emerged from the interviews with clinicians and nurses: (1) what remains unclear and challenges, (2) comparing the past with the present, (3) experiential thoughts, and (4) transition and adapting to change. Conclusions: The PCCS appears to be a comprehensive and simple tool for documenting patients’ information at subsequent visits. Its comprehensiveness and utility, however, appear to be limited by the failure to train staff in its use prior to its introduction.
The authors find the PCCS comprehensive and suitable for capturing patients’ information and recommend that it be adopted in other palliative and hospice care settings, provided suitable introductory training accompanies its introduction. Otherwise, the reliability and validity of the patient information collected with the PCCS can be significantly reduced if some of its sections are unclear to clinicians and nurses. The study identified clinician- and nurse-related pitfalls in the documentation of patients’ care; clinicians and nurses need to prioritize accurate and complete documentation of patient care in the PCCS for quality care provision. The study should be extended to other sites using similar tools to ensure representative and generalizable findings.

Keywords: documentation, information case sheet, palliative care, quality improvement

Procedia PDF Downloads 132
652 Chongqing, a Megalopolis Disconnected with Its Rivers: An Assessment of Urban-Waterside Disconnect in a Chinese Megacity and Proposed Improvement Strategies, Chongqing City as a Case Study

Authors: Jaime E. Salazar Lagos

Abstract:

Chongqing is located in southwest China and is becoming one of the most significant cities in the world. Its urban territories and metropolitan areas contain one of the largest urban populations in China and are partitioned and shaped by two of the biggest and longest rivers on Earth, the Yangtze and the Jialing, making Chongqing a megalopolis intersected by rivers. Historically, Chongqing enjoyed fundamental connections with its rivers; however, the city's current urban development has lost effective integration of the riverbanks within the urban space and structural dynamics of the city. There is therefore a critical lack of physical and urban space conjoined with the rivers, which diminishes the economic, tourist, and environmental development of Chongqing. Using multi-scale satellite-map site verification, the study confirmed this hypothesized urban-waterside disconnect. The collected data demonstrate that the Chongqing urban zone, an area of 5,292 square kilometers with a waterfront of 203.4 kilometers, has only 23.49 kilometers (just 11.5%) of high-quality physical and spatial urban-waterside connection. Compared with other metropolises around the world, this figure represents a significant lack of spatial development along the rivers, an issue that has not been successfully addressed in the last 10 years of urban development. On the macro scale, the study categorized the different kinds of relationships between the city and its riverbanks; these data were then used to create an urban-waterfront relationship map that can serve as a tool for future city planning decisions and real estate development.
On the micro scale, the study found three primary elements causing the urban-waterside disconnect: extensive highways along the densest areas and the city center, large private real estate developments that do not provide adequate riverside access, and large industrial complexes that almost completely lack riverside utilization. Finally, among the suggested strategies, the study concludes that the most efficient and practical way to improve the situation is to follow the historic master planning of Chongqing and create connective nodes at critical urban locations along the rivers, a strategy that has been used for centuries to handle this same urban-waterside relationship. Reviewing and implementing this strategy will allow the city to better connect with its rivers, reducing the various impacts of the disconnect and of urban transformation.

Keywords: Chongqing City, megalopolis, nodes, riverbanks disconnection, urban

Procedia PDF Downloads 213
651 Exploring Teachers’ Beliefs about Diagnostic Language Assessment Practices in a Large-Scale Assessment Program

Authors: Oluwaseun Ijiwade, Chris Davison, Kelvin Gregory

Abstract:

In Australia, as in other parts of the world, the debate on how to enhance teachers’ use of assessment data to inform the teaching and learning of English as an Additional Language (EAL, Australia) or English as a Foreign Language (EFL, United States) has occupied the centre of academic scholarship. Traditionally, this approach was conceptualised as ‘formative assessment’ and, in recent times, as ‘Assessment for Learning (AfL)’. The central problem is that teacher-made tests are limited in providing data that can inform teaching and learning, owing to the variability of classroom assessments, which is compounded by teachers’ characteristics and assessment literacy. To address this concern, scholars in language education and testing have proposed uniform large-scale computer-based assessment programs to meet teachers’ needs and promote AfL in language education. In Australia, for instance, the Victorian state government commissioned a large-scale project called ‘Tools to Enhance Assessment Literacy (TEAL) for Teachers of English as an Additional Language’. As part of the TEAL project, a tool called ‘Reading and Vocabulary assessment for English as an Additional Language (RVEAL)’, a diagnostic language assessment (DLA), was developed by language experts at the University of New South Wales for teachers in Victorian schools to guide EAL pedagogy in the classroom. This study therefore aims to provide qualitative evidence for understanding beliefs about DLA among EAL teachers in primary and secondary schools in Victoria, Australia. To realize this goal, the study raises the following questions: (a) How do teachers use large-scale assessment data for diagnostic purposes? (b) What skills do language teachers think are necessary for using assessment data for instruction in the classroom? and (c) What factors, if any, contribute to teachers’ beliefs about diagnostic assessment in a large-scale assessment?
Semi-structured interviews were used to collect data from at least 15 professional teachers selected through purposeful sampling. The findings from the thematic analysis of the resulting data provide an understanding of teachers’ beliefs about DLA in a classroom context and identify how these beliefs are crystallised in language teachers. The discussion shows how the findings can inform professional development processes for language teachers, as well as the important factor of teacher cognition in the pedagogic processes of language assessment. This, hopefully, will help test developers and testing organisations align the outcomes of this study with their test development processes so as to design assessments that can enhance AfL in language education.

Keywords: beliefs, diagnostic language assessment, English as an additional language, teacher cognition

Procedia PDF Downloads 190