Search results for: numerical tools
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7184

134 Dynamic Simulation of IC Engine Bearings for Fault Detection and Wear Prediction

Authors: M. D. Haneef, R. B. Randall, Z. Peng

Abstract:

Journal bearings used in IC engines are prone to premature failure and are likely to fail earlier than their rated life due to highly impulsive and unstable operating conditions and frequent starts/stops. Vibration signature extraction and wear debris analysis techniques are prevalent in the industry for condition monitoring of rotary machinery. However, both techniques involve a great deal of technical expertise, time, and cost. Limited literature is available on the application of these techniques for fault detection in reciprocating machinery, due to the complex nature of the impact forces that confounds the extraction of fault signals for vibration-based analysis and wear prediction. This work is an extension of a previous study, in which an engine simulation model was developed using a MATLAB/SIMULINK program, whereby the engine parameters used in the simulation were obtained experimentally from a Toyota 3S-FE 2.0 litre petrol engine. Simulated hydrodynamic bearing forces were used to estimate vibration signals, and envelope analysis was carried out to analyze the effect of speed, load, and clearance on the vibration response. Three different loads (50/80/110 N·m), three different speeds (1500/2000/3000 rpm), and three different clearances, i.e., normal, 2 times, and 4 times the normal clearance, were simulated to examine the effect of wear on bearing forces. The magnitude of the squared envelope of the generated vibration signals, though not affected by load, was observed to rise significantly with increasing speed and clearance, indicating the likelihood of augmented wear. In the present study, the simulation model was extended further to investigate the bearing wear behavior resulting from different operating conditions, to complement the vibration analysis. In the current simulation, the dynamics of the engine was established first, based on which the hydrodynamic journal bearing forces were evaluated by numerical solution of the Reynolds equation. The essential outputs of interest in this study, critical to determining wear rates, are the tangential velocity and the oil film thickness between the journal and the bearing sleeve, which, if not maintained appropriately, have a detrimental effect on bearing performance. Archard's wear prediction model was used in the simulation to calculate the wear rate of the bearings with specific location information, as all determinative parameters were obtained with reference to crank rotation. The oil film thickness obtained from the model was used as a criterion to determine whether the lubrication is sufficient to prevent contact between the journal and the bearing, which would cause accelerated wear. A limiting value of 1 µm was used as the minimum oil film thickness needed to prevent contact. The increased wear rate with growing severity of operating conditions is analogous and comparable to the rise in amplitude of the squared envelope of the referenced vibration signals. Thus, on the one hand, the developed model demonstrated its capability to explain wear behavior, and on the other hand, it helps to establish a correlation between wear-based and vibration-based analysis. The model therefore provides a cost-effective and quick approach to predicting impending wear in IC engine bearings under various operating conditions.
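
As a rough illustration of the wear criterion described above, the sketch below applies Archard's law (dV = k·W·ds/H) only at crank angles where the simulated oil film drops below the 1 µm limit. All numerical values, array names, and the load/film profiles are hypothetical placeholders, not outputs of the authors' MATLAB/SIMULINK model.

```python
import numpy as np

def archard_wear_depth(load_n, sliding_velocity, dt, hardness_pa,
                       wear_coeff, contact_area_m2):
    """Incremental wear depth from Archard's law: dV = k * W * ds / H."""
    sliding_dist = sliding_velocity * dt            # ds over one crank step
    wear_volume = wear_coeff * load_n * sliding_dist / hardness_pa
    return wear_volume / contact_area_m2            # depth = volume / area

# Hypothetical per-crank-angle outputs of a bearing simulation (720 deg cycle)
crank_deg = np.arange(0, 720, 1.0)
film_um = 1.5 + 0.8 * np.sin(np.radians(crank_deg))          # oil film, µm
load_n = 5e3 + 4e3 * np.abs(np.cos(np.radians(crank_deg)))   # bearing force, N
vel_ms = 3.0 * np.ones_like(crank_deg)                       # tangential velocity

dt = (1.0 / 360.0) / (2000 / 60)   # seconds per crank degree at 2000 rpm
contact = film_um < 1.0            # contact assumed when film < 1 µm
depth = np.where(contact,
                 archard_wear_depth(load_n, vel_ms, dt,
                                    hardness_pa=3e8, wear_coeff=1e-7,
                                    contact_area_m2=1e-4),
                 0.0)
print(f"crank degrees in contact: {contact.sum()}, "
      f"cumulative wear depth: {depth.sum() * 1e9:.2f} nm per cycle")
```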

Keywords: condition monitoring, IC engine, journal bearings, vibration analysis, wear prediction

Procedia PDF Downloads 292
133 Smart Mobility Planning Applications in Meeting the Needs of the Urbanization Growth

Authors: Caroline Atef Shoukry Tadros

Abstract:

Massive urbanization growth threatens the sustainability of cities and the quality of city life. This has raised the need for an alternate model of sustainability, so we need to plan future cities in a smarter way, with smarter mobility. Smart Mobility planning applications are solutions that use digital technologies and infrastructure advances to improve the efficiency, sustainability, and inclusiveness of urban transportation systems. They can contribute to meeting the needs of urbanization growth by addressing the challenges of traffic congestion, pollution, accessibility, and safety in cities. Some examples of Smart Mobility planning applications are the following. Mobility-as-a-Service (MaaS): this is a service that integrates different transport modes, such as public transport, shared mobility, and active mobility, into a single platform that allows users to plan, book, and pay for their trips. This can reduce reliance on private cars, optimize the use of existing infrastructure, and provide more choices and convenience for travelers. MaaS Global is a company that offers mobility-as-a-service solutions in several cities around the world. Traffic flow optimization: this is a solution that uses data analytics, artificial intelligence, and sensors to monitor and manage traffic conditions in real time. This can reduce congestion, emissions, and travel time, as well as improve road safety and user satisfaction. Waycare is a platform that leverages data from various sources, such as connected vehicles, mobile applications, and road cameras, to provide traffic management agencies with insights and recommendations to optimize traffic flow. Logistics optimization: this is a solution that uses smart algorithms, blockchain, and IoT to improve the efficiency and transparency of the delivery of goods and services in urban areas. This can reduce the costs, emissions, and delays associated with logistics, as well as enhance customer experience and trust. ShipChain is a blockchain-based platform that connects shippers, carriers, and customers and provides end-to-end visibility and traceability of shipments. Autonomous vehicles: this is a solution that uses advanced sensors, software, and communication systems to enable vehicles to operate without human intervention. This can improve the safety, accessibility, and productivity of transportation, as well as reduce the need for parking space and infrastructure maintenance. Waymo is a company that develops and operates autonomous vehicles for various purposes, such as ride-hailing, delivery, and trucking. These are some of the ways that Smart Mobility planning applications can contribute to meeting the needs of urbanization growth. However, there are also various opportunities and challenges related to the implementation and adoption of these solutions, such as regulatory, ethical, social, and technical aspects. Therefore, it is important to consider the specific context and needs of each city and its stakeholders when designing and deploying Smart Mobility planning applications.

Keywords: smart mobility planning, smart mobility applications, smart mobility techniques, smart mobility tools, smart transportation, smart cities, urbanization growth, future smart cities, intelligent cities, ICT information and communications technologies, IoT internet of things, sensors, lidar, digital twin, ai artificial intelligence, AR augmented reality, VR virtual reality, robotics, cps cyber physical systems, citizens design science

Procedia PDF Downloads 55
132 The Development of Congeneric Elicited Writing Tasks to Capture Language Decline in Alzheimer Patients

Authors: Lise Paesen, Marielle Leijten

Abstract:

People diagnosed with probable Alzheimer's disease suffer from an impairment of their language capacities; a gradual impairment which affects both their spoken and written communication. Our study aims at characterising the language decline in patients with dementia of the Alzheimer type (DAT) with the use of congeneric elicited writing tasks. Within these tasks, a descriptive text has to be written based upon images with which the participants are confronted. A randomised set of images allows us to present the participants with a different task on every encounter, thus allowing us to avoid a recognition effect in this iterative study. This method is a revision of previous studies, in which participants were presented with one larger picture depicting an entire scene. In order to create the randomised set of images, existing pictures were adapted following strict criteria (e.g. frequency, AoA, colour, ...). The resulting data set contained 50 images, belonging to several categories (vehicles, animals, humans, and objects). A pre-test was constructed to validate the created picture set; most images had been used before in spoken picture naming tasks, hence the same reaction times ought to be triggered in the typed picture naming task. Once validated, the effectiveness of the descriptive tasks was assessed. First, the participants (n=60 students, n=40 healthy elderly) performed a typing task, which provided information about the typing speed of each individual. Secondly, two descriptive writing tasks were carried out, one simple and one complex. The simple task contains 4 images (1 animal, 2 objects, 1 vehicle) and only contains elements with high frequency, a young AoA (<6 years), and fast reaction times. Slow reaction times, a later AoA (≥6 years), and low frequency were criteria for the complex task, which uses 6 images (2 animals, 1 human, 2 objects, and 1 vehicle). The data were collected with the keystroke logging programme Inputlog. Keystroke logging tools log and time-stamp keystroke activity to reconstruct and describe text production processes. The data were analysed using a selection of writing process and product variables, such as general writing process measures, detailed pause analysis, linguistic analysis, and text length. As a covariate, the intrapersonal interkey transition times from the typing task were taken into account. The pre-test indicated that the new images lead to similar or even faster reaction times compared to the original images; all the images were therefore used in the main study. The produced texts of the description tasks were significantly longer compared to previous studies, providing sufficient text and process data for analyses. Preliminary analysis shows that the number of words produced differed significantly between the healthy elderly and the students, as did the mean length of production bursts, even though both groups needed the same time to produce their texts. However, the elderly took significantly more time to produce the complex task than the simple task. Nevertheless, the number of words per minute remained comparable between simple and complex. The pauses within and before words varied, even when taking personal typing abilities (obtained by the typing task) into account.
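
The pause analysis described above can be illustrated with a small sketch of interkey transition computation. The keystroke record format and the 2000 ms pause threshold are assumptions chosen for illustration; Inputlog's actual log format and analysis pipeline differ.

```python
from dataclasses import dataclass

@dataclass
class Keystroke:
    char: str
    t_ms: int  # timestamp in milliseconds (hypothetical log format)

def pause_analysis(log, threshold_ms=2000):
    """Interkey transition times, pauses above a threshold, and words per minute."""
    intervals = [b.t_ms - a.t_ms for a, b in zip(log, log[1:])]
    pauses = [iv for iv in intervals if iv >= threshold_ms]
    minutes = (log[-1].t_ms - log[0].t_ms) / 60000
    words = "".join(k.char for k in log).split()
    return {
        "mean_ikt_ms": sum(intervals) / len(intervals),
        "n_pauses": len(pauses),
        "words_per_minute": len(words) / minutes if minutes else float("nan"),
    }

log = [Keystroke("t", 0), Keystroke("h", 180), Keystroke("e", 320),
       Keystroke(" ", 2900), Keystroke("c", 3100), Keystroke("a", 3260),
       Keystroke("t", 3400)]
print(pause_analysis(log))  # one pause >= 2 s before the second word
```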

Keywords: Alzheimer's disease, experimental design, language decline, writing process

Procedia PDF Downloads 252
131 Prompt Photons Production in Compton Scattering of Quark-Gluon and Annihilation of Quark-Antiquark Pair Processes

Authors: Mohsun Rasim Alizada, Azar Inshalla Ahmdov

Abstract:

Prompt photons are perhaps the most versatile tools for studying the dynamics of relativistic collisions of heavy ions. The study of photon radiation is of interest because, in most hadron interactions, photons fly out as a background to other studied signals. The production of prompt photons in nucleon-nucleon collisions was previously studied in experiments at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). Due to the large energy of the colliding nucleons, many different elementary particles are produced in addition to prompt photons. However, the production of these additional elementary particles makes it difficult to determine the production cross-section of prompt photons accurately. From this point of view, the experiments planned at the Nuclotron-based Ion Collider Facility (NICA) complex will have a great advantage, since the energy of the colliding heavy ions will reduce the number of additionally produced elementary particles. Of particular importance is the study of prompt photon production processes for determining the gluon distribution within hadrons, since the photon carries information about the hard subprocess. In the present paper, the production of prompt photons in Compton scattering of quark-gluon and annihilation of quark-antiquark pair processes is investigated. The matrix elements of the Compton scattering of quark-gluon and annihilation of quark-antiquark pair processes have been written. The squares of the matrix elements of the processes have been calculated in FeynCalc. The phase volume of the subprocesses has been determined, and an expression to calculate the differential cross-section of the subprocesses has been obtained. Given the resulting expressions for the square of the matrix element in the differential cross-section expression, we see that the differential cross-section depends not only on the energy of the colliding protons but also on the mass of the quarks, etc. The differential cross-section of the subprocesses is estimated, and it is shown that it decreases with increasing energy of the colliding protons. The asymmetry coefficient with polarization of the colliding protons is determined. The calculation showed that the squares of the matrix element of the Compton scattering process with and without the polarization of the colliding protons taken into account are identical. The asymmetry coefficient of this subprocess is zero, which is consistent with the literature data. It is known that in any single-polarization process with a photon, the squares of the matrix elements with and without the polarization of the initial particle taken into account must coincide; that is, the terms in the square of the matrix element proportional to the degree of polarization are equal to zero. The coincidence of the squares of the matrix elements indicates that the parity of the system is preserved. The asymmetry coefficient of the annihilation of quark-antiquark pair process decreases linearly from positive unity to negative unity with an increasing product of the polarization degrees of the colliding protons. Thus, it was obtained that the differential cross-section of the subprocesses decreases with increasing energy of the colliding protons. The value of the asymmetry coefficient is maximal when the polarizations of the colliding protons are opposite and minimal when they are directed equally. Taking into account the polarization of only the initial quarks and gluons in Compton scattering does not contribute to the differential cross-section of the subprocess.
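
For reference, the asymmetry behavior described above is consistent with the standard double-spin asymmetry definition sketched below (our notation, not taken from the paper), where λ₁ and λ₂ are the polarization degrees of the colliding protons:

```latex
A_{LL} = \frac{d\sigma(+,+) - d\sigma(+,-)}{d\sigma(+,+) + d\sigma(+,-)},
\qquad
d\sigma(\lambda_1,\lambda_2) \propto d\sigma_0 \left(1 + \lambda_1 \lambda_2 \hat{A}\right)
```

Under this form, a vanishing partonic asymmetry (as found for the Compton subprocess) gives zero net asymmetry, while a nonzero one makes the asymmetry vary linearly with the product λ₁λ₂ (as found for quark-antiquark annihilation).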

Keywords: annihilation of a quark-antiquark pair, coefficient of asymmetry, Compton scattering, effective cross-section

Procedia PDF Downloads 128
130 Complex Decision Rules in Quality Assurance Processes for Quick Service Restaurant Industry: Human Factors Determining Acceptability

Authors: Brandon Takahashi, Marielle Hanley, Gerry Hanley

Abstract:

The large-scale quick-service restaurant industry is a complex business to manage optimally. With over 40 suppliers providing different ingredients for food preparation and thousands of restaurants serving over 50 unique food offerings across a wide range of regions, the company must implement a quality assurance process. Businesses want to deliver, efficiently and reliably, quality food that the public wants to buy at a low cost. They also want to make sure that their food offerings are never unsafe to eat or of poor quality. A good reputation (and a profitable business) developed over the years can be gone in an instant if customers fall ill after eating the company's food. Poor quality also results in food waste, and the cost of corrective actions is compounded by the reduction in revenue. Product compliance evaluation assesses whether the supplier's ingredients are within compliance with the specifications of several attributes (physical, chemical, organoleptic) that a company will test to ensure that quality, safe-to-eat food is given to the consumer and will deliver the same eating experience in all parts of the country. The technical component of the evaluation includes the chemical and physical tests that produce numerical results relating to shelf-life, food safety, and organoleptic qualities. The psychological component of the evaluation includes the organoleptic, i.e., acting on or involving the use of the sense organs. The rubric for product compliance evaluation has four levels: (1) Ideal: meeting or exceeding all technical (physical and chemical), organoleptic, and psychological specifications. (2) Deviation from ideal but no impact on quality: not meeting or exceeding some technical and organoleptic/psychological specifications without impact on consumer quality, and meeting all food safety requirements. (3) Acceptable: not meeting or exceeding some technical and organoleptic/psychological specifications, resulting in a reduction of consumer quality but not enough to lessen demand, and meeting all food safety requirements. (4) Unacceptable: not meeting food safety requirements, independent of meeting technical and organoleptic specifications, or meeting all food safety requirements but with product quality resulting in consumer rejection of the food offering. Sampling of products and consumer tastings within the distribution network are a second critical element of the quality assurance process and are the data sources for the statistical analyses. Each finding is not independently assessed with the rubric; for example, the chemical data will be used to back up or support any inferences on the sensory profiles of the ingredients. Certain flavor profiles may not be as apparent when mixed with other ingredients, which leads to weighing specifications differentially in the acceptability decision. Quality assurance processes are essential to achieving that balance of quality and profitability by making sure the food is safe and tastes good, while identifying and remediating product quality issues before they hit the stores. Comprehensive quality assurance procedures implement human factors methodologies, and this report provides recommendations for the systemic application of quality assurance processes for quick-service restaurant services. This case study will review the complex decision rubric and evaluate processes to ensure the right balance of cost, quality, and safety is achieved.
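
A minimal sketch of the four-level rubric as a decision rule follows; the boolean inputs are hypothetical simplifications of the evaluation criteria described above (in practice, the findings are weighed jointly rather than independently).

```python
def compliance_level(meets_food_safety: bool,
                     meets_all_specs: bool,
                     quality_reduced: bool,
                     consumer_rejects: bool) -> str:
    """Four-level product compliance rubric (simplified boolean sketch)."""
    if not meets_food_safety or consumer_rejects:
        return "4: Unacceptable"
    if meets_all_specs:
        return "1: Ideal"
    if not quality_reduced:
        return "2: Deviation from ideal, no impact on quality"
    return "3: Acceptable (reduced quality, demand unaffected)"

# Example: some specs missed, quality reduced, but safe and still in demand
print(compliance_level(meets_food_safety=True, meets_all_specs=False,
                       quality_reduced=True, consumer_rejects=False))
```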

Keywords: decision making, food safety, organoleptics, product compliance, quality assurance

Procedia PDF Downloads 167
129 Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion

Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro

Abstract:

In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to increased overall maintenance costs, prolonged release delivery times, heightened manual effort, and difficulties in meeting tight deadlines. Implementing CI/CD with standard serverless technologies using cloud services overcomes all the above-mentioned issues and helps organizations improve efficiency and deliver faster without the need to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while reducing the need for repetitive work and manual effort. Implementing scalable CI/CD for development using cloud services like ECS (Elastic Container Service), AWS Fargate, ECR (Elastic Container Registry, to store Docker images with all dependencies), serverless computing (serverless virtual machines), CloudWatch Logs (for monitoring errors and logs), Security Groups (for inside/outside access to the application), Docker containerization (Docker-based images and container techniques), Jenkins (a CI/CD build management tool), and code management tools (GitHub, Bitbucket, AWS CodeCommit) can efficiently handle the demands of diverse development environments and accommodate dynamic workloads, increasing efficiency for faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application based on the branches, testing the application using a scalable automation testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure, as it scales based on need. Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application's scale according to usage, thereby alleviating concerns about scalability, maintenance costs, and resource needs. Creating scalable automation testing using cloud services (ECR, ECS Fargate, Docker, EFS, serverless computing) helps organizations run more than 500 test cases in parallel, aiding in the detection of race conditions and performance issues and reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands, allowing teams to scale resources up or down as needed. It optimizes costs by paying only for the resources that are used, and it increases reliability. Scalable CI/CD pipelines employ automated testing and validation processes to detect and prevent errors early in the development cycle.
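
A minimal sketch of the parallel test fan-out described above, using boto3 to launch one Fargate task per test shard on ECS. The cluster, task definition, container name, and subnet are hypothetical placeholders, not the authors' configuration; ECS's run_task accepts at most 10 tasks per call, hence the batching.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

def run_test_shards(n_shards: int, batch: int = 10):
    """Launch one Fargate task per test shard (run_task allows count <= 10)."""
    tasks = []
    for start in range(0, n_shards, batch):
        count = min(batch, n_shards - start)
        resp = ecs.run_task(
            cluster="ci-cluster",                 # assumed cluster name
            taskDefinition="automation-tests:1",  # assumed task definition
            launchType="FARGATE",
            count=count,
            networkConfiguration={"awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
                "assignPublicIp": "ENABLED",
            }},
            overrides={"containerOverrides": [{
                "name": "tests",  # assumed container name in the task definition
                "environment": [{"name": "SHARD_START", "value": str(start)}],
            }]},
        )
        tasks += [t["taskArn"] for t in resp["tasks"]]
    return tasks

if __name__ == "__main__":
    print(f"launched {len(run_test_shards(50))} test tasks")
```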

Keywords: achieve parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment

Procedia PDF Downloads 17
128 Educational Audit and Curricular Reforms in the Arabian Context

Authors: Irum Naz

Abstract:

In the Arabian higher education context, linguistic proficiency in the English language is considered crucial for developmental sustainability, economic growth, and the stability of communities and societies. Qatar's educational reforms package, through the 2030 vision, identifies the acquisition of English at K-12 as an essential survival communication tool for globalization, believing that Qatari students need better preparation to take on the responsibilities of leadership and to participate effectively in the country's surging economy. The idea of introducing Qatari students to modern curricula benchmarked to high-student-performance curricula in developed countries is one of the components of the reformatory design principles of the Education for a New Era reform project that is mutually consented to and supported by the Office of Shared Services, the Communications Office, and the Supreme Education Council. In appreciation of the government's vision, the English Language Centre (ELC) at the Community College of Qatar ran an internal educational audit and conducted evaluative research to understand and appraise the value, impact, and practicality of the existing ELC language development program. This study sought to identify the type of change that could improve the quality of Foundation Program courses and the manners in which second language learners could be assisted to transition smoothly between ELC levels. Following the interpretivist paradigm and a mixed research method, the data were gathered through a bicyclic research model and a triangular design. The analyses of the data suggested that there was a need for improvement in the ELC program as a whole, particularly in terms of the curriculum, student learning outcomes, and the general learning environment in the department. Key findings suggest that the target program would benefit from significant revisions, which would include narrowing the focus of the courses, providing sets of specific learning objectives, and preventing repetition between levels. Another promising finding was about the assessment tools and process. The data suggested that a set of standardized assessments that more closely suited the programs of study should be devised. It was also recommended that students undergo a more comprehensive placement process to ensure that they begin the program at an appropriate level and get the maximum benefit from their learning experience. Although this ties into the idea of a curriculum revamp, it was expected that students could leave the ELC having had exposure to courses in English for specific purposes. The idea of a more reliable exit assessment for students was raised frequently, so that the ELC could regulate itself and ensure optimum learning outcomes. Another important recommendation was the provision of a Student Learning Center that would help students to receive personalized tuition, differentiated instruction, and a self-driven and self-evaluated learning experience. In addition, it was recommended that an extra study level be added to the program to accommodate the different levels of English language proficiency represented among ELC students. The evidence collected in the course of conducting the study suggests that significant change is needed in the structure of the ELC program, specifically regarding the curriculum, the program learning outcomes, and the learning environment in general.

Keywords: educational audit, ESL, optimum learning outcomes, Qatar’s educational reforms, self-driven and self-evaluated learning experience, Student Learning Center

Procedia PDF Downloads 156
127 The Effects of Branding on Profitability of Banks in Ghana

Authors: Evans Oteng, Clement Yeboah, Alexander Otechere-Fianko

Abstract:

In today's economy, despite achievements and advances in the banking and financial institutions, there are challenges that will require intensive attempts on the part of the banks in Ghana. The perceived decline in profitability of banks seems to have emanated from ineffective branding. Hence, the purpose of this quantitative descriptive-correlational study was to examine the effects of branding on the profitability of banks in Ghana. The researchers purposively sampled some 116 banks in Ghana. Self-developed Likert scale questionnaires were administered to the finance officers of the financial institutions. The results were found to be statistically significant, F(1, 114) = 4.50, p = .036. This indicates that those banks in Ghana with good branding practices have strong marketing tools to identify and sell their products and services and, as such, have a big market share. The correlation coefficient indicates that branding has a positive, statistically significant correlation with profitability (r = .207, p < .05), which signifies that as branding increases, the return on equity profitability indicator improves, and vice versa. Future researchers can consider other factors beyond branding, such as online banking. The study has significant implications for the success and competitive advantage of banks, in that effective branding allows them to differentiate themselves from their competitors. A strong and unique brand identity can help a bank stand out in a crowded market, attract customers, and build customer loyalty. This can lead to increased market share and profitability. Branding influences customer perception and trust. A well-established and reputable brand can create a positive image in the minds of customers, enhancing their confidence in the bank's products and services. This can result in increased customer acquisition, customer retention, and a positive impact on profitability. Banks with strong brands can leverage their reputation and customer trust to cross-sell additional products and services. When customers have confidence in the brand, they are more likely to explore and purchase other offerings from the same institution. Cross-selling can boost revenue streams and profitability. Successful branding can open up opportunities for brand extensions and diversification into new products or markets. Banks can leverage their trusted brand to introduce new financial products or expand their presence into related areas, such as insurance or investment services. This can lead to additional revenue streams and improved profitability. This study can have implications for education: increased profitability of banks due to effective branding can result in higher financial resources available for corporate social responsibility (CSR) activities. Banks may invest in educational initiatives, such as scholarships, grants, research projects, and sponsorships, to support the education sector in Ghana. This study can also have implications for logistics and supply chain management: strong branding can create trust and credibility among customers, leading to increased customer loyalty. This loyalty can positively impact the bank's relationships with its suppliers and logistics partners. It can result in better negotiation power, improved supplier relationships, and enhanced supply chain coordination, ultimately leading to more efficient and cost-effective logistics operations.
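
A minimal sketch of the reported tests follows, using synthetic stand-in data (the survey data are not public) with n = 116 banks, so the simple-regression F-test has the paper's df = (1, 114). The variable names and generated values are illustrative only.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical stand-in data: branding-practice scores and return on equity
branding = rng.normal(3.5, 0.8, 116)
roe = 0.05 + 0.01 * branding + rng.normal(0, 0.04, 116)

r, p = stats.pearsonr(branding, roe)                 # correlation, as reported
ols = sm.OLS(roe, sm.add_constant(branding)).fit()   # simple linear regression

# With one predictor and n = 116, df_resid = 114, matching F(1, 114)
print(f"r = {r:.3f}, p = {p:.3f}")
print(f"F(1, {int(ols.df_resid)}) = {ols.fvalue:.2f}, p = {ols.f_pvalue:.3f}")
```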

Keywords: branding, profitability, competitors, customer loyalty, customer retention, corporate social responsibility, cost-effective, logistics operations

Procedia PDF Downloads 52
126 Thermodynamic Modeling of Cryogenic Fuel Tanks with a Model-Based Inverse Method

Authors: Pedro A. Marques, Francisco Monteiro, Alessandra Zumbo, Alessia Simonini, Miguel A. Mendez

Abstract:

Cryogenic fuels such as Liquid Hydrogen (LH₂) must be transported and stored at extremely low temperatures. Without expensive active cooling solutions, preventing fuel boil-off over time is impossible. Hence, one must resort to venting systems at the cost of significant energy and fuel mass loss. These losses increase significantly in propellant tanks installed on vehicles, as the presence of external accelerations induces sloshing. Sloshing increases heat and mass transfer rates and leads to significant pressure oscillations, which might further trigger propellant venting. To make LH₂ economically viable, it is essential to minimize these factors by using advanced control techniques. However, these require accurate modelling and a full understanding of the tank's thermodynamics. The present research aims to implement a simple thermodynamic model capable of predicting the state of a cryogenic fuel tank under different operating conditions (i.e., filling, pressurization, fuel extraction, long-term storage, and sloshing). Since this model relies on a set of closure parameters to drive the system's transient response, it must be calibrated using experimental or numerical data. This work focuses on the former approach, wherein the model is calibrated through an experimental campaign carried out on a reduced-scale model of a cryogenic tank. The thermodynamic model of the system is composed of three control volumes: the ullage, the liquid, and the insulating walls. Under this lumped formulation, the governing equations are derived from energy and mass balances in each region, with mass-averaged properties assigned to each of them. The gas-liquid interface is treated as an infinitesimally thin region across which both phases can exchange mass and heat. This results in a coupled system of ordinary differential equations, which must be closed with heat and mass transfer coefficients between each control volume. These parameters are linked to the system evolution via empirical relations derived from different operating regimes of the tank. The derivation of these relations is carried out using an inverse method to find the optimal relations that allow the model to reproduce the available data. This approach extends classic system identification methods beyond linear dynamical systems via a nonlinear optimization step. Thanks to the data-driven assimilation of the closure problem, the resulting model accurately predicts the evolution of the tank's thermodynamics at a negligible computational cost. The lumped model can thus be easily integrated with other submodels to perform complete system simulations in real time. Moreover, by setting the model in a dimensionless form, a scaling analysis allowed us to relate the tested configurations to a representative full-size tank for naval applications. It was thus possible to compare the relative importance of different transport phenomena between the laboratory model and the full-size prototype among the different operating regimes.
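
A minimal sketch of the lumped-model-plus-inverse-method idea follows, reduced to a single ullage control volume with one heat-transfer closure coefficient calibrated against (here, synthetic) pressure data. The equations, constants, and names are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

R, T_WALL = 4124.0, 25.0   # gas constant of H2, J/(kg K); wall temperature, K

def ullage_ode(t, y, h_wu, m_g=0.05, cv=10_000.0, A=0.5):
    """dT/dt of the ullage from wall-to-ullage heat flux (closure: h_wu)."""
    (T,) = y
    return [h_wu * A * (T_WALL - T) / (m_g * cv)]

def pressure_history(h_wu, t_eval, m_g=0.05, V=1.0):
    sol = solve_ivp(ullage_ode, (t_eval[0], t_eval[-1]), [20.0],
                    t_eval=t_eval, args=(h_wu,))
    return m_g * R * sol.y[0] / V   # ideal-gas ullage pressure, Pa

t = np.linspace(0, 600, 50)
p_meas = pressure_history(2.0, t) + np.random.default_rng(1).normal(0, 50, t.size)

# Inverse method: find the closure coefficient that best reproduces the data
res = minimize_scalar(lambda h: np.sum((pressure_history(h, t) - p_meas) ** 2),
                      bounds=(0.1, 10.0), method="bounded")
print(f"calibrated h_wu ≈ {res.x:.2f} W/(m² K)")
```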

Keywords: destratification, hydrogen, modeling, pressure-drop, pressurization, sloshing, thermodynamics

Procedia PDF Downloads 68
125 Water Monitoring Sentinel Cloud Platform: Water Monitoring Platform Based on Satellite Imagery and Modeling Data

Authors: Alberto Azevedo, Ricardo Martins, André B. Fortunato, Anabela Oliveira

Abstract:

Water is under severe threat today because of the rising population, increased agricultural and industrial needs, and the intensifying effects of climate change. Due to sea-level rise, erosion, and demographic pressure, the coastal regions are of significant concern to the scientific community. The Water Monitoring Sentinel Cloud platform (WORSICA) service is focused on providing new tools for monitoring water in coastal and inland areas, taking advantage of remote sensing, in situ and tidal modeling data. WORSICA is a service that can be used to determine the coastline, coastal inundation areas, and the limits of inland water bodies using remote sensing (satellite and Unmanned Aerial Vehicles - UAVs) and in situ data (from field surveys). It applies to various purposes, from determining flooded areas (from rainfall, storms, hurricanes, or tsunamis) to detecting large water leaks in major water distribution networks. This service was built on components developed in national and European projects, integrated to provide a one-stop-shop service for remote sensing information, integrating data from Copernicus satellites and drones/unmanned aerial vehicles, validated by existing online in situ data. Since WORSICA is operational on the European Open Science Cloud (EOSC) computational infrastructures, the service can be accessed via a web browser and is freely available to all European public research groups without additional costs. In addition, the private sector will be able to use the service, but some usage costs may be applied, depending on the type of computational resources needed by each application/user. Although the service has three main sub-services, i) coastline detection, ii) inland water detection, and iii) water leak detection in irrigation networks, the present study shows an application of the service to the Óbidos lagoon in Portugal, where the user can monitor the evolution of the lagoon inlet and estimate the topography of the intertidal areas without any additional costs. The service has several distinct methodologies implemented, based on the computation of water indices (e.g., NDWI, MNDWI, AWEI, and AWEIsh) retrieved from satellite image processing. In conjunction with the tidal data obtained from the FES model, the system can estimate a coastline with the corresponding level, or even the topography of the intertidal areas, based on the Flood2Topo methodology. The outcomes of the WORSICA service can be helpful for several intervention areas, such as: i) emergency response, by providing fast access to inundated areas to support emergency rescue operations; ii) support of management decisions on hydraulic infrastructure operation to minimize damage downstream; iii) climate change mitigation, by minimizing water losses and reducing water mains operation costs; and iv) early detection of water leakages in difficult-to-access water irrigation networks, promoting their fast repair.
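
Two of the water indices named above are simple band ratios; a minimal sketch follows. The toy reflectance values and the zero threshold are illustrative assumptions (WORSICA's actual processing chain is more elaborate).

```python
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """McFeeters NDWI = (Green - NIR) / (Green + NIR); water typically where > 0."""
    return (green - nir) / (green + nir + 1e-12)

def mndwi(green: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Xu MNDWI = (Green - SWIR) / (Green + SWIR); less sensitive to built-up areas."""
    return (green - swir) / (green + swir + 1e-12)

# Toy 2x2 reflectance patches standing in for satellite band rasters
green = np.array([[0.08, 0.10], [0.07, 0.30]])
nir = np.array([[0.02, 0.04], [0.03, 0.45]])
water_mask = ndwi(green, nir) > 0.0   # simple global threshold (assumed)
print(water_mask)
```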

Keywords: remote sensing, coastline detection, water detection, satellite data, sentinel, Copernicus, EOSC

Procedia PDF Downloads 105
124 Social Vulnerability Mapping in New York City to Discuss Current Adaptation Practice

Authors: Diana Reckien

Abstract:

Vulnerability assessments are increasingly used to support policy-making in complex environments, like urban areas. Usually, vulnerability studies include the construction of aggregate (sub-)indices and the subsequent mapping of indices across an area of interest. Vulnerability studies have a couple of advantages: they are great communication tools, can inform a wider general debate about environmental issues, and can help allocate and efficiently target scarce resources for adaptation policy and planning. However, they also face a number of challenges: vulnerability assessments are constructed on the basis of a wide range of methodologies, and there is no single framework or methodology that has proven to serve best in certain environments; indicators vary highly according to the spatial scale used; different variables and metrics produce different results; and aggregate or composite vulnerability indicators that are mapped easily distort or bias the picture of vulnerability, as they hide the underlying causes of vulnerability and level out conflicting causes of vulnerability in space. So, there is an urgent need to further develop the methodology of vulnerability studies towards a common framework, which is one motivation of this paper. We introduce a social vulnerability approach, which is compared with other approaches of bio-physical or sectoral vulnerability studies in terms of a common methodology for index construction, guidelines for mapping, assessment of sensitivity, and verification of variables. Two approaches are commonly pursued in the literature. The first one is an additive approach, in which all potentially influential variables are weighted according to their importance for the vulnerability aspect and then added to form a composite vulnerability index per unit area. The second approach includes variable reduction, mostly Principal Component Analysis (PCA), which reduces the number of interrelated variables into a smaller number of less correlated components, which are also added to form a composite index. We test these two approaches of constructing indices on the area of New York City, as well as two different metrics of input variables, and compare the outcomes for the five boroughs of NY. Our analysis shows that the mapping exercise yields particularly different results in the outer regions and parts of the boroughs, such as outer Queens and Staten Island. However, some of these parts, particularly the coastal areas, receive the highest attention in the current adaptation policy. We infer from this that the current adaptation policy and practice in NY might need to be discussed, as these outer urban areas show relatively low social vulnerability compared with the more central parts, i.e., the high-density areas of Manhattan, Central Brooklyn, Central Queens, and the southern Bronx. The inner urban parts receive less adaptation attention but bear a higher risk of damage in case of hazards in those areas. This is conceivable, e.g., during large heat waves, which would affect the inner and poorer parts of the city more than the outer urban areas. In light of the recent planning practice of NY, one needs to question and discuss who in NY makes adaptation policy for whom, and the presented analyses point towards an under-representation of the needs of the socially vulnerable population, such as the poor, the elderly, and ethnic minorities, in the current adaptation practice in New York City.
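
A minimal sketch of the two index-construction approaches follows, on a random stand-in indicator matrix; the indicator names, equal weights, and two retained components are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical indicator matrix: rows = census tracts, columns = indicators
# (e.g., % poverty, % elderly, % minority) -- illustrative values only
rng = np.random.default_rng(42)
X = rng.random((8, 3))
Z = StandardScaler().fit_transform(X)   # z-score each indicator

# Approach 1: additive index with importance weights (equal weights assumed)
weights = np.array([1 / 3, 1 / 3, 1 / 3])
additive_index = Z @ weights

# Approach 2: PCA -- reduce correlated indicators to components, then add
pca = PCA(n_components=2)
components = pca.fit_transform(Z)
pca_index = components.sum(axis=1)

# The two composites can rank the same tracts quite differently
print(np.argsort(additive_index), np.argsort(pca_index))
```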

Keywords: vulnerability mapping, social vulnerability, additive approach, Principal Component Analysis (PCA), New York City, United States, adaptation, social sensitivity

Procedia PDF Downloads 373
123 Construction Engineering and Cocoa Agriculture: A Synergistic Approach for Improved Livelihoods of Farmers

Authors: Felix Darko-Amoah, Daniel Acquah

Abstract:

In contemporary ecosystems of developing countries like Ghana, the need to explore innovative solutions for sustainable livelihoods of farmers is more important than ever. With Ghana's population growing steadily and the demand for food, fiber, and shelter increasing, it is imperative that the construction industry and agriculture come together to address the challenges faced by farmers in the country. In order to enhance the livelihoods of cocoa farmers in Ghana, this paper provides an innovative strategy that aims to integrate the areas of civil engineering and cash crop agriculture. This study focuses on cocoa cultivation in poorer nations, where farmers confront a variety of difficulties, including restricted access to financing, subpar infrastructure, and insufficient support services. We seek to improve farmers' access to financing, improve infrastructure, and provide support services that are essential to their success by combining the fields of construction engineering and cocoa production. The findings of the study are beneficial to cocoa producers, community extension agents, and construction engineers. In order to accomplish our objectives, we conducted field investigations with 307 participants in particular cocoa-growing communities in the Western Region of Ghana. Several studies have shown that there is a lack of adequate infrastructure and financing in developing nations like Ghana, leading to low yields, subpar beans, and low farmer profitability. Our goal is to give farmers access to better infrastructure, better financing, and the support services crucial to their success through the fusion of construction engineering and cocoa production. Based on data gathered from the field investigations, the results show that the employment of appropriate technology and methods for developing structures, roads, and other infrastructure in rural regions is one of the essential components of this strategy. For instance, we find that using affordable, environmentally friendly materials like bamboo, rammed earth, and mud bricks can help to cut expenditures while also protecting the environment. By applying simple relational techniques to the data gathered, the results also show that construction engineers are crucial in planning and building infrastructure that is appropriate for the local environment and circumstances and resilient to natural disasters like floods. The convergence of construction engineering and cash crop cultivation is thus another crucial component of the agriculture-construction interplay. For instance, farmers can receive financial assistance to buy essential inputs, such as seeds, fertilizer, and tools, as well as training in proper farming methods. Moreover, extension services can be offered to assist farmers in marketing their crops and enhancing their livelihoods and revenue. In conclusion, our analysis of responses from the 307 participants shows that the combination of construction engineering and cash crop agriculture offers an innovative approach to improving farmers' livelihoods in cocoa farming communities in Ghana. By incorporating the findings of this study into core decision-making, policymakers can help farmers build sustainable and profitable livelihoods by addressing challenges such as limited access to financing, poor infrastructure, and inadequate support services.

Keywords: cocoa agriculture, construction engineering, farm buildings and equipment, improved livelihoods of farmers

Procedia PDF Downloads 69
122 Pluripotent Stem Cells as Therapeutic Tools for Limbal Stem Cell Deficiencies and Drug Testing

Authors: Aberdam Edith, Sangari Linda, Petit Isabelle, Aberdam Daniel

Abstract:

Background and Rationale: A transparent, avascularised cornea is essential for normal vision and depends on limbal stem cells (LSC) that reside between the cornea and the conjunctiva. Ocular burns or injuries may destroy the limbus, causing limbal stem cell deficiency (LSCD). The cornea becomes vascularised by invading conjunctival cells, and the stroma scars, resulting in corneal opacity and loss of vision. A grafted autologous limbus or cultivated autologous LSC can restore vision, unless both eyes are affected. Alternative cellular sources have been tested in the last decades, including oral mucosa and hair follicle epithelial cells. However, only partial success has been achieved with these cells, since they were not able to uniformly commit into corneal epithelial cells. Human induced pluripotent stem cells (iPSC) display both unlimited growth capacity and the ability to differentiate into any cell type. Our goal was to design a standardized and reproducible protocol to produce transplantable autologous LSC from patients through cell reprogramming technology. Methodology: First, a keratinocyte primary culture was established from a small number of plucked hair follicles of healthy donors. The resulting epithelial cells were reprogrammed into iPSCs and further differentiated into corneal epithelial cells (CEC), according to a robust protocol that recapitulates the main steps of corneal embryonic development. qRT-PCR analysis and immunofluorescent staining during the course of differentiation confirmed the expression of stage-specific markers of the corneal embryonic lineage. First appear the ectodermal progenitor-specific cytokeratins K8/K18, followed at day 7 by limbal-specific PAX6, TP63 and cytokeratins K5/K14. At day 15, K3/K12+ corneal cells are present. To amplify the iPSC-derived LSC (named COiPSC), intact small epithelial colonies were detached and cultivated in limbal cell-specific medium. Under these culture conditions, the COiPSC can be frozen and thawed at any passage, while retaining their corneal characteristics for at least eight passages. To evaluate the potential of COiPSC as an alternative ocular toxicity model, COiPSC were treated at passages P0 to P4 with increasing amounts of SDS and benzalkonium. Cell proliferation and apoptosis of treated cells were compared to LSC and to the SV40-immortalized human corneal epithelial cell line (HCE) routinely used by cosmetological industrials. Of note, HCE are more resistant to toxicity than LSC. At P0, COiPSC were systematically more resistant to chemical toxicity than LSC and even than HCE. Remarkably, this behavior changed with passage, since COiPSC at P2 became identical to LSC and thus closer to physiology than HCE. Comparative transcriptome analysis confirmed that COiPSC from P2 are similar to a mixture of LSC and CEC. Finally, by an organotypic reconstitution assay, we demonstrated the ability of COiPSC to produce a 3D corneal epithelium on a stromal equivalent made of keratocytes. Conclusion: COiPSC could become valuable for two main applications: (1) an alternative robust tool to perform, in a reproducible and physiological manner, toxicity assays for cosmetic products and pharmacological tests of drugs; (2) an alternative autologous source for cornea transplantation for LSCD.

Keywords: limbal stem cell deficiency, iPSC, cornea, limbal stem cells

Procedia PDF Downloads 387
121 The Istrian Istrovenetian-Croatian Bilingual Corpus

Authors: Nada Poropat Jeletic, Gordana Hrzica

Abstract:

Bilingual conversational corpora represent a meaningful and highly comprehensive data source for investigating genuine contact phenomena in non-monitored bilingual speech production. They can be particularly useful for bilingual research, since some features of bilingual interaction can hardly be accessed with more traditional methodologies (e.g., elicitation tasks). The method of language sampling provides the resources for describing language interaction in a bilingual community and/or in bilingual situations (e.g., code-switching, amount of language used, number of languages used, etc.). To capture these phenomena in genuine communication situations, such sampling should be as close as possible to spontaneous communication. Bilingual spoken corpus design is methodologically demanding. Therefore, this paper aims at describing the methodological challenges that apply to the design of the conversational Istrian Istrovenetian-Croatian Bilingual Corpus. Croatian is the first official language of the Croatian-Italian officially bilingual Istria County, while Istrovenetian is a diatopic subvariety of Venetian, a long-lasting lingua franca in the Istrian peninsula, the mother tongue of the members of the Italian National Community in Istria, and the primary code of informal everyday communication among the Istrian Italophone population. Within the CLARIN infrastructure, TalkBank is being used, as it provides relevant procedures for designing and analyzing bilingual corpora. Furthermore, public availability allows for easy replication of studies and cumulative progress as a research community builds up around the corpus, while the tools developed within the field of corpus linguistics enable easy retrieval and analysis of information. The method of language sampling employed is kept at the level of spontaneous communication, in order to maximise the naturalness of the collected conversational data. All speakers have provided written informed consent, in which they agree to be recorded at a random point within the period of one month after signing the consent. Participants are administered a background questionnaire providing information about their socioeconomic status and the exposure to and usage of the languages in the participants' social networks. The recorded data are being transcribed, phonologically adapted within a standardized orthographic form, coded, and segmented (speech streams are being segmented into communication units based on syntactic criteria), and are being marked following the CHAT transcription system and its associated CLAN suite of programmes within the TalkBank toolkit. The corpus currently consists of transcribed sound recordings of 36 bilingual speakers, while the target is to publish the whole corpus by the end of 2020, sampling spontaneous conversations among approximately 100 speakers from all the bilingual areas of Istria to ensure representativeness (the participants are being recruited across three generations of native bilingual speakers in all the bilingual areas of the peninsula). Conversational corpora are still rare in TalkBank, so the Corpus will contribute to BilingBank as a highly relevant and scientifically reliable resource for an internationally established and active research community. The research on communities with societal bilingualism will contribute to the growing body of research on bilingualism and multilingualism, especially regarding topics of language dominance, language attrition and loss, interference, and code-switching.

Keywords: conversational corpora, bilingual corpora, code-switching, language sampling, corpus design methodology

Procedia PDF Downloads 116
120 Health Equity in Hard-to-Reach Rural Communities in Abia State, Nigeria: An Asset-Based Community Development Intervention to Influence Community Norms and Address the Social Determinants of Health in Hard-to-Reach Rural Communities

Authors: Chinasa U. Imo, Queen Chikwendu, Jonathan Ajuma, Mario Banuelos

Abstract:

Background: Sociocultural norms primarily influence the health-seeking behavior of populations in rural communities. In the Nkporo community, Abia State, Nigeria, the sociocultural perception of diseases runs counter to biomedical definitions, and the population relies heavily on traditional medicine and practices. In a state where birth asphyxia and sepsis account for the major causes of death among neonates, malaria leads the other causes of mortality, followed by common preventable diseases such as diarrhea, pneumonia, acute respiratory tract infection, malnutrition, and HIV/AIDS. Most local mothers attribute their health conditions and those of their children to witchcraft attacks, the hand of God, and ancestral influences. This shapes how they see antenatal and postnatal care, the choice of place for accessing care and birth delivery, the response to children's illnesses, immunization, and nutrition. Method: To implement a community health improvement program, we adopted an asset-based community development model to address the normative and social determinants of health. The first step was to use a qualitative approach to conduct a community health needs baseline assessment, involving focus group discussions with twenty-five (25) youths aged 18-25, semi-structured interviews with ten (10) officers-in-charge of primary health centers, eight (8) ward health committee members, and nine (9) community leaders. Secondly, we designed an intervention program. Going forward, we will proceed with implementing and evaluating this program. Result: The priority needs identified by the communities were malaria, lack of clean drinking water, and the need for behavioral change information. The study also highlighted the significant influence of youths on their peers, families, and community as caregivers and information interpreters. Based on the findings, the NGO SieDi-Hub collaborated with the Abia State Ministry of Health, the State Primary Healthcare Agency, and Empower Next Generations to design a one-year "Community Health Youth Champions Pilot Program". Twenty (20) youths in the community were trained and equipped to champion a participatory approach to bridging the gap between access to and delivery of primary healthcare, and to adjust sociocultural norms to improve health equity for people in the Nkporo community, who face limited education, lack of access to health information, and scarce quality healthcare facilities, using an innovative community-led improvement approach. Conclusion: Youths play a vital role in achieving health equity, being a vulnerable population with significant influence. To ensure effective primary healthcare, strategies must include cultural humility. The asset-based community development model offers valuable tools, and this article will share ongoing lessons from the intervention's behavioral change strategies with young people.

Keywords: asset-based community development, community health, primary health systems strengthening, youth empowerment

Procedia PDF Downloads 52
119 Using AI Based Software as an Assessment Aid for University Engineering Assignments

Authors: Waleed Al-Nuaimy, Luke Anastassiou, Manjinder Kainth

Abstract:

As the process of teaching has evolved with the advent of new technologies over the ages, so has the process of learning. Educators have perpetually been on the lookout for new technology-enhanced methods of teaching in order to increase learning efficiency and decrease ever-expanding workloads. Shortly after the invention of the internet, web-based learning started to pick up in the late 1990s, and educators quickly found that the processes of providing learning material and marking assignments could change thanks to the connectivity offered by the internet. With the creation of early web-based virtual learning environments (VLEs) such as SPIDER and Blackboard, it soon became apparent that VLEs resulted in higher reported computer self-efficacy among students, but at the cost of students being less satisfied with the learning process. It may be argued that the impersonal nature of VLEs and their limited functionality were the leading factors contributing to this reported dissatisfaction. To this day, often faced with the prospect of assigning colossal engineering cohorts their homework and assessments, educators may frequently choose optimally curated assessment formats, such as multiple-choice quizzes and numerical answer input boxes, so that the automated grading software embedded in the VLE can save time and mark student submissions instantaneously. A crucial skill that is meant to be learnt during most science and engineering undergraduate degrees is gaining confidence in using, solving, and deriving mathematical equations. Equations underpin a significant portion of the topics taught in many STEM subjects, and it is in homework assignments and assessments that this understanding is tested. It is not hard to see that this becomes challenging if the majority of assignment formats students engage with are multiple-choice questions, leaving educators with a reduced perspective of their students' ability to manipulate equations. Artificial intelligence (AI) has in recent times been shown to be an important consideration for many technologies. In our paper, we explore the use of new AI-based software designed to work in conjunction with current VLEs. Drawing on our experience with the software, we discuss its potential to solve a selection of problems, ranging from impersonality to the reduction of educator workloads by speeding up the marking process. We examine the software's potential to increase learning efficiency through its features, which claim to allow more customized and higher-quality feedback. We investigate the usability of features allowing students to input equation derivations in a range of different forms and discuss relevant observations associated with these input methods. Furthermore, we make ethical considerations and discuss potential drawbacks of the software, including the extent to which optical character recognition (OCR) could play a part in the perpetuation of errors and create disagreements between student intent and submitted assignment answers. It is the intention of the authors that this study will be useful as an example of the implementation of AI in a practical assessment scenario, insofar as it serves as a springboard for further considerations and studies that utilise AI in the setting and marking of science and engineering assignments.

Keywords: engineering education, assessment, artificial intelligence, optical character recognition (OCR)

Procedia PDF Downloads 106
118 Spatial Assessment of Creek Habitats of Marine Fish Stock in Sindh Province

Authors: Syed Jamil H. Kazmi, Faiza Sarwar

Abstract:

The Indus delta in Sindh Province forms the largest creeks zone of Pakistan. The Sindh coast extends from the mouth of the Hab River to the Sir Creek area. In this paper, we consider the major creeks from the site of Bin Qasim Port in Karachi to the jetty of Keti Bunder in Thatta District. A general decline in the mangrove forest has been observed within the span of the last 25 years. Unprecedented human interventions have badly damaged the creek habitats; these include haphazard urban development, industrial and sewage disposal, illegal cutting of mangrove forests, and reduced and inconsistent freshwater flow, mainly from the Jhang and Indus rivers. These activities not only harm the creek habitats but have also substantially affected the fish stock. Fishing is the main livelihood of coastal people, but alongside the above-mentioned threats it is under enormous pressure from fish catches, resulting in unchecked overutilization of the fish resources. This pressure becomes almost unbearable when combined with deleterious fishing methods, an uncontrolled fleet size, increasing trash and by-catch of juveniles, and illegal mesh sizes. Along with these anthropogenic interventions, the study area lies in the red zone of tropical cyclones and active seismicity, causing floods and sea intrusion, damaging mangrove forests and devastating the fish stock. In order to sustain the natural resources of the Indus Creeks, this study was initiated with the support of FAO, WWF and NIO; the main purpose was to develop a geospatial dataset for fish stock assessment. The study was spread over a year (2013-14) on a monthly basis and mainly included a detailed fish stock survey, water analysis and a few other environmental analyses. The environmental analysis also included habitat classification of the study area, carried out through remote sensing techniques for a 22-year time series (1992-2014). Furthermore, out of 252 species collected, fifteen species from estuarine and marine groups were short-listed, and their weight, health and growth were measured at each creek and linked to GIS data through the SPSS system. A habitat suitability analysis was conducted by deriving surface topography and aspect through different GIS techniques. The output variables were then overlaid in the GIS system to measure creek productivity, yielding the following classes: extremely productive, highly productive, productive, moderately productive and less productive. This study demonstrates the use of geospatial tools for the evaluation of fisheries resources and for risk-zone mapping of creek habitats. It also shows that geospatial technologies are highly beneficial for identifying areas of high environmental risk in the Sindh Creeks. The study clearly reveals that creeks with high rugosity are more productive than creeks with low levels of rugosity. The study area has immense potential to boost the economy of Pakistan in terms of fish export if geospatial techniques are implemented in place of conventional techniques.
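
The overlay step described above can be sketched as a weighted raster combination followed by classification into the five productivity classes. The layers, weights and thresholds below are invented placeholders, not the study's calibrated values:

```python
# Illustrative weighted-overlay productivity classification (NumPy only).
# The rasters and weights are hypothetical stand-ins for the study's
# GIS-derived variables (e.g., rugosity, mangrove cover, water quality).
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)                      # dummy raster grid
rugosity = rng.random(shape)            # surface roughness, normalized 0-1
mangrove = rng.random(shape)            # mangrove cover fraction
water_quality = rng.random(shape)       # composite water-quality index

weights = {"rugosity": 0.5, "mangrove": 0.3, "water_quality": 0.2}  # assumed
suitability = (weights["rugosity"] * rugosity
               + weights["mangrove"] * mangrove
               + weights["water_quality"] * water_quality)

# Split the continuous score into the paper's five productivity classes.
bins = [0.2, 0.4, 0.6, 0.8]
labels = ["less", "moderately", "productive", "highly", "extremely"]
classes = np.digitize(suitability, bins)
for i, name in enumerate(labels):
    print(f"{name:>12} productive: {(classes == i).mean():5.1%} of cells")
```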

Keywords: fish stock, geo-spatial, productivity analysis, risk

Procedia PDF Downloads 223
117 Diffusion MRI: Clinical Application in Radiotherapy Planning of Intracranial Pathology

Authors: Pomozova Kseniia, Gorlachev Gennadiy, Chernyaev Aleksandr, Golanov Andrey

Abstract:

In clinical practice, and especially in stereotactic radiosurgery planning, the significance of diffusion-weighted imaging (DWI) is growing. This makes essential the existence of software capable of quickly processing and reliably visualizing diffusion data, equipped with tools for their analysis in terms of different tasks. We are developing the «MRDiffusionImaging» software in standard C++. The subject part has been moved to separate class libraries and can be used on various platforms. The user interface is built with Windows WPF (Windows Presentation Foundation), a technology for building Windows applications with access to all components of the .NET 5 or .NET Framework platform ecosystem. One of its important features is the use of a declarative markup language, XAML (eXtensible Application Markup Language), with which one can conveniently create, initialize and set properties of objects with hierarchical relationships. Graphics are generated using the DirectX environment. The MRDiffusionImaging software package has been implemented for processing diffusion magnetic resonance imaging (dMRI), allowing images to be loaded and viewed sorted by series. An algorithm for "masking" dMRI series based on T2-weighted images was developed using a deformable surface model to exclude tissues that are not related to the area of interest from the analysis. An algorithm for distortion correction using deformable image registration based on autocorrelation of local structure has been developed. The maximum voxel dimension was 1.03 ± 0.12 mm. In an elementary volume of the brain, the diffusion tensor is geometrically interpreted using an ellipsoid, which is an isosurface of the probability density of a molecule's diffusion. For the first time, non-parametric intensity distributions, neighborhood correlations, and inhomogeneities are combined in a single algorithm for segmentation of white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF). A tool for calculating mean diffusivity and fractional anisotropy has been created, on the basis of which quantitative maps can be built for solving various clinical problems. Functionality has been created that allows clustering and segmenting images to individualize the clinical volume of radiation treatment and further assess the response (median Dice score = 0.963 ± 0.137). White matter tracts of the brain were visualized using two algorithms: a deterministic one (fiber assignment by continuous tracking) and a probabilistic one based on the Hough transform. The latter tests candidate curves in each voxel, assigning to each a score computed from the diffusion data, and then selects the curves with the highest scores as the potential anatomical connections. In the context of functional radiosurgery, it is possible to reduce the irradiation volume of the internal capsule receiving 12 Gy from 0.402 cc to 0.254 cc. «MRDiffusionImaging» will improve the efficiency and accuracy of diagnostics and stereotactic radiotherapy of intracranial pathology. We are developing software with integrated, intuitive support for processing, analysis, and inclusion in the process of radiotherapy planning and the evaluation of its results.
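
The quantitative maps mentioned above rest on the standard diffusion-tensor definitions of mean diffusivity (MD) and fractional anisotropy (FA). A minimal sketch of that per-voxel computation follows; it illustrates the standard formulas, not the authors' C++ implementation:

```python
# MD and FA from the eigenvalues of a 3x3 diffusion tensor (standard DTI
# definitions). Illustrative Python, not the authors' C++ code.
import numpy as np

def md_fa(tensor: np.ndarray) -> tuple[float, float]:
    """tensor: symmetric 3x3 diffusion tensor for one voxel."""
    lam = np.linalg.eigvalsh(tensor)           # eigenvalues l1 <= l2 <= l3
    md = lam.mean()                            # MD = (l1 + l2 + l3) / 3
    # FA = sqrt(3/2) * ||lam - MD|| / ||lam||
    fa = np.sqrt(1.5) * np.linalg.norm(lam - md) / np.linalg.norm(lam)
    return md, fa

# Example: a prolate tensor typical of coherent white matter (units mm^2/s).
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
md, fa = md_fa(D)
print(f"MD = {md:.2e} mm^2/s, FA = {fa:.2f}")   # FA ~ 0.8 for this tensor
```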

Keywords: diffusion-weighted imaging, medical imaging, stereotactic radiosurgery, tractography

Procedia PDF Downloads 57
116 Nanoscale Photo-Orientation of Azo-Dyes in Glassy Environments Using Polarized Optical Near-Field

Authors: S. S. Kharintsev, E. A. Chernykh, S. K. Saikin, A. I. Fishman, S. G. Kazarian

Abstract:

Recent advances in improving information storage performance are inseparably linked with the circumvention of fundamental constraints such as the superparamagnetic limit in heat-assisted magnetic recording, charge loss tolerance in solid-state memory and the Abbe diffraction limit in optical storage. A substantial breakthrough in the development of nonvolatile storage devices with dimensional scaling has been achieved due to phase-change chalcogenide memory, which nowadays meets market needs to the greatest advantage. Further progress is aimed at the development of versatile nonvolatile high-speed memory combining the potentials of random access memory and archival storage. The well-established properties of light at the nanoscale empower us to record optical information with ultrahigh density, scaled down to a single molecule, which then sets the size of a pit. Indeed, diffraction-limited optics is able to record as much information as ~1 Gb/in². Nonlinear optical effects, for example two-photon fluorescence recording, allow one to decrease the extent of the pit even further, resulting in recording densities up to ~100 Gb/in². Going beyond the diffraction limit, owing to the sub-wavelength confinement of light, pushes the pit size down to a single chromophore, which is, on average, ~1 nm in length. Thus, the memory capacity can be increased up to the theoretical limit of 1 Pb/in². Moreover, the field confinement provides faster recording and readout operations due to the enhanced light-matter interaction. This, in turn, leads to the miniaturization of optical devices and a decrease in energy supply down to ~1 μW/cm². Intrinsic features of light such as multiple modes, mixed polarization and angular momentum, in addition to the underlying optical and holographic tools for writing/reading, enrich the storage and encryption of optical information. In particular, the finite extent of near-field penetration, falling into the range of 50-100 nm, makes it possible to perform 3D volume (layer-to-layer) recording/readout of optical information. In this study, we provide comprehensive evidence of the isotropic-to-homeotropic phase transition of an azobenzene-functionalized polymer thin film exposed to light and a dc electric field, using near-field optical microscopy and scanning capacitance microscopy. We unravel the near-field Raman dichroism of sub-10 nm thick epoxy-based side-chain azo-polymer films with polarization-controlled tip-enhanced Raman scattering. In our study, the orientation of azo-chromophores is controlled with a voltage-biased gold tip rather than with light polarization. Isotropic in-plane and homeotropic out-of-plane arrangements of azo-chromophores in a glassy environment can be distinguished with transverse and longitudinal optical near-fields. We demonstrate that both phases are unambiguously visualized by 2D mapping of their local dielectric properties with scanning capacitance microscopy. The stability of the polar homeotropic phase is strongly sensitive to the thickness of the thin film. We analyze the α-transition of the azo-polymer by detecting a temperature-dependent phase jump of an AFM cantilever when passing through the glass transition temperature. Overall, we anticipate further improvements in optical storage performance, approaching the single-molecule level.
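
A back-of-envelope check of the quoted areal densities, assuming one bit per square pit of side d; the specific pit sizes below are illustrative choices, not figures from the paper:

```python
# Rough areal densities for the pit sizes discussed above, assuming one
# bit per square pit of side d. Pit sizes are illustrative assumptions.
IN_TO_NM = 25.4e6  # nanometres per inch

def bits_per_sq_inch(pit_nm: float) -> float:
    return (IN_TO_NM / pit_nm) ** 2

for label, d in [("diffraction-limited (~800 nm)", 800),
                 ("two-photon (~80 nm)", 80),
                 ("single chromophore (~1 nm)", 1)]:
    print(f"{label}: {bits_per_sq_inch(d):.2e} bits/in^2")
# -> roughly 1 Gb/in^2, 100 Gb/in^2 and ~0.6 Pb/in^2, consistent with the
#    ~1 Gb, ~100 Gb and ~1 Pb per square inch figures quoted in the text.
```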

Keywords: optical memory, azo-dye, near-field, tip-enhanced Raman scattering

Procedia PDF Downloads 162
115 Prospective Museum Visitor Management Based on Prospect Theory: A Pragmatic Approach

Authors: Athina Thanou, Eirini Eleni Tsiropoulou, Symeon Papavassiliou

Abstract:

The problem of museum visitor experience and congestion management, in its various forms, has come increasingly under the spotlight over the last few years, since overcrowding can significantly decrease the quality of visitors’ experience. Evidence suggests that on busy days the amount of time a visitor spends inside a crowded house museum can fall by up to 60% compared to a quiet mid-week day. In this paper we consider the aforementioned problem by treating museums as evolving social systems that induce constraints. However, in a cultural heritage space, as opposed to the majority of social environments, the momentum of the experience is primarily controlled by the visitors themselves. Visitors typically behave selfishly regarding the maximization of their own Quality of Experience (QoE), commonly expressed through a utility function that takes several parameters into consideration, with crowd density and waiting/visiting time being among the key ones. In such a setting, congestion occurs either when the utility of one visitor decreases due to the behavior of other persons, or when the costs of undertaking an activity rise due to the presence of other persons. We initially investigate how visitors’ behavioral risk attitudes, as captured and represented by prospect theory, affect their decisions in resource-sharing settings, where visitors’ decisions and experiences are strongly interdependent. In contrast to the majority of existing studies and literature, we highlight that visitors are not risk-neutral utility maximizers but demonstrate risk-aware behavior according to their personal risk characteristics. In our work, exhibits are organized into two groups: (a) “safe exhibits”, which correspond to less congested ones, where visitors receive guaranteed satisfaction in accordance with the visiting time invested, and (b) common pool of resources (CPR) exhibits, the most popular exhibits, with possibly increased congestion and an uncertain outcome in terms of visitor satisfaction. A key difference is that visitor satisfaction from a CPR exhibit depends strongly not only on the time investment of a specific visitor but also on that of the rest of the visitors. In the latter case, over-investment of time, or equivalently increased congestion, potentially leads to “exhibit failure”, meaning that visitors gain no satisfaction from observing the exhibit due to high congestion. We present a framework in which each visitor, in a distributed manner, determines his or her time investment in safe or CPR exhibits to optimize the obtained QoE. Based on this framework, we analyze and evaluate how visitors, acting as prospect-theoretic decision-makers, respond and react to the various pricing policies imposed by the museum curators. Based on detailed evaluation results and experiments, we present interesting observations regarding the impact of several parameters and characteristics, such as visitor heterogeneity and the use of alternative pricing policies, on scalability, user satisfaction, museum capacity, resource fragility, and operating-point stability. Furthermore, we study and present the effectiveness of alternative pricing mechanisms, when used as implicit tools, in dealing with the congestion management problem in museums and potentially decreasing the exhibit failure probability (fragility), while taking visitor risk preferences into account.
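
As an illustration of the risk attitudes that prospect theory encodes, the sketch below evaluates a safe exhibit against a CPR exhibit using the Kahneman-Tversky value function. The parameters are the classic textbook estimates and the payoffs are invented, not the paper's fitted model; probability weighting is omitted for brevity:

```python
# Illustrative prospect-theoretic value function (Kahneman-Tversky form)
# for a visitor weighing a "safe" exhibit against a CPR exhibit.
# Parameters are the classic estimates, not the paper's fitted values.
def prospect_value(x: float, alpha: float = 0.88, beta: float = 0.88,
                   lam: float = 2.25) -> float:
    """Value of outcome x relative to a reference point (x = 0)."""
    if x >= 0:
        return x ** alpha                 # risk-averse in gains
    return -lam * (-x) ** beta            # loss-averse in losses

# Safe exhibit: guaranteed payoff proportional to invested time.
safe = prospect_value(1.0)
# CPR exhibit: higher payoff if uncongested, a loss if the exhibit "fails".
p_fail = 0.4                              # assumed congestion-failure probability
cpr = (1 - p_fail) * prospect_value(2.0) + p_fail * prospect_value(-1.0)
print(f"safe: {safe:.2f}, CPR expected prospect value: {cpr:.2f}")
# Loss aversion (lam > 1) steers this visitor toward the safe exhibit.
```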

Keywords: museum resource and visitor management, congestion management, prospect theory, cyber physical social systems

Procedia PDF Downloads 160
114 Environmental Life Cycle Assessment of Circular, Bio-Based and Industrialized Building Envelope Systems

Authors: N. Cihan Kayaçetin, Stijn Verdoodt, Alexis Versele

Abstract:

The construction industry accounts for one-third of all waste generated in European Union (EU) countries. The Circular Economy Action Plan of the EU aims to tackle this issue and aspires to enhance the sustainability of the construction industry by adopting more circular principles and bio-based material use. The Interreg Circular Bio-Based Construction Industry (CBCI) project was conceived to research how this adoption can be facilitated. For this purpose, an approach is developed that integrates technical, legal and social aspects and provides business models for circular designing and building with bio-based materials. In the scope of the project, the research outputs are to be displayed in a real-life setting by constructing a demo terraced single-family house, the living lab (LL) located in Ghent (Belgium). The realization of the LL is conducted in a step-wise approach that includes iterative processes for the design, description, criteria definition and multi-criteria assessment of building components. The essence of the research lies in the exploratory approach to state-of-the-art building envelope and technical system options for achieving an optimum combination for circular and bio-based construction. For this purpose, nine preliminary designs (PDs) for the building envelope were generated, consisting of three basic construction methods: masonry, lightweight steel construction and wood framing construction, supplemented with bio-based construction methods such as cross-laminated timber (CLT) and massive wood framing. A comparative analysis of the PDs was conducted by utilizing several complementary tools to assess circularity. This paper focuses on the life cycle assessment (LCA) approach for evaluating the environmental impact of the LL Ghent. The adoption of an LCA methodology was considered critical for providing a comprehensive set of environmental indicators. The PDs were developed at the component level, in particular for the (i) inclined roof, (ii-iii) front and side façades, (iv) internal walls and (v-vi) floors. The assessment was conducted at two levels: component and building. The options for each component were compared in a first iteration, and the PDs, as assemblies of components, were then analyzed further. The LCA was based on a functional unit of one square meter of each component, and CEN indicators were utilized for impact assessment over a reference study period of 60 years. A total of 54 building components composed of 31 distinct materials were evaluated in the study. The results indicate that wood framing construction supplemented with bio-based construction methods performs environmentally better than the masonry or steel-construction options. An analysis of the correlation between the total weight of components and environmental impact was also conducted. Masonry structures display high environmental impact and weight, steel structures display low weight but relatively high environmental impact, and wood framing construction displays both low weight and low environmental impact. The study provided valuable outputs at two levels: (i) several improvement options at the component level, through substitution of materials with critical weight and/or impact per unit, and (ii) feedback on environmental performance for the decision-making process during the design phase of a circular single-family house.
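
The functional-unit accounting described above can be sketched as a simple per-m² aggregation over the 60-year reference period. The materials, quantities, service lives and characterization factors below are invented placeholders, not the study's inventory data:

```python
# Minimal sketch of component-level LCA aggregation per functional unit
# (1 m^2 of component, 60-year reference period). All numbers are
# invented placeholders, not the study's inventory.
REFERENCE_YEARS = 60

# (material, kg per m^2, kg CO2-eq per kg, service life in years)
wall_inventory = [
    ("timber frame",           18.0, -1.2, 60),  # biogenic storage => negative
    ("wood fibre insulation",   9.0, -0.4, 60),
    ("gypsum board",           11.0,  0.3, 30),  # replaced once in 60 years
]

def gwp_per_m2(inventory) -> float:
    total = 0.0
    for _, kg, factor, life in inventory:
        replacements = REFERENCE_YEARS / life    # production + replacements
        total += kg * factor * replacements
    return total

print(f"GWP: {gwp_per_m2(wall_inventory):+.1f} kg CO2-eq per m^2 over 60 y")
```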

Keywords: circular and bio-based materials, comparative analysis, life cycle assessment (LCA), living lab

Procedia PDF Downloads 156
113 An Interdisciplinary Maturity Model for Accompanying Sustainable Digital Transformation Processes in a Smart Residential Quarter

Authors: Wesley Preßler, Lucie Schmidt

Abstract:

Digital transformation is playing an increasingly important role in the development of smart residential quarters. In order to accompany and steer this process, and ultimately make the success of the transformation efforts measurable, it is helpful to use an appropriate maturity model. However, conventional maturity models for digital transformation focus primarily on the evaluation of processes and neglect the information and power imbalances between stakeholders, which affects the validity of the results. The Multi-Generation Smart Community (mGeSCo) research project is developing an interdisciplinary maturity model that integrates the dimensions of digital literacy, interpretive patterns, and technology acceptance to address this gap. As part of the mGeSCo project, the technological development of selected dimensions in the Smart Quarter Jena-Lobeda (Germany) is being investigated. A specific maturity model, based on Cohen's Smart Cities Wheel, evaluates the central dimensions of Working, Living, Housing and Caring. To improve the reliability and relevance of the maturity assessment, the factors of digital literacy, interpretive patterns and technology acceptance are integrated into the developed model. The digital literacy dimension examines stakeholders' skills in using digital technologies, which influence their perception and assessment of technological maturity. Digital literacy is measured by means of surveys, interviews and participant observation, using the European Commission's Digital Competence Framework (DigComp) as a basis. Interpretive patterns provide information about how individuals perceive digital technologies and ascribe meaning to them. These are not mere assessments, prejudices or stereotyped perceptions, but collective patterns, rules, attributions of meaning and the cultural repertoire that lead to these opinions and attitudes. Understanding these interpretations helps in assessing the overarching readiness of stakeholders to digitally transform their neighborhood. This involves examining people's attitudes, beliefs and values regarding technology adoption, as well as their perceptions of the benefits and risks associated with digital tools. These insights provide important data for a holistic view and inform the steps needed to prepare individuals in the neighborhood for a digital transformation. Technology acceptance, another crucial factor for successful digital transformation, captures the willingness of individuals to adopt and use new technologies. Surveys or questionnaires based on Davis' Technology Acceptance Model can be used to complement interpretive patterns in measuring neighborhood acceptance of digital technologies. Integrating the dimensions of digital literacy, interpretive patterns and technology acceptance enables the development of a roadmap with clear prerequisites for initiating a digital transformation process in the neighborhood. During the process, maturity is measured at different points in time and compared with changes in the aforementioned dimensions to ensure a sustainable transformation. Participation, co-creation and co-production are essential concepts for a successful and inclusive digital transformation in the neighborhood context. This interdisciplinary maturity model helps to improve the assessment and monitoring of sustainable digital transformation processes in smart residential quarters. It enables a more comprehensive recording of the factors that influence the success of such processes and supports the development of targeted measures to promote digital transformation in the neighborhood context.

Keywords: digital transformation, interdisciplinary, maturity model, neighborhood

Procedia PDF Downloads 51
112 Development and Evaluation of a Cognitive Behavioural Therapy Based Smartphone App for Low Moods and Anxiety

Authors: David Bakker, Nikki Rickard

Abstract:

Smartphone apps hold immense potential as mental health and wellbeing tools. Support can be made easily accessible and can be used in real-time while users are experiencing distress. Furthermore, data can be collected to enable machine learning and automated tailoring of support to users. While many apps have been developed for mental health purposes, few have adhered to evidence-based recommendations and even fewer have pursued experimental validation. This paper details the development and experimental evaluation of an app, MoodMission, that aims to provide support for low moods and anxiety, help prevent clinical depression and anxiety disorders, and serve as an adjunct to professional clinical supports. MoodMission was designed to deliver cognitive behavioural therapy for specifically reported problems in real-time, momentary interactions. Users report their low moods or anxious feelings to the app along with a subjective units of distress scale (SUDS) rating. MoodMission then provides a choice of 5-10 short, evidence-based mental health strategies called Missions. Users choose a Mission, complete it, and report their distress again. Automated tailoring, gamification, and in-built data collection for analysis of effectiveness were also included in the app’s design. The development process involved the construction of an evidence-based behavioural plan, design of the app, building and testing procedures, feedback-informed changes, and a public launch. A randomized controlled trial (RCT) was conducted comparing MoodMission to two other apps and a waitlist control condition. Participants completed measures of anxiety, depression, well-being, emotional self-awareness, coping self-efficacy and mental health literacy at the start of their app use and 30 days later. At the time of submission (November 2016) over 300 participants had taken part in the RCT. Data analysis will begin in January 2017. At the time of this submission, MoodMission has over 4000 users. A repeated-measures ANOVA of 1390 completed Missions reveals that SUDS (0-10) ratings were significantly reduced from pre-Mission ratings (M = 6.20, SD = 2.39) to post-Mission ratings (M = 4.93, SD = 2.25), F(1,1389) = 585.86, p < .001, ηp² = .30. This effect was consistent across both low moods and anxiety. Preliminary analyses of the data from the outcome measures surveys reveal improvements across mental health and wellbeing measures as a result of using the app over 30 days, including a significant increase in coping self-efficacy, F(1,22) = 5.91, p = .024, ηp² = .21. Complete results from the RCT in which MoodMission was evaluated will be presented, as will results from the continuous outcome data being recorded by MoodMission. MoodMission was successfully developed and launched, and preliminary analyses suggest that it is an effective mental health and wellbeing tool. In addition to its clinical applications, the app holds promise as a research tool for conducting component analyses of psychological therapies and overcoming the constraints of laboratory-based studies. The support provided by the app is discreet, tailored, evidence-based, and transcends barriers of stigma, geographic isolation, financial limitations, and low health literacy.
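
The reported effect sizes are partial eta squared values, which follow from the F statistics via the standard identity ηp² = F·df₁ / (F·df₁ + df₂). A quick illustrative check of the quoted figures (arithmetic only, not the authors' analysis code):

```python
# Partial eta squared from an F statistic: eta_p^2 = F*df1 / (F*df1 + df2).
# Illustrative arithmetic check of the reported effect sizes.
def partial_eta_squared(F: float, df1: int, df2: int) -> float:
    return F * df1 / (F * df1 + df2)

print(partial_eta_squared(585.86, 1, 1389))  # ~0.30, the Mission SUDS effect
print(partial_eta_squared(5.91, 1, 22))      # ~0.21, coping self-efficacy
```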

Keywords: anxiety, app, CBT, cognitive behavioural therapy, depression, eHealth, mission, mobile, mood, MoodMission

Procedia PDF Downloads 251
111 Integrated Services Hub for Exploration and Production Industry: An Indian Narrative

Authors: Sunil Arora, Anitya Kumar Jena, S. A. Ravi

Abstract:

India is at the cusp of major reforms in the hydrocarbon sector. The oil and gas sector is highly liberalised to attract private investment and to increase domestic production. Major hydrocarbon Exploration & Production (E&P) activity in India has traditionally been undertaken by government-owned companies, but with the easing and reworking of hydrocarbon exploration licensing policies, private players have also joined the fray in the drive towards achieving energy security for India. The Government of India has introduced policy and administrative reforms, including the Hydrocarbon Exploration and Licensing Policy (HELP), Sagarmala (port-led development with coastal connectivity) and the Development of Small Discovered Fields, with the intention of creating industry-friendly conditions for investment, improving ease of doing business and reducing gestation periods. To harness the potential resources of deepwater and ultra-deepwater regions, High Pressure-High Temperature (HP-HT) regions, Coal Bed Methane (CBM) and shale hydrocarbons, besides gas hydrates, participation shall be required from both domestic and international players. Companies engaged in E&P activities in India have traditionally managed through their captive supply bases, but with crude prices under the hammer, the need is increasingly felt to outsource non-core activities. This necessitates the establishment of robust support services catering to the E&P industry, which are currently non-existent, to meet these burgeoning challenges. This paper outlines an agenda for creating an Integrated Services Hub (ISH) under a Special Economic Zone (SEZ) to facilitate the complete gamut of non-core support activities of the E&P industry. This responsive and proficient multi-usage facility becomes viable through better resource utilization and economies of scale, allowing it to offer cost-effective services. The concept envisages companies bringing in their core technical expertise while outsourcing hardware and peripheral activities entirely to the ISH. The Integrated Services Hub, complying with best-in-class global standards, shall typically provide the following services under a single-window solution, but not limited to: a) logistics, including supply base operations, transport of manpower and material, helicopters, offshore supply vessels, warehousing, inventory management, sourcing and procurement activities, international freight forwarding, domestic trucking, customs clearance services, etc.; b) a trained/experienced pool of competent manpower (technical, security, etc.) available for engagement by companies on either a short- or long-term basis depending upon requirements, with provisions for meeting any training needs; c) specialized services, through tie-ups with globally leading companies, for crisis management, mud/cement, fishing and floating dry-dock, besides the provision of workshop, repair and testing facilities, etc.; d) tools and tackle, including drill strings, etc. A pre-established Integrated Services Hub shall facilitate an early start-up of activities with substantial savings in timelines. This model can be replicated in other parts of the world to expedite E&P activities.

Keywords: integrated service hub, India, oil and gas, offshore supply base

Procedia PDF Downloads 128
110 Engineering Design of a Chemical Launcher: An Interdisciplinary Design Activity

Authors: Mei Xuan Tan, Gim-Yang Maggie Pee, Mei Chee Tan

Abstract:

Academic performance, in the form of scoring high grades in enrolled subjects, is not the only significant trait in achieving success. Engineering graduates with experience of working on hands-on projects in a team setting are highly sought after in industry upon graduation. Such projects are typically real-world problems that require the integration and application of knowledge and skills from several disciplines. In a traditional university setting, subjects are taught in a siloed manner with no cross-participation from other departments or disciplines. This may lead to knowledge compartmentalization, with students unable to understand and connect the relevance and applicability of a subject. University instructors thus see integration across disciplines as a challenging task as they aim to better prepare students for understanding and solving problems in work or further studies. To improve students’ academic performance and to cultivate skills such as critical thinking, there has been a gradual uptake of active learning approaches in introductory science and engineering courses, where lecturing is traditionally the main mode of instruction. This study discusses the implementation of, and experience with, a hands-on, interdisciplinary project that involves all four core subjects taught during the term at the Singapore University of Technology and Design (SUTD). At SUTD, an interdisciplinary design activity, named 2D, is integrated into the curriculum to help students reinforce the concepts learnt. A student enrolled at SUTD experiences his or her first 2D in Term 1. This activity, which spans one week (Week 10 of Term 1), highlights the application of chemistry, physics, mathematics, and the humanities, arts and social sciences (HASS) in designing an engineering product solution. The activity theme for the Term 1 2D revolved around “work and play”. Students, in teams of 4 or 5, used a scaled-down model of a chemical launcher to launch a projectile across the room. It involved a small-scale combustion reaction between ethanol (a highly volatile fuel) and oxygen. This reaction generated a sudden and large increase in gas pressure in a closed chamber, resulting in rapid gas expansion and ejection of the projectile out of the launcher. Students discussed and explored the meaning of play in their lives in the HASS class, while the engineering aspects of a combustion system launching an object, based on the underlying principles of energy conversion and projectile motion, were revisited during the chemistry and physics classes, respectively. Numerical solutions for the distance travelled by the projectile, taking drag forces into account, were developed during the mathematics classes. At the end of the activity, students had developed skills in report writing, data collection and analysis. Specific to this 2D activity, students gained an understanding and appreciation of the application and interdisciplinary nature of science, engineering and HASS. More importantly, students were exposed to design and problem solving, where human interaction and discussion are important yet challenging in a team setting.
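
A minimal sketch of the kind of numerical range calculation developed in the mathematics classes, integrating 2D projectile motion with quadratic drag; all launch parameters and the drag coefficient below are invented for illustration:

```python
# Projectile range with quadratic air drag via explicit Euler integration.
# Parameters (launch speed, mass, drag coefficient, area) are illustrative.
import math

def range_with_drag(v0=20.0, angle_deg=45.0, m=0.05, Cd=0.47,
                    rho=1.2, A=7e-4, dt=1e-4) -> float:
    """Integrate 2D motion with drag force F = -0.5*rho*Cd*A*|v|*v."""
    k = 0.5 * rho * Cd * A / m
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while y >= 0.0:                      # stop when projectile lands
        v = math.hypot(vx, vy)
        vx -= k * v * vx * dt            # drag decelerates horizontal motion
        vy -= (9.81 + k * v * vy) * dt   # gravity plus drag vertically
        x += vx * dt
        y += vy * dt
    return x

print(f"range with drag: {range_with_drag():.1f} m")
print(f"ideal (no drag): {20.0**2 / 9.81:.1f} m")   # v0^2*sin(2*45deg)/g
```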

Keywords: active learning, collaborative learning, first year undergraduate, interdisciplinary, STEAM

Procedia PDF Downloads 103
109 Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization

Authors: Soheila Sadeghi

Abstract:

Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
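
As a sketch of the core technique, an ANN whose weights are tuned by global-best PSO against an MSE objective, the following illustrative code uses synthetic data and assumed hyperparameters rather than the study's dataset or configuration:

```python
# Minimal PSO-tuned ANN regressor. Synthetic data; every hyperparameter
# here is an illustrative assumption, not the study's configuration.
import numpy as np

rng = np.random.default_rng(42)
X = rng.random((200, 3))              # e.g. estimate, resources, progress
y = 2*X[:, 0] + 0.5*X[:, 1]**2 - X[:, 2] + 0.05*rng.standard_normal(200)

H = 8                                 # hidden units
DIM = 3*H + H + H + 1                 # W1, b1, W2, b2 flattened

def predict(theta, X):
    W1 = theta[:3*H].reshape(3, H)
    b1 = theta[3*H:4*H]
    W2 = theta[4*H:5*H]
    b2 = theta[5*H]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(theta):
    return np.mean((predict(theta, X) - y) ** 2)

# Plain global-best PSO over the flattened weight vector.
n_particles, iters, w, c1, c2 = 30, 300, 0.7, 1.5, 1.5
pos = rng.standard_normal((n_particles, DIM))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(f"best MSE after PSO: {pbest_f.min():.4f}")
```

In practice, a gradient-trained ANN with PSO used only for hyperparameter search is a common variant of this setup; the sketch optimizes the weights directly simply because it is the most compact way to show the PSO-ANN coupling.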

Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction

Procedia PDF Downloads 18
108 Gas-Phase Noncovalent Functionalization of Pristine Single-Walled Carbon Nanotubes with 3D Metal(II) Phthalocyanines

Authors: Vladimir A. Basiuk, Laura J. Flores-Sanchez, Victor Meza-Laguna, Jose O. Flores-Flores, Lauro Bucio-Galindo, Elena V. Basiuk

Abstract:

Noncovalent nanohybrid materials combining carbon nanotubes (CNTs) with phthalocyanines (Pcs) are the subject of increasing research effort, with a particular emphasis on the design of new heterogeneous catalysts, efficient organic photovoltaic cells, lithium batteries, gas sensors and field-effect transistors, among other possible applications. The possibility of using unsubstituted Pcs for CNT functionalization is very attractive due to their very moderate cost and easy commercial availability. Unfortunately, however, the deposition of unsubstituted Pcs onto nanotube sidewalls through traditional liquid-phase protocols turns out to be very problematic due to the extremely poor solubility of Pcs. On the other hand, the unsubstituted free-base H₂Pc phthalocyanine ligand, as well as many of its transition metal complexes, exhibits very high thermal stability and considerable volatility under reduced pressure, which opens the possibility of their physical vapor deposition onto solid surfaces, including nanotube sidewalls. In the present work, we show the possibility of simple, fast and efficient noncovalent functionalization of single-walled carbon nanotubes (SWNTs) with a series of 3d metal(II) phthalocyanines Me(II)Pc, where Me = Co, Ni, Cu, and Zn. The functionalization can be performed in a temperature range of 400-500 °C under moderate vacuum and requires only about 2-3 h. The functionalized materials obtained were characterized by means of Fourier-transform infrared (FTIR), Raman, UV-visible and energy-dispersive X-ray spectroscopy (EDS), scanning and transmission electron microscopy (SEM and TEM, respectively) and thermogravimetric analysis (TGA). TGA suggested that the Me(II)Pc weight content is 30%, 17% and 35% for NiPc, CuPc, and ZnPc, respectively (CoPc exhibited anomalous thermal decomposition behavior). The above values are consistent with those estimated from EDS spectra, namely 24-39%, 27-36% and 27-44% for CoPc, CuPc, and ZnPc, respectively. A strong increase in the intensity of the D band in the Raman spectra of the SWNT‒Me(II)Pc hybrids, as compared to that of pristine nanotubes, implies very strong interactions between the Pc molecules and the SWNT sidewalls. Very high absolute values of binding energies of 32.46-37.12 kcal/mol, together with the distribution patterns of the highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO, respectively), calculated with density functional theory using the Perdew-Burke-Ernzerhof general gradient approximation correlation functional in combination with Grimme’s empirical dispersion correction (PBE-D) and the double numerical basis set (DNP), also suggest that the interactions between the Me(II) phthalocyanines and nanotube sidewalls are very strong. The authors thank the National Autonomous University of Mexico (grant DGAPA-IN200516) and the National Council of Science and Technology of Mexico (CONACYT, grant 250655) for financial support. The authors are also grateful to Dr. Natalia Alzate-Carvajal (CCADET of UNAM), Eréndira Martínez (IF of UNAM) and Iván Puente-Lee (Faculty of Chemistry of UNAM) for technical assistance with the FTIR and TGA measurements and TEM imaging, respectively.

Keywords: carbon nanotubes, functionalization, gas-phase, metal(II) phthalocyanines

Procedia PDF Downloads 102
107 Finite Element Analysis of Mini-Plate Stabilization of Mandible Fracture

Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski

Abstract:

The aim of the presented investigation is to recognize possible mechanical issues of the mini-plate connection used to treat mandible fractures and to assess the impact of different factors on the stresses and displacements within the bone-stabilizer system. The mini-plate osteosynthesis technique is a common type of internal fixation using metal plates connected to the fractured bone parts by a set of screws. Two types of plate application methodology used by maxillofacial surgeons were investigated in this work; these patterns differ in the location and number of plates. The bone geometry was modeled on the basis of computed tomography scans of a hospitalized patient, taken just after mini-plate application. The solid volume geometry, consisting of cortical and cancellous bone, was created from the obtained point cloud. The temporomandibular joint and muscle system were simulated to imitate the behavior of the real masticatory system. Finite element meshing and analysis were performed with the ANSYS software. To simulate realistic connection behavior, nonlinear contact conditions were used between the connecting elements and the bones. The influence of initial compression of the connected bone parts, or of a gap between them, was analyzed. Nonlinear material properties for the bone tissues and an elastic-plastic model of the titanium alloy were used. Three loading cases were investigated, assuming a force of magnitude 100 N acting on the left molars, the right molars or the incisors. The stress distribution within the connecting plate shows that compression of the bone parts in the connection results in high stress concentrations in the plate and the screws; however, the maximum stress levels do not exceed the yield limit of the titanium. There are no significant differences between the negative-offset (gap) and no-offset conditions. The location of the external force influences the magnitude of stresses around both the plate and the bone parts. The two-plate system generally gives lower von Mises stress under the same loading than the one-plate approach. The von Mises stress distribution within the cortical bone shows a reduction of the high-stress field for the cases without compression (neutral initial contact). With initial prestressing, there is a visible and significant stress increase around the fixing holes at the bottom mini-plate due to the assembly stress. This local stress concentration may be a cause of bone destruction in those regions. The performed calculations show that the bone-mini-plate system is able to properly stabilize the fractured mandible bone. There is a visibly strong dependency between the mini-plate location and the stress distribution within the stabilizer structure and the surrounding bone tissue. The results (stresses within the bone tissues and within the devices, relative displacements of the bone parts at the interface) corresponding to different models of the connection provide a basis for the mechanical optimization of mini-plate connections. The results of the performed numerical simulations were compared with clinical observations. They provide information helpful for a better understanding of load transfer in the mandible with the stabilizer and for improving stabilization techniques.

Keywords: finite element modeling, mandible fracture, mini-plate connection, osteosynthesis

Procedia PDF Downloads 223
106 Urban Dynamics Modelling of Mixed Land Use for Sustainable Urban Development in Indian Context

Authors: Rewati Raman, Uttam K. Roy

Abstract:

One of the main adversaries of city planning in present times is the ever-expanding problem of urbanization and the antagonistic issues accompanying it. The prevalent challenges of urbanization, such as population growth, urban sprawl, poverty, inequality, pollution and congestion, call for reforms in the urban fabric as well as in planning theory and practice. Land use planning, one of the paradigms of city planning, has been the major instrument for the spatial planning of cities and regions in India. Zoning-regulation-based land use planning, in the form of land use and development control plans (LUDCP) and development control regulations (DCR), has been considered a mainstream guiding principle in land use planning for decades. In spite of the many advantages of such zoning-based regulation, it has over time been critiqued by scholars for its limitations: isolation and lack of vitality, inconvenience for businesses in terms of proximity to residences and low operating costs, an unsuitable environment for small investments, longer travel distances to facilities and amenities and thereby higher expenditure, safety issues, etc. Mixed land use has been advocated by researchers as a tool to avoid such limitations in city planning. In addition, mixed land use can offer many advantages, such as housing variety and density, the creation of an economic blend of compatible land uses, compact development, stronger neighborhood character, walkability and the generation of jobs. Conversely, mixed land use beyond a suitable balance can also bring disadvantages such as traffic congestion, encroachments, very high-density housing leading to slum-like conditions, parking spill-over, non-residential uses operating on residential premises while paying lower taxes, chaos hampering residential privacy, and pressure on existing infrastructure facilities. This research aims at studying and outlining, through modeling tools, the various challenges and potentials of mixed land use zoning as a competent instrument for city planning in the present urban scenario. The methodology adopted in this paper involves the study of a mixed land use neighborhood in India, the identification of indicators and parameters related to its extent and spatial pattern, and the subsequent use of system dynamics as a modeling tool for simulation. The findings from this analysis helped in identifying the various advantages and challenges associated with the dynamic nature of a mixed-use urban settlement. The results also confirmed the hypothesis that mixed-use neighborhoods are catalysts for employment generation and socioeconomic gains while improving vibrancy, health, safety, and security. It is also seen that certain challenges related to chaos, lack of privacy and pollution prevail in mixed-use neighborhoods; these can be mitigated by varying the percentage of mixing as per need, ensuring the compatibility of adjoining uses, institutional interventions in the form of policies, neighborhood micro-climatic interventions, etc. This paper therefore gives a consolidated and holistic framework, and a quantified outcome, pertaining to the extent and spatial pattern of mixed land use that should be adopted to ensure sustainable urban planning.
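
To make the system-dynamics step concrete, here is a toy stock-flow sketch of a mixed-use neighborhood with two coupled stocks; the structure, indicator names and rates are invented for illustration and do not reproduce the paper's calibrated model:

```python
# Toy system-dynamics sketch: two stocks (residents, local jobs) coupled
# through the commercial floor-space share. All rates are invented.
def simulate(mix_share=0.3, years=30, dt=0.25):
    residents, jobs = 10_000.0, 1_500.0
    for _ in range(int(years / dt)):
        vibrancy = min(jobs / residents, 1.0)           # locally accessible jobs
        congestion = mix_share * residents / 15_000.0   # crowding from mixing
        d_res = residents * (0.02 * vibrancy - 0.01 * congestion)
        d_jobs = 500.0 * mix_share - 0.05 * jobs        # mixing attracts business
        residents += d_res * dt                         # Euler integration
        jobs += d_jobs * dt
    return residents, jobs

for share in (0.1, 0.3, 0.5):
    r, j = simulate(mix_share=share)
    print(f"mix share {share:.0%}: residents ~{r:,.0f}, local jobs ~{j:,.0f}")
```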

Keywords: mixed land use, sustainable development, system dynamics analysis, urban dynamics modelling

Procedia PDF Downloads 157
105 Development and Implementation of An "Electric Island" Monitoring Infrastructure for Promoting Energy Efficiency in Schools

Authors: Vladislav Grigorovitch, Marina Grigorovitch, David Pearlmutter, Erez Gal

Abstract:

The concept of an “electric island” involves achieving a balance between the self-generation capacity of each educational institution and its energy consumption demand. Photovoltaic (PV) solar systems installed on the roofs of educational buildings are a common way to absorb the available solar energy and generate electricity for self-consumption and even for return to the grid. The main objective of this research is to develop and implement an “electric island” monitoring infrastructure for promoting energy efficiency in educational buildings. A microscale monitoring methodology will be developed to provide a platform for estimating energy consumption performance classified by rooms and subspaces, rather than the more common macroscale monitoring of the whole building. The monitoring platform will be established at the experimental sites, enabling estimation and further analysis of a variety of environmental and physical conditions. For each building, separate measurement configurations will be applied, taking into account the specific requirements, restrictions, location and infrastructure issues. The direct results of the measurements will be analyzed to provide a deeper understanding of the impact of environmental conditions and sustainable construction standards, not only on the energy demand of public buildings, but also on the energy consumption habits of the children who study in those schools and of the educational and administrative staff responsible for providing thermal comfort conditions and a healthy studying atmosphere for the children. The monitoring methodology being developed in this research provides online access to real-time measurement data from any mobile phone or computer simply by browsing the dedicated website, offering policy makers powerful tools for better decision-making while developing PV production infrastructure to achieve “electric islands” in educational buildings. A detailed measurement configuration was designed based on the specific conditions and restrictions of each of the pilot buildings. The monitoring and analysis methodology covers a large variety of environmental parameters inside and outside the schools, to investigate the impact of environmental conditions both on the energy performance of the school and on the educational abilities of the children. Indoor measurements are mandatory to acquire the energy consumption data, temperature, humidity, carbon dioxide and other air quality conditions in different parts of the building. In addition, we aim to study users' awareness of energy considerations and thus the impact on their energy consumption habits. The monitoring of outdoor conditions is vital for the proper design of the off-grid energy supply system and validation of its sufficient capacity. The suggested outcomes of this research include: (1) both experimental sites are designed to have PV production and storage capabilities; (2) an online information feedback platform is developed, providing consumer-dedicated information to academic researchers, municipality officials, educational staff and students; (3) an environmental work path is designed for educational staff regarding optimal conditions and efficient hours for operating air conditioning, natural ventilation, closing of blinds, etc.
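
The “electric island” balance can be illustrated with a simple hourly generation-versus-load check; the profiles below are invented placeholders, not measurements from the pilot buildings:

```python
# Hourly PV generation vs. consumption for one school day (24 values each).
# All profile numbers are invented placeholders, not pilot-site data.
pv_kw = [0]*6 + [2, 8, 15, 22, 27, 30, 30, 27, 22, 15, 8, 2] + [0]*6
load_kw = [4]*7 + [18]*10 + [6]*7    # night base load, school hours, evening

surplus = deficit = 0.0
for gen, use in zip(pv_kw, load_kw):
    net = gen - use
    if net >= 0:
        surplus += net               # could be stored or exported
    else:
        deficit += -net              # must come from storage or the grid

self_sufficiency = 1 - deficit / sum(load_kw)
print(f"daily PV: {sum(pv_kw)} kWh, load: {sum(load_kw)} kWh")
print(f"surplus {surplus:.0f} kWh, deficit {deficit:.0f} kWh, "
      f"self-sufficiency {self_sufficiency:.0%}")
```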

Keywords: sustainability, electric island, IoT, smart building

Procedia PDF Downloads 156