Search results for: foreign real estate investment
121 Active Filtration of Phosphorus in Ca-Rich Hydrated Oil Shale Ash Filters: The Effect of Organic Loading and Form of Precipitated Phosphatic Material
Authors: Päärn Paiste, Margit Kõiv, Riho Mõtlep, Kalle Kirsimäe
Abstract:
For small-scale wastewater management, treatment wetlands (TWs) can be used as a low-cost alternative to conventional treatment facilities. However, the P removal capacity of TW systems is usually problematic. P removal in TWs depends mainly on the physico-chemical and hydrological properties of the filter material. The highest P removal efficiency has been shown through Ca-phosphate precipitation (i.e., active filtration) in Ca-rich alkaline filter materials, e.g., industrial by-products such as hydrated oil shale ash (HOSA) and metallurgical slags. In this contribution we report preliminary results of a full-scale TW system using HOSA material for P removal from municipal wastewater at the Nõo site, Estonia. The main goals of this ongoing project are to evaluate: a) the long-term P removal efficiency of HOSA using real wastewater; b) the effect of a high organic loading rate; c) the effect of variable P loading on the P removal mechanism (adsorption/direct precipitation); and d) the form and composition of the phosphate precipitates. An onsite full-scale experiment with two concurrent filter systems for the treatment of municipal wastewater was established in September 2013. Each system's pretreatment steps include a septic tank (2 m²) and vertical down-flow LECA filters (3 m² each), followed by horizontal subsurface HOSA filters (effective volume 8 m³ each). The overall organic and hydraulic loading rates of both systems are the same; however, the first system is operated under a stable hydraulic loading regime and the second under a variable loading regime that imitates the wastewater production of an average household. Piezometers for water sampling and perforated containers for filter material sampling were incorporated inside the filter beds to allow continuous in-situ monitoring. During the first 18 months of operation, the median removal efficiency (inflow to outflow) of both systems was over 99% for TP, 93% for COD, and 57% for TN. However, we observed significant differences between samples collected at different points inside the filter systems. In both systems, we observed the development of preferential flow paths and zones with high and low loadings. The filters show the formation and gradual advance of a "dead" zone along the flow path (a zone of saturated filter material characterized by ineffective removal rates), which develops more rapidly in the system working under the variable loading regime. The formation of the "dead" zone is accompanied by the growth of organic substances on the filter material particles that evidently inhibit P removal. Phase analysis of the used filter materials by X-ray diffraction reveals the formation of minor amounts of amorphous Ca-phosphate precipitates. This finding is supported by ATR-FTIR and SEM-EDS measurements, which also reveal Ca-phosphate and authigenic carbonate precipitation. Our first experimental results demonstrate that organic pollution and the loading regime significantly affect the performance of hydrated ash filters. The material analyses also show that P is incorporated into a carbonate-substituted hydroxyapatite phase.
Keywords: active filtration, apatite, hydrated oil shale ash, organic pollution, phosphorus
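As a rough illustration of the inflow-to-outflow removal-efficiency figures reported above, a minimal sketch in Python; the concentration values are hypothetical and stand in for measured TP data:

```python
import statistics

def removal_efficiency(c_in, c_out):
    """Percent removal from inflow to outflow concentration (mg/L)."""
    return 100.0 * (c_in - c_out) / c_in

# Hypothetical paired inflow/outflow TP concentrations (mg/L), not measured data.
tp_in = [6.2, 5.8, 7.1, 6.5]
tp_out = [0.04, 0.05, 0.03, 0.06]

efficiencies = [removal_efficiency(i, o) for i, o in zip(tp_in, tp_out)]
print(f"Median TP removal: {statistics.median(efficiencies):.1f}%")  # around 99%
```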
Procedia PDF Downloads 274
120 Global Winners versus Local Losers: Globalization, Identity and Tradition in Spanish Club Football
Authors: Jim O'Brien
Abstract:
Contemporary global representation and consumption of La Liga across a plethora of media platforms has had significant implications for the historical, political and cultural developments which shaped Spanish club football. It has established and reinforced a hierarchy of a small number of teams belonging, or aspiring to belong, to a cluster of global elite clubs seeking to imitate the blueprint of the English Premier League in respect of corporate branding and marketing, in order to secure a global fan base through success and exposure in La Liga itself and in the Champions League. The synthesis between globalization, global sport and the status of high-profile clubs has created radical change within the folkloric iconography of Spanish football. The main focus of this paper is to critically evaluate the consequences of globalization for the rich tapestry at the core of the game's distinctive history in Spain. The seminal debate underpinning the study considers whether the divergent aspects of globalization have acted as a malevolent force, eroding tradition, causing financial meltdown and reducing much of the fabric of club football to the status of bystanders, or have promoted a renaissance of these traditions, securing their legacies through new fans and audiences. The study draws on extensive sources on the history, politics and culture of Spanish football, in both English and Spanish. It also uses primary and archive material derived from interviews and fieldwork undertaken with scholars, media professionals and club representatives in Spain. The paper has four main themes. Firstly, it contextualizes the key historical, political and cultural forces which shaped the landscape of Spanish football from the late nineteenth century. The seminal notions of region, locality and cultural divergence are pivotal to this discourse. The study then considers the relationship between football, ethnicity and identity as a barometer of continuity and change, suggesting that tradition is being reinvented and re-framed to reflect the shifting demographic and societal patterns within the Spanish state. Following on from this, consideration is given to the paradoxical function of 'El Clásico' and the dominant duopoly of the FC Barcelona-Real Madrid axis in both eroding tradition in the global nexus of football's commodification and in protecting historic political rivalries. To most global consumers of La Liga, the mega-spectacle and hyperbole of 'El Clásico' are the essence of Spanish football, with cultural misrepresentation and distortion catapulting the event to the global media audience. Finally, the paper examines La Liga as a sporting phenomenon in which elite clubs, cult managers and galácticos serve as commodities on the altar of mass consumption in football's global entertainment matrix. These processes accentuate a homogeneous mosaic of cultural conformity which obscures local, regional and national identities and paradoxically fuses the global with the local to maintain the distinctive hue of La Liga, as witnessed by the extraordinary successes of Atlético Madrid and Eibar in recent seasons.
Keywords: Spanish football, globalization, cultural identity, tradition, folklore
Procedia PDF Downloads 301
119 Possible Involvement of DNA-methyltransferase and Histone Deacetylase in the Regulation of Virulence Potential of Acanthamoeba castellanii
Authors: Yi H. Wong, Li L. Chan, Chee O. Leong, Stephen Ambu, Joon W. Mak, Priyadashi S. Sahu
Abstract:
Background: Acanthamoeba is a free-living opportunistic protist which is ubiquitously distributed in the environment. Virulent Acanthamoeba can cause fatal encephalitis in immunocompromised patients and potentially blinding keratitis in immunocompetent contact lens wearers. Approximately 24 species have been identified, but only A. castellanii, A. polyphaga and A. culbertsoni are commonly associated with human infections. To date, the precise molecular basis for Acanthamoeba pathogenesis remains unclear. Previous studies reported that Acanthamoeba virulence can be diminished through prolonged axenic culture but revived through serial mouse passages. As no clear explanation for this reversible pathogenesis has been established, we postulate here that the epigenetic regulators, DNA-methyltransferases (DNMT) and histone deacetylases (HDAC), could be involved in granting the virulence plasticity of Acanthamoeba spp. Methods: Four rounds of mouse passages were conducted to revive the virulence potential of the virulence-attenuated Acanthamoeba castellanii strain (ATCC 50492). Briefly, each mouse (n=6/group) was inoculated intraperitoneally with Acanthamoeba cells (2×10⁵ trophozoites/mouse) and incubated for 2 months. Acanthamoeba cells were isolated from infected mouse organs by the culture method and subjected to the subsequent mouse passage. In vitro cytopathic, encystment and gelatinolytic assays were conducted to evaluate the virulence characteristics of the Acanthamoeba isolates from each passage. PCR primers targeting the two members (DNMT1 and DNMT2) and five members (HDAC1 to HDAC5) of the DNMT and HDAC gene families, respectively, were custom designed. Quantitative real-time PCR (qPCR) was performed to detect and quantify the relative expression of the two gene families in each Acanthamoeba isolate. Beta-tubulin of A. castellanii (GenBank accession no: XP_004353728) was included as a housekeeping gene for data normalisation. PCR mixtures were also analyzed by electrophoresis for amplicon detection. All statistical analyses were performed using the paired one-tailed Student's t test. Results: Our pathogenicity tests showed that the virulence-reactivated Acanthamoeba had a higher degree of cytopathic effect on Vero cells, better resistance to encystment challenge and higher gelatinolytic activity, which was catalysed by serine protease. The qPCR assay showed that DNMT1 expression was significantly higher in the virulence-reactivated than in the virulence-attenuated Acanthamoeba strain (p ≤ 0.01). The specificity of the primers targeting DNMT1 was confirmed by sequence analysis of the PCR amplicons, which showed a 97% similarity to the published DNA-methyltransferase gene of A. castellanii (GenBank accession no: XM_004332804.1). Of the five primer pairs targeting the HDAC family genes, only HDAC4 expression was significantly different between the two variant strains. In contrast to DNMT1, HDAC4 expression was much higher in the virulence-attenuated Acanthamoeba strain. Conclusion: Our mouse passages successfully restored the virulence of the attenuated strain. Our findings suggest that DNA-methyltransferase (DNMT1) and histone deacetylase (HDAC4) expression is associated with the virulence potential of Acanthamoeba spp.
Keywords: Acanthamoeba, DNA-methyltransferase, histone deacetylase, virulence-associated proteins
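Relative qPCR expression normalised to a housekeeping gene, as described above, is commonly computed with the 2^(-ΔΔCt) method. A minimal sketch follows; the abstract does not report raw Ct values, so the numbers here are hypothetical and purely illustrative:

```python
def relative_expression(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene by the 2^(-ddCt) method.

    Ct values are normalised to a housekeeping gene (here beta-tubulin)
    in both the test (virulence-reactivated) and control
    (virulence-attenuated) samples.
    """
    d_ct_test = ct_target_test - ct_ref_test
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_test - d_ct_ctrl
    return 2 ** (-dd_ct)

# Hypothetical Ct values for DNMT1 -- illustrative only, not study data.
fold = relative_expression(22.1, 18.0, 24.6, 18.2)
print(f"DNMT1 fold change (reactivated vs attenuated): {fold:.2f}")
```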
Procedia PDF Downloads 289
118 Protocol for Dynamic Load Distributed Low Latency Web-Based Augmented Reality and Virtual Reality
Authors: Rohit T. P., Sahil Athrij, Sasi Gopalan
Abstract:
Currently, the content entertainment industry is dominated by mobile devices. As trends slowly shift towards Augmented/Virtual Reality applications, the computational demands on these devices are increasing exponentially, and we are already reaching the limits of hardware optimizations. This paper proposes a software solution to this problem. By leveraging the capabilities of cloud computing, we can offload work from mobile devices to dedicated rendering servers that are far more powerful. But this introduces the problem of latency. This paper introduces a protocol that can achieve a high-performance, low-latency Augmented/Virtual Reality experience. There are two parts to the protocol. 1) In-flight compression: The main cause of latency in the system is the time required to transmit the camera frame from client to server. The round-trip time is directly proportional to the amount of data transmitted, so it can be reduced by compressing the frames before sending. Standard compression algorithms like JPEG yield only a minor size reduction here. Since the images to be compressed are consecutive camera frames, there are few changes between two consecutive images, so inter-frame compression is preferred. Inter-frame compression can be implemented efficiently using WebGL, but WebGL implementations limit the precision of floating-point numbers to 16 bits on most devices. This can introduce noise to the image due to rounding errors, which add up over time. This can be solved using an improved inter-frame compression algorithm. The algorithm detects changes between frames and reuses unchanged pixels from the previous frame, eliminating the need for floating-point subtraction and thereby cutting down on noise. Change detection is also improved drastically by taking the weighted average difference of pixels instead of the absolute difference. The kernel weights for this comparison can be fine-tuned to match the type of image to be compressed. 2) Dynamic load distribution: Conventional cloud computing architectures work by offloading as much work as possible to the servers, but this approach can cause a hit on bandwidth and server costs. The most optimal solution is obtained when the device utilizes 100% of its resources and the rest is done by the server. The protocol balances the load between the server and the client by doing a fraction of the computing on the device, depending on the power of the device and network conditions, and is responsible for dynamically partitioning the tasks. Special flags are used to communicate the workload fraction between the client and the server and are updated at a constant interval of time (or frames). The whole protocol is designed to be client agnostic. Flags are available to the client for resetting the frame, indicating latency, switching mode, etc. The server can react to client-side changes on the fly and adapt accordingly by switching to different pipelines. The server is designed to effectively spread the load and thereby scale horizontally. This is achieved by isolating client connections into different processes.
Keywords: 2D kernelling, augmented reality, cloud computing, dynamic load distribution, immersive experience, mobile computing, motion tracking, protocols, real-time systems, web-based augmented reality application
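A minimal NumPy sketch of the weighted-difference change detection and pixel reuse described in part 1. It assumes grayscale frames as 2D arrays; the 3x3 kernel weights and the change threshold are illustrative stand-ins for the fine-tuned values the protocol would use:

```python
import numpy as np
from scipy.ndimage import convolve

# Hypothetical 3x3 weighting kernel; in the protocol this would be
# fine-tuned to the type of image being compressed.
KERNEL = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=np.float32)
KERNEL /= KERNEL.sum()

def changed_mask(prev, curr, threshold=8.0):
    """Flag pixels whose weighted neighbourhood difference exceeds a threshold.

    Unchanged pixels are reused from the previous frame, so only the
    changed ones need to be transmitted.
    """
    diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    weighted = convolve(diff, KERNEL, mode="nearest")
    return weighted > threshold

def compress(prev, curr):
    mask = changed_mask(prev, curr)
    return mask, curr[mask]          # send only the changed pixel values

def decompress(prev, mask, values):
    out = prev.copy()
    out[mask] = values               # reuse unchanged pixels from prev
    return out
```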
Procedia PDF Downloads 73
117 Smart Mobility Planning Applications in Meeting the Needs of the Urbanization Growth
Authors: Caroline Atef Shoukry Tadros
Abstract:
Massive urbanization growth threatens the sustainability of cities and the quality of city life. This has raised the need for an alternate model of sustainability: we need to plan future cities in a smarter way, with smarter mobility. Smart mobility planning applications are solutions that use digital technologies and infrastructure advances to improve the efficiency, sustainability, and inclusiveness of urban transportation systems. They can contribute to meeting the needs of urbanization growth by addressing the challenges of traffic congestion, pollution, accessibility, and safety in cities. Some examples of smart mobility planning applications are the following. Mobility-as-a-Service: a service that integrates different transport modes, such as public transport, shared mobility, and active mobility, into a single platform that allows users to plan, book, and pay for their trips. This can reduce reliance on private cars, optimize the use of existing infrastructure, and provide more choice and convenience for travelers; MaaS Global is a company that offers mobility-as-a-service solutions in several cities around the world. Traffic flow optimization: a solution that uses data analytics, artificial intelligence, and sensors to monitor and manage traffic conditions in real time. This can reduce congestion, emissions, and travel time, as well as improve road safety and user satisfaction; Waycare is a platform that leverages data from various sources, such as connected vehicles, mobile applications, and road cameras, to provide traffic management agencies with insights and recommendations to optimize traffic flow. Logistics optimization: a solution that uses smart algorithms, blockchain, and IoT to improve the efficiency and transparency of the delivery of goods and services in urban areas. This can reduce the costs, emissions, and delays associated with logistics, as well as enhance customer experience and trust; ShipChain is a blockchain-based platform that connects shippers, carriers, and customers and provides end-to-end visibility and traceability of shipments. Autonomous vehicles: a solution that uses advanced sensors, software, and communication systems to enable vehicles to operate without human intervention. This can improve the safety, accessibility, and productivity of transportation, as well as reduce the need for parking space and infrastructure maintenance; Waymo is a company that develops and operates autonomous vehicles for purposes such as ride-hailing, delivery, and trucking. These are some of the ways that smart mobility planning applications can contribute to meeting the needs of urbanization growth. However, there are also various opportunities and challenges related to the implementation and adoption of these solutions, such as regulatory, ethical, social, and technical aspects. Therefore, it is important to consider the specific context and needs of each city and its stakeholders when designing and deploying smart mobility planning applications.
Keywords: smart mobility planning, smart mobility applications, smart mobility techniques, smart mobility tools, smart transportation, smart cities, urbanization growth, future smart cities, intelligent cities, ICT information and communications technologies, IoT internet of things, sensors, lidar, digital twin, ai artificial intelligence, AR augmented reality, VR virtual reality, robotics, cps cyber physical systems, citizens design science
Procedia PDF Downloads 73
116 Improving Junior Doctor Induction Through the Use of Simple In-House Mobile Application
Authors: Dmitriy Chernov, Maria Karavassilis, Suhyoun Youn, Amna Izhar, Devasenan Devendra
Abstract:
Introduction and Background: A well-structured and comprehensive departmental induction improves patient safety and job satisfaction amongst doctors. The aims of our project were as follows: 1. Assess the perceived preparedness of junior doctors starting their rotation in Acute Medicine at Watford General Hospital. 2. Develop a supplemental induction guide and pocket reference in the form of an iOS mobile application. 3. Collect feedback after implementing the mobile application following a trial period of 8 weeks with a small cohort of junior doctors. Materials and Methods: A questionnaire was distributed to all new junior trainees starting in the department of Acute Medicine to assess their experience of the current induction. A mobile induction application was developed and trialled over a period of 8 weeks, distributed in addition to the existing didactic induction session. After the trial period, the same questionnaire was distributed to assess the improvement in induction experience. Analytics data were collected with users' consent to gauge user engagement and identify areas for improvement of the application. A feedback survey about the app was also distributed. Results: A total of 32 doctors used the application during the 8-week trial period. The application was accessed 7,259 times in total, with the average user spending a cumulative 37 minutes 22 seconds on the app. The most used section was Clinical Guidelines, accessed 1,490 times. The app feedback survey revealed positive reviews: 100% of participants (n=15/15) responded that the app improved their overall induction experience compared to other placements; 93% (n=14/15) responded that the app improved overall efficiency in completing daily ward jobs compared to previous rotations; and 93% (n=14/15) responded that the app improved patient safety overall. In the pre-app and post-app induction surveys, participants reported a 48% improvement in awareness of practical aspects of the job, a 26% improvement in awareness of where to locate pathways and clinical guidelines, and a 40% reduction in feelings of being overwhelmed. Conclusions and recommendations: This study demonstrates the importance of technology in medical education and clinical induction. The application's average engagement time equates to over 20 cumulative hours of on-the-job training delivered across users within an 8-week period. The most used and referred-to section was Clinical Guidelines, which shows that there is high demand for an accessible pocket guide to this type of material. This simple mobile application resulted in a significant improvement in feedback about induction in our Department of Acute Medicine and will likely impact workplace satisfaction. Limitations of the application include: the post-app surveys had a small number of participants; the app is currently only available to iPhone users; some useful sections are nested deep within the app; it lacks deep search functionality across all sections; it lacks real-time user feedback; and it requires regular review and updates. Future steps for the app include developing a web app with an admin dashboard to simplify uploading and editing content, comprehensive search functionality, and a user feedback and peer-ratings system.
Keywords: mobile app, doctor induction, medical education, acute medicine
Procedia PDF Downloads 86
115 Force Sensor for Robotic Graspers in Minimally Invasive Surgery
Authors: Naghmeh M. Bandari, Javad Dargahi, Muthukumaran Packirisamy
Abstract:
Robot-assisted minimally invasive surgery (RMIS) has been widely performed around the world during the last two decades. RMIS demonstrates significant advantages over conventional surgery, e.g., improving the accuracy and dexterity of the surgeon, providing 3D vision, motion scaling, and hand-eye coordination, decreasing tremor, and reducing x-ray exposure for surgeons. Despite these benefits, surgeons cannot touch the surgical site and perceive tactile information, due to the remote control of the robots. The literature survey identified the lack of force feedback as the riskiest limitation of the existing technology. Without the perception of the tool-tissue contact force, the surgeon might apply an excessive force causing tissue laceration or an insufficient force causing tissue slippage. The primary use of force sensors has been to measure the tool-tissue interaction force in real time, in situ. The design of a tactile sensor is subject to a set of design requirements, e.g., biocompatibility, electrical passivity, MRI compatibility, miniaturization, and the ability to measure static and dynamic forces. In this study, a planar optical fiber-based sensor is proposed for mounting on the surgical grasper. It was developed based on the light intensity modulation principle. The deflectable part of the sensor is a beam modeled as a cantilever Euler-Bernoulli beam on rigid substrates. A semi-cylindrical indenter was attached to the bottom surface of the beam at mid-span. An optical fiber was secured at both ends on the same rigid substrates, with the indenter in contact with the fiber. An external force on the sensor causes deflection in the beam and the optical fiber simultaneously; the micro-bending of the optical fiber consequently results in light power loss. The sensor was simulated and studied using finite element methods. A laser beam with 800 nm wavelength and 5 mW power was used as the input to the optical fiber, and the output power was measured using a photodetector. The voltage from the photodetector was calibrated to the external force for a chirp input (0.1-5 Hz). The range, resolution, and hysteresis of the sensor were studied under monotonic and harmonic external forces of 0-2.0 N at 0 and 5 Hz, respectively. The results confirmed the validity of the proposed sensing principle. The sensor also demonstrated acceptable linearity (R² > 0.9). A minimum external force was observed below which no power loss was detectable; it is postulated that this phenomenon is attributable to the critical angle of the optical fiber for total internal reflection. The experimental results showed negligible hysteresis (R² > 0.9) and were in fair agreement with the simulations. In conclusion, the suggested planar sensor is assessed to be a cost-effective, feasible, and easy-to-use solution that can be miniaturized and integrated at the tip of robotic graspers. Geometrical and optical factors affecting the minimum sensible force and the working range of the sensor should be studied and optimized. This design is intrinsically scalable and meets all the design requirements. Therefore, it has significant potential for industrialization and mass production.
Keywords: force sensor, minimally invasive surgery, optical sensor, robotic surgery, tactile sensor
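A minimal sketch of the voltage-to-force calibration step described above, fitted as a linear model consistent with the reported linearity (R² > 0.9). The voltage/force pairs are hypothetical placeholders, not the study's measurements:

```python
import numpy as np

# Hypothetical calibration pairs: photodetector voltage (V) vs applied force (N).
# The actual study calibrated against a 0.1-5 Hz chirp input over 0-2.0 N.
voltage = np.array([0.02, 0.11, 0.23, 0.34, 0.47, 0.58])
force = np.array([0.00, 0.40, 0.80, 1.20, 1.60, 2.00])

# Linear least-squares fit of force against voltage.
slope, intercept = np.polyfit(voltage, force, 1)
pred = slope * voltage + intercept
r2 = 1 - np.sum((force - pred) ** 2) / np.sum((force - force.mean()) ** 2)
print(f"F = {slope:.2f}*V + {intercept:.2f}  (R^2 = {r2:.3f})")
```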
Procedia PDF Downloads 230
114 Thermodynamic Modeling of Cryogenic Fuel Tanks with a Model-Based Inverse Method
Authors: Pedro A. Marques, Francisco Monteiro, Alessandra Zumbo, Alessia Simonini, Miguel A. Mendez
Abstract:
Cryogenic fuels such as Liquid Hydrogen (LH₂) must be transported and stored at extremely low temperatures. Without expensive active cooling solutions, preventing fuel boil-off over time is impossible. Hence, one must resort to venting systems at the cost of significant energy and fuel mass loss. These losses increase significantly in propellant tanks installed on vehicles, as the presence of external accelerations induces sloshing. Sloshing increases heat and mass transfer rates and leads to significant pressure oscillations, which might further trigger propellant venting. To make LH₂ economically viable, it is essential to minimize these factors by using advanced control techniques. However, these require accurate modelling and a full understanding of the tank's thermodynamics. The present research aims to implement a simple thermodynamic model capable of predicting the state of a cryogenic fuel tank under different operating conditions (i.e., filling, pressurization, fuel extraction, long-term storage, and sloshing). Since this model relies on a set of closure parameters to drive the system's transient response, it must be calibrated using experimental or numerical data. This work focuses on the former approach, wherein the model is calibrated through an experimental campaign carried out on a reduced-scale model of a cryogenic tank. The thermodynamic model of the system is composed of three control volumes: the ullage, the liquid, and the insulating walls. Under this lumped formulation, the governing equations are derived from energy and mass balances in each region, with mass-averaged properties assigned to each of them. The gas-liquid interface is treated as an infinitesimally thin region across which both phases can exchange mass and heat. This results in a coupled system of ordinary differential equations, which must be closed with heat and mass transfer coefficients between each control volume. These parameters are linked to the system evolution via empirical relations derived from different operating regimes of the tank. The derivation of these relations is carried out using an inverse method to find the optimal relations that allow the model to reproduce the available data. This approach extends classic system identification methods beyond linear dynamical systems via a nonlinear optimization step. Thanks to the data-driven assimilation of the closure problem, the resulting model accurately predicts the evolution of the tank's thermodynamics at a negligible computational cost. The lumped model can thus be easily integrated with other submodels to perform complete system simulations in real time. Moreover, by setting the model in a dimensionless form, a scaling analysis allowed us to relate the tested configurations to a representative full-size tank for naval applications. It was thus possible to compare the relative importance of different transport phenomena between the laboratory model and the full-size prototype among the different operating regimes.
Keywords: destratification, hydrogen, modeling, pressure-drop, pressurization, sloshing, thermodynamics
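The inverse calibration idea above, fitting closure coefficients of a lumped ODE model so that it reproduces measured data via nonlinear optimization, can be illustrated with a toy two-volume model. The governing equations, parameter names, and synthetic "measurements" below are assumptions for illustration, not the paper's actual model:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def tank_rhs(t, y, h_ul, h_wu):
    """Toy two-volume energy balance: ullage and liquid temperatures.

    h_ul: ullage-liquid heat exchange coefficient (closure parameter)
    h_wu: wall-ullage heat leak coefficient (closure parameter)
    Both are the unknowns the inverse method must identify.
    """
    T_u, T_l = y
    T_wall = 295.0                       # assumed constant wall temperature
    dT_u = h_wu * (T_wall - T_u) - h_ul * (T_u - T_l)
    dT_l = h_ul * (T_u - T_l)
    return [dT_u, dT_l]

def simulate(params, t_eval, y0=(25.0, 20.0)):
    sol = solve_ivp(tank_rhs, (t_eval[0], t_eval[-1]), y0,
                    t_eval=t_eval, args=tuple(params))
    return sol.y

t = np.linspace(0.0, 100.0, 50)
data = simulate([0.08, 0.02], t)         # synthetic "experiment"

def residuals(params):
    # Mismatch between model prediction and the available data.
    return (simulate(params, t) - data).ravel()

fit = least_squares(residuals, x0=[0.01, 0.01], bounds=(0.0, 1.0))
print("Identified closure parameters:", fit.x)
```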
Procedia PDF Downloads 92
113 Deep Learning for SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data therefore aims at full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images from hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and tested reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability in vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated on real data sets and compared with a well-known and standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network
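To make the idea of a CNN with a multi-term loss concrete, here is a minimal PyTorch sketch. The channel counts, network depth, and the particular loss terms (a pixel term plus a hypothetical total-power term) are assumptions for illustration; the paper's actual architecture and loss are not specified in the abstract:

```python
import torch
import torch.nn as nn

class PolReconstructionCNN(nn.Module):
    """Illustrative CNN mapping hybrid-pol channels to full-pol channels."""
    def __init__(self, in_ch=4, out_ch=9):
        super().__init__()
        layers = [nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU()]
        for _ in range(4):
            layers += [nn.Conv2d(64, 64, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(64, out_ch, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def composite_loss(pred, target, alpha=1.0, beta=0.1):
    """Weighted sum of terms, mirroring a loss built from several
    characteristic features of polarimetric images."""
    pixel_term = nn.functional.l1_loss(pred, target)
    # Hypothetical second term: preserve the total backscattered power.
    span_term = nn.functional.l1_loss(pred.sum(dim=1), target.sum(dim=1))
    return alpha * pixel_term + beta * span_term

# Shape check with random tensors (batch, channels, height, width).
model = PolReconstructionCNN()
loss = composite_loss(model(torch.randn(1, 4, 64, 64)), torch.randn(1, 9, 64, 64))
```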
Procedia PDF Downloads 67
112 Artificial Intelligence in Management Simulators
Authors: Nuno Biga
Abstract:
Artificial Intelligence (AI) has the potential to transform management in several impactful ways. It allows machines to interpret information, find patterns in big data, learn from context analysis, optimize operations, make predictions sensitive to each specific situation, and support data-driven decision making. The introduction of an 'artificial brain' into an organization also enables learning through the complex information and data provided by those who train it, namely its users. The "Assisted-BIGAMES" version of the Accident & Emergency (A&E) simulator introduces the concept of a context-sensitive "Virtual Assistant" (VA) that provides users with useful suggestions, such as: a) relocating workstations in order to shorten travelled distances and minimize the stress of those involved; b) identifying in real time existing bottleneck(s) in the operations system so that it is possible to act upon them quickly; c) identifying resources that should be polyvalent so that the system can be more efficient; d) identifying the specific processes in which it may be advantageous to establish partnerships with other teams; and e) assessing possible solutions based on the suggested KPIs, allowing action monitoring to guide the (re)definition of future strategies. This paper is built on the BIGAMES© simulator and presents the conceptual AI model developed and demonstrated through a pilot project (BIG-AI). Each Virtual Assisted BIGAME is a management simulator developed by the author that guides operational and strategic decision making, providing users with useful information in the form of management recommendations that make it possible to predict the actual outcome of alternative strategic management actions. The pilot project incorporates results from 12 editions of the BIGAME A&E that took place between 2017 and 2022 at AESE Business School, based on a compilation of data that allows causal relationships to be established between decisions taken and results obtained. The systemic analysis and interpretation of data is powered in Assisted-BIGAMES by a computer application called the "BIGAMES Virtual Assistant" (VA) that players can use during the game. Throughout the game, each participant continually asks which decisions to make in order to win the competition. To this end, the role of each team's VA is to guide the players towards more effective decision making by presenting recommendations based on AI methods. It is important to note that the VA's suggestions for action can be accepted or rejected by the managers of each team, as they gain a better understanding of the issues over time, reflect on good practice, and rely on their own experience, capability and knowledge to support their own decisions. Preliminary results show that the introduction of the VA leads to faster learning of the decision-making process. The facilitator, designated the "Serious Game Controller" (SGC), is responsible for supporting the players with further analysis. The actions recommended by the SGC may differ from or be similar to those previously provided by the VA, ensuring a higher degree of robustness in decision-making. Additionally, all the information should be jointly analyzed and assessed by each player, who is expected to add "Emotional Intelligence", an essential component absent from the machine learning process.
Keywords: artificial intelligence, gamification, key performance indicators, machine learning, management simulators, serious games, virtual assistant
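A toy rule-based recommender in the spirit of the Virtual Assistant described above: it flags the workstation with the longest queue as a bottleneck and suggests actions. The KPI names, thresholds, and suggestion texts are illustrative assumptions, not the BIG-AI implementation:

```python
def recommend(kpis, queue_threshold=5, utilisation_threshold=0.9):
    """Return VA-style suggestions from per-workstation KPI snapshots."""
    suggestions = []
    bottleneck = max(kpis, key=lambda ws: kpis[ws]["queue"])
    if kpis[bottleneck]["queue"] > queue_threshold:
        suggestions.append(f"Bottleneck at {bottleneck}: assign a polyvalent resource.")
    for ws, k in kpis.items():
        if k["utilisation"] > utilisation_threshold:
            suggestions.append(f"{ws} is near saturation: consider relocating work.")
    return suggestions or ["No action needed."]

# Hypothetical A&E workstation KPIs.
kpis = {"triage": {"queue": 8, "utilisation": 0.95},
        "x_ray": {"queue": 2, "utilisation": 0.60}}
for s in recommend(kpis):
    print(s)
```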
Procedia PDF Downloads 104
111 Climate Change Adaptation Success in a Low Income Country Setting, Bangladesh
Authors: Tanveer Ahmed Choudhury
Abstract:
Background: Bangladesh is one of the largest deltas in the world, with high population density and high rates of poverty and illiteracy. 80% of the country lies on low-lying floodplains, making it one of the countries most vulnerable to the adverse effects of climate change: sea level rise, cyclones and storms, salinity intrusion, rising temperatures and heavy monsoon downpours. Such climatic events already limit economic development in the country. Although Bangladesh has had little responsibility in contributing to global climatic change, it is vulnerable to both its direct and indirect impacts. Real threats include reduced agricultural production, worsening food security, increased incidence of flooding and drought, spreading disease and an increased risk of conflict over scarce land and water resources. Currently, 8.3 million Bangladeshis live in cyclone high-risk areas; by 2050, however, this is expected to grow to 20.3 million people if proper adaptive actions are not taken. Under a high emissions scenario, an additional 7.6 million people will be exposed to very high salinity by 2050 compared to current levels. It is also projected that an average of 7.2 million people will be affected by flooding due to sea level rise every year between 2070 and 2100, while if global emissions decrease rapidly and adaptation interventions are taken, the population affected by flooding could be limited to only about 14,000 people. To combat the adverse effects of climate change, the Bangladesh government has initiated many adaptive measures, especially in the infrastructure and renewable energy sectors. The government is investing heavily and has initiated many projects which have proved very successful. Objectives: The objective of this paper is to describe some successful measures initiated by the Bangladesh government in its effort to make the country climate resilient. Methodology: Review of the operational plans and activities of the relevant ministries of the Bangladesh government. Result: The following projects, programs and activities are considered best practices for climate change adaptation in Bangladesh: 1. The Infrastructure Development Company Limited (IDCOL); 2. Climate Change and Health Promotion Unit (CCHPU); 3. The Climate Change Trust Fund (CCTF); 4. Community Climate Change Project (CCCP); 5. Health, Population, Nutrition Sector Development Program (HPNSDP, 2011-2016), "Climate Change and Environmental Issues"; 6. Ministry of Health and Family Welfare, Bangladesh and WHO collaboration: the National Adaptation Plan and "Building adaptation to climate change in health in least developed countries through resilient WASH"; 7. COP-21 "Climate and health country profile 2015, Bangladesh". Conclusion: Due to a vast coastline, low-lying land and an abundance of rivers, Bangladesh is highly vulnerable to climate change. Having extensive experience in facing natural disasters, Bangladesh has developed a successful adaptation program, which has led to a significant reduction in casualties from extreme weather events. In a low income country setting, Bangladesh has successfully implemented various projects and initiatives to combat future climate change challenges.
Keywords: climate, change, success, Bangladesh
Procedia PDF Downloads 249
110 Factors Influencing Consumer Adoption of Digital Banking Apps in the UK
Authors: Sevelina Ndlovu
Abstract:
Financial technology (fintech) advancement is recognised as one of the most transformational innovations in the financial industry. Fintech has given rise to internet-only digital banking, a novel financial technology advancement and innovation that allows banking services to be provided through internet applications with no need for physical branches. This technology is becoming a new banking normal among consumers for its ubiquitous and real-time access advantages. There is evident switching and migration from traditional banking towards these fintech facilities, which could pose a systemic risk if not properly understood and monitored. Fintech advancement has also brought about the emergence and escalation of financial technology consumption themes such as trust, security, perceived risk, and sustainability within the banking industry, themes scarcely covered in the existing theoretical literature. To that end, the objective of this research is to investigate the factors that determine fintech adoption and propose an integrated adoption model. This study aims to establish what the significant drivers of adoption are and to develop a conceptual model that integrates technological, behavioral, and environmental constructs by extending the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2). It proposes integrating constructs that influence financial consumption themes, such as trust, perceived risk, security, financial incentives, micro-investing opportunities, and environmental consciousness, to determine the impact of these factors on the adoption of and intention to use digital banking apps. The main advantage of this conceptual model is the consolidation of a greater number of predictor variables that can provide a fuller explanation of consumers' adoption of digital banking apps. Moderating variables of age, gender, and income are incorporated. To the best of the author's knowledge, this study is the first to extend the UTAUT2 model with this combination of constructs to investigate users' intention to adopt internet-only digital banking apps in the UK context. By investigating factors that are not included in the existing theories but are highly pertinent to the adoption of internet-only banking services, this research adds to existing knowledge and extends the generalisability of UTAUT2 in a financial services adoption context. This fills a gap in knowledge, as the 2016 review of UTAUT2 (originally published in 2003) highlighted the need for further research on the theory. To achieve the objectives of this study, the research takes a quantitative approach to empirically test the hypotheses derived from the existing literature and pilot studies, giving statistical support to generalise the research findings for further possible applications in theory and practice. The research is explanatory, or causal, in nature and uses cross-sectional primary data collected through a survey method. Convenience and purposive sampling using structured, self-administered online questionnaires is used for data collection. The proposed model is tested using Structural Equation Modelling (SEM), and the analysis of primary data collected through an online survey is processed using SmartPLS software with a sample size of 386 digital bank users. The results are expected to establish whether there are significant relationships between the dependent and independent variables and to identify the most influential factors.
Keywords: banking applications, digital banking, financial technology, technology adoption, UTAUT2
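As a simplified stand-in for the PLS-SEM analysis described above, the sketch below fits an ordinary least-squares path from a few construct scores to behavioral intention. The construct scores and path weights are synthetic placeholders; only the sample size (n=386) comes from the study:

```python
import numpy as np

# Hypothetical construct scores (rows: respondents, cols: e.g. trust,
# perceived risk, security, financial incentives). Not study data.
rng = np.random.default_rng(0)
n = 386                                    # sample size reported in the study
X = rng.normal(size=(n, 4))
intention = X @ np.array([0.5, -0.3, 0.4, 0.2]) + rng.normal(scale=0.5, size=n)

# OLS with an intercept: a rough proxy for structural path estimates.
X1 = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(X1, intention, rcond=None)
print("Intercept and path estimates:", np.round(coef, 3))
```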
Procedia PDF Downloads 72
109 Flood Early Warning and Management System
Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare
Abstract:
The Indian subcontinent is severely affected by floods that cause intense, irreversible devastation to crops and livelihoods. With increased incidences of floods and their related catastrophes, an early warning system for flood prediction and an efficient flood management system for the river basins of India are a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce the economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an EWS-FP (Early Warning System for Flood Prediction) using advanced computational tools and methods, viz. High-Performance Computing (HPC), remote sensing, GIS technologies, and open-source tools, for the Mahanadi River Basin of India. The flood prediction is based on a robust 2D hydrodynamic model, which solves the shallow water equations using the finite volume method. Considering the complexity of hydrological modeling and the size of the basins in India, there is always a tug of war between better forecast lead time and the optimal resolution at which the simulations are to be run. High-performance computing provides a good computational means to overcome this issue in the construction of national-level or basin-level flash flood warning systems with high-resolution local-level warning analysis and better lead time. High-performance computers with capacities on the order of teraflops and petaflops prove useful when running simulations over such large areas at optimal resolutions. In this study, a free and open-source, HPC-based 2D hydrodynamic model, with the capability to simulate rainfall runoff, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (the Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing the number of CPU nodes from 45 to 135, which shows good scalability and performance enhancement. The simulated flood inundation spread and stage were compared with SAR data and CWC observed gauge data, respectively. The system shows good accuracy and better lead time, suitable for flood forecasting in near real time. To disseminate warnings to end users, a network-enabled solution was developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. It effectively facilitates the management of post-disaster activities caused by floods, such as displaying spatial maps of the affected area and inundated roads, and maintains a steady flow of information at all levels, with different access rights depending upon the criticality of the information. It is designed to help users manage information related to flooding during critical flood seasons and analyze the extent of the damage.
Keywords: flood, modeling, HPC, FOSS
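A quick scaling check using the figures reported above (runtime dropping from 8 h to 3 h when the node count is tripled from 45 to 135):

```python
# Parallel speedup and efficiency from the reported HPC scaling run.
t_old, t_new = 8.0, 3.0        # wall-clock hours before and after scaling
n_old, n_new = 45, 135         # CPU node counts

speedup = t_old / t_new                  # ~2.67x faster
efficiency = speedup / (n_new / n_old)   # fraction of ideal linear scaling
print(f"Speedup: {speedup:.2f}x, parallel efficiency: {efficiency:.0%}")  # ~89%
```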
Procedia PDF Downloads 89
108 Optical Imaging Based Detection of Solder Paste in Printed Circuit Board Jet-Printing Inspection
Authors: D. Heinemann, S. Schramm, S. Knabner, D. Baumgarten
Abstract:
Purpose: Applying solder paste to printed circuit boards (PCBs) with stencils has been the method of choice over the past years. A newer method uses a jet printer to deposit tiny droplets of solder paste through an ejector mechanism onto the board. This allows for more flexible PCB layouts with smaller components. Due to the viscosity of the solder paste, air blisters can be trapped in the cartridge, which can lead to missing solder joints or deviations in the applied solder volume. Therefore, a built-in, real-time inspection of the printing process is needed to minimize uncertainties and increase the efficiency of the process by immediate correction. The objective of the current study is the design of an optimal imaging system and the development of an automatic algorithm for the detection of applied solder joints from the captured optical images. Methods: In a first approach, a camera module connected to a microcomputer and LED strips were employed to capture images of the printed circuit board under four different illuminations (white, red, green and blue). Subsequently, an improved system including a ring light, an objective lens, and a monochromatic camera was set up to acquire higher-quality images. The obtained images can be divided into three main components: the PCB itself (i.e., the background), the reflections induced by unsoldered positions or screw holes, and the solder joints. Non-uniform illumination is corrected by estimating the background using a morphological opening and subtracting it from the input image. Image sharpening is applied in order to prevent error pixels in the subsequent segmentation. The intensity thresholds which divide the main components are obtained from the multimodal histogram using three probability density functions; determining their intersections delivers proper thresholds for the segmentation. Remaining edge gradients produce small error areas, which are removed by another morphological opening. For quantitative analysis of the segmentation results, the Dice coefficient is used. Results: The obtained PCB images show a significant gradient in all RGB channels, resulting from ambient light. Using different lightings and color channels, 12 images of a single PCB are available. A visual inspection and the investigation of 27 specific points show the best differentiation between those points using red lighting and the green color channel. Estimating two thresholds from the multimodal histogram of the corrected images and using them for segmentation precisely extracts the solder joints. Comparison of the results to manually segmented images yields high sensitivity and specificity values. The overall result delivers a Dice coefficient of 0.89, which varies for single-object segmentations between 0.96 for well-segmented solder joints and 0.25 for single negative outliers. Conclusion: Our results demonstrate that the presented optical imaging system and the developed algorithm can robustly detect solder joints on printed circuit boards. Future work will comprise a modified lighting system which allows for more precise segmentation results using structure analysis.
Keywords: printed circuit board jet-printing, inspection, segmentation, solder paste detection
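A minimal OpenCV sketch of the pipeline described above: background estimation by morphological opening, subtraction, sharpening, thresholding, cleanup, and Dice evaluation. The kernel sizes are guesses, and Otsu thresholding stands in for the paper's histogram-intersection thresholds; the input is assumed to be an 8-bit grayscale image:

```python
import cv2
import numpy as np

def segment_solder(img_gray, kernel_size=25):
    """Segment solder joints from an 8-bit grayscale PCB image."""
    # Estimate the slowly-varying background with a morphological opening,
    # then subtract it to correct non-uniform illumination.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    background = cv2.morphologyEx(img_gray, cv2.MORPH_OPEN, kernel)
    corrected = cv2.subtract(img_gray, background)
    # Unsharp masking as the sharpening step.
    blur = cv2.GaussianBlur(corrected, (0, 0), 2.0)
    sharpened = cv2.addWeighted(corrected, 1.5, blur, -0.5, 0)
    # Otsu threshold as a stand-in for the histogram-intersection thresholds.
    _, mask = cv2.threshold(sharpened, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Second opening removes small error areas from residual edge gradients.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```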
Procedia PDF Downloads 336
107 Deep Learning Based Polarimetric SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data therefore aims at full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images from hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and tested reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability in vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated on real data sets and compared with a well-known and standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry
Procedia PDF Downloads 90
106 Human Identification and Detection of Suspicious Incidents Based on Outfit Colors: Image Processing Approach in CCTV Videos
Authors: Thilini M. Yatanwala
Abstract:
CCTV (closed-circuit television) surveillance systems have been used in public places for decades, and a large variety of data is produced every moment. However, most CCTV data is stored in isolation, without integration. As a result, identifying the behavior of suspicious people along with their location has become strenuous. This research was conducted to acquire more accurate, reliable, and timely information from CCTV video records. The implemented system can identify human objects in public places based on outfit colors. Inter-process communication technologies were used to implement the CCTV camera network that tracks people on the premises. The research was conducted in three stages: in the first stage, human objects were filtered from other movable objects present in public places; in the second stage, people were uniquely identified based on their outfit colors; and in the third stage, an individual was continuously tracked across the CCTV network. A face detection algorithm was implemented using a cascade classifier based on a trained model to detect human objects. A Haar-feature-based two-dimensional convolution operator was introduced to identify features of the human face, such as the regions of the eyes, the region of the nose, and the bridge of the nose, based on the darkness and lightness of the facial area. In the second stage, the outfit colors of human objects were analyzed by dividing the body area into upper left, upper right, lower left, and lower right regions. The mean color, mode color, and standard deviation of each area were extracted as crucial factors to uniquely identify human objects using a histogram-based approach. Color-based measurements were written into XML files, and separate directories were maintained to store the XML files related to each camera according to timestamp. As the third stage of the approach, inter-process communication techniques were used to implement an acknowledgement-based CCTV camera network to continuously track individuals across a network of cameras. Real-time analysis of the XML files generated by each camera can determine the path of an individual, allowing the full activity sequence to be monitored. Higher efficiency was achieved by sending and receiving acknowledgments only among adjacent cameras. Suspicious incidents, such as a person staying in a sensitive area for a longer period or a person disappearing from camera coverage, can be detected with this approach. The system was tested on 150 people with an accuracy of 82%. However, the approach was unable to produce the expected results in the presence of groups of people wearing similar outfits. This approach can be applied to any existing camera network without changing the physical arrangement of the CCTV cameras. The study of human identification and suspicious incident detection using outfit color analysis can achieve a higher level of accuracy, and the project will be continued by integrating motion and gait feature analysis techniques to derive more information from CCTV videos.
Keywords: CCTV surveillance, human detection and identification, image processing, inter-process communication, security, suspicious detection
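A minimal OpenCV sketch of the first two stages: Haar-cascade face detection followed by per-quadrant outfit color statistics. The body-region geometry below (estimated from the face box) is a rough assumption, not the study's exact layout:

```python
import cv2

# Haar cascade shipped with opencv-python for frontal face detection.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def outfit_signatures(frame):
    """Per-person color statistics for four body quadrants of a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    signatures = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        # Rough body region below the detected face (hypothetical geometry).
        body = frame[y + h : y + 4 * h, max(x - w // 2, 0) : x + w + w // 2]
        if body.size == 0:
            continue
        bh, bw = body.shape[:2]
        quadrants = [body[:bh // 2, :bw // 2], body[:bh // 2, bw // 2:],
                     body[bh // 2:, :bw // 2], body[bh // 2:, bw // 2:]]
        # Mean and standard deviation per quadrant (BGR channels).
        stats = [(q.reshape(-1, 3).mean(axis=0), q.reshape(-1, 3).std(axis=0))
                 for q in quadrants]
        signatures.append(stats)
    return signatures
```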
Procedia PDF Downloads 182
105 Genome-Scale Analysis of Streptomyces caatingaensis CMAA 1322 Metabolism, a New Abiotic Stress-Tolerant Actinomycete
Authors: Suikinai Nobre Santos, Ranko Gacesa, Paul F. Long, Itamar Soares de Melo
Abstract:
Extremophilic microorganisms are adapted to biotopes combining several stress factors (temperature, pressure, radiation, salinity and pH), which makes them a valuable resource for the development of novel biotechnological processes and unique models for the investigation of their biomolecules (1, 2). This encouraged us to investigate the compounds synthesized by a novel thermotolerant actinomycete, designated Streptomyces caatingaensis CMAA 1322, isolated from a soil sample of the tropical dry forest (Caatinga) in the Brazilian semiarid region (3-17°S and 35-45°W). This set of contrasting physical and climatic factors provides unique conditions and a diversity of well-adapted species, making it an interesting site for biotechnological purposes. Preliminary studies have shown great potential in the production of cytotoxic, pesticidal and antimicrobial molecules (3). Thus, to extend knowledge of the gene clusters responsible for the biosynthetic pathways of natural products in strain CMAA1322, whole-genome shotgun (WGS) DNA sequencing was performed using paired-end long sequencing with the PacBio RS (Pacific Biosciences). Genomic DNA was extracted from a pure culture grown overnight on LB medium using the PureLink genomic DNA kit (Life Technologies). A PacBio library with inserts of approximately 3 to 20 kb was constructed and sequenced on 8 single-molecule real-time (SMRT) cells, yielding 116,269 reads (average length, 7,446 bp), which were assembled into 18 contigs with 142.11x coverage and an N50 value of 20,548 bp (BioProject number PRJNA288757). The assembled data were analyzed with Rapid Annotations using Subsystems Technology (RAST) (4): the genome size was found to be 7,055,077 bp, comprising 6,167 open reading frames (ORFs) and 413 subsystems. The G+C content was estimated to be 72 mol%. The closest-neighbors tool, available in RAST through functional comparison of the genome, revealed that strain CMAA1322 is most closely related to Streptomyces hygroscopicus ATCC 53653 (similarity score, 537), S. violaceusniger Tu 4113 (score, 483), S. avermitilis MA-4680 (score, 475), and S. albus J1074 (score, 447). The Streptomyces sp. CMAA1322 genome contains 98 tRNA genes and 135 gene copies related to stress response, mainly osmotic stress (14), heat shock (16), and oxidative stress (49). Functional annotation by antiSMASH version 3.0 (5) identified 41 clusters for secondary metabolites (including two clusters for lanthipeptides, ten clusters for nonribosomal peptide synthetases [NRPS], three clusters for siderophores, fourteen for polyketide synthases [PKS], six clusters encoding terpenes, two clusters encoding bacteriocins, and one cluster encoding a phenazine). Our work provides a comparative analysis of the genome and the extracts produced (data not published) by strain CMAA1322, revealing the potential of microorganisms accessed from extreme environments such as the Caatinga to produce a wide range of biotechnologically relevant compounds.
Keywords: Caatinga, Streptomyces, environmental stresses, biosynthetic pathways
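The reported G+C content (72 mol%) is a simple base-composition statistic. A minimal sketch of its computation; the sequence fragment here is a toy placeholder, as the real value comes from the full 7,055,077 bp assembly:

```python
def gc_content(seq):
    """Percent G+C bases in a DNA sequence."""
    seq = seq.upper()
    return 100.0 * sum(seq.count(b) for b in "GC") / len(seq)

# Toy fragment, illustrative only.
print(f"{gc_content('GGCCGCATGCGGCCGGAT'):.1f} mol% G+C")
```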
Procedia PDF Downloads 242104 Librarian Liaisons: Facilitating Multi-Disciplinary Research for Academic Advancement
Authors: Tracey Woods
Abstract:
In the ever-evolving landscape of academia, the traditional role of the librarian has undergone a remarkable transformation. Once considered custodians of books and gatekeepers of information, librarians have the potential to take on the vital role of facilitators of cross- and inter-disciplinary projects. This shift is driven by the growing recognition of the value of interdisciplinary collaboration in addressing complex research questions in pursuit of novel solutions to real-world problems. This paper shall explore the potential of the academic librarian’s role in facilitating innovative, multi-disciplinary projects, both recognising and validating the vital role that the librarian plays in a somewhat underplayed profession. Academic libraries support teaching, the strengthening of knowledge discourse, and, potentially, the development of innovative practices. As the role of the library gradually morphs from a quiet repository of books to a community-based information hub, a potential opportunity arises. The academic librarian’s role is to build knowledge across a wide span of topics, from the advancement of AI to subject-specific information, and, whilst librarians are generally not offered the research opportunities and funding that the traditional academic disciplines enjoy, they are often invited to help build research in support of the academic. This suggests that one of the primary skills of any 21st-century librarian must be the ability to collaborate on and facilitate multi-disciplinary projects. In universities seeking to develop research diversity and academic performance, there is an increasing awareness of the need for collaboration between faculties to enable novel directions and advancements. This idea has been documented and discussed by several researchers; however, there is not a great deal of literature available from recent studies. Having a team based in the library that is adept at creating effective collaborative partnerships is valuable for any academic institution. This paper outlines the development of such a project, initiated within and around an identified library-specific need: the replication of fragile special collections for object-based learning. The research was developed as a multi-disciplinary project involving the faculties of engineering (digital twins lab), architecture, design, and education. Centred on methods for developing a fragile archive into a series of tactile objects, the project furthers knowledge and understanding of the role of the library as a facilitator of projects, chairing and supporting, alongside contributing to the research process and innovating ideas through the bank of knowledge found among the staff and their liaising capabilities. This paper shall present the method of project development, from the initiation of ideas to the development of prototypes and the dissemination of the objects to teaching departments for analysis. The exact replication of artefacts is also balanced with the adaptation and evolutionary speculations initiated by the design team when adapted as a teaching studio method. The dynamic response required from the library to generate and facilitate these multi-disciplinary projects highlights the information expertise and liaison skills that the librarian possesses.
As academia embraces this evolution, the potential for groundbreaking discoveries and innovative solutions across disciplines becomes increasingly attainable.Keywords: liaison librarian, multi-disciplinary collaborations, library innovations, librarian stakeholders
Procedia PDF Downloads 72103 A Computational Framework for Load Mediated Patellar Ligaments Damage at the Tropocollagen Level
Authors: Fadi Al Khatib, Raouf Mbarki, Malek Adouni
Abstract:
In various sports and recreational activities, the patellofemoral joint undergoes large forces and moments while accommodating significant knee joint movement. In doing so, this joint is commonly the source of anterior knee pain related to instability in normal patellar tracking and excessive pressure syndrome. One well-observed explanation for the instability of normal patellar tracking is damage to the patellofemoral ligaments and patellar tendon. Improved knowledge of the damage mechanism mediating ligament and tendon injuries can be a great help not only in rehabilitation and prevention procedures but also in the design of better reconstruction systems for the management of knee joint disorders. This damage mechanism, specifically under excessive mechanical loading, has been linked to the micro level of the fibered structure, precisely to the tropocollagen molecules and the density of their connections. We argue that defining a clear framework from the bottom (micro level) to the top (macro level) of the soft tissue hierarchy may elucidate the essential underpinnings of the state of ligament damage. To do so, in this study a multiscale fibril-reinforced hyper-elastoplastic finite element model that accounts for the synergy between molecular and continuum syntheses was developed to determine the short-term stress/strain response of the patellofemoral ligaments and tendon. The plasticity of the proposed model is associated only with the uniaxial deformation of the collagen fibril. The yield strength of the fibril is a function of the cross-link density between tropocollagen molecules, defined here by a density function. This function was obtained through a coarse-graining procedure linking nanoscale collagen features to tissue-level material properties using molecular dynamics simulations. The hierarchies of the soft tissues were implemented using the rule of mixtures. Thereafter, the model was calibrated using a statistical calibration procedure, implemented in a real structure of the patellofemoral ligaments and patellar tendon (OpenKnee), and simulated under realistic loading conditions. With the calibrated material parameters, the calculated axial stress agrees well with the experimental measurements, with a coefficient of determination (R2) equal to 0.91 and 0.92 for the patellofemoral ligaments and the patellar tendon, respectively. The best prediction of the yield strength and strain as compared with the reported experimental data was obtained when the cross-link density between the tropocollagen molecules of the fibril equaled 5.5 ± 0.5 (patellofemoral ligaments) and 12 (patellar tendon). Damage initiation in the patellofemoral ligaments was located at the femoral insertions, while damage in the patellar tendon occurred in the middle of the structure. These predicted findings showed a meaningful correlation between the cross-link density of the tropocollagen molecules and the stiffness of the connective tissues of the extensor mechanism. Damage initiation and propagation were also documented with this model and were in satisfactory agreement with earlier observations. To the best of our knowledge, this is the first attempt to model these ligaments from the bottom up, with predictions depending on the tropocollagen cross-link density. This approach appears more meaningful for a realistic simulation of a damage process or repair attempt compared with certain published studies.Keywords: tropocollagen, multiscale model, fibrils, knee ligaments
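The following toy sketch illustrates the two ingredients named in the abstract: an elastic-perfectly-plastic fibril whose yield strength grows with tropocollagen cross-link density, combined with a non-fibrillar matrix through the rule of mixtures. The linear yield law and every constant are assumptions for illustration, not the study's calibrated values.

```python
import numpy as np

E_FIBRIL = 500.0   # fibril modulus, MPa (assumed)
SIGMA_Y0 = 10.0    # fibril yield strength at zero cross-linking, MPa (assumed)
K_BETA = 5.0       # strengthening per unit cross-link density, MPa (assumed)

def fibril_stress(strain, beta):
    """Uniaxial elastic-perfectly-plastic fibril: yield strength rises
    linearly with tropocollagen cross-link density beta."""
    sigma_y = SIGMA_Y0 + K_BETA * beta
    return np.minimum(E_FIBRIL * strain, sigma_y)

def tissue_stress(strain, beta, fibril_fraction=0.7, matrix_modulus=2.0):
    """Rule of mixtures: fibrils and non-fibrillar matrix act in parallel."""
    return (fibril_fraction * fibril_stress(strain, beta)
            + (1.0 - fibril_fraction) * matrix_modulus * strain)

strains = np.linspace(0.0, 0.12, 7)
for beta in (5.5, 12.0):  # the densities reported for ligament and tendon
    print(f"beta={beta}:", np.round(tissue_stress(strains, beta), 2))
```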
Procedia PDF Downloads 128102 Illness-Related PTSD Among Type 1 Diabetes Patients
Authors: Omer Zvi Shaked, Amir Tirosh
Abstract:
Type 1 diabetes (T1DM) is an incurable chronic illness with no known preventive measures. Excess insulin can lead to hypoglycemia, with neuroglycopenic symptoms such as shakiness, nausea, sweating, irritability, fatigue, excessive thirst or hunger, weakness, seizure, and coma. Severe hypoglycemia (SH) is also considered a highly aversive event, since it may put patients at risk of injury and death, which matches the criteria of a traumatic event. SH has a reported prevalence of around 20%, which makes it a primary medical issue. One of the results of SH is an intense emotional fear reaction resembling post-traumatic stress symptoms (PTS), causing many patients to avoid insulin therapy and social activities in order to avoid the possibility of hypoglycemia. As a result, they are at risk of irreversible health deterioration and medical complications. Fear of hypoglycemia (FOH) is, therefore, a major disturbance for T1DM patients. FOH differs from prevalent post-traumatic stress reactions to other forms of traumatic events, since the threat to life continuously exists within the patient's body. That is, it is highly probable that orthodox interventions are not sufficient to help patients after SH regain healthy social function and proper medical treatment. Accordingly, the current presentation will demonstrate the results of a study conducted among T1DM patients after SH. The study was designed in two stages. First, a preliminary qualitative phenomenological study among ten patients after SH was conducted. Analysis revealed that after SH, patients confuse stress symptoms with hypoglycemia symptoms, divide life into before and after the event, report a constant sense of fear, a loss of freedom, a significant decrease in social functioning, a catastrophic thinking pattern, a dichotomous split between the self and the body, an internalization of illness identity, a loss of internal locus of control, a damaged self-representation, and severe loneliness from never being understood by others. The second stage was a two-step intervention study among five patients after SH. The first part of the intervention included three months of third-wave CBT. The contents of the therapeutic process were: acceptance of fear and tolerance of stress; cognitive defusion combined with emotional self-regulation; the adoption of an active position relying on personal values; and self-compassion. The intervention then included one week of practical, real-time 24/7 support by trained medical personnel, alongside gradual exposure to increased insulin therapy in a protected environment. The results of the intervention are a decrease in stress symptoms, increased social functioning, increased well-being, and decreased avoidance of medical treatment. The presentation will discuss the unique emotional state of T1DM patients after SH and the effectiveness of the intervention for patients with chronic conditions after a traumatic event. The presentation will make evident the unique situation of illness-related PTSD and will also demonstrate the need for multi-professional collaboration between social work and medical care for populations with chronic medical conditions. Limitations of the study and recommendations for further research will be discussed.Keywords: type 1 diabetes, chronic illness, post-traumatic stress, illness-related PTSD
Procedia PDF Downloads 177101 Overlaps and Intersections: An Alternative Look at Choreography
Authors: Ashlie Latiolais
Abstract:
Architecture, as a discipline, is on a trajectory of extension beyond the boundaries of buildings and, increasingly, is coupled with research that connects to alternative and typically disjointed disciplines. A “both/and” approach and (expanded) definition of architecture, as depicted here, expands the margins that contain the profession. Figuratively, architecture is a series of edges, events, and occurrences that establishes a choreography or stage upon which humanity exists. The way in which architecture controls and suggests movement through these spaces, whether within a landscape, city, or building, can be viewed as a datum by which the “dance” of everyday life occurs. This submission views the realm of architecture through the lens of movement and dance as a cross-fertilizer of collaboration, tectonic, and spatial geometry investigations. “Designing on digital programs puts architects at a distance from the spaces they imagine. While this has obvious advantages, it also means that they lose the lived, embodied experience of feeling what is needed in space—meaning that some design ideas that work in theory ultimately fail in practice.” By studying the body in motion through real-time performance, a more holistic understanding of architectural space surfaces, and new prospects for theoretical teaching pedagogies emerge. The atypical intersection rethinks how architecture is considered, created, and tested, similar to how “dance artists often do this by thinking through the body, opening pathways and possibilities that might not otherwise be accessible”; this is the essence of this poster submission, as explained through unFOLDED, a creative performance work. A new language is materialized through unFOLDED, a dynamic occupiable installation by which architecture is investigated through dance, movement, and body analysis. The entry unfolds a collaboration among an architect, a dance choreographer, musicians, a video artist, and lighting designers to re-create one of the first documented avant-garde performing arts collaborations (Matisse, Satie, Massine, Picasso) from the Ballets Russes in 1917, entitled Parade. Architecturally, this interdisciplinary project orients and suggests motion through structure, tectonics, lightness, darkness, and shadow as it questions the navigation of the dark space (stage) surrounding the installation. Artificial light, via theatrical lighting and video graphics, brought the blank canvas to life; the sensitive mix of musicality coordinated with the structure’s movement sequencing was certainly a challenge. The upstage light from the video projections created both flickering contextual imagery and shadowed figures. When the dancers were either upstage or downstage of the structure, both silhouetted figures and revealed bodies were experienced as dancer-controlled manipulations of the installation occurred throughout the performance. The experimental performance, through structure, prompted moving (dancing) bodies in space, where the architecture served as a key component of the choreography itself. The tectonics of the delicate steel structure allowed the dancers to interact with the installation, which created a variety of spatial conditions, from the contained box of three-dimensional space to a wall and various abstracted geometries in between. The development of this research unveils the new role of the architect as a choreographer of the built environment.Keywords: dance, architecture, choreography, installation, architect, choreographer, space
Procedia PDF Downloads 91100 Advancing Early Intervention Strategies for United States Adolescents and Young Adults with Schizophrenia in the Post-COVID-19 Era
Authors: Peggy M. Randon, Lisa Randon
Abstract:
Introduction: The post-COVID-19 era has presented unique challenges for addressing complex mental health issues, particularly due to exacerbated stress, increased social isolation, and disrupted continuity of care. This article outlines relevant health disparities and policy implications within the context of the United States while maintaining international relevance. Methods: A comprehensive literature review (including studies, reports, and policy documents) was conducted to examine concerns related to childhood-onset schizophrenia and the impact on patients and their families. Qualitative and quantitative data were synthesized to provide insights into the complex etiology of schizophrenia, the effects of the pandemic, and the challenges faced by socioeconomically disadvantaged populations. Case studies were employed to illustrate real-world examples and areas requiring policy reform. Results: Early intervention in childhood is crucial for preventing or mitigating the long-term impact of complex psychotic disorders, particularly schizophrenia. A comprehensive understanding of the genetic, environmental, and physiological factors contributing to the development of schizophrenia is essential. The COVID-19 pandemic worsened symptoms and disrupted treatment for many adolescent patients with schizophrenia, emphasizing the need for adaptive interventions and the utilization of virtual platforms. Health disparities, including stigma, financial constraints, and language or cultural barriers, further limit access to care, especially for socioeconomically disadvantaged populations. Policy implications: Current US health policies inadequately support patients with schizophrenia. The limited availability of longitudinal care, insufficient resources for families, and stigmatization represent ongoing policy challenges. Addressing these issues necessitates increased research funding, improved access to affordable treatment plans, and cultural competency training for healthcare providers. Public awareness campaigns are crucial to promote knowledge, awareness, and acceptance of mental health disorders. Conclusion: The unique challenges faced by children and families in the US affected by schizophrenia and other psychotic disorders have yet to be adequately addressed on institutional and systemic levels. The relevance of findings to an international audience is emphasized by examining the complex factors contributing to the onset of psychotic disorders and their global policy implications. The broad impact of the COVID-19 pandemic on mental health underscores the need for adaptive interventions and global responses. Addressing policy challenges, improving access to care, and reducing the stigma associated with mental health disorders are crucial steps toward enhancing the lives of adolescents and young adults with schizophrenia and their family members. The implementation of virtual platforms can help overcome barriers and ensure equitable access to support and resources for all patients, enabling them to lead healthy and fulfilling lives.Keywords: childhood schizophrenia, policy, United States, health disparities
Procedia PDF Downloads 7699 Environmental Life Cycle Assessment of Circular, Bio-Based and Industrialized Building Envelope Systems
Authors: N. Cihan Kayaçetin, Stijn Verdoodt, Alexis Versele
Abstract:
The construction industry accounts for one-third of all waste generated in European Union (EU) countries. The Circular Economy Action Plan of the EU aims to tackle this issue and aspires to enhance the sustainability of the construction industry by adopting more circular principles and bio-based material use. The Interreg Circular Bio-Based Construction Industry (CBCI) project was conceived to research how this adoption can be facilitated. For this purpose, an approach is developed that integrates technical, legal, and social aspects and provides business models for circular designing and building with bio-based materials. Within the scope of the project, the research outputs are to be displayed in a real-life setting by constructing a demo terraced single-family house, the living lab (LL) located in Ghent (Belgium). The realization of the LL is conducted in a step-wise approach that includes iterative processes for the design, description, criteria definition, and multi-criteria assessment of building components. The essence of the research lies in an exploratory approach to state-of-the-art building envelope and technical system options for achieving an optimum combination for circular and bio-based construction. For this purpose, nine preliminary designs (PDs) for the building envelope were generated, consisting of three basic construction methods: masonry, lightweight steel construction, and wood framing construction, supplemented with bio-based construction methods like cross-laminated timber (CLT) and massive wood framing. A comparative analysis of the PDs was conducted using several complementary tools to assess circularity. This paper focuses on the life cycle assessment (LCA) approach for evaluating the environmental impact of the LL Ghent. The adoption of an LCA methodology was considered critical for providing a comprehensive set of environmental indicators. The PDs were developed at the component level, in particular for the (i) inclined roof, (ii-iii) front and side façades, (iv) internal walls, and (v-vi) floors. The assessment was conducted on two levels: component and building. The options for each component were compared in the first iteration, and then the PDs, as assemblies of components, were further analyzed. The LCA was based on a functional unit of one square meter of each component, and CEN indicators were utilized for impact assessment over a reference study period of 60 years. A total of 54 building components composed of 31 distinct materials were evaluated in the study. The results indicate that wood framing construction supplemented with bio-based construction methods performs environmentally better than the masonry or steel-construction options. An analysis of the correlation between the total weight of components and environmental impact was also conducted. Masonry structures display high environmental impact and weight, steel structures display low weight but relatively high environmental impact, and wood framing constructions display both low weight and low environmental impact. The study provided valuable outputs on two levels: (i) several improvement options at the component level through the substitution of materials with critical weight and/or impact per unit, and (ii) feedback on environmental performance for the decision-making process during the design phase of a circular single-family house.Keywords: circular and bio-based materials, comparative analysis, life cycle assessment (LCA), living lab
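The sketch below illustrates the shape of such a component-level comparison: per-square-meter masses and life-cycle impact factors are tallied for each assembly over the 60-year reference period. All materials, quantities, and impact factors are hypothetical placeholders, not the study's inventory data.

```python
REFERENCE_PERIOD = 60  # years, per the CEN-based assessment

# life-cycle impact factor per kg of material (kg CO2-eq/kg, assumed values)
IMPACT_FACTORS = {"brick": 0.24, "steel": 1.85, "timber": 0.06, "clt": 0.10}

def component_impact(layers):
    """layers: (material, mass in kg per m2 of component, replacements
    expected within the reference period). Returns total weight and
    impact per m2 of component."""
    weight = sum(mass for _, mass, _ in layers)
    impact = sum(IMPACT_FACTORS[mat] * mass * (1 + repl)
                 for mat, mass, repl in layers)
    return weight, impact

masonry_wall = [("brick", 180.0, 0), ("steel", 2.0, 0)]
timber_wall = [("clt", 45.0, 0), ("timber", 12.0, 0)]

for name, layers in (("masonry facade", masonry_wall),
                     ("timber facade", timber_wall)):
    weight, impact = component_impact(layers)
    print(f"{name}: {weight:.0f} kg/m2, "
          f"{impact:.1f} kg CO2-eq/m2 over {REFERENCE_PERIOD} years")
```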
Procedia PDF Downloads 18398 Optimization and Coordination of Organic Product Supply Chains under Competition: An Analytical Modeling Perspective
Authors: Mohammadreza Nematollahi, Bahareh Mosadegh Sedghy, Alireza Tajbakhsh
Abstract:
The last two decades have witnessed substantial attention to organic and sustainable agricultural supply chains. Motivated by real-world practices, this paper aims to address two main challenges observed in organic product supply chains: the decentralized decision-making process between farmers and their retailers, and the competition between organic products and their conventional counterparts. To this aim, an agricultural supply chain consisting of two farmers is considered: a conventional farmer and an organic farmer who offers an organic version of the same product. Both farmers distribute their products through a single retailer, and the organic and conventional products compete in the market. The retailer, as the market leader, sets the wholesale price, and afterward the farmers make their production quantity decisions. This paper first models the demand functions of the conventional and organic products by incorporating the effect of asymmetric brand equity, which captures the fact that consumers usually pay a premium for organic products due to positive perceptions of their health and environmental benefits. Then, profit functions are modeled with consideration of several characteristics of organic farming, including the crop yield gap and the organic cost factor. Our research also considers both economies and diseconomies of scale in farming production, as well as the effects of the organic subsidy paid by the government to support organic farming. This paper explores the investigated supply chain in three scenarios: decentralized, centralized, and coordinated decision-making structures. In the decentralized scenario, the conventional farmer, the organic farmer, and the retailer maximize their own profits individually. In this case, the interaction between the farmers is modeled under Bertrand competition, while the interaction between the retailer and the farmers is analyzed under a Stackelberg game structure. In the centralized model, the optimal production strategies are obtained from the perspective of the entire supply chain. Analytical models are developed to derive closed-form optimal solutions. Moreover, analytical sensitivity analyses are conducted to explore the effects of the main parameters, such as the crop yield gap, the organic cost factor, the organic subsidy, and the percent price premium of the organic product, on the farmers’ and retailer’s optimal strategies. Afterward, a coordination scenario is proposed to convince the three supply chain members to shift from the decentralized to the centralized decision-making structure. The results indicate that the proposed coordination scenario provides a win-win-win situation for all three members compared to the decentralized model. Moreover, our paper demonstrates that the coordinated model increases the production and decreases the price of organic produce, which in turn motivates the consumption of organic products in the market. The proposed coordination model also helps the organic farmer better handle the challenges of organic farming, including the additional cost and the crop yield gap. Last but not least, our results highlight the active role of the organic subsidy paid by the government as a means of promoting sustainable organic product supply chains.
Our paper shows that although the amount of the organic subsidy plays a significant role in the production and sales price of organic products, the method of allocating the subsidy between the organic farmer and the retailer is of lesser importance.Keywords: analytical game-theoretic model, product competition, supply chain coordination, sustainable organic supply chain
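As a toy illustration of the decentralized sequence described above, the sketch below lets a leading retailer grid-search wholesale prices, after which each farmer picks the quantity maximizing its own profit. The linear demand form, the organic premium, the yield gap, the cost factor, and the subsidy values are all illustrative assumptions rather than the paper's analytical model.

```python
import numpy as np

A, B, CROSS = 100.0, 1.0, 0.3   # linear demand parameters (assumed)
PREMIUM = 20.0                   # organic willingness-to-pay premium (assumed)
YIELD_GAP, COST_FACTOR, SUBSIDY = 0.8, 1.4, 5.0  # organic traits (assumed)

def farmer_profit(q, w, organic):
    """Quadratic production cost captures diseconomies of scale; the organic
    farmer faces a higher unit cost and a crop-yield gap, partly offset by a
    per-unit government subsidy."""
    unit_cost = COST_FACTOR / YIELD_GAP if organic else 1.0
    subsidy = SUBSIDY if organic else 0.0
    return (w + subsidy) * q - 0.5 * unit_cost * q ** 2

def best_quantity(w, organic):
    """Follower's best response to the announced wholesale price."""
    qs = np.linspace(0.0, 120.0, 1201)
    return qs[np.argmax(farmer_profit(qs, w, organic))]

def retailer_profit(w_c, w_o):
    """Leader's profit given both followers' best responses and linear
    demand for two substitutable products."""
    q_c, q_o = best_quantity(w_c, False), best_quantity(w_o, True)
    p_c = A - B * q_c + CROSS * q_o
    p_o = A + PREMIUM - B * q_o + CROSS * q_c
    return (p_c - w_c) * q_c + (p_o - w_o) * q_o

grid = np.linspace(1.0, 60.0, 60)
w_c, w_o = max(((a, b) for a in grid for b in grid),
               key=lambda w: retailer_profit(*w))
print(f"leader's wholesale prices: conventional {w_c:.1f}, organic {w_o:.1f}")
```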
Procedia PDF Downloads 11197 Engineering Design of a Chemical Launcher: An Interdisciplinary Design Activity
Authors: Mei Xuan Tan, Gim-Yang Maggie Pee, Mei Chee Tan
Abstract:
Academic performance, in the form of scoring high grades in enrolled subjects, is not the only significant trait in achieving success. Engineering graduates with experience in working on hands-on projects in a team setting are highly sought after in industry upon graduation. Such projects are typically real-world problems that require the integration and application of knowledge and skills from several disciplines. In a traditional university setting, subjects are taught in silos, with no cross-participation from other departments or disciplines. This may lead to knowledge compartmentalization, leaving students unable to understand and connect the relevance and applicability of each subject. University instructors thus see integration across disciplines as a challenging task as they aim to better prepare students for understanding and solving problems in work or future studies. To improve students’ academic performance and to cultivate skills such as critical thinking, there has been a gradual uptake of active learning approaches in introductory science and engineering courses, where lecturing is traditionally the main mode of instruction. This study discusses the implementation and experience of a hands-on, interdisciplinary project that involves all four core subjects taught during the term at the Singapore University of Technology and Design (SUTD). At SUTD, an interdisciplinary design activity, named 2D, is integrated into the curriculum to help students reinforce the concepts learnt. A student enrolled at SUTD experiences his or her first 2D in Term 1. This activity, which spans one week (Week 10 of Term 1), highlights the application of chemistry, physics, mathematics, and the humanities, arts, and social sciences (HASS) in designing an engineering product solution. The activity theme for Term 1 2D revolved around “work and play”. Students, in teams of 4 or 5, used a scaled-down model of a chemical launcher to launch a projectile across the room. It involved a small combustion reaction between ethanol (a highly volatile fuel) and oxygen. This reaction generated a sudden, large increase in gas pressure in a closed chamber, resulting in rapid gas expansion and the ejection of the projectile from the launcher. Students discussed and explored the meaning of play in their lives in the HASS class, while the engineering aspects of a combustion system used to launch an object, relying on the underlying principles of energy conversion and projectile motion, were revisited during the chemistry and physics classes, respectively. Numerical solutions for the distance travelled by the projectile, taking drag forces into account, were developed during the mathematics classes. At the end of the activity, students developed skills in report writing, data collection, and analysis. Specific to this 2D activity, students gained an understanding and appreciation of the application and interdisciplinary nature of science, engineering, and HASS. More importantly, students were exposed to design and problem solving, where human interaction and discussion are important yet challenging in a team setting.Keywords: active learning, collaborative learning, first year undergraduate, interdisciplinary, STEAM
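A minimal version of the mathematics exercise mentioned above might integrate the projectile's flight with quadratic air drag, as sketched below; the launch speed, angle, mass, and drag parameters are placeholder values, not the actual launcher's.

```python
import math

G = 9.81                                        # gravity, m/s^2
RHO, CD, AREA, MASS = 1.2, 0.47, 1.3e-3, 0.005  # air density, sphere drag
                                                # coefficient, ~4 cm ball
                                                # cross-section (m^2), mass (kg)

def range_with_drag(v0, angle_deg, dt=1e-4):
    """Integrate planar flight with quadratic drag until the projectile
    returns to launch height; returns horizontal range in metres."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = 0.0
    k = 0.5 * RHO * CD * AREA / MASS            # drag acceleration per speed^2
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        vx -= k * speed * vx * dt               # drag opposes the velocity
        vy -= (G + k * speed * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

for angle in (30, 45, 60):
    print(f"{angle} deg -> {range_with_drag(20.0, angle):.2f} m")
```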
Procedia PDF Downloads 12296 Superhydrophobic Materials: A Promising Way to Enhance Resilience of Electric System
Authors: M. Balordi, G. Santucci de Magistris, F. Pini, P. Marcacci
Abstract:
The increase in extreme meteorological events represents one of the most important causes of damage and blackouts across the electric system. In particular, icing on ground-wires and overhead lines due to snowstorms or harsh winter conditions very often gives rise to the collapse of cables and towers, in both cold and warm climates. On the other hand, high concentrations of contaminants in the air, due to natural and/or anthropic causes, are reflected in high levels of pollutants layered on glass and ceramic insulators, causing frequent and unpredictable flashover events. Overhead line and insulator failures lead to blackouts, dangerous and expensive maintenance, and serious inefficiencies in the distribution service. Imparting superhydrophobic (SHP) properties to conductors, ground-wires, and insulators is one way to face these problems. Indeed, in some cases an SHP surface can delay the ice nucleation time and decrease the ice nucleation temperature, preventing ice formation. Besides, thanks to the low surface energy, the adhesion force between ice and a superhydrophobic material is low, and the ice can be easily detached from the surface. Moreover, it is well known that superhydrophobic surfaces can have self-cleaning properties: these hinder the deposition of pollution and decrease the probability of flashover phenomena. This contribution presents three studies on imparting superhydrophobicity to aluminum, zinc, and glass specimens, which represent the main constituent materials of conductors, ground-wires, and insulators, respectively. The route to impart superhydrophobicity to the metallic surfaces can be summarized as a three-step process: 1) sandblasting treatment, 2) chemical-hydrothermal treatment, and 3) coating deposition. The first step is required to create micro-roughness. In the chemical-hydrothermal treatment, a nano-scale metal oxide (Al or Zn) is grown, which, together with the sandblasting treatment, brings about a hierarchical micro-nano structure. Depositing an alkylated or fluorinated siloxane coating then decreases the surface energy and gives rise to superhydrophobic surfaces. To functionalize the glass, different superhydrophobic powders obtained by sol-gel synthesis were prepared; the specimens were covered with a commercial primer, and the powders were deposited on them. All the resulting metallic and glass surfaces showed noticeable superhydrophobic behavior, with very high water contact angles (>150°) and very low roll-off angles (<5°). The three optimized processes are fast, cheap, and safe, and can be easily replicated on industrial scales. The anti-icing and self-cleaning properties of the surfaces were assessed with several indoor lab tests, which evidenced remarkable anti-icing properties and self-cleaning behavior with respect to the bare materials. Finally, to evaluate the anti-snow properties of the samples, some SHP specimens were exposed to real snowfall events in the RSE outdoor test facility located in Vinadio, in the western Alps: the coated samples delayed the formation of snow sleeves and facilitated the detachment of the snow. The good results of both the indoor and outdoor tests make these materials promising for further development in large-scale applications.Keywords: superhydrophobic coatings, anti-icing, self-cleaning, anti-snow, overhead lines
Procedia PDF Downloads 18395 Optimized Processing of Neural Sensory Information with Unwanted Artifacts
Authors: John Lachapelle
Abstract:
Introduction: Neural stimulation is increasingly targeted toward the treatment of back pain, PTSD, and Parkinson’s disease, and toward sensory perception. Sensory recording during stimulation is important in order to examine the neural response to stimulation. Most neural amplifiers (headstages) focus on the noise efficiency factor (NEF). Conversely, neural headstages need to handle artifacts from several sources, including power lines, movement (EMG), and neural stimulation itself. In this work, a layered approach to artifact rejection is used to reduce corruption of the neural ENG signal by 60 dB, resulting in the recovery of sensory signals in rats and primates that would previously not have been possible. Methods: The approach combines analog techniques to reduce and handle unwanted signal amplitudes. The methods include optimized (1) sensory electrode placement, (2) amplifier configuration, and (3) artifact blanking when necessary. Together, the techniques are like concentric moats protecting a castle; only the wanted neural signal can penetrate. The headstage operates in two conditions: unwanted artifact < 50 mV, linear operation; and artifact > 50 mV, fast-settle gain-reduction signal limiting (covered in more detail in a separate paper). Unwanted signals at the headstage input: Consider: (a) EMG signals are by nature < 10 mV. (b) 60 Hz power line signals may be > 50 mV under poor electrode cable conditions; with careful routing, much of this signal is common to both the reference and active electrodes and is rejected in the differential amplifier, with < 50 mV remaining. (c) A stimulation signal (unwanted by the neural recorder) is attenuated between the stimulation and sensory electrodes. The voltage seen at the sensory electrode can be modeled as Φ_m = I_o / (4πσr). For a 1 mA stimulation signal, with 1 cm spacing between electrodes, the signal is < 20 mV at the headstage. Headstage ASIC design: The front-end ASIC is designed to produce < 1% THD at 50 mV input, 50 times higher than typical headstage ASICs, with no increase in the noise floor. This requires a careful balance of the amplifier stages in the headstage ASIC, as well as consideration of the electrodes’ effect on noise. The ASIC is designed to allow extremely small signal extraction on low-impedance (< 10 kohm) electrodes, with the headstage ASIC noise floor configurable to < 700 nV/rt-Hz. Smaller high-impedance electrodes (> 100 kohm) are typically located closer to neural sources and transduce higher-amplitude signals (> 10 uV); the ASIC’s low-power mode conserves power with 2 uV/rt-Hz noise. Findings: The enhanced neural processing ASIC has been compared with a commercial neural recording amplifier IC. Chronically implanted primates at MGH demonstrated commercial neural amplifier saturation as a result of large environmental artifacts. The enhanced artifact-suppression headstage ASIC, in the same setup, was able to recover and process the wanted neural signal separately from the suppressed unwanted artifacts. Separately, the enhanced artifact-suppression headstage ASIC was able to separate sensory neural signals from unwanted artifacts in mouse-implanted peripheral intrafascicular electrodes. Conclusion: Optimized headstage ASICs allow the observation of neural signals in the presence of the large artifacts that will be present in real-life implanted applications, and this work is targeted toward human implantation in the DARPA HAPTIX program.Keywords: ASIC, biosensors, biomedical signal processing, biomedical sensors
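As a quick numerical check of the point-source estimate Φ_m = I_o / (4πσr) quoted above: assuming a tissue conductivity of roughly 0.5 S/m (our assumption; the abstract does not state σ), a 1 mA stimulus seen 1 cm away computes to about 16 mV, consistent with the stated < 20 mV.

```python
import math

def point_source_potential(i_amp, sigma, r_m):
    """Potential of a monopolar current source in a homogeneous volume
    conductor: phi = I / (4 * pi * sigma * r)."""
    return i_amp / (4.0 * math.pi * sigma * r_m)

# 1 mA stimulus, assumed conductivity 0.5 S/m, 1 cm electrode spacing
phi = point_source_potential(1e-3, 0.5, 0.01)
print(f"{phi * 1e3:.1f} mV at the sensory electrode")  # ~15.9 mV < 20 mV
```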
Procedia PDF Downloads 33094 Finite Element Analysis of Mini-Plate Stabilization of Mandible Fracture
Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski
Abstract:
The aim of the presented investigation is to identify the possible mechanical issues of the mini-plate connections used to treat mandible fractures and to examine the impact of different factors on the stresses and displacements within the bone-stabilizer system. The mini-plate osteosynthesis technique is a common type of internal fixation that uses metal plates connected to the fractured bone parts by a set of screws. Two types of plate application methodology used by maxillofacial surgeons were investigated in this work; the patterns differ in the location and number of plates. The bone geometry was modeled on the basis of computed tomography scans of a hospitalized patient taken just after mini-plate application. The solid volume geometry, consisting of cortical and cancellous bone, was created from the obtained point cloud. The temporomandibular joint and muscle system were simulated to imitate the behavior of the real masticatory system. The finite element mesh and analysis were performed with ANSYS software. To simulate realistic connection behavior, nonlinear contact conditions were used between the connecting elements and the bones. The influence of initial compression of the connected bone parts, or of a gap between them, was analyzed. Nonlinear material properties of the bone tissues and an elastic-plastic model of the titanium alloy were used. Three loading cases were investigated, assuming a force of magnitude 100 N acting on the left molars, the right molars, or the incisors. The stress distribution within the connecting plate shows that compression of the bone parts in the connection results in high stress concentrations in the plate and the screws; however, the maximum stress levels do not exceed the yield limit of the titanium. There are no significant differences between the negative-offset (gap) and no-offset conditions. The location of the external force influences the magnitude of the stresses around both the plate and the bone parts. The two-plate system generally gives lower von Mises stress under the same loading than the one-plate approach. The von Mises stress distribution within the cortical bone shows a reduction of the high-stress field for the cases without compression (neutral initial contact). With initial prestressing, there is a visible, significant stress increase around the fixing holes of the bottom mini-plate due to the assembly stress. This local stress concentration may be the reason for bone destruction in those regions. The performed calculations prove that the bone-mini-plate system is able to properly stabilize the fractured mandible bone. There is a strong dependency between the mini-plate location and the stress distribution within the stabilizer structure and the surrounding bone tissue. The results (stresses within the bone tissues and the devices, relative displacements of the bone parts at the interface) corresponding to the different connection models provide a basis for the mechanical optimization of mini-plate connections. The results of the numerical simulations were compared with clinical observations. They provide information helpful for a better understanding of load transfer in the mandible with the stabilizer and for improving stabilization techniques.Keywords: finite element modeling, mandible fracture, mini-plate connection, osteosynthesis
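For reference, the von Mises measure used throughout the comparison above is computed from the six Cauchy stress components a finite element solver reports at each integration point, as in the short sketch below; the sample values are arbitrary.

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Equivalent (von Mises) stress from the six Cauchy stress components."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# e.g. a stress state near a screw hole (MPa, sample values)
print(round(von_mises(120.0, 35.0, 10.0, 40.0, 5.0, 8.0), 1), "MPa")
```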
Procedia PDF Downloads 24693 The Regulation of the Cancer Epigenetic Landscape Lies in the Realm of the Long Non-coding RNAs
Authors: Ricardo Alberto Chiong Zevallos, Eduardo Moraes Rego Reis
Abstract:
Pancreatic adenocarcinoma (PDAC) patients have a less than 10% five-year survival rate, and PDAC has no well-defined diagnostic and prognostic biomarkers. Gemcitabine is the first-line drug in PDAC and several other cancers. Long non-coding RNAs (lncRNAs) contribute to tumorigenesis and are potential biomarkers for PDAC. Although lncRNAs are not translated into proteins, they have important functions: they can decoy or recruit proteins of the epigenetic machinery, act as microRNA sponges, participate in protein translocation between cellular compartments, and even promote chemoresistance. The chromatin remodeling enzyme EZH2 is a histone methyltransferase that catalyzes the methylation of histone 3 at lysine 27, silencing local expression. EZH2 is ambivalent: it can also activate gene expression independently of its histone methyltransferase activity. EZH2 is overexpressed in several cancers and interacts with lncRNAs, being recruited to specific loci. EZH2 can be recruited to activate an oncogene or to silence a tumor suppressor. The misregulation of lncRNAs in cancer can therefore result in differential recruitment of EZH2 and in a distinct epigenetic landscape, promoting chemoresistance. The relevance of EZH2-lncRNA interactions to chemoresistant PDAC was assessed by real-time quantitative PCR (RT-qPCR) and RNA immunoprecipitation (RIP) experiments with naïve and gemcitabine-resistant PDAC cells. The expression of several lncRNAs and EZH2 target genes was evaluated, contrasting naïve and resistant cells. Candidate genes were selected by bioinformatic analysis and literature curation. Indeed, the resistant cell line showed higher expression of chemoresistance-associated lncRNAs and protein-coding genes. RIP detected lncRNAs interacting with EZH2 at varying intensity levels in the cell lines. During RIP, the nuclear fraction of the cells was incubated with an antibody against EZH2 and with magnetic beads. The RNA precipitated with the bead-antibody-EZH2 complex was isolated and reverse transcribed. The presence of candidate lncRNAs was detected by RT-qPCR, and the enrichment was calculated relative to INPUT (a total lysate control sample collected before RIP), as sketched below. The enrichment levels varied across the several lncRNAs and cell lines. The EZH2-lncRNA interaction might be responsible for the regulation of chemoresistance-associated genes in multiple cancers. The relevance of the lncRNA-EZH2 interaction to PDAC was further assessed by siRNA knockdown of a lncRNA, followed by analysis of EZH2 target expression by RT-qPCR. Chromatin immunoprecipitation (ChIP) of EZH2 and H3K27me3, followed by RT-qPCR with primers for EZH2 targets, also assesses the specificity of EZH2 recruitment by the lncRNA. This is the first report of the interaction of EZH2 with the lncRNAs HOTTIP and PVT1 in chemoresistant PDAC. HOTTIP and PVT1 have been described as promoting chemoresistance in several cancers, but the role of EZH2 has not been clarified. For the first time, the lncRNA LINC01133 was detected in a chemoresistant cancer. The interaction of EZH2 with LINC02577, LINC00920, LINC00941, and LINC01559 has never been reported in any context. The novel lncRNA-EZH2 interactions regulate chemoresistance-associated genes in PDAC and might be relevant to other cancers.
Therapies targeting EZH2 alone have not been successful, and a combinatorial approach that also targets the lncRNAs interacting with it might be key to overcoming chemoresistance in several cancers.Keywords: epigenetics, chemoresistance, long non-coding RNAs, pancreatic cancer, histone modification
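A minimal sketch of the enrichment-relative-to-INPUT calculation mentioned above, assuming the common percent-of-input (ΔCt) method; the saved input fraction and Ct values are hypothetical examples rather than the study's data.

```python
import math

def percent_input(ct_input, ct_ip, input_fraction=0.10):
    """Adjust the input Ct for the fraction of lysate saved, then convert
    the Ct difference into a linear percentage (qPCR doubles per cycle)."""
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# e.g. a lncRNA detected several cycles later in the EZH2 IP than in input
print(f"{percent_input(ct_input=22.0, ct_ip=28.3):.2f} % of input")
```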
Procedia PDF Downloads 9692 Intercultural Initiatives and Canadian Bilingualism
Authors: Muna Shafiq
Abstract:
Growth in international immigration is a reflection of increased migration patterns in Canada and in other parts of the world. Canada continues to promote itself as a bilingual country, yet its French-English bilingual population numbers do not reflect this platform. Each province’s integration policies focus only on second-language learning of either English or French. Moreover, since English Canadians outnumber French Canadians, maintaining, much less increasing, English-French bilingualism appears unrealistic. One solution to increasing Canadian bilingualism requires creating intercultural communication initiatives between youth in Quebec and the rest of Canada. Specifically, the focus is on active, experiential learning, where intercultural competencies develop outside traditional classroom settings. The target groups are Generation Y Millennials and Generation Z Linksters, the next generations in the career and parenthood lines. Today, Canada’s education system, like many others, must continually renegotiate the lines between the programs it offers its immigrant and native communities. While some purists or right-wing nationalists would disagree, the survival of bilingualism in Canada has little to do with reducing immigration. Children and youth immigrants play a valuable role in increasing Canada’s French- and English-speaking communities. For instance, a focus on immersion over core French education programs for immigrant children and youth would not only increase bilingualism rates; it would also develop meaningful intercultural attachments between Canadians. Moreover, a vigilant increase in funding for French immersion programs is critical, as are new initiatives that focus on experiential language learning for students in French and English language programs. A favorable argument supports the premise that, other than French-speaking students in Québec and elsewhere in Canada, second- and third-generation immigrant students are excellent ambassadors for promoting bilingualism in Canada. Most already speak another language at home and understand the value of speaking more than one language in their adopted communities. Their dialogue and participation in experiential language exchange workshops are necessary. If the proposed exchanges take place inter-provincially, the momentum to increase collective regional voices grows. This regional collectivity can unite Canadians differently than nation-targeted initiatives. The results of an experiential youth exchange organized in 2017 between students at the crossroads of Generation Y and Generation Z in Vancouver and Quebec City, respectively, offer a promising starting point for assessing the strength of bringing together different regional voices to promote bilingualism. Code-switching between the standard, international French that Vancouver students learn in the classroom and the more regional forms of Quebec French spoken locally created connectivity between the students. The exchange was equally rewarding for both groups. Increasing their appreciation for each other’s regional differences allowed them to contribute actively to their own social and emotional development. Within a sociolinguistic frame, this proposed model of experiential learning does not focus on hands-on work experience. However, the benefits of such exchanges are as valuable as the work experience initiatives developed in experiential education.
Students who actively code-switch between French and English in real, not simulated, contexts appreciate bilingualism more meaningfully and experience its value in concrete terms.Keywords: experiential learning, intercultural communication, social and emotional learning, sociolinguistic code-switching
Procedia PDF Downloads 138