Search results for: project implementation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9028


478 Viscoelastic Behavior of Human Bone Tissue under Nanoindentation Tests

Authors: Anna Makuch, Grzegorz Kokot, Konstanty Skalski, Jakub Banczorowski

Abstract:

Cancellous bone is a porous composite with a hierarchical structure and anisotropic properties. Biological tissue is considered to be a viscoelastic material, but many studies based on the nanoindentation method have focused on elasticity and microhardness. However, the response of many organic materials depends not only on the load magnitude, but also on its duration and time course. The Depth Sensing Indentation (DSI) technique has been used to examine creep in polymers, metals and composites. In indentation tests on biological samples, mechanical properties are most frequently determined for animal tissues (ox, monkey, pig, rat, mouse, bovine). However, reports on the viscoelastic properties of bone at the microstructural level are rare. Various rheological models have been used to describe the viscoelastic behaviour of bone identified in the indentation process (e.g. the Burgers model, the linear model, the two-dashpot Kelvin model, the Maxwell-Voigt model). The goal of the study was to determine the influence of the creep effect on the mechanical properties of human cancellous bone in indentation tests. A further aim of this research was the assessment of the material properties of bone structures, with the energy aspects of the curve (penetrator load-depth) obtained in the loading/unloading cycle in mind. The effect of different holding times on the results for trabecular bone was considered. As a result, indentation creep (CIT), hardness (HM, HIT, HV) and elasticity are obtained. Human trabecular bone samples (n=21; mean age 63±15 yrs) from femoral heads replaced during hip alloplasty were removed and drained of alcohol for 1 h before the experiment. The indentation process was conducted using a CSM Microhardness Tester equipped with a Vickers indenter. Each sample was indented 35 times (7 times for each of 5 hold times: t1=0.1 s, t2=1 s, t3=10 s, t4=100 s and t5=1000 s). The indenter was advanced at a rate of 10 mN/s to 500 mN.
The Oliver-Pharr method was used in the calculations. An increase in hold time is associated with a decrease in hardness (HIT(t1)=418±34 MPa, HIT(t2)=390±50 MPa, HIT(t3)=313±54 MPa, HIT(t4)=305±54 MPa, HIT(t5)=276±90 MPa) and elasticity (EIT(t1)=7.7±1.2 GPa, EIT(t2)=8.0±1.5 GPa, EIT(t3)=7.0±0.9 GPa, EIT(t4)=7.2±0.9 GPa, EIT(t5)=6.2±1.8 GPa), as well as with an increase in the elastic (Welastic(t1)=4.11·10⁻⁷±4.2·10⁻⁸ Nm, Welastic(t2)=4.12·10⁻⁷±6.4·10⁻⁸ Nm, Welastic(t3)=4.71·10⁻⁷±6.0·10⁻⁹ Nm, Welastic(t4)=4.33·10⁻⁷±5.5·10⁻⁹ Nm, Welastic(t5)=5.11·10⁻⁷±7.4·10⁻⁸ Nm) and inelastic (Winelastic(t1)=1.05·10⁻⁶±1.2·10⁻⁷ Nm, Winelastic(t2)=1.07·10⁻⁶±7.6·10⁻⁸ Nm, Winelastic(t3)=1.26·10⁻⁶±1.9·10⁻⁷ Nm, Winelastic(t4)=1.56·10⁻⁶±1.9·10⁻⁷ Nm, Winelastic(t5)=1.67·10⁻⁶±2.6·10⁻⁷ Nm) reaction of the material. The indentation creep increased logarithmically (R²=0.901) with increasing hold time: CIT(t1)=0.08±0.01%, CIT(t2)=0.7±0.1%, CIT(t3)=3.7±0.3%, CIT(t4)=12.2±1.5%, CIT(t5)=13.5±3.8%. A pronounced impact of the creep effect on the mechanical properties of human cancellous bone was observed in the experimental studies. While the elastic-inelastic description, and thus the Oliver-Pharr method of data analysis, may apply in a few limited cases, most biological tissues do not exhibit elastic-inelastic indentation responses. The viscoelastic properties of tissues may play a significant role in remodelling. This aspect is still under analysis and numerical simulation. Acknowledgements: The presented results are part of a research project funded by the National Science Centre (NCN), Poland, no. 2014/15/B/ST7/03244.
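The two key quantities reported above can be sketched with their standard definitions: indentation creep CIT as the relative depth increase during the hold at constant load, and Oliver-Pharr indentation hardness HIT as the maximum load over the projected contact area. The depth and area values below are hypothetical illustrations, not the authors' raw data:

```python
def indentation_creep(h1, h2):
    """CIT (%): relative indentation-depth increase (h1 -> h2) during the hold period."""
    return (h2 - h1) / h1 * 100.0

def oliver_pharr_hardness(p_max, a_p):
    """Oliver-Pharr indentation hardness HIT = Pmax / projected contact area (Pa)."""
    return p_max / a_p

# Illustrative numbers: 500 mN peak load (as in the study), invented depths/area
cit = indentation_creep(h1=5.0, h2=5.6)          # depths (um) at start/end of hold
hit = oliver_pharr_hardness(500e-3, 1.2e-9)      # N, m^2 -> Pa (~417 MPa)
```

With these placeholder values, the hardness lands near the 418 MPa reported for the shortest hold time, purely by construction of the example.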

Keywords: bone, creep, indentation, mechanical properties

Procedia PDF Downloads 172
477 Technology Assessment of the Collection of Cast Seaweed and Use as Feedstock for Biogas Production: The Case of Solrød, Denmark

Authors: Rikke Lybæk, Tyge Kjær

Abstract:

The Baltic Sea is suffering from nitrogen and phosphorus pollution, which causes eutrophication of the maritime environment and hence threatens the biodiversity of the Baltic Sea area. The intensified quantity of nutrients in the water has created challenges with the growth of seaweed being discarded on beaches around the sea. The cast seaweed has led to odor problems hampering the use of beach areas around the Bay of Køge in Denmark. This is the case in, e.g., Solrød Municipality, where recreational activities have been disrupted when cast seaweed piles up on the beach. Initiatives have, however, been introduced within the municipality to remove the cast seaweed from the beach and utilize it for renewable energy production at the nearby Solrød Biogas Plant, where it is co-digested with animal manure for power and heat production. This paper investigates which types of technology applications have been applied in the effort to optimize the collection of cast seaweed, and further reveals how the seaweed has been pre-treated at the biogas plant so that it can be utilized for energy production most efficiently, including the challenges connected with its sand content. The heavy metal content of the seaweed and how it is managed will also be addressed, which is vital as the digestate is utilized as soil fertilizer on nearby farms. Finally, the paper will outline the energy production scheme connected to the use of seaweed as feedstock for biogas production, as well as the amount of nitrogen-rich fertilizer produced. The theoretical approach adopted in the paper relies on Circular Bio-Economy thinking, where biological materials are cascaded and re-circulated to increase and extend their value and usability. The data for this research were collected as part of the EU Interreg project “Cluster On Anaerobic digestion, environmental Services, and nuTrients removAL” (COASTAL Biogas), 2014-2020.
Data gathering consists of, e.g., interviews with relevant stakeholders connected to seaweed collection and the operation of the biogas plant in Solrød Municipality. It further entails studies of progress and evaluation reports from the municipality, analysis of seaweed digestion results from scholars connected to the research, as well as studies of scientific literature to supplement the above. Besides this, observations and photo documentation have been applied in the field. This paper concludes, among other things, that the seaweed harvester technology currently adopted is functional in the maritime environment close to the beachfront but inadequate for collecting seaweed directly on the beach. New technology hence needs to be developed to increase the efficiency of seaweed collection. It is further concluded that the amount of sand transported to Solrød Biogas Plant with the seaweed continues to pose challenges. The seaweed is pre-treated for sand in a receiving tank with a strong stirrer, which washes off the sand; the sand settles at the bottom of the tank, where it is collected. The seaweed is then chopped by a macerator and mixed with the other feedstock. The wear on the receiving tank stirrer and the chopper is, however, significant, and new methods should be adopted.
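The energy production scheme mentioned above can be illustrated with a back-of-envelope calculation. The specific methane yield, heating value and electrical efficiency below are generic placeholder assumptions, not figures from the Solrød plant:

```python
def methane_yield_m3(volatile_solids_t, specific_yield_m3_per_t=200.0):
    """Methane volume from a feedstock batch, given an assumed specific yield
    per tonne of volatile solids (placeholder value, not plant data)."""
    return volatile_solids_t * specific_yield_m3_per_t

def electricity_kwh(methane_m3, lhv_kwh_per_m3=9.97, eta_el=0.40):
    """Electric output of a CHP engine at an assumed electrical efficiency."""
    return methane_m3 * lhv_kwh_per_m3 * eta_el

ch4 = methane_yield_m3(100.0)   # 100 t volatile solids (hypothetical batch)
kwh = electricity_kwh(ch4)      # electricity from that methane
```

Any real assessment would replace all three parameters with measured values for co-digested seaweed and manure.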

Keywords: biogas, circular bio-economy, Denmark, maritime technology, cast seaweed, Solrød Municipality

Procedia PDF Downloads 293
476 Methodological Approach for the Prioritization of Different Micro-Contaminants as Potential River Basin Specific Pollutants in the Upper Tisza River Watershed

Authors: Mihail Simion Beldean-Galea, Virginia Coman, Florina Copaciu, Mihaela Vlassa, Radu Mihaiescu, Adina Croitoru, Viorel Arghius, Modest Gertsiuk, Mikola Gertsiuk

Abstract:

Taking into consideration the huge number of chemicals released into environmental compartments, a proper environmental risk assessment is difficult due to gaps in legislation and the inadequate toxicological assessment of chemical compounds. In Romania, as well as in many other European countries, the chemical status of a water body is characterized according to the Water Framework Directive (WFD) and the substances listed in its Annex X. This Annex includes 45 substances from different classes of organic compounds and heavy metals for which AA-EQS and MAC-EQS values have been established. For other compounds, which are not included in Annex X, different methodologies to prioritize chemicals for risk assessment and monitoring have been proposed. These methodologies take into account the Predicted No-Effect Concentrations (PNECs) of different classes of chemical compounds, available from existing risk assessments or from read-across models for acute toxicity to standard test organisms such as Daphnia magna and Selenastrum capricornutum. Our work presents the monitoring results for 30 priority substances, including polyaromatic hydrocarbons, pesticides, halogenated compounds, plasticizers and heavy metals, and for 34 other substances from different classes of pesticides and pharmaceuticals which are not included on the list of priority substances, obtained in the Upper Tisza River Watershed in Romania and Ukraine. The monitoring data were used to establish the list of the most relevant pollutants in the studied area and to identify potential river basin specific pollutants. For this purpose, two indicators, the Frequency of exceedance and the Extent of exceedance of the Predicted No-Effect Concentration (PNEC), were evaluated.
These two indicators are based on the maximum environmental concentrations (MECs) of priority substances; for the other pollutants, statistically based averages of the measured concentrations are compared to the lowest PNEC thresholds. From the obtained results it can be concluded that polyaromatic hydrocarbons such as Fluoranthene, Benzo[a]pyrene, Benzo[b]fluoranthene, Benzo[k]fluoranthene, Benzo[g,h,i]perylene and Indeno[1,2,3-cd]pyrene, and heavy metals such as Cadmium, Lead and Nickel, can be considered river basin specific pollutants, their concentrations exceeding the Annual Average EQS. Other compounds such as estrone, estriol, 17β-estradiol and naproxen, or some antibiotics (Penicillin G, Tetracycline or Ceftazidime), should be taken into account for long-term monitoring, as in some cases their concentrations exceed the PNEC. Acknowledgements: This work is performed in the frame of the NATO SfP Programme, Project no. 984440.
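The two prioritization indicators lend themselves to a minimal sketch, assuming the common definitions (exceedance frequency as the share of samples above the PNEC, exceedance extent as MEC/PNEC). The concentration series and PNEC below are invented for illustration:

```python
def frequency_of_exceedance(measured, pnec):
    """Share of monitoring samples whose concentration exceeds the PNEC."""
    return sum(c > pnec for c in measured) / len(measured)

def extent_of_exceedance(measured, pnec):
    """Ratio of the maximum environmental concentration (MEC) to the PNEC."""
    return max(measured) / pnec

# Hypothetical concentration series (ug/L) against an assumed PNEC of 0.1 ug/L
samples = [0.02, 0.15, 0.08, 0.30, 0.01]
freq = frequency_of_exceedance(samples, pnec=0.1)    # 2 of 5 samples exceed
extent = extent_of_exceedance(samples, pnec=0.1)     # MEC/PNEC = 3.0
```

A substance scoring high on both indicators would be flagged as a candidate river basin specific pollutant.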

Keywords: prioritization, river basin specific pollutants, Tisza River, water framework directive

Procedia PDF Downloads 305
475 The Effect of Extensive Mosquito Migration on Dengue Control as Revealed by Phylogeny of Dengue Vector Aedes aegypti

Authors: M. D. Nirmani, K. L. N. Perera, G. H. Galhena

Abstract:

Dengue has become one of the most important arboviral diseases in all tropical and subtropical regions of the world. Aedes aegypti, the principal vector of the virus, varies in both epidemiological and behavioral characteristics, which can be finely measured through DNA sequence comparison at the population level. Such knowledge of population differences can assist in the implementation of effective vector control strategies, allowing estimates of gene flow and adaptive genomic changes, which are important predictors of the spread of Wolbachia infection or insecticide resistance. As such, this study was undertaken to investigate the phylogenetic relationships of Ae. aegypti from Galle and Colombo, Sri Lanka, based on a ribosomal protein region which spans two exons, in order to understand the geographical distribution of genetically distinct mosquito clades and its impact on mosquito control measures. A 320 bp DNA region spanning positions 681-930 bp, corresponding to the ribosomal protein, was sequenced in 62 Ae. aegypti larvae collected from Galle (N=30) and Colombo (N=32), Sri Lanka. The sequences were aligned using ClustalW and the haplotypes were determined with DnaSP 5.10. Phylogenetic relationships among haplotypes were constructed using the maximum likelihood method under the Tamura 3-parameter model in MEGA 7.0.14, including three previously reported sequences of Australian (N=2) and Brazilian (N=1) Ae. aegypti. Bootstrap support was calculated using 1000 replicates, and the tree was rooted using Aedes notoscriptus (GenBank accession no. KJ194101). Among all sequences, nineteen different haplotypes were found, of which five were shared between 80% of mosquitoes in the two populations. Seven haplotypes were unique to each population. The phylogenetic tree revealed two basal clades and a single derived clade. All observed haplotypes of the two Ae.
aegypti populations were distributed across all three clades, indicating a lack of genetic differentiation between populations. The Brazilian Ae. aegypti haplotype and one of the Australian haplotypes were grouped together with the Sri Lankan basal haplotype in the same basal clade, whereas the other Australian haplotype was found in the derived clade. The phylogram showed that the Galle and Colombo Ae. aegypti populations are highly related to each other despite the large geographic distance (129 km), indicating substantial genetic similarity between them. This has probably arisen from passive migration assisted by human travel and trade through both land and water, as the two areas are bordered by the sea. In addition, the studied Sri Lankan mosquito populations were closely related to the Australian and Brazilian samples. This might have been caused by the shipping industry between the three countries, as all of them are fully or partially enclosed by sea. For example, illegal fishing boats migrating to Australia by sea are perhaps a good means of transporting all life stages of mosquitoes from Sri Lanka. These findings indicate that extensive mosquito migration occurs between populations not only within the country, but also among other countries in the world, which might be a main barrier to successful vector control measures.
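The haplotype bookkeeping described above (performed by the authors in DnaSP) can be sketched conceptually: identical aligned sequences collapse into one haplotype, and shared haplotypes are the intersection of the two populations' haplotype sets. The toy sequences below are invented and far shorter than the 320 bp region actually analysed:

```python
from collections import Counter

def haplotype_counts(aligned_seqs):
    """Collapse identical aligned sequences into haplotypes with carrier counts."""
    return Counter(aligned_seqs)

def shared_haplotypes(pop_a, pop_b):
    """Haplotypes present in both populations."""
    return set(pop_a) & set(pop_b)

galle   = ["ACGT", "ACGT", "ACGA", "TCGT"]   # toy 4 bp 'sequences'
colombo = ["ACGT", "ACGA", "ACGA", "GCGT"]

n_haplotypes = len(haplotype_counts(galle + colombo))   # distinct haplotypes overall
shared = shared_haplotypes(haplotype_counts(galle), haplotype_counts(colombo))
```

A large shared set relative to the population-private haplotypes is what suggests ongoing gene flow between sites.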

Keywords: Aedes aegypti, dengue control, extensive mosquito migration, haplotypes, phylogeny, ribosomal protein

Procedia PDF Downloads 190
474 Parameter Selection and Monitoring for Water-Powered Percussive Drilling in Green-Fields Mineral Exploration

Authors: S. J. Addinell, T. Richard, B. Evans

Abstract:

The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled tubing based greenfields mineral exploration drilling system utilising downhole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. The system has shown superior rates of penetration in water-rich hard rock formations at depths exceeding 500 meters. Several key challenges exist regarding the deployment and use of these bottom hole assemblies for mineral exploration, and this paper discusses the main technical ones. It presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process is presented and shows a strong power law relationship for particle size distributions. Several percussive drilling parameters such as RPM, applied fluid pressure and weight on bit have been shown to influence the particle size distributions of the cuttings generated. This has a direct influence on other drilling parameters such as flow loop performance, cuttings dewatering, and solids control. Real-time, accurate knowledge of percussive system operating parameters will assist the driller in maximising the efficiency of the drilling process. The applied fluid flow, fluid pressure, and rock properties are known to influence the natural oscillating frequency of the percussive hammer, but this paper also shows that drill bit design, drill bit wear and the applied weight on bit can influence the oscillation frequency. Due to the changing drilling conditions and therefore changing operating parameters, real-time understanding of the natural operating frequency is paramount to achieving system optimisation.
Several techniques for determining the oscillating frequency have been investigated and are presented. With a conventional top drive drilling rig, spectral analysis of the applied fluid pressure, hydraulic feed force pressure, hold back pressure and drill string vibrations has shown the presence of the operating frequency of the bottom hole tooling. With a coiled tubing drilling rig, however, which uses a positive displacement downhole motor to provide drill bit rotation, these signals are not available for interrogation at the surface, and another method must therefore be considered. The investigation and analysis of ground vibrations using geophone sensors, similar to seismic-while-drilling techniques, have indicated the presence of the natural oscillating frequency of the percussive hammer. This method is shown to provide a robust technique for determining the downhole percussive oscillation frequency when used with a coiled tubing drill rig.
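The spectral-analysis step can be sketched with a naive discrete Fourier transform over a synthetic geophone trace. A pure 30 Hz tone stands in for the hammer signature here; real traces would be noisy and would normally be processed with an FFT library rather than this O(n²) loop:

```python
import cmath
import math

def dominant_frequency(samples, fs):
    """Naive DFT: return the frequency (Hz) of the largest non-DC spectral peak."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k * fs / n

# Synthetic geophone trace: a 30 Hz 'hammer' tone sampled at 1 kHz for 0.5 s
fs = 1000
trace = [math.sin(2 * math.pi * 30 * t / fs) for t in range(500)]
f = dominant_frequency(trace, fs)
```

The recovered peak frequency is the quantity the driller would track in real time as drilling conditions change.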

Keywords: cuttings characterization, drilling optimization, oscillation frequency, percussive drilling, spectral analysis

Procedia PDF Downloads 230
473 Legislating for Public Participation and Environmental Justice: Whether It Solves or Prevents Disputes

Authors: Deborah A. Hollingworth

Abstract:

The key tenets associated with ‘environmental justice’ were first articulated in a global context in Principle 10 of the United Nations Declaration on Environment and Development at Rio de Janeiro in 1992 (the Rio Declaration). Its elements can be conflated to require: public participation in decision-making; the provision of relevant information about environmental hazards to those affected; access to judicial and administrative proceedings; and the opportunity for redress where a remedy is required. This paper examines the legislative and regulatory arrangements in place for the implementation of these elements in a number of industrialised democracies, including Australia. Most have, over time, made regulatory provision for these elements, even if they are not directly attributed to Principle 10 or the notion of environmental justice. The paper proposes that, of these elements, the most critical to the achievement of good environmental governance is a legislated recognition and role of public participation. However, the paper considers that, notwithstanding sound legislative and regulatory practices, environmental regulators frequently struggle to achieve effective engagement with the public where there is a complex decision-making scenario or long-standing enmity between a community and industry. This study considers the dilemma regulators confront in giving meaningful effect to the principles enshrined in Principle 10: even adherence to its legislative expression does not prevent adverse outcomes. In particular, it considers as a case study a prominent environmental incident in Australia in 2014, in which an open-cut coalmine located in the regional township of Morwell caught fire during bushfire season. The fire, which took 45 days to extinguish, had a significant and adverse impact on the community in question, and compounded a complex, and sometimes antagonistic, history between the mine and the township.
The case study exemplifies the complex factors that will often be present between industry, the public and regulatory bodies, and which confound the concept of environmental justice and the elements enshrined in Principle 10 of the Rio Declaration. The study proposes that such tensions and complexities will commonly be the reality for communities and regulators. However, to give practical effect to the outcomes contemplated by Principle 10, the paper considers that regulators may construe public intervention more broadly, as including early interventions and formal opportunities for 'conferencing' between industry, community and regulators. These initiatives help to develop a shared understanding and identification of issues. It is proposed that, although important, options for 'alternative dispute resolution' are not sufficiently preventative, as they come into play only once a dispute has arisen. Similarly, 'restorative justice' programs, while valuable once an incident or adverse environmental outcome has occurred, operate after the event and are therefore necessarily limited. The paper considers examples showing that public participation at the outset, at the time of a proposal and before issues arise or eventuate, is demonstrably the most effective way of building commonality and an agreed methodology for resolving issues once they occur.

Keywords: environmental justice, alternative dispute resolution, domestic environmental law, international environmental law

Procedia PDF Downloads 309
472 Lake of Neuchatel: Effect of Increasing Storm Events on Littoral Transport and Coastal Structures

Authors: Charlotte Dreger, Erik Bollaert

Abstract:

This paper presents two environmentally-friendly coastal structures realized on the Lake of Neuchâtel. Both structures reflect current environmental issues of concern on the lake and have been strongly affected by extreme meteorological conditions between their period of design and their actual operational period. The Lake of Neuchâtel is one of the biggest Swiss lakes, measuring around 38 km in length and 8.2 km in width, with a maximum water depth of 152 m. Its particular topographical alignment, situated between the Swiss Plateau and the Jura mountains, combines strong winds and large fetch values, resulting in significant wave heights during storm events at both the north-east and south-west lake extremities. In addition, due to flooding concerns, lake levels were historically lowered by several meters during the Jura correction works in the 19th and 20th centuries. Hence, during storm events, continuous erosion of the vulnerable molasse shorelines and sand banks generates frequent and abundant littoral transport from the center of the lake to its extremities. This phenomenon not only causes disturbances of the ecosystem, but also generates numerous problems at natural or man-made infrastructures located along the shorelines, such as reed plants, harbor entrances, canals, etc. A first example is provided at the southwestern extremity, near the city of Yverdon, where an ensemble of 11 small islands, the Iles des Vernes, has been artificially created with a view to enhancing biological conditions and food availability for bird species during their migration, replacing at the same time two larger islands that were affected by a lack of morphodynamics and general vegetalization of their surfaces. The article will present the concept and dimensioning of these islands based on 2D numerical modelling, as well as the realization and follow-up campaigns.
In particular, the influence of several major storm events that occurred immediately after the works will be pointed out. Second, a sediment retention dike is discussed at the northeastern extremity, at the entrance of the Canal de la Broye into the lake. This canal is heavily used for navigation and suffers from frequent and significant sedimentation at its outlet. The new coastal structure has been designed to minimize sediment deposits around the outlet of the canal into the lake by retaining the littoral transport during storm events. The article will describe the basic assumptions used to design the dike, as well as the construction works and follow-up campaigns. In particular, the large influence of changing meteorological conditions on the littoral transport of the Lake of Neuchâtel since the project design ten years ago will be pointed out. Not only are the intensity and frequency of storm events increasing, but the main wind directions are also shifting, affecting the efficiency of the coastal structure in retaining the sediments.
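The link between fetch and storm wave height mentioned above can be illustrated with a classic deep-water fetch-limited growth relation (an SMB/JONSWAP-type approximation; this is not the 2D numerical model the authors used, and the wind speed is a hypothetical value):

```python
import math

def fetch_limited_hs(wind_speed_ms, fetch_m, g=9.81):
    """Simplified deep-water fetch-limited significant wave height (m):
    Hs = 0.0016 * sqrt(g*F/U^2) * U^2 / g."""
    u2 = wind_speed_ms ** 2
    return 0.0016 * math.sqrt(g * fetch_m / u2) * u2 / g

# Full lake length (~38 km) as fetch, with a hypothetical 20 m/s storm wind
hs = fetch_limited_hs(20.0, 38_000.0)   # roughly 2 m significant wave height
```

Even this crude estimate shows why the lake's elongated geometry produces substantial waves at its extremities during along-axis storms.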

Keywords: meteorological evolution, sediment transport, Lake of Neuchatel, numerical modelling, environmental measures

Procedia PDF Downloads 85
471 Development of a Social Assistive Robot for Elderly Care

Authors: Edwin Foo, Woei Wen Lui, Meijun Zhao, Shigeru Kuchii, Chin Sai Wong, Chung Sern Goh, Yi Hao He

Abstract:

This paper presents the development of an elderly care and assistive social robot. We named this robot JOS; he is restricted to table-top operation. JOS is designed to have a maximum volume of 3600 cm3 with his base restricted to 250 mm, and his mission is to provide companionship and to assist and help the elderly. In order for JOS to accomplish his mission, he will be equipped with perception, reaction and cognition capabilities. His appearance will not be human-like but more of a cute and approachable type, and JOS will be designed to be of neutral gender. However, the robot will still have eyes, eyelids and a mouth. His eyes and eyelids will be built entirely with Robotis Dynamixel AX18 motors. To realize this complex task, JOS will also be equipped with a microphone array, a vision camera and an Intel i5 NUC computer, and will be powered by a 12 V lithium battery that is self-charging. His face is constructed using 1 motor for each eyelid, 2 motors for the eyeballs, 3 motors for the neck mechanism and 1 motor for the lip movement. The vision sensor will be housed on JOS's forehead and the microphone array somewhere below the mouth. For the vision system, Omron's latest OKAO vision sensor is used. It is a compact and versatile sensor that is only 60 mm by 40 mm in size and operates with only a 5 V supply. In addition, the OKAO vision sensor is capable of identifying the user and recognizing the user's expression. With these functions, JOS is able to track and identify the user. If he cannot recognize the user, JOS will ask whether the user would like to be remembered. If yes, JOS will store the user information together with the captured face image in a database. This will allow JOS to recognize the user the next time the user is with JOS. In addition, JOS is also able to interpret the mood of the user through the user's facial expression. This allows the robot to understand the user's mood and behavior and react accordingly.
Machine learning will later be incorporated to learn the behavior of the user, so as to better understand the user's mood and requirements. For the speech system, the Microsoft speech and grammar engine is used for speech recognition. In order to use the speech engine, we need to build up a speech grammar database that captures the words commonly used by the elderly. This database was built from research journals and literature on elderly speech, and also from interviewing the elderly about what they want the robot to assist them with. Using the results from the interviews and the journal research, we were able to derive a set of common words the elderly frequently use to request help. It is from this set that we built up our grammar database. In situations where there is more than one person near JOS, he is able to identify the person who is talking to him through an in-house developed microphone array structure. In order to make the robot more interactive, we have also included the capability for the robot to express his emotions to the user through facial expressions, by changing the position and movement of the eyelids and mouth. All robot emotions will be in response to the user's mood and requests. Lastly, we expect to complete this phase of the project and test it with the elderly and also with delirium patients by February 2015.
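The mood-to-reaction loop described above can be sketched as a simple lookup. The expression labels and actuator fields below are illustrative placeholders, not the OKAO sensor's actual output codes or JOS's real control API:

```python
# Hypothetical mapping from a detected user expression to a robot response.
RESPONSES = {
    "happy":   {"eyelids": "wide",  "mouth": "smile", "speech": "You look cheerful today!"},
    "sad":     {"eyelids": "droop", "mouth": "flat",  "speech": "Is everything alright?"},
    "neutral": {"eyelids": "rest",  "mouth": "rest",  "speech": "Hello!"},
}

def react(expression):
    """Pick the actuator/speech response for a detected expression,
    falling back to neutral for anything unrecognized."""
    return RESPONSES.get(expression, RESPONSES["neutral"])
```

A learned policy would eventually replace this static table, which is exactly the machine-learning extension the authors anticipate.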

Keywords: social robot, vision, elderly care, machine learning

Procedia PDF Downloads 441
470 Vision and Challenges of Developing VR-Based Digital Anatomy Learning Platforms and a Solution Set for 3D Model Marking

Authors: Gizem Kayar, Ramazan Bakir, M. Ilkay Koşar, Ceren U. Gencer, Alperen Ayyildiz

Abstract:

Anatomy classes are crucial to the general education of medical students, yet learning anatomy is quite challenging and requires the memorization of thousands of structures. In traditional teaching methods, learning materials are still based on books, anatomy mannequins, or videos. This results in many important structures being forgotten after several years. More interactive teaching methods such as virtual reality, augmented reality, gamification, and motion sensors are, however, becoming more popular, since such methods ease the way we learn and keep the material in mind for longer. During our study, we designed a virtual reality based digital head anatomy platform to investigate whether a fully interactive anatomy platform is effective for learning anatomy, and to understand the level of teaching and learning optimization. The head is one of the most complicated human anatomical structures, with thousands of tiny, unique elements. This makes head anatomy one of the most difficult parts to understand during class sessions. We therefore developed a fully interactive digital tool with 3D model marking, quiz structures, 2D/3D puzzle structures, and VR support, so as to integrate the power of VR and gamification. The project has been developed in the Unity game engine with an HTC Vive Cosmos VR headset. The head anatomy 3D model was selected with full skeletal, muscular, integumentary, head, teeth, lymph, and vein systems. The biggest issue during development was the complexity of our model and marking it in the 3D world system. 3D model marking requires access to each unique structure in the aforementioned subsystems, which means hundreds of markings need to be done. Some parts of our 3D head model were monolithic, which is why we worked on dividing such parts into subparts, which is very time-consuming. In order to subdivide monolithic parts, one must use an external modeling tool.
However, such tools generally come with high learning curves, and seamless division is not ensured. The second option was to attach tiny colliders to all unique items for mouse interaction. However, outer colliders that cover inner trigger colliders cause overlapping, and these colliders repel each other. The third option was raycasting. However, due to its view-based nature, raycasting has some inherent problems: as the model rotates, the view direction changes very frequently, and directional computations become even harder. This is why we finally settled on the local coordinate system. Taking the pivot point of the model (the back of the nose) into consideration, each sub-structure is marked with its own local coordinate with respect to the pivot. After converting the mouse position to the world position and checking its relation to the corresponding structure's local coordinate, we were able to mark all points correctly. The advantage of this method is its applicability and accuracy for all types of monolithic anatomical structures.
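A minimal sketch of the pivot-relative marking idea, in plain Python rather than Unity C#: convert a world-space pick point into the model's local frame, then match it against the stored local coordinates of the sub-structures. The structure names, pivot, and single-axis rotation are simplifying assumptions (in Unity, transform.InverseTransformPoint would perform the conversion for the full transform):

```python
import math

def world_to_local(p, pivot, yaw):
    """Invert a simplified model transform: translate by the pivot,
    then undo a rotation of `yaw` radians about the vertical axis."""
    x, y, z = (p[0] - pivot[0], p[1] - pivot[1], p[2] - pivot[2])
    c, s = math.cos(-yaw), math.sin(-yaw)
    return (c * x + s * z, y, -s * x + c * z)

def nearest_structure(local_point, structures):
    """structures: name -> stored local coordinate of that marked sub-structure."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(structures, key=lambda name: d2(structures[name], local_point))

# Two hypothetical marked structures in the head's local frame
structures = {"nasal_bone": (0.0, 0.0, 1.0), "mandible": (0.0, -2.0, 0.0)}
picked = nearest_structure(
    world_to_local((1.0, 0.0, 1.9), pivot=(1.0, 0.0, 1.0), yaw=0.0), structures)
```

Because the lookup happens in the model's local frame, it stays correct however the model is rotated in the scene, which is the advantage the abstract claims over raycasting.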

Keywords: anatomy, e-learning, virtual reality, 3D model marking

Procedia PDF Downloads 100
469 Premature Departure of Active Women from the Working World: One Year Retrospective Study in the Tunisian Center

Authors: Lamia Bouzgarrou, Amira Omrane, Malika Azzouzi, Asma Kheder, Amira Saadallah, Ilhem Boussarsar, Kamel Rejeb

Abstract:

Introduction: Increasing women's labor force participation is a political issue in countries with developed economies and in those with low growth prospects. However, in the labor market, women continue to face several obstacles, both to integration and to remaining at work. This study aims to assess the prevalence of premature withdrawal from working life - due to invalidity or medically justified early retirement - among active women in central Tunisia, and to identify its determinants. Material and methods: We conducted a cross-sectional study over one year, focusing on agreements for invalidity or early retirement for premature wear of the body delivered by the medical commission of the National Health Insurance Fund (CNAM) in the central Tunisian district. We exhaustively selected women's files. Socio-demographic, professional and medical data were collected from the CNAM's administrative and medical files. Results: Over the one-year period, 222 women obtained an agreement for premature departure from their professional activity. Indeed, 149 women (67.11%) benefited from an invalidity agreement and 20.27% of them from a favorable decision for early retirement. The average age was 50 ± 6 years, with extremes of 23 and 62 years, and 18.9% of the women were under 45. Married women accounted for 69.4%, and 59.9% of them had at least one dependent child. The average professional seniority in the sector was 23 ± 8 years. The textile-clothing sector was the most affected, with 70.7% of premature departures. The medical reasons for withdrawal from working life were mainly related to neuro-degenerative diseases in 46.8% of cases, rheumatic diseases in 35.6% and cardiovascular diseases in 22.1%. Psychiatric and endocrine disorders motivated 17.1% and 13.5% of these departures, respectively.
The evaluation of the sequelae induced by these pathologies indicated an average permanent partial disability of 61.4 ± 17.3%. The analytical study concluded that the agreement for disability or early retirement was correlated with the insured woman's age (p = 10⁻³), professional seniority (p = 0.003), and the permanent partial incapacity (PPI) rate assessed by the expert physician (p = 0.04). No other social or professional factors were correlated with this decision. Conclusion: Despite many advances in labour law and in Tunisian legislation on employability, women are still exposed to several social and professional inequalities (pay inequality, precarious work ...). Indeed, women are often pushed to accept adverse working conditions; they are thus more vulnerable to premature wear of the body and more likely to be forced into premature departure from the world of work. These premature withdrawals from active life are not only harmful to the women concerned but are also associated with considerable costs for the insurance fund and for society. In order to ensure women's maintenance at work, a political commitment is imperative for the implementation of global prevention strategies and the improvement of working conditions, particularly in our socio-cultural context.

Keywords: active women, early retirement, invalidity, maintenance at work

Procedia PDF Downloads 152
468 Validation of an Educative Manual for Patients with Breast Cancer Submitted to Radiation Therapy

Authors: Flavia Oliveira de A. M. Cruz, Edison Tostes Faria, Paula Elaine D. Reis

Abstract:

When the breast is submitted to radiation therapy (RT), the most common effects are pain, skin changes, mobility restrictions, local sensory alteration, and fatigue. These effects, if not managed properly, may reduce the quality of life of cancer patients and may lead to treatment discontinuation. Therefore, promoting knowledge and guidelines for symptom management remains a high priority for patients and a challenge for health professionals, given the need to handle side effects in a population with a life-threatening disease. Printed materials are important strategies for supporting educative activities, since they help the individual to assimilate and understand the information transmitted. Nurses' practice can be systematized through the use of an educative manual, which may be effective in providing information on the treatment, self-care, and how to control the effects of RT at home. In view of the importance of guaranteeing the validity of the material before its use, the objective of this research was to validate the content and appearance of an educative manual for breast cancer patients undergoing RT. The Theory of Psychometrics was used for the validation process in this descriptive methodological research. A minimum agreement rate (AR) of 80% was required to guarantee the validity of the material. The data were collected from October to December 2017 by means of two assessment tools, constructed as five-level Likert scales. These instruments addressed different aspects of the evaluation for two different groups of participants: 17 experts in the theme area of the educative manual, and 12 women who had previously received RT for breast cancer. The manual was titled 'Orientation Manual: radiation therapy in breast' and was aimed at breast cancer patients attended at the Department of Oncology of the Brasília University Hospital (UNACON/HUB).
The research project was submitted to the Research Ethics Committee at the School of Health Sciences of the University of Brasília (CAAE: 24592213.1.0000.0030). Only two items of the experts' assessment tool, one related to the manual's ability to promote behavioral and attitudinal changes and the other related to the extent of its use in other health services, obtained AR < 80% and were reformulated based on the participants' suggestions and on the literature. All other items were considered appropriate and/or completely appropriate in the three blocks proposed for the experts: objectives (89%), structure and form (93%), and relevance (93%); and good and/or very good in the five blocks of analysis proposed for patients: objectives (100%), organization (100%), writing style (100%), appearance (100%), and motivation. The content and appearance of the proposed educative manual were thereby validated. The manual was considered relevant and pertinent and may contribute to breast cancer patients' understanding of the therapeutic process during RT, as well as support clinical practice through the nursing consultation.
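The abstract does not spell out how the agreement rate per item is computed; a plausible sketch, assuming AR is the share of five-point Likert ratings at the two highest levels (an assumption, not the authors' stated method, and with illustrative ratings):

```python
def agreement_rate(ratings, threshold=4):
    """Percentage of Likert ratings (1-5) at or above the agreement threshold."""
    agreeing = sum(1 for r in ratings if r >= threshold)
    return 100.0 * agreeing / len(ratings)

# Hypothetical ratings of one manual item by the 17 experts
item_ratings = [5, 4, 4, 5, 3, 4, 5, 5, 4, 4, 2, 5, 4, 5, 4, 4, 5]
ar = agreement_rate(item_ratings)
print(f"AR = {ar:.1f}% -> {'valid' if ar >= 80 else 'reformulate'}")
```

Under this assumption, an item with AR below the 80% cutoff would be flagged for reformulation, as happened to two expert-tool items in the study.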

Keywords: oncology nursing, nursing care, validation studies, educational technology

Procedia PDF Downloads 126
467 A 500 MWₑ Coal-Fired Power Plant Operated under Partial Oxy-Combustion: Methodology and Economic Evaluation

Authors: Fernando Vega, Esmeralda Portillo, Sara Camino, Benito Navarrete, Elena Montavez

Abstract:

The European Union aims to strongly reduce its CO₂ emissions from the energy and industrial sectors by 2030. The energy sector accounts for more than two-thirds of the CO₂ emissions derived from anthropogenic activities. Although efforts are mainly focused on the use of renewables in the energy production sector, carbon capture and storage (CCS) remains a frontline option for reducing CO₂ emissions from industrial processes, particularly from fossil-fuel power plants and cement production. Among the most feasible and near-to-market CCS technologies, namely post-combustion and oxy-combustion, partial oxy-combustion is a novel concept that can potentially reduce the overall energy requirements of the CO₂ capture process. This technology consists of using an oxidizer with a higher oxygen content, which increases the CO₂ concentration of the flue gas once the fuel is burnt. The CO₂ is then separated from the flue gas downstream by means of a conventional CO₂ chemical absorption process. The production of a more CO₂-concentrated flue gas should enhance CO₂ absorption into the solvent, leading to further reductions in solvent flow-rate, equipment size, and the energy penalty related to solvent regeneration. This work evaluates a portfolio of CCS technologies applied to fossil-fuel power plants. For this purpose, an economic evaluation methodology was developed in detail to determine the main economic parameters of CO₂ emission removal, such as the levelized cost of electricity (LCOE) and the CO₂ captured and avoided costs. ASPEN Plus™ software was used to simulate the main units of the power plant and solve the energy and mass balances. Capital and investment costs were determined from the purchased cost of equipment, together with engineering costs and project and process contingencies. The annual capital cost and the operating and maintenance costs were then obtained.
A complete energy balance was performed to determine the net power produced in each case. The baseline case consists of a supercritical 500 MWe coal-fired power plant firing anthracite, without any CO₂ capture system. Four cases were proposed: conventional post-combustion capture, oxy-combustion, and partial oxy-combustion using two levels of oxygen-enriched air (40% v/v and 75% v/v). CO₂ chemical absorption using monoethanolamine (MEA) served as the CO₂ separation process, whereas the O₂ requirement was met using a conventional air separation unit (ASU) based on Linde's cryogenic process. Results showed a 15% reduction in the total investment cost of the CO₂ separation process when partial oxy-combustion was used. Oxygen-enriched air production also nearly halved the investment costs required for the ASU in comparison with the oxy-combustion cases. Partial oxy-combustion has a significant impact on the performance of both CO₂ separation and O₂ production technologies, and it can lead to further energy reductions as new developments in both CO₂ and O₂ separation processes become available.
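The study's detailed cost methodology is not reproduced in the abstract, but the metrics it names (LCOE, CO₂ avoided cost) have standard textbook definitions, sketched below with capital-recovery-factor annualization. All numbers are illustrative assumptions, not values from the paper:

```python
def crf(rate, years):
    """Capital recovery factor: annualizes an up-front investment."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex, fixed_om, fuel_cost, mwh_per_year, rate=0.08, years=30):
    """Levelized cost of electricity in $/MWh."""
    annual_capital = capex * crf(rate, years)
    return (annual_capital + fixed_om + fuel_cost) / mwh_per_year

def co2_avoided_cost(lcoe_capture, lcoe_ref, em_ref, em_capture):
    """$/t CO2 avoided; emission intensities in t CO2 per MWh."""
    return (lcoe_capture - lcoe_ref) / (em_ref - em_capture)

# Illustrative: a 500 MWe plant at 80% capacity factor; capture case
# pays higher capex/opex and loses 25% of net output to the energy penalty.
mwh = 500 * 8760 * 0.80
base = lcoe(capex=1.2e9, fixed_om=40e6, fuel_cost=60e6, mwh_per_year=mwh)
capt = lcoe(capex=2.0e9, fixed_om=70e6, fuel_cost=75e6, mwh_per_year=mwh * 0.75)
print(f"LCOE baseline: {base:.1f} $/MWh, with capture: {capt:.1f} $/MWh")
print(f"CO2 avoided cost: {co2_avoided_cost(capt, base, 0.85, 0.10):.1f} $/t")
```

The avoided-cost formula divides the LCOE increase by the drop in emission intensity, which is why it always exceeds the cost per tonne merely captured.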

Keywords: carbon capture, cost methodology, economic evaluation, partial oxy-combustion

Procedia PDF Downloads 147
466 Artificial Cells Capable of Communication by Using Polymer Hydrogel

Authors: Qi Liu, Jiqin Yao, Xiaohu Zhou, Bo Zheng

Abstract:

The first artificial cell was produced by Thomas Chang in the 1950s when he was trying to make a mimic of red blood cells. Since then, many different types of artificial cells have been constructed following one of two approaches: a so-called bottom-up approach, which aims to create a cell from scratch, and a top-down approach, in which genes are sequentially knocked out from organisms until only the minimal genome required for sustaining life remains. In this project, the bottom-up approach was used to build a new cell-free expression system mimicking an artificial cell capable of protein expression and of communicating with other cells. The artificial cells constructed with the bottom-up approach are usually lipid vesicles, polymersomes, hydrogels, or aqueous droplets containing the nucleic acids and the transcription-translation machinery. However, lipid-vesicle-based artificial cells capable of communication present several issues for cell communication research: (1) the lipid vesicles normally lose important functions, such as protein expression, within a few hours; (2) the lipid membrane allows the permeation of only small molecules and limits the types of molecules that can be sensed and released to the surrounding environment for chemical communication; (3) the lipid vesicles are prone to rupture due to imbalances in osmotic pressure. To address these issues, hydrogel-based artificial cells were constructed in this work. To construct the artificial cell, a polyacrylamide hydrogel was functionalized with an Acrylate PEG Succinimidyl Carboxymethyl Ester (ACLT-PEG2000-SCM) moiety on the polymer backbone. Proteinaceous factors can then be immobilized on the polymer backbone through the reaction between the primary amines of proteins and the N-hydroxysuccinimide esters (NHS esters) of ACLT-PEG2000-SCM; the plasmid template and ribosomes were encapsulated inside the hydrogel particles.
Because the artificial cell can continuously express protein as long as nutrients and energy are supplied, artificial cell-artificial cell and artificial cell-natural cell communication can be achieved by combining the artificial cell vector with designed plasmids. The plasmids were designed with reference to the quorum sensing (QS) system of bacteria, which relies largely on cognate acyl-homoserine lactone (AHL)/transcription pairs. In each communication pair, the 'sender' is the artificial or natural cell that produces the AHL signal molecule by synthesizing the corresponding signal synthase, which catalyzes the conversion of S-adenosyl-L-methionine (SAM) into AHL, while the 'receiver' is the artificial or natural cell that senses the quorum sensing signaling molecule from the 'sender' and in turn expresses the gene of interest. In the experiments, GFP was first immobilized inside the hydrogel particles to prove that the functionalized hydrogel particles could be used for protein binding. After that, successful artificial cell-artificial cell and artificial cell-natural cell communication was demonstrated by recording the increase in fluorescence signal. The hydrogel-based artificial cell designed in this work can help in studying the complex communication systems of bacteria, and it can also be further developed for therapeutic applications.

Keywords: artificial cell, cell-free system, gene circuit, synthetic biology

Procedia PDF Downloads 152
465 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics

Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere

Abstract:

Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often exhibit non-linearity and non-stationarity, show strong fluctuations at all time scales, and require a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively new technique, part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals; it decomposes a signal, in a self-adaptive way, into a sum of oscillating components named IMFs (Intrinsic Mode Functions), thereby acting as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, its main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap. To overcome this problem, J. Gilles proposed an alternative entitled 'Empirical Wavelet Transform' (EWT), which consists in building a bank of filters from a segmentation of the original signal's Fourier spectrum. The method is based on the ideas used in the construction of both the Littlewood-Paley and Meyer wavelets. Its heart lies in segmenting the Fourier spectrum based on local maxima detection, in order to obtain a set of non-overlapping segments. Because it is tied to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore makes it possible to overcome the mode-mixing problem.
On the other hand, while the EWT technique is able to detect the frequencies involved in the fluctuations of the original time series, it does not associate the detected frequencies with a specific mode of variability, as EMD does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on coupling EMD and EWT: the spectral density content of the IMFs is used to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, and then the EAWD technique is presented. A comparison of the results obtained respectively by EMD, EWT, and EAWD on time series of total ozone columns recorded at Reunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.
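The spectrum-segmentation step at the heart of EWT, as described above, can be sketched in a simplified form: detect local maxima of the Fourier magnitude spectrum and place band boundaries midway between consecutive retained maxima. This is a stand-in for the full EWT boundary-detection machinery (and for EAWD's IMF-guided refinement), not Gilles' actual algorithm:

```python
import numpy as np

def segment_spectrum(x, n_segments, fs=1.0):
    """Split the Fourier spectrum of x into non-overlapping bands whose
    boundaries lie midway between the strongest local maxima (EWT-style)."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # indices of strict local maxima of the magnitude spectrum
    interior = np.arange(1, len(mag) - 1)
    peaks = interior[(mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:])]
    # keep the n_segments largest maxima, ordered by frequency
    top = np.sort(peaks[np.argsort(mag[peaks])[-n_segments:]])
    # band boundaries midway between consecutive retained maxima
    bounds = [(freqs[a] + freqs[b]) / 2 for a, b in zip(top[:-1], top[1:])]
    return [0.0] + bounds + [fs / 2]

# Illustrative signal with two well-separated oscillatory modes
t = np.arange(0, 10, 0.01)  # fs = 100 Hz, 1000 samples
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
print(segment_spectrum(x, n_segments=2, fs=100.0))  # boundary near 11.5 Hz
```

Each resulting band would then be assigned a bandpass (wavelet) filter; EAWD's contribution, per the abstract, is choosing these boundaries from the spectral content of the EMD-derived IMFs rather than from raw peak detection.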

Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet

Procedia PDF Downloads 137
464 Preparation of Biodegradable Methacrylic Nanoparticles by Semicontinuous Heterophase Polymerization for Drugs Loading: The Case of Acetylsalicylic Acid

Authors: J. Roberto Lopez, Hened Saade, Graciela Morales, Javier Enriquez, Raul G. Lopez

Abstract:

The implementation of systems based on nanostructures for drug delivery applications has gained relevance in recent studies focused on biomedical applications. Although several nanostructures can act as drug carriers, the use of polymeric nanoparticles (PNP) has been widely studied for this purpose. The main issue for these nanostructures is controlling the size below 50 nm with a narrow size distribution, since the particles must pass through different physiological barriers and avoid being filtered by the kidneys (< 10 nm) or the spleen (> 100 nm). Thus, considering these and other factors, drug-loaded nanostructures with sizes between 10 and 50 nm are preferred in the development and study of PNP/drug systems. In this sense, Semicontinuous Heterophase Polymerization (SHP) offers the possibility of obtaining PNP in the desired size range. With the above in mind, methacrylic copolymer nanoparticles were obtained by SHP. The reactions were carried out in a jacketed glass reactor with the required quantities of water, ammonium persulfate as initiator, sodium dodecyl sulfate/sodium dioctyl sulfosuccinate as surfactants, and methyl methacrylate and methacrylic acid as monomers in a 2/1 molar ratio. The monomer solution was dosed dropwise during the reaction at 70 °C under mechanical stirring at 650 rpm. Nanoparticles of poly(methyl methacrylate-co-methacrylic acid) were loaded with acetylsalicylic acid (ASA, aspirin) by a chemical adsorption technique. The purified latex was put in contact with a solution of ASA in dichloromethane (DCM) at 0.1, 0.2, 0.4, or 0.6 wt-% at 35 °C for 12 hours. Given the boiling point of DCM, as well as the densities of DCM and water, the loading process is complete once all the DCM has evaporated. The hydrodynamic diameter was measured after polymerization by quasi-elastic light scattering and transmission electron microscopy, before and after the loading procedures with ASA.
The quantitative and qualitative analyses of the ASA-loaded PNP were performed by infrared spectroscopy, differential scanning calorimetry, and thermogravimetric analysis. The molar mass distributions of the polymers were also determined by gel permeation chromatography. The loading capacity and efficiency were determined by gravimetric analysis. The hydrodynamic diameter results for the methacrylic PNP without ASA showed a narrow distribution with an average particle size around 10 nm and a methyl methacrylate/methacrylic acid molar ratio of 2/1, the same composition as Eudragit S100, a commercial compound widely used as an excipient. Moreover, the latex was stabilized at a relatively high solids content (around 11%), with a monomer conversion of almost 95% and a number-average molecular weight of around 400 kg/mol. The average particle size in the PNP/aspirin systems fluctuated between 18 and 24 nm, depending on the initial percentage of aspirin in the loading process, with a drug content as high as 24% and a loading efficiency of 36%. Such average sizes have not been reported in the literature; the methacrylic nanoparticles reported here can thus be loaded with a considerable amount of ASA and used as drug carriers.
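The gravimetric loading metrics cited above follow common definitions, sketched here under the assumption that drug content is drug mass relative to total loaded-particle mass and efficiency is relative to the drug fed; the masses are hypothetical, chosen only to reproduce figures of the order of the reported 24% content and 36% efficiency:

```python
def loading_content(m_drug_loaded, m_polymer):
    """Drug content (%): drug mass relative to total loaded-particle mass."""
    return 100.0 * m_drug_loaded / (m_drug_loaded + m_polymer)

def loading_efficiency(m_drug_loaded, m_drug_fed):
    """Loading efficiency (%): fraction of the fed drug actually loaded."""
    return 100.0 * m_drug_loaded / m_drug_fed

# Hypothetical masses in mg (not from the study)
m_polymer, m_loaded, m_fed = 100.0, 31.6, 87.8
print(f"content: {loading_content(m_loaded, m_polymer):.1f}%")
print(f"efficiency: {loading_efficiency(m_loaded, m_fed):.1f}%")
```

Note that content and efficiency answer different questions: the first characterizes the final carrier, the second the economy of the loading step, so a high content can coexist with a modest efficiency, as reported here.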

Keywords: aspirin, biocompatibility, biodegradable, Eudragit S100, methacrylic nanoparticles

Procedia PDF Downloads 140
463 Academic Knowledge Transfer Units in the Western Balkans: Building Service Capacity and Shaping the Business Model

Authors: Andrea Bikfalvi, Josep Llach, Ferran Lazaro, Bojan Jovanovski

Abstract:

Due to the continuous need to foster university-business cooperation in both developed and developing countries, some higher education institutions face the challenge of designing, piloting, operating, and consolidating knowledge and technology transfer units. University-business cooperation is at different maturity stages worldwide: some higher education institutions excel in these practices, while many others could be qualified as intermediate, and some are at the very beginning of their knowledge transfer journey. The latter face the imminent necessity of formally creating a technology transfer unit and drawing up its roadmap. The complexity of this operation stems from the various aspects that need to be aligned and coordinated, including major changes in mission, vision, structure, priorities, and operations. Qualitative in approach, this study presents five case studies of higher education institutions located in the Western Balkans (two in Albania, two in Bosnia and Herzegovina, one in Montenegro), all fully immersed in the entrepreneurial journey of creating their knowledge and technology transfer unit. The empirical evidence was developed in a pan-European project, illustratively called KnowHub (reconnecting universities and enterprises to unleash regional innovation and entrepreneurial activity), which is being implemented in three countries and has resulted in at least 15 pilot cooperation agreements between academia and business. Based on a peer-mentoring approach involving more experienced and more mature technology transfer models from European partners located in Spain, Finland, and Austria, a series of initial lessons learned is already available. The findings show that each unit developed its own tailor-made approach to engage internal and external stakeholders and to offer value to academic staff, students, and business partners.
The technology underpinning KnowHub services and institutional commitment were found to be key success factors. Although specific strategies and plans differ, they build on a general strategy jointly developed using common tools and methods of strategic planning and business modelling. The main output is a set of good practices for designing, piloting, and initially operating units that aim to fully valorise the knowledge and expertise available in academia. Policymakers can also find valuable hints on aspects considered vital for initial operations. The value of this contribution lies in its focus on the intersection of three perspectives (service orientation, organisational innovation, and business model), since previous research has relied on a single topic or on dual approaches, most frequently in the business context and less frequently in higher education.

Keywords: business model, capacity building, entrepreneurial education, knowledge transfer

Procedia PDF Downloads 141
462 Agri-Food Transparency and Traceability: A Marketing Tool to Satisfy Consumer Awareness Needs

Authors: Angelo Corallo, Maria Elena Latino, Marta Menegoli

Abstract:

The link between people and food plays a central role in the social and economic system, where cultural and multidisciplinary aspects intertwine: food is not only nutrition, but also communication, culture, politics, environment, science, ethics, and fashion. This multi-dimensionality has many implications for the food economy. In recent years, consumers have become more conscious about their food choices, bringing about a substantial change in consumption models. This change concerns several aspects: awareness of food system issues, socially and environmentally conscious decision-making, and food choices based on characteristics other than nutritional ones, i.e., the origin of food, how it is produced, and who produces it. In this frame, 'consumption choices' and the 'interests of the citizen' become inseparable. Thus the figure of the 'Citizen Consumer' is born: an individual responsible and ethically motivated enough to change their lifestyle in pursuit of sustainable consumption. At the same time, branding, which formerly served as a guarantee of product quality, is now being questioned. To meet these needs, Agri-Food companies are developing specific product lines that follow two main philosophies: 'Back to basics' and 'Less is more'. However, the demand for ethical behavior does not seem to find an adequate market offer, most likely due to a lack of attention to the communication strategy used, which is very often based on market logic and rarely on ethical logic. The label, in its classic concept of 'clean labeling', can no longer be the only instrument through which product information is conveyed; its evolution towards a concept of 'clear label' is necessary to embrace ethical and transparent concepts and advance the ongoing democratization of the Food System.
The implementation of a voluntary traceability path, relying on the technological models of the Internet of Things or Industry 4.0, would enable the Agri-Food supply chain to collect data that, if properly treated, could satisfy the information needs of consumers. A change of approach is therefore proposed: Agri-Food traceability is no longer intended as a tool for responding to the legislator, but rather as a promotional tool useful for presenting the company transparently and thus reaching the market segment of food citizens. The use of mobile technology can also facilitate this information transfer. However, to guarantee maximum efficiency, an appropriate communication model based on ethical communication principles should be used: one that moves beyond the pipeline communication model and offers the listener a new way of narrating the food product, based on real data collected through traceability processes. The Citizen Consumer is thereby placed at the center of the new communication model, in which he or she can choose what to know and how. The new label creates a virtual access point capable of telling the product's story from different points of view, following personal interests and offering several content modalities to suit different situations and usage contexts.

Keywords: agri food traceability, agri-food transparency, clear label, food system, internet of things

Procedia PDF Downloads 158
461 Criticality of Socio-Cultural Factors in Public Policy: A Study of Reproductive Health Care in Rural West Bengal

Authors: Arindam Roy

Abstract:

Public policy is an intriguing terrain involving a complex interplay of administrative, social, political, and economic components. There is hardly any one-size-fits-all formulation of public policy; Lindblom aptly characterized policymaking as a science of 'muddling through'. In fact, policies are both temporally and contextually determined, as one of the proponents of the policy sciences, Harold D. Lasswell, underscored in his 'contextual-configurative analysis' as early as the 1950s. Although many theoretical efforts have been made to make sense of the intricate dynamics of policy making, in the end the applied arena of public policy defies any such uniform, planned, and systematic formulation. However, our policy makers seem to have learnt very little from this. Until recently, policy making was deemed an absolutely specialized exercise to be conducted by a cadre of professionally trained, seasoned mandarins. Attributes like homogeneity, impartiality, efficiency, and neutrality were considered the watchwords of delivering common goods. The citizen or client was conceptualized as a universal political or economic construct, to be taken care of uniformly. Moreover, policy makers usually have a proclivity to force everything into a straitjacket and to ignore the nuances therein. Hence, little attention has been given to ground-level reality, especially the socio-cultural milieu where the policy is supposed to be applied. Consequently, a substantial amount of public money goes in vain, as the intended beneficiaries remain indifferent to the delivery of public policies. The present paper, in the light of reproductive health care policy in rural West Bengal, seeks to underscore the criticality of socio-cultural factors in public health delivery. The Indian health sector has traversed a long way: from near non-existence at the time of independence, the Indian state has gradually built a country-wide network of health infrastructure.
Yet it has still to make a major breakthrough in terms of the coverage and penetration of health services in rural areas. Several factors are held responsible for this state of affairs, including the lack of proper infrastructure, medicine, communication, ambulatory services, doctors, nursing services, and trained birth attendants. Policy makers have emphasized the supply side in policy formulation and implementation, as successive policy documents concerning health delivery testify. The present paper interrogates these supply-side-oriented explanations for the failure of health service delivery and instead looks to the demand side for an answer. State-led, bureaucratically engineered public health measures fail to engender demand because they mostly ignore the socio-cultural nuances of health and well-being. Hence, the hiatus between the supply side and the demand side leads to a huge waste of revenue, as health infrastructure, medicines, and instruments remain unutilized in most cases. Taking proper cognizance of these factors could therefore streamline the delivery of public health.

Keywords: context, policy, socio-cultural factor, uniformity

Procedia PDF Downloads 316
460 Teaching English for Children in Public Schools Can Work in Egypt

Authors: Shereen Kamel

Abstract:

This study explores the recent application of bilingual education in Egyptian public schools. It aims to provide an overall picture of bilingual education programs globally and to examine their adequacy for the Egyptian social and cultural context. The study also assesses the current process of teaching English as a Second Language in public schools from the early childhood education stage onwards, instead of starting in middle school, as a strategy that promotes English language proficiency and equity among students. The theoretical framework is based on Jim Cummins' bilingual education theories and on recent trends adopting different developmental theories and perspectives, such as Stephen Krashen's theory of Second Language Acquisition, which calls for communicative and meaningful interaction rather than memorization of grammatical rules. The question posed here is whether bilingual education, with its peculiar nature, could be a good opportunity to reach all Egyptian students and prepare them to become global citizens. A more specific question concerns the extent to which social and cultural variables can affect young learners' second language acquisition. This exploratory analytical study uses a mixed-methods research design to examine the application of bilingual education in Egyptian public schools. The study uses a cluster sample of Egyptian schools from different social and cultural backgrounds to assess the determining variables. The qualitative emphasis is on interviewing teachers and reviewing students' achievement documents, while the quantitative aspect is based on observations of in-class activities using tally sheets and checklists. Access to schools and documents has been authorized by governmental and institutional research bodies. Data sources comprise achievement records, students' portfolios, parents' feedback, and teachers' viewpoints. Triangulation and SPSS will be used for analysis.
Based on the gathered data, new curricula were assigned for the elementary grades, and teachers were required to teach the newly developed materials abruptly, without any prior training. Due to a shortage in the teaching force, many of the assigned teachers were not proficient in English. Hence, teachers' lack of competence and preparedness to teach this grade-specific curriculum constitutes a great challenge in the implementation phase. Nevertheless, the young learners themselves, as well as their parents, seem enthusiastic about the idea itself. According to the findings of this research study, teaching English as a Second Language to children in public schools is applicable and culturally relevant to the Egyptian context. However, there might be social and cultural differences and constraints in its application, in addition to various issues of teacher preparation. Therefore, a new mechanism should be incorporated to overcome these challenges and achieve better results. Moreover, a paradigm shift in teacher development programs is sorely needed, and ongoing support and follow-up are crucial to help both teachers and students realize the desired outcomes.

Keywords: bilingual education, communicative approach, early childhood education, language and culture, second language acquisition

Procedia PDF Downloads 118
459 Phenotypic and Molecular Heterogeneity Linked to the Magnesium Transporter CNNM2

Authors: Reham Khalaf-Nazzal, Imad Dweikat, Paula Gimenez, Iker Oyenarte, Alfonso Martinez-Cruz, Domonik Muller

Abstract:

The metal cation transport mediator (CNNM) gene family comprises four isoforms that are expressed in various human tissues. Structurally, CNNMs are complex proteins that contain an extracellular N-terminal domain preceding a DUF21 transmembrane domain, a ‘Bateman module’, and a C-terminal cNMP-binding domain. Mutations in CNNM2 cause familial dominant hypomagnesaemia, and growing evidence highlights the role of CNNM2 in neurodevelopment: mutations in CNNM2 have been implicated in epilepsy, intellectual disability, schizophrenia, and other conditions. In the present study, we aim to elucidate the function of CNNM2 in the developing brain, and we present the genetic origin of symptoms in two family cohorts. In the first family, three siblings of a consanguineous Palestinian family, in which the parents are first cousins and consanguinity ran over several generations, presented varying degrees of intellectual disability, cone-rod dystrophy, and autism spectrum disorder. Exome sequencing and segregation analysis revealed a homozygous pathogenic mutation in the CNNM2 gene; the parents were heterozygous for the mutation. Magnesium blood levels were normal in the three children and their parents across several measurements, and they had no symptoms of hypomagnesemia. The CNNM2 mutation in this family is located in the CBS1 domain of the CNNM2 protein. The crystal structure of the mutated CNNM2 protein was not significantly different from that of the wild-type protein, and the binding of AMP or MgATP was not dramatically affected. This suggests that the CBS1 domain could be involved in purely neurodevelopmental functions independent of its magnesium-handling role, and that this mutation could have affected the binding of a protein partner or another function of the protein. In the second family, another autosomal dominant CNNM2 mutation was found to run in a large family, affecting multiple individuals over three generations. All affected family members had hypomagnesemia and hypermagnesuria. 
Oral supplementation of magnesium did not significantly increase serum magnesium levels. Some affected members of this family have defects in fine motor skills, such as dyslexia and dyslalia. The detected mutation is located in the N-terminal part, which contains a signal peptide thought to be involved in the sorting and routing of the protein. In this project, we describe heterogeneous clinical phenotypes related to CNNM2 mutations and protein functions. In the first family, and to the authors’ knowledge for the first time, we report the involvement of CNNM2 in retinal photoreceptor development and function. In addition, we report a neurophenotype related to the CNNM2 protein mutation that is independent of magnesium status. Taking into account the different modes of inheritance and the different positions of the mutations within CNNM2 and its structural and functional domains, CNNM2 is likely involved in a wide spectrum of neuropsychiatric comorbidities with considerably varying phenotypes.

Keywords: magnesium transport, autosomal recessive, autism, neurodevelopment, CBS domain

Procedia PDF Downloads 150
458 Learning the Most Common Causes of Major Industrial Accidents and Applying Best Practices to Prevent Such Accidents

Authors: Rajender Dahiya

Abstract:

Investigation outcomes of major process incidents have been consistent for decades and confirm that the causes and consequences are often identical. The debate remains: why do we continue to experience similar process incidents despite the enormous development of new tools, technologies, industry standards, codes, regulations, and learning processes? The objective of this paper is to investigate the most common causes of major industrial incidents and to reveal industry challenges and best practices to prevent such incidents. The author, in his current role, performs audits and inspections of a variety of high-hazard industries in North America, including petroleum refineries, chemicals, petrochemicals, and manufacturing. In this paper, he shares real-life scenarios, examples, and case studies from high-hazard operating facilities, including key challenges and best practices. One case study provides a clear understanding of the importance of near-miss incident investigation; the incident was a safe operating limit excursion. The case describes deficiencies in management programs, in employee competency, and in corporate culture, covering hazard identification and risk assessment, maintaining the integrity of safety-critical equipment, operating discipline, learning from process safety near misses, process safety competency, process safety culture, audits, and performance measurement. Failure to identify the hazards and manage the risks of highly hazardous materials and processes is one of the primary root causes of an incident, and failure to learn from past incidents is the leading cause of their recurrence. Investigations of several major incidents discovered that each showed several warning signs before occurring and, most importantly, that all were preventable. The author discusses why preventable incidents were not prevented and reviews the common causes of failure to learn from past major incidents. 
The leading causes of past incidents are summarized below. First, management failure to identify the hazard and/or mitigate the risk of hazardous processes or materials; this process starts early in the project stage and continues throughout the life cycle of the facility, and a poorly done hazard study, such as a HAZID, PHA, or LOPA, is one of the leading causes of failure. Second, management failure to maintain the integrity of safety-critical systems and equipment; in most incidents, the mechanical integrity of critical equipment was not maintained, and safety barriers were bypassed, disabled, or not maintained. Third, management failure to learn from and/or apply the lessons of past incidents; there were several precursors before those incidents, and these precursors were either ignored altogether or not taken seriously. This paper concludes by sharing how a well-implemented operating management system, a good process safety culture, and competent leaders and staff contribute to managing the risks that prevent major incidents.

Keywords: incident investigation, risk management, loss prevention, process safety, accident prevention

Procedia PDF Downloads 57
457 Loss Quantification of Archaeological Sites in a Watershed Due to the Use and Occupation of Land

Authors: Elissandro Voigt Beier, Cristiano Poleto

Abstract:

The main objective of this research is to assess loss through the quantification of material culture (archaeological fragments) in rural areas, at sites economically exploited by mechanized farming of seasonal and permanent crops, in a hydrographic subsystem of the Camaquã River in the state of Rio Grande do Sul, Brazil. The study area consists of different micro-basins that differ in area, ranging between 1,000 m² and 10,000 m², all with a large number of occurrences and outcrops of archaeological material and high density in an intensively farmed environment. The first stage of the research aimed to identify the dispersion of archaeological material through a field survey, plotting points with the Global Positioning System (GPS) within each river basin; a concise bibliography on the topic in the region was used to help understand, theoretically, the former landscape and the occupation preferences of ancient peoples through their settlements, relating them to the practices observed in the field. The mapping was followed by cartographic work, developing land-elevation products that contributed to understanding the distribution of the materials, defining the extent of the dispersed material and, as a result of human activity, the displacement of in situ material by mechanization. It was also necessary to prepare density maps of the materials found, linking natural environments conducive to ancient occupation with the current human occupation. 
The third stage of the project comprised the systematic collection of archaeological material without alteration of, or interference with, the subsurface of the indigenous settlements; the material was then prepared and treated in the laboratory to remove excess soil, followed by cleaning according to a previously published methodology, measurement, and quantification. Approximately 15,000 archaeological fragments were identified, belonging to different periods of the region's ancient history, all collected outside their environmental and historical context, which has itself been considerably altered and modified. The material was identified and catalogued considering features such as object weight, size, and type of material (lithic, ceramic, bone, historical porcelain and their association with ancient history), while attributes such as the lithology and functionality of individual objects were disregarded. As preliminary results, we can point out the displacement of materials by heavy mechanization and the consequent soil disturbance processes, which generate transport of archaeological materials. Therefore, the next step will be to estimate potential losses through a mathematical model. It is expected that this process will yield a reliable, high-accuracy model that can be applied to archaeological sites of lower density without incurring significant error.

Keywords: degradation of heritage, quantification in archaeology, watershed, use and occupation of land

Procedia PDF Downloads 277
456 Addressing Housing Issue at Regional Level Planning: A Case Study of Mumbai Metropolitan Region

Authors: Bhakti Chitale

Abstract:

Mumbai, the business capital of India and one of the most crowded cities in the world, holds the biggest slum in Asia. The Mumbai Metropolitan Region (MMR) occupies an area of 4035 sq. km with a population of 22.8 million people. This population is mostly urban, with 91% living in areas of Municipal Corporations and Councils and another 3% in Census Towns. The region has 9 Municipal Corporations, 8 Municipal Councils, and around 1000 villages. On the one hand, MMR makes the highest contribution to the nation’s overall economy; on the other, it presents the intolerable picture of about 2 million people living in slums, or without even slum shelter, in totally unhygienic conditions and with a near-total loss of hope. Coming generations will be adversely affected if a solution is not worked out; this study is an attempt to work one out. The Mumbai Metropolitan Region Development Authority (MMRDA) is the state government authority specially formed to govern the development of MMR. MMRDA is engaged in long-term planning, promotion of new growth centres, implementation of strategic projects, and financing of infrastructure development. While preparing the master plan for MMR for the next 20 years, MMRDA conducted a detailed study of the housing scenario in MMR and possible options for improvement; the author was the officer in charge of this assignment. This paper sheds light on the outcomes of that research study, which range from the adverse effects of government policies and the automatic responses of the housing market to effects on planning processes and the changing needs of housing patterns worldwide due to changes in social mechanisms. It alerts urban planners, who usually focus on smart infrastructure development, to allied future dangers. This housing study explains the complexities, the realities, and the need for innovation in housing policies all over the world. 
The paper further explains a few success stories and failure stories of government initiatives, with reasons. It gives a clear idea of the differences in housing needs of people from different economic groups and of the direct and indirect market pressures on low-cost housing. A striking phenomenon emerged: a large percentage of houses lie vacant in spite of the huge need. The housing market is affected by developments or other physical and financial changes taking place in nearby areas or cities, by changes in cities located far from the region, and by international investments or policy changes. Instead of depending solely on government action to generate affordable housing, it becomes equally important to make housing markets generate such stock automatically and keep it sustainable; this is the aim of the whole effort. In summary, the paper sequentially elaborates the complete dynamics of housing in one of the most crowded urban areas in the world, the Mumbai Metropolitan Region, with extensive data, analysis, case studies, and recommendations.

Keywords: Mumbai India, slum housing, region planning, market recommendations

Procedia PDF Downloads 280
455 Transformative Economic Policies in India: A Political Economy Analysis of IMF Influence, Sectoral Shifts, and Political Transitions

Authors: Vrajesh Rawal

Abstract:

India's economic landscape has witnessed significant transformations over the past decades, characterized by shifts from agrarian to service-oriented economies. Recently, there has been a growing emphasis on transitioning towards a manufacturing-led growth model driven by factors such as demographic changes, technological advancements, and evolving global trade dynamics. These changes reflect broader efforts to enhance industrialization, boost employment opportunities, and diversify the economic base beyond traditional sectors. Within this context, this research focuses on understanding the specific drivers and dynamics behind India's shift from a predominantly service-based economy to one centered on manufacturing. It seeks to explore how political ideologies influence economic policies and shape sectoral priorities, with a particular focus on contrasting approaches between the Indian National Congress (INC) and the Bharatiya Janata Party (BJP). Additionally, the study evaluates the alignment of IMF policy recommendations with India's economic goals and priorities within the theoretical frameworks of neoliberalism and political economy theory. Despite the extensive literature on India's economic reforms and political economy, there remains a gap in understanding how political ideology influences sectoral shifts and economic policy outcomes, particularly in the context of IMF recommendations. Existing studies often focus narrowly on either political ideologies or economic reforms without fully integrating both perspectives. This research aims to bridge this gap by providing a comprehensive analysis that integrates political economy theories with empirical evidence from political speeches, government documents, and IMF reports. 
Through qualitative content analysis of speeches by political leaders, document analysis of key governmental documents, and scrutiny of party manifestos, this research demonstrates how political ideologies translate into distinct economic strategies and developmental agendas. It highlights the extent to which IMF policy prescriptions align with India's economic objectives and how these interactions shape broader socio-economic outcomes. The theoretical framework of neoliberalism and political economy theory provides a lens to interpret these findings, offering insights into the complex interplay between economic policies, political ideologies, and institutional frameworks in India. The findings of this study are expected to provide valuable insights for policymakers, researchers, and practitioners involved in economic governance and development planning in India. By understanding the factors driving sectoral shifts and the influence of political ideologies on economic policies, policymakers can make informed decisions to foster sustainable economic growth and development. Implementation of these insights could contribute to refining policy frameworks, enhancing alignment with national development priorities, and optimizing engagement with international financial institutions like the IMF to better meet India's socio-economic challenges and opportunities in the evolving global context.

Keywords: political economy, international politics, social science, policy analysis

Procedia PDF Downloads 32
454 Assessing the Social Comfort of the Russian Population with Big Data

Authors: Marina Shakleina, Konstantin Shaklein, Stanislav Yakiro

Abstract:

The digitalization of modern human life over the last decade has facilitated the acquisition, storage, and processing of data, which are used to detect changes in consumer preferences and to improve the internal efficiency of the production process. This emerging trend has attracted academic interest in the use of big data in research. The study focuses on modeling the social comfort of the Russian population for the period 2010-2021 using big data. Big data provides enormous opportunities for understanding human interactions at the scale of society, with rich spatial and temporal dynamics. One of the most popular big data sources is Google Trends. The methodology for assessing social comfort using big data involves several steps: 1. 574 words were selected based on the Harvard IV-4 Dictionary, adjusted to fit the reality of everyday Russian life. The set of keywords was further cleansed by excluding queries consisting of verbs and words with several lexical meanings. 2. Search queries were processed to ensure comparability of results: transformation of the data to a 10-point scale, elimination of popularity peaks, detrending, and deseasoning. The proposed methodology for keyword search and Google Trends processing was implemented as a script in the Python programming language. 3. Block and summary integral indicators of social comfort were constructed using the first modified principal component, whose loadings provided the weighting coefficients of the block components. According to the study, social comfort is described by 12 blocks: ‘health’, ‘education’, ‘social support’, ‘financial situation’, ‘employment’, ‘housing’, ‘ethical norms’, ‘security’, ‘political stability’, ‘leisure’, ‘environment’, ‘infrastructure’. According to the model, the summary integral indicator increased by 54% to 4.631 points; the average annual growth rate was 3.6%, which exceeds the rate of economic growth by 2.7 p.p. 
The value of the indicator describing social comfort in Russia is determined 26% by ‘social support’, 24% by ‘education’, 12% by ‘infrastructure’, 10% by ‘leisure’, and the remaining 28% by the other blocks. Among the 25% most popular searches, 85% are negative in nature and are mainly related to the blocks ‘security’, ‘political stability’, and ‘health’, for example, ‘crime rate’ and ‘vulnerability’. Among the 25% least popular queries, 99% were positive and mostly related to the blocks ‘ethical norms’, ‘education’, and ‘employment’, for example, ‘social package’ and ‘recycling’. In conclusion, introducing the latent category ‘social comfort’ into the scientific vocabulary deepens the theory of the quality of life of the population by studying the involvement of the individual in society and by expanding the subjective aspect of the measurement of various indicators. An integral assessment of social comfort presents the overall development of the phenomenon over time and space and quantitatively evaluates ongoing socio-economic policy. The application of big data to the assessment of latent categories gives stable results, which opens up possibilities for their practical implementation.
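The query-processing pipeline in step 2 can be sketched in Python. The authors' script is not published, so the following is an illustrative reconstruction under stated assumptions: monthly data, peaks clipped at two standard deviations above the mean, a least-squares linear detrend, and per-calendar-month deseasoning. All function names and thresholds here are assumptions, not the authors' code.

```python
import statistics

def normalize_series(values, period=12):
    """Normalize a monthly search-interest series as described in
    step 2: rescale to a 10-point scale, clip popularity peaks,
    remove a linear trend, and remove the seasonal component.
    The clipping threshold and detrending method are assumptions."""
    # 1. Rescale to a 0-10 scale.
    lo, hi = min(values), max(values)
    scaled = [10 * (v - lo) / (hi - lo) for v in values]
    # 2. Clip popularity peaks at mean + 2 standard deviations.
    mu, sd = statistics.mean(scaled), statistics.stdev(scaled)
    clipped = [min(v, mu + 2 * sd) for v in scaled]
    # 3. Remove a linear trend via ordinary least squares.
    n = len(clipped)
    xs = list(range(n))
    xbar, ybar = statistics.mean(xs), statistics.mean(clipped)
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, clipped))
             / sum((x - xbar) ** 2 for x in xs))
    detrended = [y - slope * (x - xbar) for x, y in zip(xs, clipped)]
    # 4. Deseason: subtract the mean of each calendar month.
    monthly = [statistics.mean(detrended[m::period]) for m in range(period)]
    return [y - monthly[i % period] for i, y in enumerate(detrended)]
```

Applied to each of the 574 keyword series, such a transformation yields comparable residual series from which the block indicators can then be aggregated.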

Keywords: big data, Google trends, integral indicator, social comfort

Procedia PDF Downloads 200
453 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts

Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig

Abstract:

This study focuses on the evaluation of snow avalanche simulations, based on a survey that was carried out among avalanche experts. In recent decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety applications such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that could previously be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing the prediction of certain quantities of the avalanche flow (e.g. pressure, velocities, flow heights, runout lengths). Because of the highly variable regimes of flowing snow, no uniform rheological law describing the motion of an avalanche is known; therefore, analogies are drawn to the fluid-dynamical laws of other materials. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Beyond these limitations, there are high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations in an algorithm executable by a computer; this implementation is constrained by the choice of adequate numerical methods and their computational feasibility. Hence, model development is compelled to introduce further simplifications and the related uncertainties. In light of these issues, many questions arise about avalanche simulations: their assets and drawbacks, their potential for improvement, and their application in practice. 
To address these questions, a survey was conducted among experts in the field of avalanche science (e.g. researchers, practitioners, engineers) from various countries. In the questionnaire, special attention is drawn to the experts’ opinions regarding the influence of certain variables on the simulation result, their uncertainty, and the reliability of the results. Furthermore, it was tested to what degree a simulation result influences decision making for a hazard assessment. A discrepancy was found between the large uncertainty of the simulation input parameters and the comparatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations: the credibility of the simulations is the result of a rather thorough simulation study, in which different assumptions are tested, the results of different flow models are compared, and supplemental data such as chronicles, field observations, and silent witnesses, among others, are used; these are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on modeling practice could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.

Keywords: expert interview, hazard management, modeling, simulation, snow avalanche

Procedia PDF Downloads 326
452 The Death of Ruan Lingyu: Leftist Aesthetics and Cinematic Reality in the 1930s Shanghai

Authors: Chen Jin

Abstract:

This topic seeks to re-examine the New Women Incident of 1935 Shanghai from the perspective of the influence of leftist cinematic aesthetics on public discourse in 1930s Shanghai. Accordingly, an original means of interpreting the death of Ruan Lingyu will be provided. On 8 March 1935, Ruan Lingyu, the queen of Chinese silent film, committed suicide by overdosing on sleeping tablets. Her last words, ‘gossip is a fearful thing’, link her destiny with the protagonist she played in the film The New Women (Cai Chusheng, 1935). This coincidence was constantly questioned by the masses following her suicide, giving rise to the enduring question: ‘who killed Ruan Lingyu?’ Responding to this query, previous scholars have primarily analyzed the characters she played (particularly new women, as part of the leftist movement or the public discourse of 1930s Shanghai) as a means of approaching the truth. Nevertheless, alongside her status as a public celebrity, Ruan Lingyu also existed as a screen image of mechanical reproduction. The overlap between her screen image and her personal destiny has attracted limited academic attention in terms of the effect and implications of the leftist aesthetics of reality in relation to her death, and this gap has provided the impetus for this research. With the reconfiguration of early Chinese film theory in the 1980s, early discourses on the relationship between cinematic reality and consciousness proposed by Hou Yao and Gu Kenfu in the 1920s were integrated into the category of Chinese film ontology, which constitutes a transcultural contrast with the Euro-American ontology that advocates the representation of reality. The discussion of Hou and Gu links cinematic reality with affect, emphasizing the empathy of cinema that is directly reflected in the leftist aesthetics of the 1930s. 
As the main purpose of leftist cinema was to encourage revolution through a truthful depiction of social reality, Ruan Lingyu became renowned for her natural and realistic acting, playing leading roles in several esteemed leftist films. Realistic reproduction and natural acting skill together constitute the empathy of leftist films, which establishes a dialogue with the virtuous female image within 1930s public discourse. On this basis, this research takes Chinese cinematic ontology and affect theory as the theoretical foundation for investigating the relationship between the screen image of Ruan Lingyu reproduced by the leftist film The New Women and the female image in 1930s public discourse. By contextualizing Ruan Lingyu’s death within the Chinese leftist movement, the essay indicates that the empathy embodied within leftist cinematic reality limits viewers’ cognition of the actress: they project their sentiments for the perfect screen image onto Ruan Lingyu’s image in reality. Essentially, Ruan Lingyu was imprisoned in her own perfect replication. Consequently, this article argues that, alongside leftist anti-female consciousness, the leftist aesthetics of reality confined women to a passive position within public discourse, which ultimately played a role in facilitating the death of Ruan Lingyu.

Keywords: cinematic reality, leftist aesthetics, Ruan Lingyu, The New Women

Procedia PDF Downloads 119
451 Mesenchymal Stem Cells on Fibrin Assemblies with Growth Factors

Authors: Elena Filova, Ondrej Kaplan, Marie Markova, Helena Dragounova, Roman Matejka, Eduard Brynda, Lucie Bacakova

Abstract:

Decellularized vessels have been evaluated as small-diameter vascular prostheses. Reseeding autologous cells onto decellularized tissue prior to implantation should prolong the function of the prostheses and make them living tissues. Suitable cell types for reseeding are endothelial cells and bone marrow-derived mesenchymal stem cells (MSCs), the latter with a capacity for differentiation into smooth muscle cells upon mechanical loading. Endothelial cells ensure the antithrombogenicity of the vessels, while MSCs produce growth factors and, after their differentiation into smooth muscle cells, are contractile and produce extracellular matrix proteins as well. Fibrin is a natural scaffold that allows direct cell adhesion via integrin receptors, and it can be prepared autologously. Fibrin can be modified with bound growth factors, such as basic fibroblast growth factor (FGF-2) and vascular endothelial growth factor (VEGF); these modifications make the scaffold more attractive for cell ingrowth. The aim of the study was to prepare thin surface-attached fibrin assemblies with bound FGF-2 and VEGF and to evaluate the growth and differentiation of bone marrow-derived mesenchymal stem cells on the fibrin (Fb) assemblies. The following thin surface-attached fibrin assemblies were prepared: Fb, Fb+VEGF, Fb+FGF2, Fb+heparin, Fb+heparin+VEGF, Fb+heparin+FGF2, and Fb+heparin+FGF2+VEGF. Cell culture polystyrene and glass coverslips were used as controls. Human MSCs (passage 3) were seeded at a density of 8800 cells per 1.5 mL of alpha-MEM medium with 2.5% FS and 200 U/mL aprotinin per well of a 24-well cell culture plate. The cells were cultured on the samples for 6 days. Cell densities on days 1, 3, and 6 were analyzed after staining with a LIVE/DEAD cytotoxicity/viability assay kit. The differentiation of MSCs is being analyzed using qPCR. On day 1, the highest densities of MSCs were observed on Fb+VEGF and Fb+FGF2; on days 3 and 6, densities were similar on all samples. 
On day 1, cell morphology was polygonal and spread on all samples. On days 3 and 6, MSCs growing on Fb assemblies with FGF2 became apparently elongated. The evaluation of the expression of genes for von Willebrand factor and CD31 (endothelial cells), alpha-actin (smooth muscle cells), and alkaline phosphatase (osteoblasts) is in progress. We prepared fibrin assemblies with bound VEGF and FGF-2 that supported the attachment and growth of mesenchymal stem cells. The layers are promising for improving the ingrowth of MSCs into the biological scaffold. Supported by the Technology Agency of the Czech Republic (TA04011345), the Ministry of Health (NT11270-4/2010), and the BIOCEV – Biotechnology and Biomedicine Centre of the Academy of Sciences and Charles University project (CZ.1.05/1.1.00/02.0109), funded by the European Regional Development Fund.

Keywords: fibrin assemblies, FGF-2, mesenchymal stem cells, VEGF

Procedia PDF Downloads 325
450 Using Business Interactive Games to Improve Management Skills

Authors: Nuno Biga

Abstract:

Continuous process improvement is a permanent challenge for the managers of any organization. Lean management means that efficiency gains can be obtained through a systematic framework able to explore synergies between processes and eliminate waste of time and other resources. Leadership in an organization determines the efficiency of its teams through its influence on collaborators, their motivation, and the consolidation of a feeling of group ownership. ‘Organizational health’ depends on leadership style, which is directly influenced by the intrinsic characteristics of each personality and by leadership ability (leadership competencies). Therefore, it is important that managers can correct in advance any deviation from expected leadership practice. Top management teams must act as regulatory agents of leadership within the organization, ensuring the monitoring of actions and the alignment of managers with the humanist standards anchored in a visible Code of Ethics and Conduct. This article is built around an innovative model of ‘Business Interactive Games’ (BI GAMES) that simulates a real-life management environment. It shows that the strategic management of operations depends on a complex set of variables, endogenous and exogenous to the intervening agents, that require specific skills and a set of critical processes to monitor. BI GAMES are designed for each management reality and have been applied successfully in several contexts over the last five years, in both educational and enterprise settings. Results from these experiences are used to demonstrate how serious games in working living labs have improved the organizational environment by focusing on the evaluation of players’ (agents’) skills, empowering their capabilities, and identifying the critical factors that create value in each context. 
The implementation of the BI GAMES simulator shows that leadership skills are decisive for the performance of teams, regardless of the sector of activity and the specific characteristics of the organization whose operation the game simulates. The players in BI GAMES can be managers or employees with different roles in the organization, or students in a learning context. They interact with each other and are asked to make decisions when several options are available for the next step of the operation, for example when the costs and benefits are not fully known but depend on the actions of external parties (e.g., subcontracted enterprises and regulatory bodies). Each team must evaluate the resources used and needed in each operation, identify bottlenecks in the system of operations, assess system performance through a set of key performance indicators, and set a coherent strategy to improve efficiency. Through gamification and the serious-games approach, organizational managers can confront a scientific approach to strategic decision-making with their real-life approach based on past experience. Since each BI GAMES team has a leader (chosen by draw), the performance of this player has a direct impact on the results obtained. Leadership skills are thus put to the test during the simulation of the functioning of each organization, allowing conclusions to be drawn at the end of the simulation and discussed among the participants.
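The KPI-based bottleneck identification that each team performs can be illustrated with a minimal sketch. The station names, throughput figures, and function names below are hypothetical, chosen for illustration; they are not taken from BI GAMES itself:

```python
# Minimal illustration of KPI-based bottleneck identification, as a team
# in the simulation might perform it. All names and figures are hypothetical.

def find_bottleneck(throughput_per_hour):
    """Return the station with the lowest throughput (the bottleneck)."""
    return min(throughput_per_hour, key=throughput_per_hour.get)

def system_throughput(throughput_per_hour):
    """A serial line can process no faster than its slowest station."""
    return min(throughput_per_hour.values())

stations = {"cutting": 120, "assembly": 45, "painting": 80, "packing": 95}
print(find_bottleneck(stations))    # the slowest station
print(system_throughput(stations))  # units/hour the whole line can sustain
```

Improving any station other than the bottleneck leaves the system throughput unchanged, which is exactly the insight the KPI exercise is meant to surface.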

Keywords: business interactive games, gamification, management empowerment skills, simulation living labs

Procedia PDF Downloads 112
449 Development and Adaptation of an LGBM Machine Learning Model, with a Suitable Concept Drift Detection and Adaptation Technique, for Barcelona Household Electric Load Forecasting During COVID-19 Pandemic Periods (Pre-Pandemic and Strict Lockdown)

Authors: Eric Pla Erra, Mariana Jimenez Martinez

Abstract:

While aggregated loads at the community level tend to be easier to predict, individual household load forecasting presents more challenges, with higher volatility and uncertainty. Furthermore, the drastic changes that our behavior patterns have undergone during the COVID-19 pandemic have modified our daily electricity consumption curves and thus further complicated the methods used to forecast short-term electric load. Load forecasting is vital for the smooth and optimized planning and operation of electric grids, but it also plays a crucial role for individual domestic consumers who rely on a HEMS (Home Energy Management System) to optimize their energy usage through self-generation, storage, or smart appliance management. Accurate forecasting leads to higher energy savings and overall energy efficiency of the household when paired with a proper HEMS. In order to study how COVID-19 has affected the accuracy of forecasting methods, the performance of a state-of-the-art LGBM (Light Gradient Boosting Model) is evaluated during the transition between the pre-pandemic and lockdown periods, considering day-ahead electric load forecasting. LGBM improves on standard decision-tree models in both speed and memory consumption while still offering high accuracy. Although LGBM has complex non-linear modelling capabilities, it has proven to be a competitive method in challenging forecasting scenarios such as short series, heterogeneous series, or data patterns with minimal prior knowledge. An adaptation of the LGBM model, called “resilient LGBM”, is also tested; it incorporates a concept drift detection technique for time series analysis, in order to evaluate its ability to improve the model’s accuracy during extreme events such as COVID-19 lockdowns.
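The abstract does not specify which concept drift detection technique the resilient LGBM uses. One simple realization of the idea, sketched below under that caveat, monitors the forecast error stream and flags drift when recent errors depart from a reference window; the window sizes, threshold factor, and class name are illustrative assumptions, not the authors' method:

```python
# A simple error-based concept drift detector: one possible (hypothetical)
# realization of the "resilient" adaptation described above. When recent
# forecast errors exceed a statistical threshold built from a reference
# window, drift is flagged and the reference is reset so the underlying
# model can be retrained on post-drift data.

from collections import deque
from statistics import mean, stdev

class DriftDetector:
    def __init__(self, window=168, factor=3.0):
        self.baseline = deque(maxlen=window)  # reference forecast errors
        self.recent = deque(maxlen=24)        # last day of hourly errors
        self.factor = factor

    def update(self, abs_error):
        """Feed one hourly absolute forecast error; return True on drift."""
        self.recent.append(abs_error)
        if len(self.baseline) < self.baseline.maxlen:
            # Still building the reference window: no detection yet.
            self.baseline.append(abs_error)
            return False
        threshold = mean(self.baseline) + self.factor * stdev(self.baseline)
        if mean(self.recent) > threshold:
            # Drift detected: discard the now-stale reference errors.
            self.baseline.clear()
            self.recent.clear()
            return True
        self.baseline.append(abs_error)
        return False
```

During a sudden regime change such as a lockdown, the daily mean error jumps above the pre-pandemic threshold within hours, which is the signal an adaptive model needs in order to trigger retraining.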
The results of the LGBM and the resilient LGBM are compared using the standard RMSE (Root Mean Squared Error) as the main performance metric. The models’ performance is evaluated on real hourly household electricity consumption data measured before and during the COVID-19 pandemic. All households are located in the city of Barcelona, Spain, and present different consumption profiles. This study is carried out under the ComMit-20 project, financed by AGAUR (Agència de Gestió d’Ajuts Universitaris), which aims to determine the short- and long-term impacts of the COVID-19 pandemic on building energy consumption, increasing the resilience of electrical systems through tools such as HEMS and artificial intelligence.
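The RMSE comparison between the two models reduces to a short computation. The load values and forecast arrays below are hypothetical placeholders for the measured consumption and the two models’ day-ahead outputs:

```python
# RMSE as used to compare the two forecasting models. The data here are
# hypothetical stand-ins for measured loads and the two models' forecasts.

from math import sqrt

def rmse(actual, forecast):
    """Root Mean Squared Error between measured and forecast loads."""
    assert len(actual) == len(forecast)
    return sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

actual    = [0.8, 1.2, 1.1, 0.9]  # hourly household loads (kWh)
lgbm      = [0.9, 1.0, 1.2, 0.8]  # baseline LGBM day-ahead forecast
resilient = [0.8, 1.1, 1.1, 0.9]  # drift-aware variant's forecast

# The model with the lower RMSE is the better day-ahead forecaster.
print(rmse(actual, lgbm), rmse(actual, resilient))
```

Because RMSE squares each residual, a few large hourly misses (typical at a regime change such as a lockdown) dominate the score, which makes it a suitable metric for judging robustness to drift.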

Keywords: concept drift, forecasting, home energy management system (HEMS), light gradient boosting model (LGBM)

Procedia PDF Downloads 105