Search results for: 3D human body model
658 Structural Invertibility and Optimal Sensor Node Placement for Error and Input Reconstruction in Dynamic Systems
Authors: Maik Kschischo, Dominik Kahl, Philipp Wendland, Andreas Weber
Abstract:
Understanding and modelling of real-world complex dynamic systems in biology, engineering and other fields is often made difficult by incomplete knowledge about the interactions between system states and by unknown disturbances to the system. In fact, most real-world dynamic networks are open systems receiving unknown inputs from their environment. To understand a system and to estimate the state dynamics, these inputs need to be reconstructed from output measurements. Reconstructing the input of a dynamic system from its measured outputs is an ill-posed problem if only a limited number of states is directly measurable. A first requirement for solving this problem is the invertibility of the input-output map. In our work, we exploit the fact that invertibility of a dynamic system is a structural property, which depends only on the network topology. Therefore, it is possible to check for invertibility using a structural invertibility algorithm which counts the number of node-disjoint paths linking inputs and outputs. The algorithm is efficient even for large networks of up to a million nodes. To understand structural features influencing the invertibility of a complex dynamic network, we analyze synthetic and real networks using the structural invertibility algorithm. We find that invertibility largely depends on the degree distribution and that dense random networks are easier to invert than sparse inhomogeneous networks. We show that real networks are often very difficult to invert unless the sensor nodes are carefully chosen. To overcome this problem, we present a sensor node placement algorithm to achieve invertibility with a minimum set of measured states. This greedy algorithm is very fast and also guaranteed to find an optimal sensor node set if it exists. Our results provide a practical approach to experimental design for open, dynamic systems. Since invertibility is a necessary condition for unknown input observers and data assimilation filters to work, it can be used as a preprocessing step to check whether these input reconstruction algorithms can be successful. If not, we can suggest additional measurements providing sufficient information for input reconstruction. Invertibility is also important for systems design and model building. Dynamic models are always incomplete, and synthetic systems act in an environment where they receive inputs or even attack signals from their exterior. Being able to monitor these inputs is an important design requirement, which can be achieved by our algorithms for invertibility analysis and sensor node placement.
Keywords: data-driven dynamic systems, inversion of dynamic systems, observability, experimental design, sensor node placement
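The structural check described above reduces to counting node-disjoint paths between input and output nodes, which can be computed with a max-flow calculation on a split-node graph. The sketch below is a minimal illustration of that idea, assuming a NetworkX DiGraph with user-supplied input and output node lists; it is not the authors' implementation.

```python
import networkx as nx

def structurally_invertible(g, inputs, outputs):
    """Count node-disjoint input-output paths via max-flow (Menger's theorem)
    on a split-node graph; invertible if the count equals the number of inputs.
    Sketch only, not the authors' algorithm."""
    h = nx.DiGraph()
    for v in g.nodes:
        h.add_edge((v, 'in'), (v, 'out'), capacity=1)       # node capacity 1 -> node-disjointness
    for u, v in g.edges:
        h.add_edge((u, 'out'), (v, 'in'), capacity=len(g))   # effectively unlimited edge capacity
    for v in inputs:
        h.add_edge('S', (v, 'in'), capacity=1)                # every unknown input needs its own path
    for v in outputs:
        h.add_edge((v, 'out'), 'T', capacity=1)               # each sensor terminates at most one path
    n_disjoint = nx.maximum_flow_value(h, 'S', 'T')
    return n_disjoint == len(inputs)

# toy usage: a chain x1 -> x2 -> x3 with an unknown input at x1 and a sensor at x3
g = nx.DiGraph([('x1', 'x2'), ('x2', 'x3')])
print(structurally_invertible(g, inputs=['x1'], outputs=['x3']))  # True
# a greedy sensor placement would repeatedly add the candidate sensor
# that most increases n_disjoint until it equals the number of inputs
```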
Procedia PDF Downloads 150
657 A Validated Estimation Method to Predict the Interior Wall of Residential Buildings Based on Easy to Collect Variables
Authors: B. Gepts, E. Meex, E. Nuyts, E. Knaepen, G. Verbeeck
Abstract:
The importance of resource efficiency and environmental impact assessment has raised the interest in knowing the amount of materials used in buildings. If no BIM model or energy performance certificate is available, material quantities can be obtained through an estimation or time-consuming calculation. For the interior wall area, no validated estimation method exists. However, in the case of environmental impact assessment or evaluating the existing building stock as future material banks, knowledge of the material quantities used in interior walls is indispensable. This paper presents a validated method for the estimation of the interior wall area for dwellings based on easy-to-collect building characteristics. A database of 4963 residential buildings spread all over Belgium is used. The data are collected through onsite measurements of the buildings during the construction phase (between mid-2010 and mid-2017). The interior wall area refers to the area of all interior walls in the building, including the inner leaf of exterior (party) walls, minus the area of windows and doors, unless mentioned otherwise. The two predictive modelling techniques used are 1) a (stepwise) linear regression and 2) a decision tree. The best estimation method is selected based on the best R² k-fold (5) fit. The research shows that the building volume is by far the most important variable to estimate the interior wall area. A stepwise regression based on building volume per building, building typology, and type of house provides the best fit, with R² k-fold (5) = 0.88. Although the best R² k-fold value is obtained when the other parameters ‘building typology’ and ‘type of house’ are included, the contribution of these variables can be seen as statistically significant but practically irrelevant. Thus, if these parameters are not available, a simplified estimation method based on only the volume of the building can also be applied (R² k-fold = 0.87). The robustness and precision of the method (output) are validated three times. Firstly, the prediction of the interior wall area is checked by means of alternative calculations of the building volume and of the interior wall area; thus, other definitions are applied to the same data. Secondly, the output is tested on an extension of the database, so it has the same definitions but on other data. Thirdly, the output is checked on an unrelated database with other definitions and other data. The validation of the estimation methods demonstrates that the methods remain accurate when underlying data are changed. The method can support environmental as well as economic dimensions of impact assessment, as it can be used in early design. As it allows the prediction of the amount of interior wall materials to be produced in the future or that might become available after demolition, the presented estimation method can be part of material flow analyses on input and on output.
Keywords: buildings as material banks, building stock, estimation method, interior wall area
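As a rough illustration of the kind of model selection reported above, the snippet below fits a linear regression on building volume (optionally with typology and house type as dummy variables) and scores it with 5-fold cross-validated R². The DataFrame, file name and column names are assumptions for illustration, not the study's actual variable names.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# assumed: one row per dwelling, hypothetical column names
df = pd.read_csv('dwellings.csv')
y = df['interior_wall_area']

for cols in (['building_volume'],
             ['building_volume', 'building_typology', 'house_type']):
    X = pd.get_dummies(df[cols], drop_first=True)   # dummy-code the categorical predictors
    r2 = cross_val_score(LinearRegression(), X, y, cv=5, scoring='r2').mean()
    print(cols, round(r2, 2))  # the paper reports roughly 0.87 vs 0.88 for these two specifications
```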
Procedia PDF Downloads 30
656 Working Towards More Sustainable Food Waste: A Circularity Perspective
Authors: Rocío González-Sánchez, Sara Alonso-Muñoz
Abstract:
Food waste implies an inefficient management of the final stages in the food supply chain. Referring to the Sustainable Development Goals (SDGs) of the United Nations, SDG 12.3 proposes to halve per capita food waste at the retail and consumer level and to reduce food losses. In the linear system, food waste is disposed of and, to a lesser extent, recovered or reused after consumption. With its negative effect on stocks, the current food consumption system is based on ‘produce, take and dispose’, which puts huge pressure on raw materials and energy resources. Therefore, a greater focus on the circular management of food waste will mitigate the environmental, economic, and social impact, following a Triple Bottom Line (TBL) approach, and consequently support the fulfilment of the SDGs. A mixed methodology is used. A total sample of 311 publications from the Web of Science database was retrieved. Firstly, a bibliometric analysis is performed with the SciMat and VOSviewer software to visualise scientific maps based on a co-occurrence analysis of keywords and a co-citation analysis of journals. This allows for an understanding of the knowledge structure of this field and the detection of research issues. Secondly, a systematic literature review is conducted regarding the most influential articles in the years 2020 and 2021, coinciding with the most representative period under study. Thirdly, to support the development of this field, a research agenda is proposed according to the research gaps identified regarding circular economy and food waste management. Results reveal that the main topics are related to waste valorisation, the application of the waste-to-energy circular model and the anaerobic digestion process towards fossil fuel replacement. It is underlined that the use of food waste as a source of clean energy is receiving greater attention in the literature. There is a lack of studies about stakeholders’ awareness and training. In addition, available data would facilitate the implementation of circular principles for food waste recovery, management, and valorisation. The research agenda suggests that circularity networks with suppliers and customers need to be deepened. Technological tools for the implementation of sustainable business models, and greater emphasis on social aspects through educational campaigns, are also required. This paper contributes to the application of circularity to food waste management by abandoning inefficient linear models. Shedding light on trending topics in the field guides scholars towards future research opportunities.
Keywords: bibliometric analysis, circular economy, food waste management, future research lines
Procedia PDF Downloads 112
655 Effect of Fuel Type on Design Parameters and Atomization Process for Pressure Swirl Atomizer and Dual Orifice Atomizer for High Bypass Turbofan Engine
Authors: Mohamed K. Khalil, Mohamed S. Ragab
Abstract:
Atomizers are used in many engineering applications including diesel engines, petrol engines and spray combustion in furnaces as well as gas turbine engines. These atomizers are used to increase the specific surface area of the fuel, which achieves a high rate of fuel mixing and evaporation. In all combustion systems, reduction in mean drop size is a challenge; it has many advantages since it leads to rapid and easier ignition, higher volumetric heat release rate, wider burning range and lower exhaust concentrations of pollutant emissions. Pressure atomizers come in different design configurations, such as the swirl atomizer (simplex), dual orifice, spill return, plain orifice, duplex and fan spray. Simplex pressure atomizers are the most common type of all. Among all types of atomizers, pressure swirl types represent a special category, since they stand out in quality of atomization, reliability of operation, simplicity of construction and low expenditure of energy. However, the disadvantages of these atomizers are that they require very high injection pressure and have a low discharge coefficient, owing to the fact that the air core covers the majority of the atomizer orifice. To overcome these problems, the dual orifice atomizer was designed. This paper proposes a detailed mathematical model design procedure for both the pressure swirl atomizer (simplex) and the dual orifice atomizer, examines the effects of varying fuel type and makes a clear comparison between the two types. Using five types of fuel (JP-5, JA1, JP-4, Diesel and Bio-Diesel) as a case study, the effect of changing fuel type and its properties on atomizer design and spray characteristics is revealed. These affect the combustion process parameters, namely Sauter Mean Diameter (SMD), spray cone angle and sheet thickness, with the discharge coefficient varying from 0.27 to 0.35 during takeoff for high bypass turbofan engines. The spray atomizer performance of the pressure swirl fuel injector was compared to the dual orifice fuel injector at the same differential pressure and discharge coefficient using Excel. The results are analyzed and handled to form the final reliability results for fuel injectors in high bypass turbofan engines. The results show that the Sauter Mean Diameter (SMD) in the dual orifice atomizer is larger than the Sauter Mean Diameter (SMD) in the pressure swirl atomizer, and the film thickness (h) in the dual orifice atomizer is less than the film thickness (h) in the pressure swirl atomizer. The spray cone angle (α) in the pressure swirl atomizer is larger than the spray cone angle (α) in the dual orifice atomizer.
Keywords: gas turbine engines, atomization process, Sauter mean diameter, JP-5
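The abstract does not state the design equations used, so purely as a hedged illustration, the sketch below evaluates one widely cited empirical SMD correlation for pressure-swirl atomizers (commonly attributed to Lefebvre). The correlation choice and the fuel and air property values are placeholders, not data or models taken from the paper.

```python
def smd_pressure_swirl(sigma, mu_l, mdot_l, dp_l, rho_a):
    """One commonly quoted empirical SMD correlation for pressure-swirl (simplex)
    atomizers (Lefebvre form). SI units in, SMD returned in metres.
    Assumed for illustration, not the paper's design model."""
    return 2.25 * sigma**0.25 * mu_l**0.25 * mdot_l**0.25 * dp_l**-0.5 * rho_a**-0.25

# placeholder, order-of-magnitude properties (not taken from the study)
smd = smd_pressure_swirl(sigma=0.025,    # fuel surface tension, N/m
                         mu_l=1.6e-3,    # fuel viscosity, Pa.s
                         mdot_l=0.05,    # fuel mass flow rate, kg/s
                         dp_l=1.5e6,     # injection pressure differential, Pa
                         rho_a=4.0)      # air density, kg/m^3
print(f"SMD ~ {smd * 1e6:.0f} micrometres")
```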
Procedia PDF Downloads 165
654 Autophagy in the Midgut Epithelium of Spodoptera exigua Hübner (Lepidoptera: Noctuidae) Larvae Exposed to Various Cadmium Concentration - 6-Generational Exposure
Authors: Magdalena Maria Rost-Roszkowska, Alina Chachulska-Żymełka, Monika Tarnawska, Maria Augustyniak, Alina Kafel, Agnieszka Babczyńska
Abstract:
Autophagy is a form of cell remodeling in which internalization of organelles into vacuoles called autophagosomes occurs. Autophagosomes are the targets of lysosomes, thus causing digestion of cytoplasmic components. Eventually, it can lead to the death of the entire cell. However, in response to several stress factors, e.g., starvation or heavy metals (e.g., cadmium), autophagy can also act as a pro-survival factor, protecting the cell against its death. The main aim of our studies was to check if the process of autophagy, which could appear in the midgut epithelium after Cd treatment, can be fixed during the following generations of insects. As a model animal, we chose the beet armyworm Spodoptera exigua Hübner (Lepidoptera: Noctuidae), a well-known polyphagous pest of many vegetable crops. We analyzed specimens at the final larval stage (5th larval stage) due to its hyperphagy, which results in a great amount of cadmium being assimilated. The culture consisted of two strains, both maintained for 146 generations: a control strain (K) fed a standard diet, and a cadmium strain (Cd) fed a standard diet supplemented with cadmium (44 mg Cd per kg of dry weight of food). In addition, control insects were transferred to a Cd-supplemented diet (5, 10, 20 or 44 mg Cd per kg of dry weight of food). Therefore, we obtained the Cd1, Cd2, Cd3 and KCd experimental groups. Autophagy was examined using a transmission electron microscope. During this process, degenerated organelles were surrounded by a membranous phagophore and enclosed in an autophagosome. Eventually, after the autophagosome fused with a lysosome, an autolysosome was formed and the process of the digestion of organelles began. During the 1st year of the experiment, we analyzed specimens of 6 generations in all the lines. The intensity of autophagy depends significantly on the generation, tissue and cadmium concentration in the insect rearing medium. In the 1st, 2nd, 3rd, 4th, 5th and 6th generations, the intensity of autophagy in the midguts from cadmium-exposed strains decreased gradually according to the following order of strains: Cd1, Cd2, Cd3 and KCd. A higher amount of cells with autophagy was observed in Cd1 and Cd2; however, it was still higher than the percentage of cells with autophagy in the same tissues of the insects from the control and multigenerational cadmium strains. This may indicate that during 6-generational exposure to various Cd concentrations, a preserved tolerance to cadmium was not maintained. The study has been financed by the National Science Centre Poland, grant no 2016/21/B/NZ8/00831.
Keywords: autophagy, cell death, digestive system, ultrastructure
Procedia PDF Downloads 233
653 Pickering Dry Emulsion System for Dissolution Enhancement of Poorly Water Soluble Drug (Fenofibrate)
Authors: Nitin Jadhav, Pradeep R. Vavia
Abstract:
Poorly water soluble drugs are difficult to promote for oral drug delivery as they demonstrate poor and variable bioavailability because of their poor solubility and dissolution in GIT fluid. Nowadays, lipid based formulations, especially the self microemulsifying drug delivery system (SMEDDS), are found to be the most effective technique. With all the impressive advantages, the need for a high amount of surfactant (50% - 80%) is the major drawback of SMEDDS. High concentrations of synthetic surfactant are known to cause irritation in the GIT, and their interference with the function of intestinal transporters causes changes in drug absorption. Surfactant may also reduce drug activity and subsequently bioavailability due to the enhanced entrapment of drug in micelles. In chronic treatment these issues are very conspicuous due to the long exposure. In addition, liquid self microemulsifying systems also suffer from stability issues. Recently, a novel approach of solid-stabilized micro and nano emulsions (Pickering emulsions) has shown very admirable properties such as high stability, absence or very low concentration of surfactant, and easy conversion into a dry form. So here we are exploring a Pickering dry emulsion system for dissolution enhancement of an anti-lipemic, extremely poorly water soluble drug (fenofibrate). The oil moiety for emulsion preparation was selected mainly on the basis of higher solubility of the drug. Captex 300 showed higher solubility for fenofibrate and was hence selected as the oil for the emulsion. With silica as the solid stabilizer, Span 20 was selected to improve its wetting property. The emulsion formed by silica and Span 20 as stabilizer at the ratio 2.5:1 (silica: Span 20) was found very stable at a particle size of 410 nm. The prepared emulsion then underwent spray drying, and the formed microcapsules were evaluated by an in-vitro dissolution study and an in-vivo pharmacodynamic study and characterized by DSC, XRD, FTIR, SEM, optical microscopy, etc. The in vitro study exhibited significant dissolution enhancement for the formulation (85% in 45 minutes) as compared to the plain drug (14% in 45 minutes). The in-vivo study (Triton based hyperlipidaemia model) exhibited a significant reduction in triglycerides and cholesterol with the formulation as compared to the plain drug, indicating an increase in fenofibrate bioavailability. The DSC and XRD studies exhibited loss of crystallinity of the drug in microcapsule form. The FTIR study exhibited chemical stability of fenofibrate. The SEM and optical microscopy studies exhibited a spherical globule structure coated with solid particles.
Keywords: captex 300, fenofibrate, pickering dry emulsion, silica, span20, stability, surfactant
Procedia PDF Downloads 498
652 Strategies for Incorporating Intercultural Intelligence into Higher Education
Authors: Hyoshin Kim
Abstract:
Most post-secondary educational institutions have offered a wide variety of professional development programs and resources in order to advance the quality of education. Such programs are designed to support faculty members by focusing on topics such as course design, behavioral learning objectives, class discussion, and evaluation methods. These are based on good intentions and might help both new and experienced educators. However, the fundamental flaw is that these ‘effective methods’ are assumed to work regardless of what we teach and whom we teach. This paper is focused on intercultural intelligence and its application to education. It presents a comprehensive literature review on context and cultural diversity in terms of beliefs, values and worldviews. What has worked well with a group of homogeneous local students may not work well with more diverse and international students. It is because students hold different notions of what it means to learn or know something. It is necessary for educators to move away from certain sets of generic teaching skills, which are based on a limited, particular view of teaching and learning. The main objective of the research is to expand our teaching strategies by incorporating what students bring to the course. There have been a growing number of resources and texts on teaching international students. Unfortunately, they tend to be based on the deficiency model, which treats diversity not as strengths, but as problems to be solved. This view is evidenced by the heavy emphasis on assimilationist approaches. For example, cultural difference is negatively evaluated, either implicitly or explicitly. Therefore, the pressure is on culturally diverse students. The following questions reflect the underlying assumption of deficiencies: - How can we make them learn better? - How can we bring them into the mainstream academic culture? - How can they adapt to Western educational systems? Even though these questions may be well-intended, there seems to be something fundamentally wrong, as the assumption of cultural superiority is embedded in this kind of thinking. This paper examines how educators can incorporate intercultural intelligence into course design by utilizing a variety of tools such as pre-course activities, peer learning and reflective learning journals. The main goal is to explore ways to engage diverse learners in all aspects of learning. This can be achieved by activities designed to understand their prior knowledge, life experiences, and relevant cultural identities. It is crucial to link course material to students’ diverse interests, thereby enhancing the relevance of course content and making learning more inclusive. Internationalization of higher education can be successful only when cultural differences are respected and celebrated as essential and positive aspects of teaching and learning.
Keywords: intercultural competence, intercultural intelligence, teaching and learning, post-secondary education
Procedia PDF Downloads 211
651 A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing
Authors: Mahmoud Reza Hosseini
Abstract:
The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Also, by extrapolating back from its current state, the universe at its early times is studied, known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion due to nuclear fusion led to a reduction in its temperature and density. This is evidenced through the cosmic microwave background and the universe structure at a large scale. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics, and the big bang theory is no longer valid. In addition, one can expect a nonuniform energy distribution across the universe from a sudden expansion. However, highly accurate measurements reveal an equal temperature mapping across the universe, which is contradictory to the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy so that an equal maximum temperature can be achieved across the early universe. Also, the evidence of quantum fluctuations of this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of universe creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing a state of energy called a "neutral state," possessing an energy level that is referred to as the "base energy." The governing principles of base energy are discussed in detail in our second paper in the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this research paper, the mechanism which led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy in the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. Therefore, the proposed concept in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution and discusses the possibility of base energy as one of the main building blocks of this universe.
Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution
Procedia PDF Downloads 99
650 Material Use and Life Cycle GHG Emissions of Different Electrification Options for Long-Haul Trucks
Authors: Nafisa Mahbub, Hajo Ribberink
Abstract:
Electrification of long-haul trucks has been discussed as a potential decarbonization strategy. These trucks will require large batteries because of their weight and long daily driving distances. Around 245 million battery electric vehicles are predicted to be on the road by the year 2035. This huge increase in the number of electric vehicles (EVs) will require intensive mining operations for metals and other materials to manufacture millions of batteries. These operations will add significant environmental burdens, and there is a significant risk that the mining sector will not be able to meet the demand for battery materials, leading to higher prices. Since the battery is the most expensive component in EVs, technologies that can enable electrification with smaller battery sizes have substantial potential to reduce material usage and the associated environmental and cost burdens. One of these technologies is an ‘electrified road’ (eroad), where vehicles receive power while they are driving, for instance through an overhead catenary (OC) wire (like trolleybuses and electric trains), through wireless (inductive) chargers embedded in the road, or by connecting to an electrified rail in or on the road surface. This study assessed the total material use and associated life cycle GHG emissions of two types of eroads (overhead catenary and in-road wireless charging) for long-haul trucks in Canada and compared them to electrification using stationary plug-in fast charging. As different electrification technologies require different amounts of materials for charging infrastructure and for the truck batteries, the study included the contributions of both in the total material use. The study developed a bottom-up model comparing the three different charging scenarios: plug-in fast chargers, overhead catenary and in-road wireless charging. The investigated materials for charging technology and batteries were copper (Cu), steel (Fe), aluminium (Al), and lithium (Li). For the plug-in fast charging technology, different charging scenarios ranging from overnight charging (350 kW) to megawatt (MW) charging (2 MW) were investigated. A 500 km stretch of highway (one lane of in-road charging per direction) was considered to estimate the material use for the overhead catenary and inductive charging technologies. The study considered trucks needing an 800 kWh battery under the plug-in charger scenario but only a 200 kWh battery for the OC and inductive charging scenarios. Results showed that, overall, the inductive charging scenario has the lowest material use, followed by the OC and plug-in charger scenarios, respectively. The material use for the OC and plug-in charger scenarios was 50-70% higher than for the inductive charging scenario for the overall system, including the charging infrastructure and battery. The life cycle GHG emissions from the construction and installation of the charging technology material were also investigated.
Keywords: charging technology, eroad, GHG emissions, material use, overhead catenary, plug in charger
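To make the bottom-up accounting concrete, the sketch below totals one material (copper) as battery material per truck plus infrastructure material per electrified kilometre. Every number is a round placeholder chosen for illustration only; none of the intensities, fleet size or results come from the study.

```python
# minimal bottom-up sketch; all values below are placeholders, not results from the study
scenarios = {
    #                        battery per truck (kWh)   infrastructure Cu per km (kg/km)
    'plug_in_fast_charge': {'battery_kwh': 800, 'cu_per_km': 0},
    'overhead_catenary':   {'battery_kwh': 200, 'cu_per_km': 10_000},
    'inroad_inductive':    {'battery_kwh': 200, 'cu_per_km': 8_000},
}
N_TRUCKS, ROAD_KM = 100_000, 500      # assumed fleet size and electrified corridor length
CU_PER_KWH_BATTERY = 1.0              # assumed kg of copper per kWh of battery capacity

for name, s in scenarios.items():
    copper_kg = N_TRUCKS * s['battery_kwh'] * CU_PER_KWH_BATTERY + ROAD_KM * s['cu_per_km']
    print(f"{name}: {copper_kg / 1e6:.1f} kt Cu")
```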
Procedia PDF Downloads 51
649 Prediction of Coronary Artery Stenosis Severity Based on Machine Learning Algorithms
Authors: Yu-Jia Jian, Emily Chia-Yu Su, Hui-Ling Hsu, Jian-Jhih Chen
Abstract:
The coronary artery is the major supplier of myocardial blood flow. When fat and cholesterol are deposited in the coronary arterial wall, narrowing and stenosis of the artery occur, which may lead to myocardial ischemia and eventually infarction. According to the World Health Organization (WHO), an estimated 7.4 million people died of coronary heart disease in 2015. According to statistics from the Ministry of Health and Welfare in Taiwan, heart disease (except for hypertensive diseases) ranked second among the top 10 causes of death from 2013 to 2016, and it still shows a growing trend. According to the American Heart Association (AHA), the risk factors for coronary heart disease include age (> 65 years), sex (a 2:1 male-to-female ratio), obesity, diabetes, hypertension, hyperlipidemia, smoking, family history, lack of exercise and more. We have collected a dataset of 421 patients from a hospital located in northern Taiwan who received coronary computed tomography (CT) angiography. There were 300 males (71.26%) and 121 females (28.74%), with ages ranging from 24 to 92 years and a mean age of 56.3 years. Prior to coronary CT angiography, basic data of the patients, including age, gender, obesity index (BMI), diastolic blood pressure, systolic blood pressure, diabetes, hypertension, hyperlipidemia, smoking, family history of coronary heart disease and exercise habits, were collected and used as input variables. The output variable of the prediction module is the degree of coronary artery stenosis. In this study, the dataset was randomly divided into 80% as the training set and 20% as the test set. Four machine learning algorithms, including logistic regression, stepwise regression, neural network and decision tree, were incorporated to generate prediction results. We used area under the curve (AUC) / accuracy (Acc.) to compare the four models; the best model was the neural network, followed by stepwise logistic regression, decision tree, and logistic regression, with 0.68 / 79%, 0.68 / 74%, 0.65 / 78%, and 0.65 / 74%, respectively. The sensitivity of the neural network was 27.3% and its specificity 90.8%; for stepwise logistic regression, sensitivity was 18.2% and specificity 92.3%; for the decision tree, sensitivity was 13.6% and specificity 100%; and for logistic regression, sensitivity was 27.3% and specificity 89.2%. Building on these results, we hope to improve the accuracy by tuning the model parameters or applying other methods in the future, and to address the problem of low sensitivity by adjusting the imbalanced proportion of positive and negative data.
Keywords: decision support, computed tomography, coronary artery, machine learning
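A minimal scikit-learn sketch of the 80/20 split and AUC/accuracy comparison described above is shown below. The DataFrame, column names and hyperparameters are assumptions for illustration; stepwise regression has no direct scikit-learn equivalent (sequential feature selection would be the closest analogue), so only three of the four models are sketched.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score, accuracy_score

# assumed: one row per patient, risk-factor columns plus a binary 'stenosis' label
df = pd.read_csv('coronary_ct.csv')
X, y = df.drop(columns='stenosis'), df['stenosis']
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)

models = {
    'logistic regression': LogisticRegression(max_iter=1000),
    'decision tree':       DecisionTreeClassifier(max_depth=4),
    'neural network':      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    p = m.predict_proba(X_te)[:, 1]
    print(name, round(roc_auc_score(y_te, p), 2), round(accuracy_score(y_te, m.predict(X_te)), 2))
```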
Procedia PDF Downloads 229
648 The Current Home Hemodialysis Practices and Patients’ Safety Related Factors: A Case Study from Germany
Authors: Ilyas Khan. Liliane Pintelon, Harry Martin, Michael Shömig
Abstract:
The increasing costs of healthcare, on the one hand, and the rise in the aging population and associated chronic disease, on the other, are putting an increasing burden on the current health care systems of many Western countries. For instance, chronic kidney disease (CKD) is a common disease, and in Europe the cost of renal replacement therapy (RRT) contributes significantly to the total health care cost. However, recent advancements in healthcare technology provide the opportunity to treat patients at home, in their own comfort. It is evident that home healthcare offers numerous advantages, notably low costs and high patient quality of life. Despite these advantages, the uptake of home hemodialysis (HHD) therapy is still low, in particular in Germany. Many factors account for the low HHD uptake; however, this paper focuses on patient safety-related factors of current HHD practices in Germany. The aim of this paper is to analyze the current HHD practices in Germany and to identify risk-related factors, if any exist. A case study was conducted in a dialysis organization which consists of four dialysis centers in the south of Germany. In total, these dialysis centers have 350 chronic dialysis patients, of whom four are on HHD. The centers have 126 staff, including six nephrologists and 120 other staff, i.e., nurses and administration. The results of the study revealed several risk-related factors. Most importantly, these centers do not offer allied health services at the pre-dialysis stage, and the HHD training did not have an established curriculum; however, they have just recently developed the first version. Only a soft copy of the machine manual is offered to patients. Surprisingly, the management was not aware of any standard available for home assessment and installation. The home assessment is done by a third party (i.e., the machine and equipment provider), which may not consider the hygienic quality of the patient’s home. The type of machine provided to patients at home is similar to the one in the center. The model may not be suitable at home because of its size and complexity, even though portable hemodialysis machines specially designed for home use, such as the NxStage series, are available on the market. Besides the type of machine, no assistance is offered for space management at home, in particular for placing the machine. Moreover, the centers do not offer remote assistance to patients and their carers at home; however, telephonic assistance is available. Furthermore, no alternative is offered if a carer is not available. In addition, the centers lack medical staff, including nephrologists and renal nurses.
Keywords: home hemodialysis, home hemodialysis practices, patients’ related risks in the current home hemodialysis practices, patient safety in home hemodialysis
Procedia PDF Downloads 119
647 Advanced Palliative Aquatics Care Multi-Device AuBento for Symptom and Pain Management by Sensorial Integration and Electromagnetic Fields: A Preliminary Design Study
Authors: J. F. Pollo Gaspary, F. Peron Gaspary, E. M. Simão, R. Concatto Beltrame, G. Orengo de Oliveira, M. S. Ristow Ferreira, J.C. Mairesse Siluk, I. F. Minello, F. dos Santos de Oliveira
Abstract:
Background: Although palliative care policies and services have been developed, research in this area continues to lag. An integrated model of palliative care is suggested, which includes complementary and alternative services aimed at improving the well-being of patients and their families. The palliative aquatics care multi-device (AuBento) uses several electromagnetic techniques to decrease pain and promote well-being through relaxation and interaction among patients, specialists, and family members. Aim: The scope of this paper is to present a preliminary design study of a device capable of exploring the various existing theories on the biomedical application of magnetic fields. This will be achieved by standardizing clinical data collection with sensory integration and adding new therapeutic options, in order to develop advanced palliative aquatics care and to innovate in symptom and pain management. Methods: The research methodology was based on the Work Package methodology for the development of projects, separating the activities into seven different Work Packages. The theoretical basis was developed through an integrative literature review according to the specific objectives of each Work Package and provided a broad analysis, which, together with the multiplicity of proposals and the interdisciplinarity of the research team involved, generated consistent and understandable complex concepts in the biomedical application of magnetic fields for palliative care. Results: The AuBento ambience was conceived with restricted electromagnetic exposure (avoiding data collection bias) and sensory integration (allowing relaxation associated with hydrotherapy, music therapy, and chromotherapy, or a floating-tank-like experience). The device has a multipurpose configuration enabling classic or exploratory options for the biomedical application of magnetic fields at the researcher's discretion. Conclusions: Several patients in diverse therapeutic contexts may benefit from the use of magnetic fields or fluids, thus validating the stimulus for clinical research in this area. A device in controlled and multipurpose environments may contribute to standardizing research and exploring new theories. Future research may demonstrate the possible benefits of the aquatics care multi-device AuBento in improving well-being and symptom control in palliative care patients and their families.
Keywords: advanced palliative aquatics care, magnetic field therapy, medical device, research design
Procedia PDF Downloads 128
646 Role of Artificial Intelligence in Nano Proteomics
Authors: Mehrnaz Mostafavi
Abstract:
Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify low-abundance proteins crucial for biomedical and clinical research. This paper introduces a different approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%), particularly in challenging conditions (95–97%). To address potential challenges in clinical samples, where post-translational modifications may affect labeling efficiency, the paper evaluates protein identification accuracy under partial labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach in real experimental conditions, resolving functionally similar proteins. The theoretical analysis, protein labeler program, finite difference time domain calculation of plasmonic fields, and simulation of nanopore-based optical sensing are detailed in the methods section. The study anticipates further exploration of temporal distributions of protein translocation dwell-times and the impact on convolutional neural network identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.
Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence
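As a hedged illustration of a convolutional classifier for this kind of data, the sketch below builds a small 1D CNN over fixed-length, three-channel (tri-color) intensity traces and maps each read to one of a set of candidate proteins. The architecture, trace length and number of classes are assumptions for illustration, not the network described in the paper.

```python
from tensorflow.keras import layers, models

T, N_CLASSES = 512, 20  # placeholder trace length and number of candidate proteins

# each read is assumed to be a (T, 3) array of tri-color fluorescence intensities
model = models.Sequential([
    layers.Conv1D(32, 7, activation='relu', input_shape=(T, 3)),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 5, activation='relu'),
    layers.GlobalAveragePooling1D(),
    layers.Dense(N_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()
# training would use simulated traces: model.fit(x_train, y_train, epochs=10, validation_split=0.1)
# where x_train has shape (n_reads, T, 3) and y_train holds integer protein IDs
```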
Procedia PDF Downloads 95
645 The Two Question Challenge: Embedding the Serious Illness Conversation in Acute Care Workflows
Authors: D. M. Lewis, L. Frisby, U. Stead
Abstract:
Objective: Many patients are receiving invasive treatments in acute care or are dying in hospital without having had comprehensive goals of care conversations. Some of these treatments may not align with the patient’s wishes, may be futile, and may cause unnecessary suffering. While many staff may recognize the benefits of engaging patients and families in Serious Illness Conversations (a goals of care framework developed by Ariadne Labs in Boston), few staff feel confident and/or competent in having these conversations in acute care. Another barrier to having these conversations may be a lack of incorporation into the current workflow. An educational exercise, titled the Two Question Challenge, was initiated on four medical units across two Vancouver Coastal Health (VCH) hospitals in an attempt to engage the entire interdisciplinary team in asking patients and families questions around goals of care and to improve the documentation of these expressed wishes and preferences. Methods: Four acute care units across two separate hospitals participated in the Two Question Challenge. On each unit, over the course of two eight-hour shifts, all members of the interdisciplinary team were asked to select at least two questions from a selection of nine goals of care questions. They were asked to pose these questions to a patient or family member throughout their shift and then to document their conversations in a centralized Advance Care Planning/Goals of Care discussion record in the patient’s chart. A visual representation of conversation outcomes was created to demonstrate to staff and patients the breadth of conversations that took place throughout the challenge. Staff and patients were interviewed about their experiences throughout the challenge. Two palliative approach leads remained present on the units throughout the challenge to support, guide, or role model these conversations. Results: Across four acute care medical units, 47 interdisciplinary staff participated in the Two Question Challenge, including nursing, allied health, and a physician. A total of 88 questions around goals of care were asked of patients or their families, and 50 newly documented goals of care conversations were charted. Two code statuses were changed as a result of the conversations. Patients voiced an appreciation for these conversations, and staff were able to successfully incorporate these questions into their daily care. Conclusion: The Two Question Challenge proved to be an effective way of having teams explore the goals of care of patients and families in an acute care setting. Staff felt that they gained confidence and competence. Both staff and patients found these conversations to be meaningful and impactful and felt they were notably different from their usual interactions. Documentation of these conversations in a centralized location that is easily accessible to all care providers increased significantly. Application of the Two Question Challenge in non-medical units or other care settings, such as long-term care facilities or community health units, should be explored in the future.
Keywords: advance care planning, goals of care, interdisciplinary, palliative approach, serious illness conversations
Procedia PDF Downloads 101
644 Childhood Adversity and Delinquency in Youth: Self-Esteem and Depression as Mediators
Authors: Yuhui Liu, Lydia Speyer, Jasmin Wertz, Ingrid Obsuth
Abstract:
Childhood adversities refer to situations where a child's basic needs for safety and support are compromised, leading to substantial disruptions in their emotional, cognitive, social, or neurobiological development. Given the prevalence of adversities (8%-39%), their impact on developmental outcomes is challenging to completely avoid. Delinquency is an important consequence of childhood adversities, given its potential to cause violence and other forms of victimisation, affecting victims, delinquents, their families, and the whole of society. Studying mediators helps explain the link between childhood adversity and delinquency, which aids in designing effective intervention programs that target explanatory variables to disrupt the path and mitigate the effects of childhood adversities on delinquency. The Dimensional Model of Adversity and Psychopathology suggests that threat-based adversities influence outcomes through emotion processing, while deprivation-based adversities do so through cognitive mechanisms. Thus, considering a wide range of threat-based and deprivation-based adversities, their co-occurrence, and their associations with delinquency through cognitive and emotional mechanisms is essential. This study employs the Millennium Cohort Study, tracking the development of approximately 19,000 individuals born across England, Scotland, Wales and Northern Ireland, representing a nationally representative sample. Parallel mediation models compare the mediating roles of self-esteem (cognitive) and depression (affective) in the associations between childhood adversities and delinquency. Eleven types of childhood adversities were assessed both individually and through latent class analysis, considering adversity experiences from birth to early adolescence. This approach aimed to capture how threat-based, deprivation-based, or combined threat- and deprivation-based adversities are associated with delinquency. Eight latent classes were identified: three classes (low adversity, especially direct and indirect violence; low childhood and moderate adolescent adversities; and persistent poverty with declining bullying victimisation) were negatively associated with delinquency. In contrast, three classes (high parental alcohol misuse; overall high adversities, especially regarding household instability; and high adversity) were positively associated with delinquency. When mediators were included, all classes showed a significant association with delinquency through depression, but not through self-esteem. Among the eleven single adversities, seven were positively associated with delinquency, with five linked through depression and none through self-esteem. The results imply the importance of affective variables, not just for threat-based but also for deprivation-based adversities. Academically, this suggests exploring other mechanisms linking adversities and delinquency, since some adversities are linked through neither depression nor self-esteem. Clinically, intervention programs should focus on affective variables like depression to mitigate the effects of childhood adversities on delinquency.
Keywords: childhood adversity, delinquency, depression, self-esteem
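A bare-bones sketch of a parallel mediation model of the kind described above is given below, estimating the indirect paths through self-esteem and depression with ordinary least squares. Variable names and the DataFrame are assumptions for illustration; the study's actual models (and confidence intervals, typically obtained by bootstrapping) are more elaborate.

```python
import statsmodels.formula.api as smf
import pandas as pd

# assumed columns: adversity (X), self_esteem (M1), depression (M2), delinquency (Y)
df = pd.read_csv('cohort.csv')

a1 = smf.ols('self_esteem ~ adversity', data=df).fit().params['adversity']   # X -> M1
a2 = smf.ols('depression ~ adversity', data=df).fit().params['adversity']    # X -> M2
outcome = smf.ols('delinquency ~ adversity + self_esteem + depression', data=df).fit()
b1, b2 = outcome.params['self_esteem'], outcome.params['depression']          # M1, M2 -> Y

print('indirect via self-esteem:', a1 * b1)
print('indirect via depression :', a2 * b2)
print('direct effect of adversity:', outcome.params['adversity'])
# in practice the indirect effects would be tested with bootstrapped confidence intervals
```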
Procedia PDF Downloads 32
643 No-Par Shares Working in European LLCs
Authors: Agnieszka P. Regiec
Abstract:
Capital companies are based on monetary capital. In the traditional model, the capital is the sum of the nominal values of all shares issued. For a few years now, limited liability company (LLC) regulations in European countries have been leaning towards liberalization of the capital structure in order to provide a higher degree of autonomy regarding intra-corporate governance. Reforms were based primarily on the legal system of the USA. In the USA, the tradition of no-par shares is well established; thus, the American legal system is chosen as a point of reference. The regulations of Germany, Great Britain, France, the Netherlands, Finland, Poland and the USA are taken into consideration. The analysis of share capital is important for the development of science not only because the capital structure of the corporation has a significant impact on shareholders’ rights, but also because it reflects on the relationship between the company and its creditors. A multi-level comparative approach to the problem allows a wide range of possible outcomes stemming from the novelization to be presented. The dogmatic method was applied. The analysis was based on statutes, secondary sources and judicial awards. Both the substantive and the procedural aspects of the capital structure were considered. In Germany, as a result of the regulatory competition typical for the EU, the structure of LLCs was reshaped. A new LLC, the Unternehmergesellschaft, which does not require a minimum share capital, was introduced. The minimum share capital for the Gesellschaft mit beschränkter Haftung was lowered from 25 000 to 10 000 euro. In France, the capital structure of corporations was also altered. In 2003, the minimum share capital of the société à responsabilité limitée (S.A.R.L.) was repealed. In 2009, the minimum share capital of the société par actions simplifiée, the “simple” version of the S.A.R.L., was also changed: there is no minimum share capital required by statute. The company, however, has to indicate a share capital, without the legislator imposing a minimum value for it. In the Netherlands, the reform of the Besloten Vennootschap met beperkte aansprakelijkheid (B.V.) was planned with the following change: repeal of the minimum share capital as an answer to the need for a higher degree of shareholder autonomy. It, however, preserved shares with a nominal value. In Finland, the novelization of the yksityinen osakeyhtiö took place in 2006, and as a result no-par shares were introduced. Although the statute allows shares without face value, it still requires a minimum share capital of 2 500 euro. In Poland, a proposal for the restructuring of the capital structure of the LLC has been introduced. The proposal provides, among others, for the reduction of the minimum capital to 1 PLN or the complete elimination of the minimum share capital, allowing no-par shares to be issued. In conclusion, American solutions, in particular the balance sheet test and the solvency test, provide better protection for creditors; European no-par shares are not the same as American ones, and the existence of share capital in Poland is crucial.
Keywords: balance sheet test, limited liability company, nominal value of shares, no-par shares, share capital, solvency test
Procedia PDF Downloads 183
642 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution
Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino
Abstract:
This paper presents a design methodology in which stakeholders are assisted with the exploration of a so-called negotiation space, aiming at the maximization of both group social welfare and each stakeholder’s perceived utility. The outcome is fewer design iterations needed for design convergence while obtaining a higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken in this stage carry delayed costs. Hence, it is necessary to have a clear definition of the problem under analysis, especially in the initial definition. This can be obtained thanks to a robust generation and exploration of design alternatives. This process must consider that design usually involves various individuals, who take decisions affecting one another. Effective coordination among these decision-makers is critical. Finding a mutually agreed solution will reduce the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology which aims to speed up the process of pushing up the mission’s concept maturity level. This push is obtained thanks to a guided negotiation space exploration, which involves autonomous exploration and optimization of trade opportunities among stakeholders via Artificial Intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method, infused with game theory and multi-attribute utility theory. In particular, game theory is able to model the negotiation process to reach the equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process to efficiently and rapidly search for the Pareto equilibria among stakeholders. Finally, the concept of utility constitutes the mechanism to bridge the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders’ needs and guarantees the effectiveness of the selected mission concept thanks to its robustness to change. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization
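To make the negotiation-space idea concrete, the sketch below filters a set of candidate designs down to the Pareto-efficient ones (no stakeholder's utility can be improved without hurting another) and then picks a point that maximizes a simple social-welfare aggregate. The random utilities, the three-stakeholder setup and the Nash-product welfare choice are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# assumed: each row holds one candidate design's utility for each stakeholder (higher is better)
rng = np.random.default_rng(0)
designs = rng.random((200, 3))  # 200 hypothetical alternatives, 3 stakeholders

def pareto_mask(u):
    """Keep designs that no other design dominates (>= on all utilities and > on at least one)."""
    keep = np.ones(len(u), dtype=bool)
    for i in range(len(u)):
        dominated = np.any(np.all(u >= u[i], axis=1) & np.any(u > u[i], axis=1))
        keep[i] = not dominated
    return keep

front = designs[pareto_mask(designs)]
# choose the front point maximizing a social-welfare aggregate (here: Nash product of utilities)
best = front[np.argmax(np.prod(front, axis=1))]
print(len(front), 'Pareto-efficient designs; selected utilities:', np.round(best, 2))
```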
Procedia PDF Downloads 136
641 Preschoolers’ Selective Trust in Moral Promises
Authors: Yuanxia Zheng, Min Zhong, Cong Xin, Guoxiong Liu, Liqi Zhu
Abstract:
Trust is a critical foundation of social interaction and development, playing a significant role in the physical and mental well-being of children, as well as their social participation. Previous research has demonstrated that young children do not blindly trust others but make selective trust judgments based on available information. The characteristics of speakers can influence children’s trust judgments. According to Mayer et al.’s model of trust, these characteristics include ability, benevolence, and integrity. While previous research has focused primarily on the effects of ability and benevolence, relatively little attention has been paid to integrity, which refers to individuals’ adherence to promises, fairness, and justice. This study focuses specifically on how keeping/breaking promises affects young children’s trust judgments. The paradigm of selective trust was employed in two experiments. A sample size of 100 children was required for an effect size of w = 0.30, α = 0.05, 1-β = 0.85, using G*Power 3.1. This study employed a 2×2 within-subjects design to investigate the effects of moral valence of promises (within-subjects factor: moral vs. immoral promises) and fulfilment of promises (within-subjects factor: kept vs. broken promises) on children’s trust judgments (divided into declarative and promising contexts). Experiment 1 adapted binary choice paradigms, presenting 118 preschoolers (62 girls, mean age = 4.99 years, SD = 0.78) with four conflict scenarios involving the keeping or breaking of moral/immoral promises, in order to investigate children’s trust judgments. Experiment 2 utilized single choice paradigms, in which 112 preschoolers (57 girls, mean age = 4.94 years, SD = 0.80) were presented with four stories to examine their level of trust. The results of Experiment 1 showed that preschoolers selectively trusted both promisors who kept moral promises and those who broke immoral promises, as well as their assertions and new promises. Additionally, the 5.5-6.5-year-old children were more likely than the 3.5-4.5-year-old children to trust both promisors who kept moral promises and those who broke immoral promises. Moreover, preschoolers were more likely to make accurate trust judgments towards promisors who kept moral promises compared to those who broke immoral promises. The results of Experiment 2 showed significant differences in preschoolers’ degree of trust: kept moral promise > broke immoral promise > broke moral promise ≈ kept immoral promise. This study is the first to investigate the development of trust judgments in moral promises among preschoolers aged 3.5-6.5. The results show that preschoolers can consider both valence and fulfilment of promises when making trust judgments. Furthermore, as preschoolers mature, they become more inclined to trust promisors who keep moral promises and those who break immoral promises. Additionally, the study reveals that preschoolers have the highest level of trust in promisors who kept moral promises, followed by those who broke immoral promises. Promisors who broke moral promises and those who kept immoral promises are trusted the least. These findings contribute valuable insights to our understanding of moral promises and trust judgment.
Keywords: promise, trust, moral judgement, preschoolers
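The reported sample size can be reproduced approximately with a chi-square power calculation. The sketch below assumes a one-degree-of-freedom test (n_bins = 2); the abstract only states that G*Power 3.1 was used, so the exact test family is an assumption.

```python
from statsmodels.stats.power import GofChisquarePower

# for w = 0.30, alpha = .05, power = .85 and df = 1, the required N comes out near 100
n = GofChisquarePower().solve_power(effect_size=0.30, nobs=None,
                                    alpha=0.05, power=0.85, n_bins=2)
print(round(n))  # approximately 100, matching the reported sample size
```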
Procedia PDF Downloads 54
640 Bivariate Analyses of Factors That May Influence HIV Testing among Women Living in the Democratic Republic of the Congo
Authors: Danielle A. Walker, Kyle L. Johnson, Patrick J. Fox, Jacen S. Moore
Abstract:
The HIV Continuum of Care has become a universal model to provide context for the process of HIV testing, linkage to care, treatment, and viral suppression. HIV testing is the first step in moving toward community viral suppression. Countries with a lower socioeconomic status experience the lowest rates of testing and access to care. The Democratic Republic of the Congo is located in the heart of sub-Saharan Africa, where testing and access to care are low and women experience higher HIV prevalence compared to men. In the Democratic Republic of the Congo there is only a 21.6% HIV testing rate among women. Because a critical gap exists between a woman’s risk of contracting HIV and the decision to be tested, this study was conducted to obtain a better understanding of the relationship between factors that could influence HIV testing among women. The datasets analyzed were from the 2013-14 Democratic Republic of the Congo Demographic and Health Survey Program. The data was subset for women with an age range of 18-49 years. All missing cases were removed and one variable was recoded. The total sample size analyzed was 14,982 women. The results showed that there did not seem to be a difference in HIV testing by mean age. Out of 11 religious categories (Catholic, Protestant, Armee de salut, Kimbanguiste, Other Christians, Muslim, Bundu dia kongo, Vuvamu, Animist, no religion, and other), those who identified as Other Christians had the highest testing rate of 25.9% and those identified as Vuvamu had a 0% testing rate (p<0.001). There was a significant difference in testing by religion. Only 0.7% of women surveyed identified as having no religious affiliation. This suggests partnerships with key community and religious leaders could be a tool to increase testing. Over 60% of women who had never been tested for HIV did not know where to be tested. This highlights the need to educate communities on where testing facilities can be located. Almost 80% of women who believed HIV could be transmitted by supernatural means and/or witchcraft had never been tested before (p=0.08). Cultural beliefs could influence risk perception and testing decisions. Consequently, misconceptions need to be considered when implementing HIV testing and prevention programs. Location by province, years of education, and wealth index were also analyzed to control for socioeconomic status. Kinshasa had the highest testing rate of 54.2% of women living there, and both Equateur and Kasai-Occidental had less than a 10% testing rate (p<0.001). As the education level increased up to 12 years, testing increased (p<0.001). Women within the highest quintile of the wealth index had a 56.1% testing rate, and women within the lowest quintile had a 6.5% testing rate (p<0.001). This study concludes that further research is needed to identify culturally competent methods to increase HIV education programs, build partnerships with key community leaders, and improve knowledge on access to care.
Keywords: Democratic Republic of the Congo, cultural beliefs, education, HIV testing
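The bivariate comparisons reported above (testing rate by religion, province, education and wealth, each with a p-value) are of the kind typically run as chi-square tests on contingency tables. The sketch below shows one such test; the DataFrame and column names are illustrative assumptions, not the survey's actual variable names.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# assumed: one row per woman, with columns 'religion' and 'ever_tested' (0/1)
df = pd.read_csv('dhs_women.csv')

table = pd.crosstab(df['religion'], df['ever_tested'])
chi2, p, dof, expected = chi2_contingency(table)
print(f'chi2={chi2:.1f}, dof={dof}, p={p:.4f}')

# per-group testing rates, mirroring the reported spread (e.g., 25.9% vs 0%)
print(df.groupby('religion')['ever_tested'].mean().sort_values(ascending=False))
```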
Procedia PDF Downloads 287639 Personality Based Tailored Learning Paths Using Cluster Analysis Methods: Increasing Students' Satisfaction in Online Courses
Authors: Orit Baruth, Anat Cohen
Abstract:
Online courses have become common in many learning programs and various learning environments, particularly in higher education. Social distancing enforced in response to the COVID-19 pandemic has increased the demand for these courses. Yet, despite the frequency of use, online learning is not free of limitations and may not suit all learners. Hence, the growth of online learning alongside learners' diversity raises the question: does online learning, as it is currently offered, meet the needs of each learner? Fortunately, today's technology allows the production of tailored learning platforms, namely, personalization. Personality influences a learner's satisfaction and therefore has a significant impact on learning effectiveness. A better understanding of personality can lead to a greater appreciation of learning needs, as well as assist educators in ensuring that an optimal learning environment is provided. In the context of online learning and personality, research on learning design according to personality traits is lacking. This study explores the relations between personality traits (using the 'Big-five' model) and students' satisfaction with five techno-pedagogical learning solutions (TPLS): discussion groups, digital books, online assignments, surveys/polls, and media, in order to provide an online learning process to students' satisfaction. The satisfaction level and personality of 108 students who participated in a fully online learning course at a large, accredited university were measured. Cluster analysis methods (k-means) were applied to identify learners' clusters according to their personality traits. Correlation analysis was performed to examine the relations between the obtained clusters and satisfaction with the offered TPLS. Findings suggest that learners associated with the 'Neurotic' cluster showed low satisfaction with all TPLS compared to learners associated with the 'Non-neurotics' cluster. Learners associated with the 'Conscientious' cluster were satisfied with all TPLS except discussion groups, and those in the 'Open-Extroverts' cluster were satisfied with assignments and media. All clusters except 'Neurotic' were highly satisfied with the online course in general. According to the findings, dividing learners into four clusters based on personality traits may help define tailored learning paths for them, combining various TPLS to increase their satisfaction. As personality comprises a set of traits, several TPLS may be offered in each learning path. For the neurotics, however, an extended selection may be more suitable, or alternatively they can be offered the TPLS they dislike least. Study findings clearly indicate that personality plays a significant role in a learner's satisfaction level. Consequently, personality traits should be considered when designing personalized learning activities. The current research seeks to bridge the theoretical gap in this specific research area. Establishing the assumption that different personalities need different learning solutions may contribute towards a better design of online courses, leaving no learner behind, whether or not they like online learning, since different personalities need different learning solutions.Keywords: online learning, personality traits, personalization, techno-pedagogical learning solutions
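A minimal sketch of the clustering step described above is given below: k-means on standardized Big-Five scores followed by a per-cluster summary of TPLS satisfaction. The choice of four clusters follows the abstract, but the file, column names, and data are hypothetical placeholders.

```python
# Minimal sketch of clustering learners by Big-Five traits (k-means) and relating
# clusters to satisfaction with techno-pedagogical learning solutions (TPLS).
# The file, column names, and data are hypothetical placeholders, not the study data.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_csv("students.csv")  # hypothetical: Big-Five scores and TPLS satisfaction per student
traits = ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]
tpls = ["discussion_groups", "digital_books", "assignments", "surveys_polls", "media"]

X = StandardScaler().fit_transform(df[traits])
df["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Mean satisfaction with each TPLS per personality cluster
print(df.groupby("cluster")[tpls].mean().round(2))
```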
Procedia PDF Downloads 103638 Working From Home: On the Relationship Between Place Attachment to Work Place, Extraversion and Segmentation Preference to Burnout
Authors: Diamant Irene, Shklarnik Batya
Abstract:
In addition to its widespread effects on health and economic issues, Covid-19 shook the world of work and employment. Among the prominent changes during the pandemic is the trend of working from home, completely or partially, as part of social distancing. In fact, these changes accelerated an existing tendency toward work flexibility already underway before the pandemic. Technology and advanced means of communication led to a re-assessment of the 'place of work' as a physical space in which work takes place. Today workers can remotely carry out meetings, manage projects, and work in groups, and different research studies point to the fact that this type of work has no adverse effect on productivity. However, from the worker's perspective, despite numerous advantages associated with working from home, such as convenience, flexibility, and autonomy, various drawbacks have been identified, such as loneliness, reduced commitment, and home-work boundary erosion, all risk factors relating to quality of life and burnout. Thus, a real need has arisen to explore differences in work-from-home experiences and to understand the relationship between psychological characteristics and the prevalence of burnout. This understanding may be of significant value to organizations considering a future hybrid work model combining in-office and remote working. Based on Hobfoll's Theory of Conservation of Resources, we hypothesized that burnout would mainly be found among workers whose physical remoteness from the workplace threatens or hinders their ability to retain significant individual resources. In the present study, we compared fully remote and partially remote (hybrid) workers, and we examined psychological characteristics and their connection to the formation of burnout. Based on the conceptualization of Place Attachment as the cognitive-emotional bond of an individual to a meaningful place and the need to maintain closeness to it, we assumed that individuals characterized by Place Attachment to the workplace would suffer more from burnout when working from home. We also assumed that extroverted individuals, characterized by the need for social interaction at the workplace, and individuals with a segmentation preference – a need for separation between different life domains – would suffer more from burnout, especially among fully remote workers relative to partially remote workers. 194 workers, of whom 111 worked fully from home and 83 worked partially from home, aged 19-53, from different sectors, were tested using an online questionnaire distributed through social media. The results of the study supported our assumptions. The repercussions of these findings for the future occupational experience are discussed, with an emphasis on suitable occupational adjustment according to the psychological characteristics and needs of workers.Keywords: working from home, burnout, place attachment, extraversion, segmentation preference, Covid-19
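One way such group and trait effects on burnout can be tested is with an ordinary least squares model including an interaction between work mode and each psychological characteristic; the sketch below is only an illustration, as the abstract does not specify the analysis, and the file and column names are hypothetical.

```python
# Minimal sketch of testing whether place attachment, extraversion, and segmentation
# preference relate to burnout differently for fully vs. partially remote workers.
# The file, column names, and model specification are hypothetical illustrations.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wfh_survey.csv")  # hypothetical survey export
model = smf.ols(
    "burnout ~ C(work_mode) * (place_attachment + extraversion + segmentation_preference)",
    data=df,
).fit()
print(model.summary())
```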
Procedia PDF Downloads 190637 Modeling, Topology Optimization and Experimental Validation of Glass-Transition-Based 4D-Printed Polymeric Structures
Authors: Sara A. Pakvis, Giulia Scalet, Stefania Marconi, Ferdinando Auricchio, Matthijs Langelaar
Abstract:
In recent developments in the field of multi-material additive manufacturing, differences in material properties are exploited to create printed shape-memory structures, which are referred to as 4D-printed structures. New printing techniques allow for the deliberate introduction of prestresses in the specimen during manufacturing, and, in combination with the right design, this enables new functionalities. This research focuses on bi-polymer 4D-printed structures, where the transformation process is based on a heat-induced glass transition in one material lowering its Young’s modulus, combined with an initial prestress in the other material. Upon the decrease in stiffness, the prestress is released, which results in the realization of an essentially pre-programmed deformation. As the design of such functional multi-material structures is crucial but far from trivial, a systematic methodology to design 4D-printed structures is developed, where a finite element model is combined with a density-based topology optimization method to describe the material layout. This modeling approach is verified by a convergence analysis and validated by comparing its numerical results to analytical and published data. Specific aspects that are addressed include the interplay between the definition of the prestress and the material interpolation function used in the density-based topology description, the inclusion of a temperature-dependent stiffness relationship to simulate the glass transition effect, and the importance of considering geometric nonlinearity in the finite element modeling. The efficacy of topology optimization to design 4D-printed structures is explored by applying the methodology to a variety of design problems, both in 2D and 3D settings. Bi-layer designs composed of thermoplastic polymers are printed by means of the fused deposition modeling (FDM) technology. Acrylonitrile butadiene styrene (ABS) undergoes the glass transition transformation, while thermoplastic polyurethane (TPU) is prestressed by means of the 3D-printing process itself. Tests inducing shape transformation in the printed samples through heating are performed to calibrate the prestress and validate the modeling approach by comparing the numerical results to the experimental findings. Using the experimentally obtained prestress values, more complex designs have been generated through topology optimization, and samples have been printed and tested to evaluate their performance. This study demonstrates that by combining topology optimization and 4D-printing concepts, stimuli-responsive structures with specific properties can be designed and realized.Keywords: 4D-printing, glass transition, shape memory polymer, topology optimization
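As an illustration of the interplay between the density-based material interpolation and the temperature-dependent stiffness mentioned above, a SIMP-style two-material interpolation could take the following form; the notation is introduced here for illustration and is not taken from the abstract.

```latex
% Illustrative SIMP-style interpolation (not the authors' exact formulation):
% the density variable rho in [0,1] blends the prestressed polymer (TPU) and the
% glass-transition polymer (ABS), whose modulus depends on the temperature T.
E(\rho, T) = E_{\mathrm{TPU}} + \rho^{p}\,\bigl(E_{\mathrm{ABS}}(T) - E_{\mathrm{TPU}}\bigr),
\qquad
E_{\mathrm{ABS}}(T) \approx
\begin{cases}
E_{\mathrm{glassy}}, & T < T_g,\\[2pt]
E_{\mathrm{rubbery}}, & T \ge T_g,
\end{cases}
\qquad p \ge 1.
```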
Procedia PDF Downloads 207636 Analytical Study of the Structural Response to Near-Field Earthquakes
Authors: Isidro Perez, Maryam Nazari
Abstract:
Numerous earthquakes that have taken place across the world have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe-Japan, and 2010 Chile earthquakes). Engineers are constantly studying methods to moderate the effect this phenomenon has on structures to further reduce damage and costs and, ultimately, to provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. When an earthquake occurs near the fault lines, it can be categorized as a near-field earthquake. In contrast, a far-field earthquake occurs when the site is farther away from the seismic source. A near-field earthquake generally has a higher initial peak, resulting in a larger seismic response when compared to a far-field earthquake ground motion. These larger responses may result in serious consequences in terms of structural damage, which can pose a high risk to public safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in the current building design specifications. For example, in ASCE 7-10, the design response spectrum is mostly based on far-field design-level earthquakes. This may result in catastrophic damage to structures that are not properly designed for near-field earthquakes. This research investigates the effect that near-field earthquakes have on the response of structures. To fully examine this topic, a structure was designed following the current seismic building design specifications (e.g., ASCE 7-10 and ACI 318-14) and analytically modeled using the SAP2000 software. Next, using the FEMA P695 report, several near-field and far-field earthquakes were selected, and the near-field earthquake records were scaled to represent the design-level ground motions. The prototype structural model created in SAP2000 was then subjected to the scaled ground motions. A linear time history analysis and a pushover analysis were conducted in SAP2000 to evaluate the structural seismic responses. On average, the structure experienced an 8% and 1% increase in story drift and absolute acceleration, respectively, when subjected to the near-field earthquake ground motions. The pushover analysis was run to help properly define the hinge formation in the structure when conducting the nonlinear time history analysis. A near-field ground motion is characterized by a high-energy pulse, making it unique among earthquake ground motions. Therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be utilized in the generation of a design spectrum for the estimation of design forces for buildings subjected to near-field (NF) ground motions.Keywords: near-field, pulse, pushover, time-history
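To make the time-history step concrete, the sketch below computes the linear response of a single-degree-of-freedom oscillator to a ground-acceleration record with the average-acceleration Newmark method; the record, period, and damping values are hypothetical, and the sketch is far simpler than the multi-degree-of-freedom SAP2000 model described above.

```python
# Minimal linear SDOF time-history analysis (Newmark average-acceleration method).
# Ground motion, period, and damping are hypothetical; this is an illustrative sketch,
# not the SAP2000 model used in the study.
import numpy as np

def newmark_sdof(ag, dt, period=1.0, damping=0.05, beta=0.25, gamma=0.5):
    """Return the displacement history of a unit-mass SDOF oscillator for ground accel ag."""
    wn = 2 * np.pi / period
    k, c = wn**2, 2 * damping * wn            # stiffness and damping for unit mass
    u = v = 0.0
    a = -ag[0] - c * v - k * u                # initial acceleration from equilibrium
    keff = k + gamma / (beta * dt) * c + 1.0 / (beta * dt**2)
    disp = np.zeros(len(ag))
    for i in range(1, len(ag)):
        dp = (-(ag[i] - ag[i - 1])
              + (1.0 / (beta * dt) + gamma / beta * c) * v
              + (1.0 / (2 * beta) + dt * (gamma / (2 * beta) - 1.0) * c) * a)
        du = dp / keff
        dv = gamma / (beta * dt) * du - gamma / beta * v + dt * (1 - gamma / (2 * beta)) * a
        da = du / (beta * dt**2) - v / (beta * dt) - a / (2 * beta)
        u, v, a = u + du, v + dv, a + da
        disp[i] = u
    return disp

dt = 0.01
ag = 0.3 * 9.81 * np.sin(2 * np.pi * 1.5 * np.arange(0, 10, dt))  # hypothetical pulse-like record
u = newmark_sdof(ag, dt, period=1.0, damping=0.05)
print(f"Peak displacement: {u.max():.3f} m")
```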
Procedia PDF Downloads 146635 Quantitative Evaluation of Efficiency of Surface Plasmon Excitation with Grating-Assisted Metallic Nanoantenna
Authors: Almaz R. Gazizov, Sergey S. Kharintsev, Myakzyum Kh. Salakhov
Abstract:
This work deals with background signal suppression in tip-enhanced near-field optical microscopy (TENOM). The background appears because an optical signal is detected not only from the subwavelength area beneath the tip but also from a wider, diffraction-limited area of the laser's waist that might contain another substance. The background can be reduced by using a tapered probe with a grating on its lateral surface, where external illumination causes surface plasmon excitation. This requires a grating whose parameters are perfectly matched to the given incident light for effective light coupling. This work is devoted to an analysis of the light-grating coupling and a search for grating parameters that enhance the near-field light beneath the tip apex. The aim of this work is to find the figure of merit of plasmon excitation depending on the grating period and the location of the grating with respect to the apex. In our consideration, the metallic grating on the lateral surface of the tapered plasmonic probe is illuminated by a plane wave, with the electric field perpendicular to the sample surface. The theoretical model of the efficiency of plasmon excitation and propagation toward the apex is tested by FDTD-based numerical simulation. The electric field of the incident light is enhanced at every single slit of the grating due to the lightning-rod effect. Hence, the grating causes amplitude and phase modulation of the incident field in various ways depending on the geometry and material of the grating. The phase-modulating grating on the probe is a sort of metasurface that enables manipulation of the spatial frequencies of the incident field. The spatial frequency-dependent electric field is found from the angular spectrum decomposition. If one of the components satisfies the phase-matching condition, then one can readily calculate the figure of merit of plasmon excitation, defined as the ratio of the intensities of the surface mode and the incident light. During propagation towards the apex, the surface wave undergoes losses in the probe material, radiation losses, and mode compression. There is an optimal location of the grating with respect to the apex. One finds this value by matching the quadratic law of mode compression with the exponential law of light extinction. Finally, the performed theoretical analysis and numerical simulations of plasmon excitation demonstrate that various surface waves can be effectively excited by using the overtones of a period of the grating or by phase modulation of the incident field. Gratings with such periods are easy to fabricate. The tapered probe with the grating effectively enhances and localizes the incident field at the sample.Keywords: angular spectrum decomposition, efficiency, grating, surface plasmon, taper nanoantenna
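For context, the phase-matching condition referred to above can be written explicitly in the standard grating-coupling form; the notation below is introduced here for illustration and is not taken from the abstract.

```latex
% Grating-assisted coupling of light to a surface plasmon polariton (illustrative notation):
% an incident wave of vacuum wavenumber k_0 at angle theta gains momentum from the grating
% of period Lambda in integer multiples m of 2*pi/Lambda; eps_m and eps_d are the metal and
% dielectric permittivities.
k_{\mathrm{SPP}} = k_0 \sin\theta + m\,\frac{2\pi}{\Lambda},
\qquad
k_{\mathrm{SPP}} = k_0 \sqrt{\frac{\varepsilon_m \varepsilon_d}{\varepsilon_m + \varepsilon_d}},
\qquad m \in \mathbb{Z}.
```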
Procedia PDF Downloads 283634 A Prospective Study of a Clinically Significant Anatomical Change in Head and Neck Intensity-Modulated Radiation Therapy Using Transit Electronic Portal Imaging Device Images
Authors: Wilai Masanga, Chirapha Tannanonta, Sangutid Thongsawad, Sasikarn Chamchod, Todsaporn Fuangrod
Abstract:
The major factors affecting radiotherapy for head and neck (HN) cancers include the patient's anatomical changes and tumour shrinkage. These changes can significantly affect the planned dose distribution and cause the treatment plan to deteriorate. Comparison of measured transit EPID images with predicted EPID images using gamma analysis has been clinically implemented to verify dose accuracy as part of an adaptive radiotherapy protocol. However, a global gamma analysis is not sensitive to some critical organ changes, as the entire treatment field is compared. The objective of this feasibility study is to evaluate the dosimetric response to patient anatomical changes during the treatment course in HN IMRT (head and neck intensity-modulated radiation therapy) using a novel comparison method: organ-of-interest gamma analysis. This method provides more sensitive detection of changes in specific organs. Five randomly selected, replanned HN IMRT patients, whose tumour shrinkage and weight loss critically affected parotid size, were chosen, and their transit dosimetry was evaluated. A comprehensive physics-based model was used to generate a series of predicted transit EPID images for each gantry angle from the original computed tomography (CT) and replan CT datasets. The patient structures, including the left and right parotid, spinal cord, and planning target volume (PTV56), were projected to the EPID level. The agreement between the transit images generated from the original CT and the replanned CT was quantified using gamma analysis with 3%, 3 mm criteria. Moreover, the gamma pass-rate is calculated only within each projected structure. The gamma pass-rates in the right parotid and PTV56 between the predicted transit images of the original CT and the replan CT were 42.8% (± 17.2%) and 54.7% (± 21.5%), respectively. The gamma pass-rates for the other projected organs were greater than 80%. Additionally, the results of the organ-of-interest gamma analysis were compared with 3-dimensional cone-beam computed tomography (3D-CBCT) and the radiation oncologists' rationale for replanning. This comparison showed that using only the registration of 3D-CBCT to the original CT does not reveal the dosimetric impact of anatomical changes. Using transit EPID images with organ-of-interest gamma analysis can provide additional information for assessing treatment plan suitability.Keywords: re-plan, anatomical change, transit electronic portal imaging device, EPID, head and neck
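For reference, the gamma comparison described above can be illustrated with a simplified one-dimensional implementation of the gamma index using the 3%/3 mm criteria; real transit-EPID gamma analysis operates on 2D images, and the dose profiles below are hypothetical.

```python
# Simplified 1D gamma-index sketch (3% dose difference, 3 mm distance-to-agreement).
# Real transit-EPID gamma analysis is 2D; the profiles below are hypothetical.
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dose_crit=0.03, dta_mm=3.0):
    """Return the gamma value at each reference point (global dose normalization)."""
    d_norm = dose_crit * dose_ref.max()
    gam = np.empty_like(dose_ref)
    for i, (xr, dr) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xr) / dta_mm) ** 2
        dose2 = ((dose_eval - dr) / d_norm) ** 2
        gam[i] = np.sqrt(np.min(dist2 + dose2))
    return gam

x = np.linspace(-50, 50, 201)                      # positions in mm
dose_ref = np.exp(-(x / 30) ** 2)                  # hypothetical predicted profile
dose_eval = 1.02 * np.exp(-((x - 1.0) / 30) ** 2)  # hypothetical measured profile
gamma = gamma_1d(x, dose_ref, dose_eval)
print(f"Gamma pass rate (3%/3mm): {100 * np.mean(gamma <= 1):.1f}%")
```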
Procedia PDF Downloads 216633 First-Trimester Screening of Preeclampsia in a Routine Care
Authors: Tamar Grdzelishvili, Zaza Sinauridze
Abstract:
Introduction: Preeclampsia is a complication of the second trimester of pregnancy, characterized by high morbidity and multiorgan damage. Many complex pathogenic mechanisms are now implicated as responsible for this disease (1). Preeclampsia is one of the leading causes of maternal mortality worldwide. The statistics alone convey the seriousness of this pathology: about 100,000 women die of preeclampsia every year. It occurs in 3-14% of pregnant women (varying significantly with racial or ethnic origin and geographical region), in a mild form in 75% of cases and in a severe form in 25%. In severe pre-eclampsia/eclampsia, perinatal mortality increases 5-fold and stillbirth 9.6-fold. Considering that the only way to treat the disease is to end the pregnancy, timely diagnosis and prevention of the disease are essential. Identifying pregnant women at high risk for PE and giving prophylaxis would reduce the incidence of preterm PE. The first-trimester screening model developed by the Fetal Medicine Foundation (FMF), which uses Bayes' theorem to combine maternal characteristics and medical history with measurements of mean arterial pressure, uterine artery pulsatility index, and serum placental growth factor, has been proven to be effective and to have screening performance superior to that of the traditional risk-factor-based approach for the prediction of PE (2). Methods: Retrospective single-centre screening study. The study population consisted of women from the Tbilisi maternity hospital “Pineo medical ecosystem” who met the following criteria: they spoke Georgian, English, or Russian and agreed to participate in the study after discussing informed consent and answering questions. Prior to the study, informed consent forms approved by the Institutional Review Board were obtained from the study subjects. Early assessment of preeclampsia risk was performed between 11 and 13 weeks of pregnancy. The following were evaluated: anamnesis, Doppler ultrasound of the uterine arteries, mean arterial blood pressure, and the biochemical parameter pregnancy-associated plasma protein A (PAPP-A). Individual risk assessment was performed with the Fast Screen 3.0 software (Thermo Fisher Scientific). Results: A total of 513 women were recruited, and over the course of the study, 51 women were diagnosed with preeclampsia (34.5% of the pregnant women at high risk, 6.5% of those at low risk; P<0.0001). Conclusions: First-trimester screening combining maternal factors with uterine artery Doppler, blood pressure, and pregnancy-associated plasma protein A is useful for predicting PE in a routine care setting. More patient studies are needed for final conclusions. The research is still ongoing.Keywords: first-trimester, preeclampsia, screening, pregnancy-associated plasma protein
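As a purely illustrative sketch of how first-trimester markers can be combined into a single risk figure, the example below converts each marker to a multiple of the median (MoM) and applies a toy logistic score; this is not the FMF competing-risks algorithm nor the Fast Screen 3.0 implementation, and all medians and coefficients are hypothetical placeholders.

```python
# Illustrative sketch of combining first-trimester markers (MAP, uterine artery PI,
# PAPP-A) via multiples of the median (MoM) and a toy logistic score. This is NOT the
# FMF competing-risks algorithm or the Fast Screen 3.0 implementation; medians and
# coefficients are hypothetical placeholders.
import math

def to_mom(value, expected_median):
    return value / expected_median

def toy_pe_risk(map_mmhg, uta_pi, papp_a_miu_ml):
    map_mom = to_mom(map_mmhg, 85.0)          # hypothetical expected medians
    pi_mom = to_mom(uta_pi, 1.6)
    papp_a_mom = to_mom(papp_a_miu_ml, 2.5)
    # hypothetical coefficients: higher MAP and PI raise risk, higher PAPP-A lowers it
    logit = (-4.0 + 3.0 * math.log(map_mom) + 2.0 * math.log(pi_mom)
             - 1.0 * math.log(papp_a_mom))
    return 1.0 / (1.0 + math.exp(-logit))

print(f"Toy PE risk: {toy_pe_risk(map_mmhg=95, uta_pi=2.1, papp_a_miu_ml=1.0):.2%}")
```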
Procedia PDF Downloads 77632 Teaching of Entrepreneurship and Innovation in Brazilian Universities
Authors: Marcelo T. Okano, Oduvaldo Vendrametto, Osmildo S. Santos, Marcelo E. Fernandes, Heide Landi
Abstract:
The teaching of entrepreneurship and innovation in Brazilian universities has increased in recent years due to several factors, such as the emergence of disciplines like biotechnology, increased globalization, reduced basic funding, and new perspectives on the role of the university in the system of knowledge production. Innovation is increasingly seen as an evolutionary process that involves different institutional spheres or sectors in society. Entrepreneurship is a milestone on the road towards economic progress and makes a huge contribution towards the quality and future hopes of a sector, economy, or even a country. Entrepreneurship is as important in small and medium-sized enterprises (SMEs) and local markets as in large companies and national and international markets, and is just as key a consideration for public companies as for private organizations. Entrepreneurship helps to encourage competition in the current environment shaped by the effects of globalization. There is an increasing tendency for government policy to promote entrepreneurship for its apparent economic benefit. Accordingly, governments seek to employ entrepreneurship education as a means to stimulate increased levels of economic activity. Entrepreneurship education and training (EET) is growing rapidly in universities and colleges throughout the world, and governments are supporting it both directly and through funding major investments in advice provision to would-be entrepreneurs and existing small businesses. The Triple Helix of university–industry–government relations is compared with alternative models for explaining the current research system in its social contexts. Communications and negotiations between institutional partners generate an overlay that increasingly reorganizes the underlying arrangements. To achieve the objective of this research, a survey of the literature on entrepreneurship and innovation was carried out, followed by field research with 100 students of Fatec. To collect the data needed for analysis, we used exploratory research of a qualitative nature. We asked respondents to rate their degree of knowledge of ten topics related to entrepreneurship and innovation; responses were given on a four-level Likert scale: none, small, medium, and large. We can conclude that terms such as entrepreneurship and innovation are known by most students because the university propagates them across disciplines, lectures, and innovation institutes. More specific items, such as the canvas and design thinking models, are unknown to most respondents. This underlines the importance of the university in teaching innovation and entrepreneurship and in transmitting this knowledge to students in order to equalize knowledge levels. As a future project, these items will be re-evaluated to create indicators for measuring the knowledge level.Keywords: Brazilian universities, entrepreneurship, innovation, globalization
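A minimal sketch of summarizing the survey responses described above is shown below: for each of the ten topics, the share of students at each of the four Likert levels is tabulated. The file and column layout are hypothetical placeholders.

```python
# Minimal sketch of summarizing four-level Likert responses (none/small/medium/large)
# for ten entrepreneurship and innovation topics. File and data are hypothetical.
import pandas as pd

levels = ["none", "small", "medium", "large"]
df = pd.read_csv("survey_responses.csv")  # hypothetical: one column per topic, one row per student

summary = (df.apply(lambda col: col.value_counts(normalize=True))
             .reindex(levels)
             .fillna(0.0)
             .T * 100)
print(summary.round(1))  # percentage of students at each knowledge level per topic
```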
Procedia PDF Downloads 507631 Developing of Ecological Internal Insulation Composite Boards for Innovative Retrofitting of Heritage Buildings
Authors: J. N. Nackler, K. Saleh Pascha, W. Winter
Abstract:
WHISCERS™ (Whole House In-Situ Carbon and Energy Reduction Solution) is an innovative process for Internal Wall Insulation (IWI) for the energy-efficient retrofitting of heritage buildings, which uses laser measuring to determine the dimensions of a room, off-site insulation board cutting, and rapid installation to complete the process. As part of a multinational investigation consortium, the Austrian partner adapted the WHISCERS system to the local conditions of Vienna, where most historical buildings have valuable stucco facades, precluding the application of external insulation. The Austrian project contribution addresses the replacement of commonly used extruded polystyrene foam (XPS) with renewable materials such as wood and wood products to develop a more sustainable IWI system. As the timber industry is a major industry in Austria, an innovative and more sustainable IWI solution could also open up new markets. The first step of the investigation was a Life Cycle Assessment (LCA) to define the performance of wood fibre board as an insulation material in comparison to the commonly used XPS boards. As one of the results, the global-warming potential (GWP), expressed in carbon dioxide equivalents, of wood fibre board is 15 times lower, while that of XPS is 72 times higher. The hygrothermal simulation program WUFI was used to evaluate and simulate heat and moisture transport in the multi-layer building components of the developed IWI solution. The simulation results show that, under the examined boundary conditions for selected representative brickwork constructions, the proposed IWI is functional and usable without risk regarding vapour diffusion and liquid transport. In a further stage, three different solutions were developed and tested (1 - glued/mortared, 2 - with soft board, connected to the wall, with gypsum board as top layer, 3 - with soft board and clay board as top layer). All three solutions present a flexible insulation layer of wood fibre towards the existing wall, thus compensating for irregularities of the wall surface. From first considerations at the beginning of the development phase, three different systems were developed, optimized according to assembly technology, and tested as small specimens under real-object conditions. The built prototypes are monitored to detect performance and building-physics problems and to validate the results of the computer simulation model. This paper illustrates the development and application of the Internal Wall Insulation system.Keywords: internal insulation, wood fibre, hygrothermal simulations, monitoring, clay, condensate
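As a hedged illustration of the kind of LCA comparison mentioned above, the sketch below estimates the production-stage GWP of two insulation options per square metre of wall at equal thermal resistance; all material properties and emission factors are hypothetical placeholders, not the values from the study.

```python
# Minimal sketch of comparing the production-stage GWP of two insulation options per m²
# of wall at equal thermal resistance. All factors and material properties below are
# hypothetical placeholders, not the values from the study's LCA.
def gwp_per_m2(thermal_resistance, conductivity, density, gwp_per_kg):
    thickness = thermal_resistance * conductivity   # m, to reach the target R-value
    mass = thickness * density                       # kg per m² of wall
    return mass * gwp_per_kg                         # kg CO2-eq per m²

R = 2.0  # target thermal resistance in m²K/W (hypothetical)
print("XPS:       ", round(gwp_per_m2(R, conductivity=0.035, density=35,  gwp_per_kg=4.0), 2), "kg CO2-eq/m²")
print("Wood fibre:", round(gwp_per_m2(R, conductivity=0.040, density=160, gwp_per_kg=0.3), 2), "kg CO2-eq/m²")
```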
Procedia PDF Downloads 219630 Spare Part Carbon Footprint Reduction with Reman Applications
Authors: Enes Huylu, Sude Erkin, Nur A. Özdemir, Hatice K. Güney, Cemre S. Atılgan, Hüseyin Y. Altıntaş, Aysemin Top, Muammer Yılman, Özak Durmuş
Abstract:
Remanufacturing (reman) applications allow manufacturers to contribute to the circular economy and help to introduce products of almost the same quality that are environmentally friendly and lower in cost. The objective of this study is to show that the carbon footprint of automotive spare parts used in vehicles can be reduced by reman applications, based on a Life Cycle Analysis framed by ISO 14040 principles. The aim was to investigate reman applications for 21 parts in total. So far, research and calculations have been completed for the alternator, turbocharger, starter motor, compressor, manual transmission, automatic transmission, and DPF (diesel particulate filter) parts, respectively. Since the aim of Ford Motor Company and Ford OTOSAN is to achieve net zero, based on Science-Based Targets (SBT) and the European Union's Green Deal goal of climate neutrality by 2050, the effects of reman applications were researched. First, remanufacturing articles available in the literature were searched, based on the high yearly volume of spare parts sold. From the paper-review results relating to material composition and the emissions released during the original production and remanufacturing phases, a base part was selected as a reference. Then, the data for the selected base part from the literature are used to make an approximate estimate of the carbon footprint reduction of the corresponding part used in Ford OTOSAN. The estimation model is based on the weight and material composition of the remanufacturing activity reported in the referenced paper. As a result of this study, it was seen that remanufacturing applications are technically and environmentally feasible, since they have significant effects on reducing the emissions released during the production phase of vehicle components. For this reason, the research and calculations for the total number of targeted products in yearly volume have been completed to a large extent. Thus, based on the targeted parts whose research has been completed, and in line with the net zero targets of Ford Motor Company and Ford OTOSAN by 2050, if remanufacturing applications are preferred over conventional new-part production, it is possible to reduce a significant amount of the greenhouse gas (GHG) emissions associated with spare parts used in vehicles. Besides, it is observed that remanufacturing helps to reduce the waste stream and, by reusing automotive components, causes less pollution than making products from raw materials.Keywords: greenhouse gas emissions, net zero targets, remanufacturing, spare parts, sustainability
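A minimal sketch of the weight-based scaling described above is given below: per-part emissions are estimated from material composition and weight using literature-style emission factors, then compared between new production and remanufacturing. All factors, weights, and compositions are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of estimating the CO2-eq saving of remanufacturing a spare part by
# scaling literature emission data to the part's weight and material composition.
# Emission factors, weights, and compositions are hypothetical placeholders.
def part_emissions(weight_kg, composition, factors_kg_co2_per_kg):
    """Sum emissions over the material shares of a part."""
    return sum(weight_kg * share * factors_kg_co2_per_kg[mat]
               for mat, share in composition.items())

new_factors = {"steel": 2.0, "aluminium": 9.0, "copper": 4.0}    # new-material production
reman_factors = {"steel": 0.4, "aluminium": 1.5, "copper": 0.8}  # remanufacturing process

alternator = {"weight_kg": 6.0,
              "composition": {"steel": 0.55, "aluminium": 0.25, "copper": 0.20}}

e_new = part_emissions(alternator["weight_kg"], alternator["composition"], new_factors)
e_reman = part_emissions(alternator["weight_kg"], alternator["composition"], reman_factors)
print(f"New: {e_new:.1f} kg CO2-eq, Reman: {e_reman:.1f} kg CO2-eq, "
      f"saving: {100 * (1 - e_reman / e_new):.0f}%")
```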
Procedia PDF Downloads 81629 Outcome-Based Education as Mediator of the Effect of Blended Learning on the Student Performance in Statistics
Authors: Restituto I. Rodelas
Abstract:
Higher education has adopted outcomes-based education from K-12. In this approach, the teacher uses any teaching and learning strategies that enable the students to achieve the learning outcomes. The students may be required to exert more effort and figure things out on their own. Hence, outcomes-based students are assumed to be more responsible and more capable of applying the knowledge learned. Another approach that higher education in the Philippines is starting to adopt from other countries is blended learning. This combination of classroom and fully online instruction and learning is expected to be more effective. Participating in the online sessions, however, is entirely up to the students. Thus, the effect of blended learning on the performance of students in Statistics may be mediated by outcomes-based education. If there is a significant positive mediating effect, then blended learning can be optimized by integrating outcomes-based education. In this study, the sample will consist of four blended learning Statistics classes at Jose Rizal University in the second semester of AY 2015–2016. Two of these classes will be assigned randomly to the experimental group, which will be handled using outcomes-based education. The two classes in the control group will be handled using the traditional lecture approach. Prior to the discussion of the first topic, a pre-test will be administered. The same test will be given as a post-test after the last topic is covered. In order to establish the equality of the groups' initial knowledge, a single-factor ANOVA of the pretest scores will be performed. A single-factor ANOVA of the posttest-pretest score differences will also be conducted to compare the performance of the experimental and control groups. When a significant difference is obtained in either of these ANOVAs, post hoc analysis will be done using Tukey's honestly significant difference (HSD) test. The mediating effect will be evaluated using correlation and regression analyses. The groups' initial knowledge is considered equal when the result of the pretest-score ANOVA is not significant. If the result of the score-difference ANOVA is significant and the post hoc test indicates that the classes in the experimental group have significantly different scores from those in the control group, then outcomes-based education has a positive effect. Let blended learning be the independent variable (IV), outcomes-based education the mediating variable (MV), and the score difference the dependent variable (DV). There is a mediating effect when the following requirements are satisfied: a significant correlation of IV with DV, a significant correlation of IV with MV, a significant relationship of MV with DV when both IV and MV are predictors in a regression model, and an absolute value of the coefficient of IV as sole predictor that is larger than when both IV and MV are predictors. With a positive mediating effect of outcomes-based education on the effect of blended learning on student performance, it will be recommended to integrate outcomes-based education into blended learning. This will yield the best learning results.Keywords: outcome-based teaching, blended learning, face-to-face, student-centered
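The mediation criteria listed above follow the classic regression-step approach; a minimal Python sketch of those steps is shown below, with the data file and column names (iv, mv, dv) as hypothetical placeholders for the study variables.

```python
# Minimal sketch of the regression-step mediation check described above.
# The file and column names (iv = blended learning, mv = outcomes-based education,
# dv = posttest-pretest score difference) are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("scores.csv")  # hypothetical data export

step1 = smf.ols("dv ~ iv", data=df).fit()        # IV must predict DV
step2 = smf.ols("mv ~ iv", data=df).fit()        # IV must predict MV
step3 = smf.ols("dv ~ iv + mv", data=df).fit()   # MV must predict DV controlling for IV

mediation = (step1.pvalues["iv"] < 0.05 and step2.pvalues["iv"] < 0.05
             and step3.pvalues["mv"] < 0.05
             and abs(step3.params["iv"]) < abs(step1.params["iv"]))
print("Evidence of mediation:", mediation)
```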
Procedia PDF Downloads 290