Search results for: children physical development
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22188

198 Overlaps and Intersections: An Alternative Look at Choreography

Authors: Ashlie Latiolais

Abstract:

Architecture, as a discipline, is on a trajectory of extension beyond the boundaries of buildings and, increasingly, is coupled with research that connects to alternative and typically disjointed disciplines. A “both/and” approach and (expanded) definition of architecture, as depicted here, expands the margins that contain the profession. Figuratively, architecture is a series of edges, events, and occurrences that establishes a choreography or stage by which humanity exists. The way in which architecture controls and suggests the movement through these spaces, whether within a landscape, city, or building, can be viewed as a datum by which the “dance” of everyday life occurs. This submission views the realm of architecture through the lens of movement and dance as a cross-fertilizer of collaboration, tectonic, and spatial geometry investigations. “Designing on digital programs puts architects at a distance from the spaces they imagine. While this has obvious advantages, it also means that they lose the lived, embodied experience of feeling what is needed in space—meaning that some design ideas that work in theory ultimately fail in practice.” By studying the body in motion through real-time performance, a more holistic understanding of architectural space surfaces and new prospects for theoretical teaching pedagogies emerge. The atypical intersection rethinks how architecture is considered, created, and tested, similar to how “dance artists often do this by thinking through the body, opening pathways and possibilities that might not otherwise be accessible” – this is the essence of this poster submission as explained through unFOLDED, a creative performance work. A new language is materialized through unFOLDED, a dynamic occupiable installation by which architecture is investigated through dance, movement, and body analysis. The entry unfolds a collaboration of an architect, dance choreographer, musicians, video artist, and lighting designers to re-create one of the first documented avant-garde performing arts collaborations (Matisse, Satie, Massine, Picasso) from the Ballets Russes in 1917, entitled Parade. Architecturally, this interdisciplinary project orients and suggests motion through structure, tectonic, lightness, darkness, and shadow as it questions the navigation of the dark space (stage) surrounding the installation. Artificial light via theatrical lighting and video graphics brought the blank canvas to life – where the sensitive mix of musicality coordinated with the structure’s movement sequencing was certainly a challenge. The upstage light from the video projections created both flickered contextual imagery and shadowed figures. When the dancers were either upstage or downstage of the structure, both silhouetted figures and revealed bodies were experienced as dancer-controlled installation manipulations occurred throughout the performance. The experimental performance, through structure, prompted moving (dancing) bodies in space, where the architecture served as a key component of the choreography itself. The tectonic of the delicate steel structure allowed the dancers to interact with the installation, which created a variety of spatial conditions – from the contained box of three-dimensional space, to a wall, and various abstracted geometries in between. The development of this research unveils the new role of the Architect as a Choreographer of the built environment.

Keywords: dance, architecture, choreography, installation, architect, choreographer, space

Procedia PDF Downloads 72
197 The Effects of Circadian Rhythms Change in High Latitudes

Authors: Ekaterina Zvorykina

Abstract:

Nowadays, the Arctic and Antarctic regions are regarded as some of the most important strategic resources for global development. Nonetheless, living conditions in Arctic regions still demand certain improvements. Since the region is sparsely populated, one of the main points of interest is the health of the people who migrate to the Arctic region for permanent and shift work. At Arctic and Antarctic latitudes, personnel face polar day and polar night conditions depending on the time of year. This means that they are deprived of natural sunlight in the winter season and have continuous daylight in summer. Firstly, the change in light intensity over the 24-hour period due to migration affects circadian rhythms. Moreover, the controlled artificial light in winter is also an issue. The results of recent studies on night-shift medical professionals, who were exposed to permanent artificial light, have already demonstrated higher risks of cancer, depression, and Alzheimer's disease. Moreover, people exposed to frequent time zone changes are also subjected to higher risks of heart attack and cancer. Thus, our main goals are to understand how high-latitude work and living conditions can affect human health and how adverse effects can be prevented. In our study, we analyze molecular and cellular factors which play an important role in circadian rhythm change and distinguish the main risk groups among people migrating to high latitudes. The main well-studied index of circadian timing is melatonin or its metabolite 6-sulfatoxymelatonin. Under low light intensity, melatonin synthesis is disturbed, and as a result the human organism requires more time for sleep, which is still disregarded when it comes to working time organization. Lack of melatonin also causes a shortage in serotonin production, which leads to higher depression risk. Melatonin is also known to inhibit oncogenes and increase the apoptosis level in cells, the main factors for tumor growth, as well as circadian clock genes (for example Per2). Thus, people who work in high latitudes can be distinguished as a risk group for cancer diseases and demand more attention. Clock/Clock genes, known to be among the main circadian clock regulators, decrease the sensitivity of the hypothalamus to estrogen and decrease glucose sensitivity, which leads to premature aging and oestrous cycle disruption. Permanent light exposure also leads to accumulation of superoxide dismutase and oxidative stress, which is one of the main factors for early dementia and Alzheimer's disease. We propose a new screening system adjusted for people migrating from middle to high latitudes, together with accommodation therapy. Screening is focused on melatonin and estrogen levels, sleep deprivation and neural disorders, depression level, cancer risks, and heart and vascular disorders. Accommodation therapy includes different types of artificial light exposure, additional melatonin, and neuroprotectors. Preventive procedures can lead to an increase in migration to high latitudes and, as a result, to the prosperity of the Arctic region.

Keywords: circadian rhythm, high latitudes, melatonin, neuroprotectors

Procedia PDF Downloads 129
196 Challenges and Proposals for Public Policies Aimed At Increasing Energy Efficiency in Low-Income Communities in Brazil: A Multi-Criteria Approach

Authors: Anna Carolina De Paula Sermarini, Rodrigo Flora Calili

Abstract:

Energy Efficiency (EE) needs investments, new technologies, greater awareness and management on the side of citizens and organizations, and more planning. However, this issue is usually remembered and discussed only in moments of energy crises, and opportunities are missed to take better advantage of the potential of EE in the various sectors of the economy. In addition, there is little concern about the subject among the less favored classes, especially in low-income communities. Accordingly, this article presents suggestions for public policies that aim to increase EE for low-income housing and communities based on international and national experiences. After reviewing the literature, eight policies were listed, and to evaluate them, a multicriteria decision model was developed using the AHP (Analytical Hierarchy Process) and TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) methods, combined with fuzzy logic. Nine experts analyzed the policies according to nine criteria: economic impact, social impact, environmental impact, previous experience, the difficulty of implementation, possibility/ease of monitoring and evaluating the policies, expected impact, political risks, and public governance and sustainability of the sector. The results found, in order of preference, are (i) Incentive program for equipment replacement; (ii) Community awareness program; (iii) EE Program with a greater focus on low income; (iv) Staggered and compulsory certification of social interest buildings; (v) Programs for the expansion of smart metering, energy monitoring and digitalization; (vi) Financing program for construction and retrofitting of houses with an emphasis on EE; (vii) Income tax deduction for investment in EE projects in low-income households made by companies; (viii) White certificates of energy for low-income households. First, the policy of equipment substitution has been employed in Brazil and elsewhere in the world and has proven effective in promoting EE. For implementation, efforts are needed from the federal and state governments, which can encourage companies to reduce prices and provide some type of aid for the purchase of such equipment. In second place is the community awareness program, promoting socio-educational actions on EE concepts and with energy conservation tips. This policy is simple to implement and has already been used by many distribution utilities in Brazil. It can be carried out through bids defined by the government in specific areas, being executed by third sector companies with public and private resources. Third on the list is the proposal to continue the Energy Efficiency Program (which obliges electric energy companies to allocate resources for research in the area) by suggesting the return of the mandatory investment of 60% of the resources in projects for low income. It is also relatively simple to implement, requiring efforts by the federal government to make it mandatory and compliance on the part of the distributors. The success of the suggestions depends on changes in the established rules and efforts from the interested parties. For future work, we suggest the development of pilot projects in low-income communities in Brazil and the application of other multicriteria decision support methods to compare the results obtained in this study.
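The ranking step described above can be illustrated with a minimal TOPSIS sketch in Python. The policy names, scores, AHP weights, and the benefit/cost classification of the nine criteria below are placeholder assumptions rather than the values elicited from the nine experts, and the fuzzy extension used in the study is omitted for brevity.

```python
import numpy as np

# Minimal TOPSIS sketch: rank EE policies scored against weighted criteria.
# Scores, weights and the benefit/cost flags are illustrative placeholders.
policies = ["equipment replacement", "community awareness", "EE program low-income"]
scores = np.array([  # rows: policies, columns: 9 criteria (already averaged over experts)
    [8, 7, 6, 9, 6, 7, 8, 6, 7],
    [7, 9, 5, 8, 8, 8, 7, 7, 8],
    [7, 8, 6, 7, 5, 6, 8, 5, 6],
], dtype=float)
weights = np.full(9, 1 / 9)           # AHP pairwise comparisons would replace this uniform vector
benefit = np.array([1, 1, 1, 1, 0, 1, 1, 0, 1], dtype=bool)  # False = cost-type criterion

norm = scores / np.linalg.norm(scores, axis=0)      # vector normalization per criterion
weighted = norm * weights
ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
d_pos = np.linalg.norm(weighted - ideal, axis=1)
d_neg = np.linalg.norm(weighted - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)                 # higher = closer to the ideal solution

for name, c in sorted(zip(policies, closeness), key=lambda t: -t[1]):
    print(f"{name}: {c:.3f}")
```

Replacing the uniform weight vector with weights derived from the experts' AHP pairwise comparisons, and the crisp scores with fuzzy numbers, would bring the sketch closer to the model described in the abstract.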

Keywords: energy efficiency, low-income community, public policy, multicriteria decision making

Procedia PDF Downloads 85
195 Superhydrophobic Materials: A Promising Way to Enhance Resilience of Electric System

Authors: M. Balordi, G. Santucci de Magistris, F. Pini, P. Marcacci

Abstract:

The increase in extreme meteorological events represents one of the most important causes of damage and blackouts in the whole electric system. In particular, icing on ground-wires and overhead lines, due to snowstorms or harsh winter conditions, very often gives rise to the collapse of cables and towers in both cold and warm climates. On the other hand, the high concentration of contaminants in the air, due to natural and/or anthropic causes, is reflected in high levels of pollutants layered on glass and ceramic insulators, causing frequent and unpredictable flashover events. Overhead line and insulator failures lead to blackouts, dangerous and expensive maintenance, and serious inefficiencies in the distribution service. Imparting superhydrophobic (SHP) properties to conductors, ground-wires and insulators is one of the ways to face all these problems. Indeed, in some cases, an SHP surface can delay the ice nucleation time and decrease the ice nucleation temperature, preventing ice formation. Besides, thanks to the low surface energy, the adhesion force between ice and a superhydrophobic material is low and the ice can be easily detached from the surface. Moreover, it is well known that superhydrophobic surfaces can have self-cleaning properties: these hinder the deposition of pollution and decrease the probability of flashover phenomena. This study presents three different routes to impart superhydrophobicity to aluminum, zinc and glass specimens, which represent the main constituent materials of conductors, ground-wires and insulators, respectively. The route to impart superhydrophobicity to the metallic surfaces can be summarized in a three-step process: 1) sandblasting treatment, 2) chemical-hydrothermal treatment and 3) coating deposition. The first step is required to create a micro-roughness. In the chemical-hydrothermal treatment a nano-scale metallic oxide (Al or Zn) is grown and, together with the sandblasting treatment, brings about a hierarchical micro-nano structure. By applying an alkylated or fluorinated siloxane coating, the surface energy decreases, giving rise to superhydrophobic surfaces. In order to functionalize the glass, different superhydrophobic powders, obtained by a sol-gel synthesis, were prepared. Further, the specimens were covered with a commercial primer and the powders were deposited on them. All the resulting metallic and glass surfaces showed a noticeable superhydrophobic behavior with very high water contact angles (>150°) and very low roll-off angles (<5°). The three optimized processes are fast, cheap and safe, and can be easily replicated on industrial scales. The anti-icing and self-cleaning properties of the surfaces were assessed with several indoor lab tests that evidenced remarkable anti-icing properties and self-cleaning behavior with respect to the bare materials. Finally, to evaluate the anti-snow properties of the samples, some SHP specimens were exposed to real snow-fall events in the RSE outdoor test facility located in Vinadio, western Alps: the coated samples delay the formation of the snow-sleeves and facilitate the detachment of the snow. The good results for both indoor and outdoor tests make these materials promising for further development in large-scale applications.

Keywords: superhydrophobic coatings, anti-icing, self-cleaning, anti-snow, overhead lines

Procedia PDF Downloads 163
194 Predictive Analytics for Theory Building

Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim

Abstract:

Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) on a single person or unit. It applies empirical methods in statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed from causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, this study utilized a big-data predictive analytics platform based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows in a data set. Clusters can be ranked using importance metrics, such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently. For a demonstration, a total of 13,254 metabolic syndrome training observations are plugged into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors, associated, for example, with sociodemographics, habits, and activities. Some are intentionally included to get predictive analytics insights on variable selection, such as cancer examination, house type, and vaccination. The platform automatically generates plausible hypotheses (rules) without statistical modeling. Then the rules are validated with an external testing dataset including 4,090 observations. Results, as a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. On the other hand, a set of rules (many estimated equations from a statistical perspective) in this study may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, i.e., theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the hypotheses generated are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods utilizing a subset of observations, such as bootstrap resampling with an appropriate sample size.
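The co-occurrence idea behind the platform can be sketched as follows. This is an illustration of the general ranking principle (observed vs. expected co-occurrence) on assumed toy data, not a reproduction of the commercial platform or its exact importance metrics.

```python
from itertools import combinations
from collections import Counter

# Illustrative sketch only: rows are "baskets" of discretized predictor values.
rows = [
    {"male", "smoker", "high_BMI"},
    {"male", "smoker", "high_BMI", "low_activity"},
    {"female", "high_BMI"},
    {"male", "low_activity"},
]

n = len(rows)
item_freq = Counter(item for row in rows for item in row)
pair_freq = Counter(frozenset(p) for row in rows for p in combinations(sorted(row), 2))

def surprise(pair):
    """Observed co-occurrence frequency vs. frequency expected under independence."""
    a, b = tuple(pair)
    observed = pair_freq[pair] / n
    expected = (item_freq[a] / n) * (item_freq[b] / n)
    return observed / expected if expected else float("inf")

# Rank candidate co-occurrences (here just item pairs) by surprise, then frequency.
ranked = sorted(pair_freq, key=lambda p: (surprise(p), pair_freq[p]), reverse=True)
for pair in ranked[:5]:
    print(set(pair), pair_freq[pair], round(surprise(pair), 2))
```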

Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building

Procedia PDF Downloads 248
193 Using Participatory Action Research with Episodic Volunteers: Learning from Urban Agriculture Initiatives

Authors: Rebecca Laycock

Abstract:

Many Urban Agriculture (UA) initiatives, including community/allotment gardens, Community Supported Agriculture, and community/social farms, depend on volunteers. However, initiatives supported or run by volunteers are often faced with a high turnover of labour as a result of the involvement of episodic volunteers (a term describing ad hoc, one-time, and seasonal volunteers), leading to challenges with maintaining project continuity and retaining skills/knowledge within the initiative. This is a notable challenge given that food growing is a knowledge-intensive activity where the fruits of labour appear months or sometimes years after investment. Participatory Action Research (PAR) is increasingly advocated for in the field of UA as a solution-oriented approach to research, providing concrete results in addition to advancing theory. PAR is a cyclical methodological approach involving researchers and stakeholders collaboratively 'identifying' and 'theorising' an issue, 'planning' an action to address said issue, 'taking action', and 'reflecting' on the process. Through iterative cycles and prolonged engagement, the theory is developed and actions become better tailored to the issue. The demand for PAR in UA research means that understanding how to use PAR with episodic volunteers is of critical importance. The aim of this paper is to explore (1) the challenges of doing PAR in UA initiatives with episodic volunteers, and (2) how PAR can be harnessed to advance sustainable development of UA through theoretically informed action. A 2.5-year qualitative PAR study on three English case-study student-led food growing initiatives took place between 2014 and 2016. University UA initiatives were chosen as exemplars because most of their volunteers were episodic. Data were collected through 13 interviews, 6 workshops, and a research diary. The results were thematically analysed through eclectic coding using Computer-Assisted Qualitative Data Analysis Software (NVivo). It was found that the challenges of doing PAR with transient participants were (1) a superficial understanding of issues by volunteers because of short-term engagement, resulting in difficulties ‘identifying’/‘theorising’ issues to research; (2) difficulties implementing ‘actions’ given that those involved in the ‘planning’ phase had often left by the ‘action’ phase; (3) a lack of capacity of participants to engage in research given the ongoing challenge of maintaining participation; and (4) that the introduction of the researcher acted as an ‘intervention’. The involvement of a long-term stakeholder (the researcher) changed the group dynamics, prompted critical reflections that had not previously taken place, and improved continuity. This posed challenges for providing a genuine understanding of episodic volunteering in PAR initiatives, and also challenged the notion of what constitutes an ‘intervention’ or ‘action’ in PAR. It is recommended that researchers working with episodic volunteers using PAR should (1) adopt a first-person approach by inquiring into the researcher’s own experience to enable depth in theoretical analysis and to manage the potentially superficial understandings of short-term participants; and (2) establish safety mechanisms to address the potential for the research to impose artificial project continuity and knowledge retention that will end when the research does. Through these means, we can more effectively use PAR to conduct solution-oriented research about UA.

Keywords: community garden, continuity, first-person research, higher education, knowledge retention, project management, transience, university

Procedia PDF Downloads 228
192 Production Factor Coefficients Transition through the Lens of State Space Model

Authors: Kanokwan Chancharoenchai

Abstract:

Economic growth can be considered an important element of countries’ development processes. For a developing country like Thailand, to ensure the continuous growth of the economy, the Thai government usually implements various policies to stimulate economic growth. They may take the form of fiscal, monetary, trade, and other policies. Because of these different aspects, understanding the factors relating to economic growth could allow the government to introduce the proper plan for future economic stimulus schemes. Consequently, this issue has caught the interest of not only policymakers but also academics. This study, therefore, investigates explanatory variables for economic growth in Thailand from 2005 to 2017, a total of 52 quarters. The findings would contribute to the field of economic growth and provide helpful information to policymakers. The investigation is estimated through a production function with a non-linear Cobb-Douglas equation. The rate of growth is indicated by the change of GDP in natural logarithmic form. The relevant factors included in the estimation cover the three traditional means of production and implicit effects, such as human capital, international activity and technological transfer from developed countries. Besides, this investigation takes internal and external instabilities into account, as proxied by the unobserved inflation estimate and the real effective exchange rate (REER) of the Thai baht, respectively. The unobserved inflation series is obtained from an AR(1)-ARCH(1) model, while the unobserved REER of the Thai baht is obtained from a naive OLS-GARCH(1,1) model. According to the empirical results, the AR(|2|) equation, which includes seven significant variables, namely capital stock, labor, imports of capital goods, trade openness, the uncertainty of the REER of the Thai baht, one-period-lagged GDP, and a dummy for the 2009 world financial crisis, presents the most suitable model. The autoregressive model assumes constant coefficients, which may introduce bias. However, this is not the case for the recursive coefficient model from the state space framework, which allows the coefficients to transition over time. The powerful state space model provides the productivity or effect of each significant factor in more detail. The state coefficients are estimated based on the AR(|2|) specification, with the exception of the one-period-lagged GDP and the 2009 world financial crisis dummy. The findings shed light on the fact that those factors seem to have been stable through time since the occurrence of the world financial crisis together with the political situation in Thailand. These two events could lower confidence in the Thai economy. Moreover, the state coefficients highlight the sluggish rate of machinery replacement and the relatively low technology content of capital goods imported from abroad. The Thai government should apply proactive policies via taxation and specific credit policy to improve technological advancement, for instance. Another interesting piece of evidence is the issue of trade openness, which shows a negative transition effect over the sample period. This could be explained by the loss of price competitiveness to imported goods, especially under the widespread implementation of free trade agreements. The Thai government should carefully handle regulations and the investment incentive policy by focusing on strengthening small and medium enterprises.
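A hedged sketch of recursive (time-varying) coefficient estimation for a log-linear growth regression is shown below, using the RecursiveLS class from statsmodels. The simulated quarterly series and variable names are placeholders for the study's 52 quarterly Thai observations, and the study's own AR(|2|) specification with dummies is not reproduced exactly.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Sketch: recursively re-estimated coefficients of a log-linear Cobb-Douglas-style
# growth equation. The series below are simulated placeholders, not the Thai data.
rng = np.random.default_rng(0)
T = 52
data = pd.DataFrame({
    "ln_gdp": np.cumsum(rng.normal(0.010, 0.02, T)),
    "ln_capital": np.cumsum(rng.normal(0.010, 0.02, T)),
    "ln_labor": np.cumsum(rng.normal(0.005, 0.01, T)),
    "ln_capital_imports": np.cumsum(rng.normal(0.010, 0.03, T)),
    "trade_openness": rng.normal(1.2, 0.1, T),
})

exog = sm.add_constant(data[["ln_capital", "ln_labor", "ln_capital_imports", "trade_openness"]])
mod = sm.RecursiveLS(data["ln_gdp"], exog)   # coefficients re-estimated as the sample expands
res = mod.fit()
print(res.summary())

# Path of each coefficient through time (one row per regressor in the results object).
coef_path = pd.DataFrame(res.recursive_coefficients.filtered.T, columns=exog.columns)
print(coef_path.tail())
```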

Keywords: autoregressive model, economic growth, state space model, Thailand

Procedia PDF Downloads 127
191 Delicate Balance between Cardiac Stress and Protection: Role of Mitochondrial Proteins

Authors: Zuzana Tatarkova, Ivana Pilchova, Michal Cibulka, Martin Kolisek, Peter Racay, Peter Kaplan

Abstract:

Introduction: Normal functioning of mitochondria is crucial for cardiac performance. Mitochondria undergo mitophagy and biogenesis, and mitochondrial proteins are subject to extensive post-translational modifications. The state of mitochondrial homeostasis reflects overall cellular fitness and longevity. Perturbed mitochondria produce less ATP, release greater amounts of reactive molecules, and are more prone to apoptosis. Therefore, mitochondrial turnover is an integral aspect of quality control in which dysfunctional mitochondria are selectively eliminated through mitophagy. Currently, the progressive deterioration of physiological functions is seen as the accumulation of modified/damaged proteins with limited regenerative ability and the disturbance of the affected protein-protein communication throughout aging in myocardial cells. Methodologies: Our study used immunohistochemistry and biochemical methods (spectrophotometry, western blotting, immunodetection), as well as more sophisticated 2D electrophoresis and mass spectrometry, for evaluating protein-protein interactions and specific post-translational modifications. Results and Discussion: The mitochondrial stress response to reactive species was evaluated in terms of electron transport chain (ETC) complexes, redox-active molecules, and their possible communication. Protein-protein interactions revealed a strong linkage between age and ETC protein subunits. The redox state was strongly affected in senescent mitochondria, with a shift in favor of a more pro-oxidizing condition within cardiomyocytes. Acute myocardial ischemia and ischemia-reperfusion (IR) injury affected ETC complexes I, II and IV, with no change in complex III. Ischemia induced a decrease in total antioxidant capacity, MnSOD, GSH and catalase activity, with recovery to some extent during reperfusion. While MnSOD protein content was higher in the IR group, activity returned to 95% of control. Nitric oxide is one of the biological molecules that can outcompete MnSOD for superoxide and produce peroxynitrite. This process is faster than dismutation and led to 10-fold higher production of nitrotyrosine after IR injury in adult hearts, with higher protection in senescent ones. 2D protein profiling revealed 140 mitochondrial proteins, 12 of them with significant changes after IR injury, and 36 individual nitrotyrosine-modified proteins further identified by mass spectrometry. Linking these two groups, 5 proteins were both altered after IR and nitrated, but only one showed massive nitration relative to the lowered protein content after IR injury in adults. Conclusions: Senescent cells have a greater proportion of protein content, which might be modulated by several post-translational modifications. If these protein modifications are connected to functional consequences and protein-protein interactions are revealed, this link may point toward a solution. Taken altogether, dysfunctional proteostasis can play a causative role, and restoration of the protein homeostasis machinery is protective against aging and possibly age-related disorders. This work was supported by the project VEGA 1/0018/18 and by the project 'Competence Center for Research and Development in the field of Diagnostics and Therapy of Oncological diseases', ITMS: 26220220153, co-financed from EU sources.

Keywords: aging heart, mitochondria, proteomics, redox state

Procedia PDF Downloads 146
190 Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization

Authors: Soheila Sadeghi

Abstract:

Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
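A minimal sketch of the PSO-plus-ANN idea described above is given below, assuming that PSO searches over two ANN hyperparameters on synthetic data. The study's actual dataset, feature set, and PSO formulation (which may instead optimize the network weights themselves) are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Synthetic data stands in for the road-reconstruction cost dataset used in the study.
X, y = make_regression(n_samples=400, n_features=8, noise=10.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

def fitness(p):
    """Validation MSE of an ANN with hidden-layer size p[0] and L2 penalty 10**p[1]."""
    hidden = int(np.clip(round(p[0]), 2, 64))
    alpha = 10.0 ** np.clip(p[1], -5, 0)
    model = MLPRegressor(hidden_layer_sizes=(hidden,), alpha=alpha,
                         max_iter=500, random_state=0)
    model.fit(X_tr, y_tr)
    return mean_squared_error(y_val, model.predict(X_val))   # lower is better

# Plain global-best PSO over the two hyperparameters.
rng = np.random.default_rng(1)
n_particles, n_iter, w, c1, c2 = 8, 15, 0.7, 1.5, 1.5
pos = rng.uniform([2, -5], [64, 0], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

best_hidden = int(np.clip(round(gbest[0]), 2, 64))
best_alpha = 10.0 ** np.clip(gbest[1], -5, 0)
print("best hidden units:", best_hidden, "best alpha:", best_alpha,
      "validation MSE:", pbest_val.min())
```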

Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction

Procedia PDF Downloads 18
189 Effectiveness of an Intervention to Increase Physics Students' STEM Self-Efficacy: Results of a Quasi-Experimental Study

Authors: Stephanie J. Sedberry, William J. Gerace, Ian D. Beatty, Michael J. Kane

Abstract:

Increasing the number of US university students who attain degrees in STEM and enter the STEM workforce is a national priority. Demographic groups vary in their rates of participation in STEM, and the US produces just 10% of the world’s science and engineering degrees (2014 figures). To address these gaps, we have developed and tested a practical, 30-minute, single-session classroom-based intervention to improve students’ self-efficacy and academic performance in University STEM courses. Self-efficacy is an internal psychosocial construct that strongly correlates with academic success and relates to the social, emotional, and psychological aspects of student motivation and performance. A compelling body of research demonstrates that university students’ self-efficacy beliefs are strongly related to their selection of STEM as a major, aspirations for STEM-related careers, and persistence in science. The development of an intervention to increase students’ self-efficacy is motivated by research showing that short, social-psychological interventions in education can lead to large gains in student achievement. Our intervention addresses STEM self-efficacy via two strong, but previously separate, lines of research into attitudinal/affect variables that influence student success. The first is ‘attributional retraining,’ in which students learn to attribute their successes and failures to internal rather than external factors. The second is ‘mindset’ about fixed vs. growable intelligence, in which students learn that the brain remains plastic throughout life and that they can, with conscious effort and attention to thinking skills and strategies, become smarter. Extant interventions for both of these constructs have significantly increased academic performance in the classroom. We developed a 34-item questionnaire (Likert scale) to measure STEM Self-efficacy, Perceived Academic Control, and Growth Mindset in a University STEM context, and validated it with exploratory factor analysis, Rasch analysis, and multi-trait multi-method comparison to coded interviews. Four iterations of our 42-week research protocol were conducted across two academic years (2017-2018) at three different Universities in North Carolina, USA (UNC-G, NC A&T SU, and NCSU) with varied student demographics. We utilized a quasi-experimental prospective multiple-group time series research design with both experimental and control groups, and we are employing linear modeling to estimate the impact of the intervention on Self-Efficacy, Growth-Mindset, Perceived Academic Control, and final course grades (performance measure). Preliminary results indicate statistically significant effects of treatment vs. control on Self-Efficacy, Growth-Mindset, and Perceived Academic Control. Analyses are ongoing, and final results are pending. This intervention may have the potential to increase student success in the STEM classroom, and ownership of that success, encouraging students to continue into a STEM career. Additionally, we have learned a great deal about the complex components and dynamics of self-efficacy, their link to performance, and the ways they can be impacted to improve students’ academic performance.
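The linear modeling step can be sketched as an ANCOVA-style regression of post-intervention scores on treatment, baseline score, and site. The simulated data and variable names below are assumptions for illustration, not the study's instrument, model specification, or dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Minimal sketch of a treatment-effect estimate on post-intervention self-efficacy,
# adjusting for the baseline score and study site. All values are simulated.
rng = np.random.default_rng(42)
n = 300
df = pd.DataFrame({
    "pre_se": rng.normal(3.5, 0.5, n),
    "treated": rng.integers(0, 2, n),
    "site": rng.choice(["UNCG", "NCAT", "NCSU"], n),
})
df["post_se"] = 0.8 * df["pre_se"] + 0.25 * df["treated"] + rng.normal(0, 0.4, n)

model = smf.ols("post_se ~ pre_se + treated + C(site)", data=df).fit()
print(model.summary())   # the 'treated' coefficient is the adjusted treatment effect
```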

Keywords: academic performance, affect variables, growth mindset, intervention, perceived academic control, psycho-social variables, self-efficacy, STEM, university classrooms

Procedia PDF Downloads 114
188 Identification of Tangible and Intangible Heritage and Preparation of Conservation Proposal for the Historic City of Karanja Laad

Authors: Prachi Buche Marathe

Abstract:

Karanja Laad is a city located in the Vidarbha region in the state of Maharashtra, India. It has a huge amount of tangible and intangible heritage in the form of monuments, precincts, groups of structures, festivals and procession routes, which is being neglected and lost with time. Three different religions, Hinduism, Islam and Jainism, along with its association as the birthplace of Swami Nrusinha Saraswati, an exponent of the Datta Sampradaya sect, and the British colonial layer, have shaped the culture and society of the place over the period. The architecture of the town Karanja Laad has enhanced its unique historic and cultural value with a combination of all these historic layers. Karanja Laad is also a traditional trading historic town with a unique hybrid architectural style and has good potential for being developed as a tourist destination along with its present image as a pilgrim destination of the Datta Sampradaya. The aim of the research is to prepare a conservation proposal for the historic town along with a management framework. The objectives of the research are to study the evolution of Karanja town, to identify the cultural resources along with the issues of the historic core of the city, to understand the Datta Sampradaya and the contribution of Saint Nrusinha Saraswati to the religious sect, and his association with Karanja as an important personality. The methodology of the research comprises site visits to Karanja city, field surveys for documentation, and discussions and questionnaires with the residents to establish heritage and identify potential and issues within the historic core, thereby establishing a case for conservation. Field surveys are conducted for town-level study of land use, open spaces, occupancy, ownership, traditional commodity and community, infrastructure, streetscapes, and precinct activities during the festival and non-festival period. Building-level study includes establishing various typologies like residential, institutional, commercial, religious, and traditional infrastructure from mythological references, such as waterbodies (kund), lakes and wells. One of the main issues is the loss of the traditional footprint as well as the traditional open spaces due to new illegal encroachments and the lack of guidelines for new additions to conserve the original fabric of the structures. Traditional commodities are getting lost since there is no promotion of skills like pottery and painting. Lavish bungalows like the Kannava mansion and the main temple Wada (birthplace of the saint) have huge potential to be developed as museums through adaptive re-use, which will, in turn, attract many visitors during festivals and boost the economy. Festival procession routes can be identified and a heritage walk can be developed so as to highlight the traditional features of the town. The overall study has resulted in a heritage map with 137 structures identified as potential heritage. The conservation proposal is worked out at the town level, precinct level and building level, with interventions such as developing construction guidelines for further development and establishing a heritage cell consisting of architects and engineers for the upliftment of the existing rich heritage of Karanja city.

Keywords: built heritage, conservation, Datta Sampradaya, Karanja Laad, Swami Nrusinha Saraswati, procession route

Procedia PDF Downloads 139
187 Design of Experiment for Optimizing Immunoassay Microarray Printing

Authors: Alex J. Summers, Jasmine P. Devadhasan, Douglas Montgomery, Brittany Fischer, Jian Gu, Frederic Zenhausern

Abstract:

Immunoassays have been utilized for several applications, including the detection of pathogens. Our laboratory is developing a tier 1 biothreat panel utilizing Vertical Flow Assay (VFA) technology for simultaneous detection of pathogens and toxins. One method of manufacturing VFA membranes is non-contact piezoelectric dispensing, which provides advantages such as low-volume and rapid dispensing without compromising the structural integrity of the antibody or substrate. Challenges of this process include premature discontinuation of dispensing and misaligned spotting. Preliminary data revealed the Yp 11C7 mAb (11C7) reagent to exhibit a large angle of failure during printing, which may have contributed to variable printing outputs. A Design of Experiment (DOE) was executed using this reagent to investigate the effects of hydrostatic pressure and reagent concentration on microarray printing outputs. A Nano-plotter 2.1 (GeSIM, Germany) was used for printing antibody reagents onto nitrocellulose membrane sheets in a clean room environment. A spotting plan was executed using Spot-Front-End software to dispense volumes of 11C7 reagent (20-50 droplets; 1.5-5 mg/mL) in a 6-test spot array at 50 target membrane locations. Hydrostatic pressure was controlled by raising the Pressure Compensation Vessel (PCV) above or lowering it below our current working level. It was hypothesized that raising or lowering the PCV 6 inches would be sufficient to cause either liquid accumulation at the tip or discontinuation of droplet formation. After aspirating 11C7 reagent, we tested this hypothesis under a stroboscope. 75% of the effective raised PCV height and of our hypothesized lowered PCV height were used. Humidity (55%) was maintained using an Airwin BO-CT1 humidifier. The number and quality of membranes were assessed after staining printed membranes with dye. The droplet angle of failure was recorded before and after printing to determine a “stroboscope score” for each run. The DOE set was analyzed using JMP software. Hydrostatic pressure and reagent concentration had a significant effect on membrane output. As hydrostatic pressure was increased by raising the PCV 3.75 inches or decreased by lowering the PCV 4.5 inches, membrane output decreased. However, with the hydrostatic pressure closest to equilibrium (our current working level), membrane output reached the 50-membrane target. As the reagent concentration increased from 1.5 to 5 mg/mL, the membrane output also increased. Reagent concentration likely affected membrane output due to the associated dispensing volume needed to saturate the membranes. However, only hydrostatic pressure had a significant effect on stroboscope score, which could be due to discontinuation of dispensing, in which case the stroboscope check could not find a droplet to record. Our JMP predictive model had a high degree of agreement with our observed results. The JMP model predicted that dispensing the highest concentration of 11C7 at our current PCV working level would yield the highest number of quality membranes, which correlated with our results. Acknowledgements: This work was supported by the Chemical Biological Technologies Directorate (Contract # HDTRA1-16-C-0026) and the Advanced Technology International (Contract # MCDC-18-04-09-002) from the Department of Defense Chemical and Biological Defense program through the Defense Threat Reduction Agency (DTRA).
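The factor analysis reported from JMP can be approximated with an ordinary least squares response-surface sketch in Python. The run table below is illustrative rather than the actual experimental runs, and the quadratic term is only an assumption made to capture the reported peak in membrane output near the equilibrium PCV level.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Two DOE factors: PCV height offset (proxy for hydrostatic pressure) and reagent
# concentration; response: membranes produced. Values are illustrative placeholders.
runs = pd.DataFrame({
    "pcv_offset_in": [-4.5, -4.5, 0.0, 0.0, 3.75, 3.75, 0.0, -4.5, 3.75],
    "conc_mg_ml":    [1.5,  5.0,  1.5, 5.0, 1.5,  5.0,  3.0,  3.0,  3.0],
    "membranes":     [22,   30,   41,  50,  25,   33,   47,   27,   29],
})

# Quadratic term in PCV offset models the peak in output near equilibrium (offset = 0).
model = smf.ols("membranes ~ conc_mg_ml * pcv_offset_in + I(pcv_offset_in ** 2)",
                data=runs).fit()
print(model.summary())
print(model.predict(pd.DataFrame({"pcv_offset_in": [0.0], "conc_mg_ml": [5.0]})))
```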

Keywords: immunoassay, microarray, design of experiment, piezoelectric dispensing

Procedia PDF Downloads 153
186 Making the Right Call for Falls: Evaluating the Efficacy of a Multi-Faceted Trust Wide Approach to Improving Patient Safety Post Falls

Authors: Jawaad Saleem, Hannah Wright, Peter Sommerville, Adrian Hopper

Abstract:

Introduction: Inpatient falls are the most commonly reported patient safety incidents and carry a significant burden in terms of resources, morbidity, and mortality. Ensuring adequate post-falls management of patients by staff is therefore paramount to maintaining patient safety, especially in out-of-hours and resource-stretched settings. Aims: This quality improvement project aims to improve the current practice of falls management at Guy's and St Thomas' Hospital, London, as compared to our 2016 Quality Improvement Project findings. Methods: The multifaceted interventions implemented included the development of new trust-wide guidelines detailing management pathways for patients post falls, available via the intranet. Furthermore, 2000 lanyard cards summarising these guidelines were produced and distributed amongst junior doctors and staff. Additionally, a ‘safety signal’ email was sent from the Trust chief medical officer to all staff raising awareness of falls and the guidelines. Formal falls teaching was also implemented for new doctors at induction. Using an established incident database, 189 consecutive falls in 2017 were retrospectively analysed electronically and compared to the variables measured in 2016, post intervention. A separate serious incident database was used to analyse 50 falls from May 2015 to March 2018 to ascertain the statistical significance of the impact of our interventions on serious incidents. A similar questionnaire for the 2017 cohort of foundation year one (FY1) doctors was administered and compared to the 2016 results. Results: Questionnaire data demonstrated improved awareness and utility of the guidelines and increased confidence, as well as an increase in training. 97% of FY1 trainees felt that the interventions had increased their awareness of the impact of falls on patients in the trust. Data from the incident database demonstrated that the time to review patients post fall had decreased from an average of 130 to 86 minutes. Improvement was also demonstrated in the reduced time to order and schedule X-ray and CT imaging, 3 and 5 hours respectively. Data from the serious incident database show that ‘the time from fall until harm was detected’ was statistically significantly lower (P = 0.044) post intervention. We also showed that the incidence of significant delays in detecting harm (> 10 hours) was reduced post intervention. Conclusions: Our interventions have helped to significantly reduce the average time to assess, order and schedule appropriate imaging post falls. Delays of over ten hours to detect serious injuries after falls were commonplace; since the intervention, their frequency has markedly reduced. We suggest this will lead to patient harm being identified sooner, fewer clinical incidents relating to falls, and thus improved overall patient safety. Our interventions have also helped increase clinical staff confidence, management, and awareness of falls in the trust. Next steps include expanding teaching sessions and improving multidisciplinary team involvement to aid this improvement.
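The pre/post comparison behind figures such as P = 0.044 can be sketched with a nonparametric test on review times. The distributions, sample sizes, and choice of test below are assumptions for illustration, not the audited incident data or necessarily the test used in the project.

```python
import numpy as np
from scipy import stats

# Simulated time-to-review samples (minutes) before and after the intervention.
rng = np.random.default_rng(7)
pre_minutes = rng.gamma(shape=2.0, scale=65, size=90)    # pre-intervention reviews
post_minutes = rng.gamma(shape=2.0, scale=43, size=99)   # post-intervention reviews

# One-sided Mann-Whitney U test: are pre-intervention times stochastically greater?
stat, p = stats.mannwhitneyu(pre_minutes, post_minutes, alternative="greater")
print(f"median pre: {np.median(pre_minutes):.0f} min, "
      f"median post: {np.median(post_minutes):.0f} min, p = {p:.3f}")
```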

Keywords: patient safety, quality improvement, serious incidents, falls, clinical care

Procedia PDF Downloads 105
185 Admissibility as a Property of Evidence in Modern Conditions

Authors: Iryna Teslenko

Abstract:

According to the provisions of the current criminal procedural legislation of Ukraine, the issue of admissibility of evidence is closely related to both the right to a fair trial and the presumption of innocence. The general rule is that evidence obtained improperly or illegally cannot be taken into account in a court case. Therefore, the evidence base of the prosecution collected at the stage of the pre-trial investigation, and compliance with the requirements of the law during the collection of evidence, are of crucial importance for the criminal process; a violation entails the recognition of the relevant evidence as inadmissible, which can nullify all the efforts of the pre-trial investigation body and the prosecution. Therefore, the issue of admissibility of evidence in criminal proceedings is fundamentally important and decisive for the entire process. Research on this issue began in December 2021. At that time, there was still no clear understanding of what needed to be conveyed to the scientific community. In February 2022, the lives of all citizens of Ukraine changed completely. A war broke out in the country. At a time when the entire world community is on the path of humanizing society and respecting the rights and freedoms of man and citizen, a military conflict has arisen in the middle of Europe - one country attacked another, and war crimes are being committed. The world still cannot believe it, but it is happening here and now; people are dying, infrastructure is being destroyed, and war crimes are being committed, contrary to the signed and ratified international conventions, and contrary to all the achievements and development of world law. At this time, the life of the world has been divided into before and after February 24, 2022; the world cannot be the same as it was before, and neither can the approach to solving legal issues in the criminal process, in particular, issues of proving the commission of crimes and the involvement of certain persons in their commission. An international criminal has appeared in the humane European world, who disregards all norms of law and morality and does not adhere to any principles. Until now, the practice of the European Court of Human Rights and the domestic courts of Ukraine has treated such a property of evidence in criminal proceedings as admissibility with a certain formalism. Currently, we have information that the Office of the Prosecutor of the International Criminal Court in The Hague has started an investigation into war crimes in Ukraine and is documenting them. In our opinion, the world cannot allow formalism in bringing a war criminal to justice. There is a war going on in Ukraine, and the cities are under round-the-clock missile fire from the aggressor country, which makes it impossible to carry out certain investigative actions. If, due to formal deficiencies, the collected evidence is declared inadmissible, the guilty may go unpunished. And this, in turn, sends a message to other terrorists in the world about the impunity of their actions; the system of deterring criminals from committing criminal offenses (crimes) will collapse as the understanding of the inevitability of punishment is undermined, and this will affect world security as a whole and European security in particular.
Therefore, we believe that the world cannot allow chaos in the issue of general security; there should be a transformation of the general approach to such a property of evidence in the criminal process as admissibility, in order to ensure the inevitability of the punishment of criminals. We believe that the scientific and legal community should not allow criminals to avoid responsibility. The evil that is destroying Ukraine should be punished. We must all together prove that legal norms are not just words written on paper but rules of behavior for all members of society, whose non-observance leads to mandatory responsibility. Everybody who commits crimes will be punished; this is inevitable, and this principle is the guarantor of world security in the future.

Keywords: admissibility of evidence, criminal process, war, Ukraine

Procedia PDF Downloads 66
184 Enhancing Strategic Counter-Terrorism: Understanding How Familial Leadership Influences the Resilience of Terrorist and Insurgent Organizations in Asia

Authors: Andrew D. Henshaw

Abstract:

The research examines the influence of familial and kinship based leadership on the resilience of politically violent organizations. Organizations of this type frequently fight in the same conflicts though are called 'terrorist' or 'insurgent' depending on political foci of the time, and thus different approaches are used to combat them. The research considers them correlated phenomena with significant overlap and identifies strengths and vulnerabilities in resilience processes. The research employs paired case studies to examine resilience in organizations under significant external pressure, and achieves this by measuring three variables. 1: Organizational robustness in terms of leadership and governance. 2. Bounce-back response efficiency to external pressures and adaptation to endogenous and exogenous shock. 3. Perpetuity of operational and attack capability, and political legitimacy. The research makes three hypotheses. First, familial/kinship leadership groups have a significant effect on organizational resilience in terms of informal operations. Second, non-familial/kinship organizations suffer in terms of heightened security transaction costs and social economics surrounding recruitment, retention, and replacement. Third, resilience in non-familial organizations likely stems from critical external supports like state sponsorship or powerful patrons, rather than organic resilience dynamics. The case studies pair familial organizations with non-familial organizations. Set 1: The Haqqani Network (HQN) - Pair: Lashkar-e-Toiba (LeT). Set 2: Jemaah Islamiyah (JI) - Pair: The Abu Sayyaf Group (ASG). Case studies were selected based on three requirements, being: contrasting governance types, exposure to significant external pressures and, geographical similarity. The case study sets were examined over 24 months following periods of significantly heightened operational activities. This enabled empirical measurement of the variables as substantial external pressures came into force. The rationale for the research is obvious. Nearly all organizations have some nexus of familial interconnectedness. Examining familial leadership networks does not provide further understanding of how terrorism and insurgency originate, however, the central focus of the research does address how they persist. The sparse attention to this in existing literature presents an unexplored yet important area of security studies. Furthermore, social capital in familial systems is largely automatic and organic, given at birth or through kinship. It reduces security vetting cost for recruits, fighters and supporters which lowers liabilities and entry costs, while raising organizational efficiency and exit costs. Better understanding of these process is needed to exploit strengths into weaknesses. Outcomes and implications of the research have critical relevance to future operational policy development. Increased clarity of internal trust dynamics, social capital and power flows are essential to fracturing and manipulating kinship nexus. This is highly valuable to external pressure mechanisms such as counter-terrorism, counterinsurgency, and strategic intelligence methods to penetrate, manipulate, degrade or destroy the resilience of politically violent organizations.

Keywords: Counterinsurgency (COIN), counter-terrorism, familial influence, insurgency, intelligence, kinship, resilience, terrorism

Procedia PDF Downloads 291
183 Combination of Modelling and Environmental Life Cycle Assessment Approach for Demand Driven Biogas Production

Authors: Juan A. Arzate, Funda C. Ertem, M. Nicolas Cruz-Bournazou, Peter Neubauer, Stefan Junne

Abstract:

One of the biggest challenges the world faces today is global warming, which is caused by greenhouse gases (GHGs) coming from the combustion of fossil fuels for energy generation. In order to mitigate climate change, the European Union has committed to reducing GHG emissions to 80–95% below 1990 levels by the year 2050. Renewable technologies are vital to diminish energy-related GHG emissions. Since water and biomass are limited resources, the largest contributions to renewable energy (RE) systems will have to come from wind and solar power. Nevertheless, high proportions of fluctuating RE will present a number of challenges, especially regarding the need to balance the variable energy demand with the weather-dependent fluctuation of energy supply. Therefore, biogas plants would play an important role in this context, since they are easily adaptable. Feedstock availability varies locally or seasonally; however, there is a lack of knowledge on how biogas plants should be operated in a stable manner on local feedstock. This problem may be prevented through suitable control strategies. Such strategies require the development of convenient mathematical models, which fairly describe the main processes. Modelling allows us to predict the system behavior of biogas plants when different feedstocks are used with different loading rates. Life cycle assessment (LCA) is a technique for analyzing a product from its creation to its disposal from an environmental point of view. It is highly recommended as a decision-making tool. In order to achieve suitable strategies, the combination of flexible energy generation provided by biogas plants, a secure production process, and the maximization of environmental benefits can be obtained by combining process modelling and LCA approaches. For this reason, this study focuses on a biogas plant which flexibly generates the required energy from the co-digestion of maize, grass and cattle manure, while emitting the lowest amount of GHGs. To achieve this goal, the AMOCO model was combined with LCA. The program was structured in Matlab to simulate any biogas process based on the AMOCO model and combined with the equations necessary to obtain the climate change, acidification and eutrophication potentials of the whole production system based on the ReCiPe midpoint v.1.06 methodology. The developed simulation was optimized based on real data from operating biogas plants and existing literature. The results show that the AMOCO model can successfully imitate the system behavior of biogas plants and the time required for the process to adapt in order to generate the demanded energy from the available feedstock. Combination with the LCA approach provided the opportunity to keep the resulting emissions from operation at the lowest possible level. This would allow for a prediction of the process when the feedstock utilization supports the establishment of closed material cycles within a smart bio-production grid – under the constraint of minimal drawbacks for the environment and maximal sustainability.
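A minimal sketch of an AMOCO/AM2-type two-population digester model is given below using SciPy rather than Matlab. The kinetic and yield parameters are illustrative values in the spirit of the published AM2 model, not the ones calibrated in this study, and the coupling to the LCA equations is omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-population anaerobic digestion sketch: acidogens X1 grow on organic substrate S1,
# methanogens X2 grow on volatile fatty acids S2. All parameter values are illustrative.
mu1_max, K1 = 1.2, 7.1              # Monod kinetics, acidogenesis (1/d, g/L)
mu2_max, K2, KI = 0.74, 9.3, 16.0   # Haldane kinetics, methanogenesis (1/d, mmol/L)
k1, k2, k3, k6 = 42.1, 116.5, 268.0, 453.0  # yield coefficients
alpha, D = 0.5, 0.4                 # retained-biomass fraction, dilution rate (1/d)
S1_in, S2_in = 30.0, 80.0           # feed concentrations

def digester(t, y):
    X1, X2, S1, S2 = y
    mu1 = mu1_max * S1 / (K1 + S1)
    mu2 = mu2_max * S2 / (K2 + S2 + S2 ** 2 / KI)
    return [
        (mu1 - alpha * D) * X1,
        (mu2 - alpha * D) * X2,
        D * (S1_in - S1) - k1 * mu1 * X1,
        D * (S2_in - S2) + k2 * mu1 * X1 - k3 * mu2 * X2,
    ]

sol = solve_ivp(digester, (0, 60), [0.5, 0.1, 5.0, 10.0], dense_output=True)
X1, X2, S1, S2 = sol.y[:, -1]
q_ch4 = k6 * (mu2_max * S2 / (K2 + S2 + S2 ** 2 / KI)) * X2   # methane flow at day 60
print(f"approximate steady-state CH4 flow: {q_ch4:.1f} mmol/L/d")
```

Varying the dilution rate D or the feed concentrations over time in such a sketch is the kind of experiment that lets the model mimic demand-driven operation on changing feedstock, as described in the abstract.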

Keywords: AMOCO model, GHG emissions, life cycle assessment, modelling

Procedia PDF Downloads 168
182 Runoff Estimates of Rapidly Urbanizing Indian Cities: An Integrated Modeling Approach

Authors: Rupesh S. Gundewar, Kanchan C. Khare

Abstract:

Runoff contribution from urban areas comes mainly from manmade structures and a few natural contributors. The manmade structures are buildings, roads and other paved areas, whereas the natural contributors are groundwater, overland flows, etc. Runoff alleviation is provided by manmade as well as natural storages. Manmade storages are storage tanks or other storage structures such as soakaways or soak pits, which are more common in western and European countries. Natural storages include catchment slope, infiltration, catchment length, channel rerouting, drainage density, depression storage, etc. A literature survey on the manmade and natural storages/inflows has reported the percentage contribution of each individually. Sanders et al. reported that a vegetation canopy reduces runoff by 7% to 12%. Nassif et al. reported that catchment slope has an impact of 16% on bare standard soil and 24% on grassed soil on rainfall runoff. Infiltration, being dependent on the pervious/impervious ratio, is catchment specific, but the literature reports a range of 15% to 30% loss of rainfall runoff in various catchment study areas. Catchment length and channel rerouting also play a considerable role in the reduction of rainfall runoff. Groundwater infiltration inflow adds to the runoff where the groundwater table is very shallow and the soil saturates even in a lower-intensity storm; this inflow together with surface inflow contributes approximately 2% of the total runoff volume. Considering the various contributing factors in runoff, the literature survey indicates that an integrated modelling approach needs to be considered. Traditional storm water network models are able to predict to a fair/acceptable degree of accuracy provided no interaction with receiving water (river, sea, canal, etc.), ground infiltration, treatment works, etc. is assumed. When such interactions are significant, it becomes difficult to reproduce the actual flood extent using the traditional discrete modelling approach, and as a result the correct flooding situation is very rarely captured accurately. Since the development of spatially distributed hydrologic models, predictions have become more accurate, at the cost of requiring more accurate spatial information. The integrated approach provides a greater understanding of the performance of the entire catchment. It enables identification of the source of flow in the system, an understanding of how it is conveyed, and its impact on the receiving body. It also identifies important pain points, hydraulic controls and the source of flooding which could not be easily understood with a discrete modelling approach. This also enables decision makers to identify solutions which can be spread throughout the catchment rather than being concentrated at the single point where the problem exists. It can thus be concluded from the literature survey that the representation of urban details can be a key differentiator in successfully understanding flooding issues. The intent of this study is to accurately predict the runoff from impermeable areas in an urban area in India. A representative area has been selected for which data were available, and the predictions made have been corroborated with the actual measured data.
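As a simple illustration of how impervious and pervious fractions drive an urban runoff estimate, the sketch below applies the classical rational method (Q = C·i·A) with an area-weighted runoff coefficient. The coefficients, areas and rainfall intensity are hypothetical illustration values, not the data of the Indian study area analysed here, and the rational method is only one of the simpler alternatives to the integrated approach discussed above.

```python
# Rational-method runoff estimate with an area-weighted runoff coefficient.
# All coefficients and areas below are hypothetical illustration values.

land_use = {                 # area in hectares, runoff coefficient C
    "roofs_and_paved": (42.0, 0.90),
    "roads": (18.0, 0.85),
    "open_grassed": (25.0, 0.25),
    "bare_soil": (15.0, 0.40),
}

total_area_ha = sum(a for a, _ in land_use.values())
c_weighted = sum(a * c for a, c in land_use.values()) / total_area_ha

intensity_mm_per_hr = 35.0   # design rainfall intensity (hypothetical)

# Q (m^3/s) = C * i (m/s) * A (m^2)
area_m2 = total_area_ha * 1e4
intensity_m_per_s = intensity_mm_per_hr / 1000.0 / 3600.0
peak_runoff_m3_s = c_weighted * intensity_m_per_s * area_m2

print(f"Weighted runoff coefficient: {c_weighted:.2f}")
print(f"Estimated peak runoff: {peak_runoff_m3_s:.2f} m^3/s")
```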

Keywords: runoff, urbanization, impermeable response, flooding

Procedia PDF Downloads 233
181 Brain-Derived Neurotrophic Factor and Its Precursor ProBDNF Serum Levels in Adolescents with Mood Disorders: 2-Year Follow-Up Study

Authors: M. Skibinska, A. Rajewska-Rager, M. Dmitrzak-Weglarz, N. Lepczynska, P. Sibilski, P. Kapelski, J. Pawlak, J. Twarowska-Hauser

Abstract:

Introduction: Neurotrophic factors have been implicated in neuropsychiatric disorders. Brain-Derived Neurotrophic Factor (BDNF) influences neuron differentiation in development as well as synaptic plasticity and neuron survival in adulthood. BDNF is widely studied in mood disorders and has been proposed as a biomarker for depression. BDNF is synthesized as a precursor protein – proBDNF. Both forms are biologically active and exert opposite effects on neurons. Aim: The aim of the study was to examine the serum levels of BDNF and proBDNF in unipolar and bipolar young patients below 24 years old during hypo/manic and depressive episodes and in remission, compared to a healthy control group. Methods: In a prospective 2-year follow-up study, we investigated alterations in the levels of BDNF and proBDNF in 79 patients (23 males, mean age 19.08, SD 3.3, and 56 females, mean age 18.39, SD 3.28) diagnosed with mood disorders (unipolar and bipolar disorder) compared with 35 healthy control subjects (7 males, mean age 20.43, SD 4.23, and 28 females, mean age 21.25, SD 2.11). Clinical characteristics including mood, comorbidity, family history, and treatment were evaluated during control visits, and clinical symptoms were rated using the Hamilton Depression Rating Scale and the Young Mania Rating Scale. Serum BDNF and proBDNF concentrations were determined by the Enzyme-Linked Immunosorbent Assay (ELISA) method. Serum BDNF and proBDNF levels were analysed with the covariates: sex, age, age > 18 and < 18 years old, family history of affective disorders, and drug-free vs. medicated status. Normality of the data was tested using the Shapiro-Wilk test. Levene’s test was used to assess homogeneity of variance. Non-parametric tests (Mann-Whitney U test, Kruskal-Wallis ANOVA, Friedman’s ANOVA, Wilcoxon signed-rank test) and the Spearman correlation coefficient were applied in the analyses. The statistical significance level was set at p < 0.05. Results: BDNF and proBDNF serum levels did not differ between patients at baseline and controls, nor when comparing patients in an acute episode of depression/hypo/mania at baseline and in euthymia (at month 3 or 6). Comparing BDNF and proBDNF levels between patients in euthymia and the control group, no differences were found. An increased BDNF level in women compared to men at baseline (p=0.01) was observed. The BDNF level at baseline was negatively correlated with depression and mania occurrence at month 24 (p=0.04). The BDNF level at month 12 was negatively correlated with depression and mania occurrence at month 12 (p=0.01). A correlation of BDNF level with sex was detected (p=0.01). proBDNF levels at months 3, 6 and 12 negatively correlated with disease status (p=0.02, p=0.008, p=0.009, respectively). No other correlations of BDNF and proBDNF levels with clinical and demographic variables were detected. Discussion: Our results did not show any differences in BDNF and proBDNF levels between depression, mania, euthymia, and controls. An imbalance in BDNF/proBDNF signalling may be involved in the pathogenesis of mood disorders. Further studies on larger groups are recommended. The study was funded by the National Science Centre in Poland, grant no. 2011/03/D/NZ5/06146.
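The statistical workflow described above can be illustrated with a minimal Python sketch using scipy.stats. The arrays below are random placeholder data, not the study's serum measurements, and only a subset of the listed tests is shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder serum levels (ng/mL); NOT the study data.
bdnf_patients = rng.normal(25.0, 6.0, size=79)
bdnf_controls = rng.normal(26.0, 5.0, size=35)

# Normality check (Shapiro-Wilk) for each group
for name, sample in [("patients", bdnf_patients), ("controls", bdnf_controls)]:
    w, p = stats.shapiro(sample)
    print(f"Shapiro-Wilk {name}: W={w:.3f}, p={p:.3f}")

# Homogeneity of variance (Levene's test)
print("Levene:", stats.levene(bdnf_patients, bdnf_controls))

# Group comparison with a non-parametric test (Mann-Whitney U)
print("Mann-Whitney U:", stats.mannwhitneyu(bdnf_patients, bdnf_controls))

# Spearman correlation, e.g. baseline BDNF vs. a clinical score
hamd_scores = rng.integers(0, 30, size=79)          # placeholder HDRS scores
rho, p = stats.spearmanr(bdnf_patients, hamd_scores)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
```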

Keywords: bipolar disorder, Brain-Derived Neurotrophic Factor (BDNF), proBDNF, unipolar depression

Procedia PDF Downloads 224
180 Study of the Diaphragm Flexibility Effect on the Inelastic Seismic Response of Thin Wall Reinforced Concrete Buildings (TWRCB): A Purpose to Reduce the Uncertainty in the Vulnerability Estimation

Authors: A. Zapata, Orlando Arroyo, R. Bonett

Abstract:

Over the last two decades, the growing demand for housing in Latin American countries has led to the development of construction projects based on low- and medium-rise buildings with thin reinforced concrete walls. This system, known as Thin Wall Reinforced Concrete Buildings (TWRCB), uses walls with thicknesses from 100 to 150 millimetres, with flexural reinforcement formed by welded wire mesh (WWM) with diameters between 5 and 7 millimetres, arranged in one or two layers. These walls often have irregular structural configurations, including combinations of rectangular shapes. Experimental and numerical research conducted in regions where this structural system is commonplace indicates inherent weaknesses, such as limited ductility due to the WWM reinforcement and the thin element dimensions. Because of its complexity, numerical analyses have relied on two-dimensional models that do not explicitly account for the floor system, even though it plays a crucial role in distributing seismic forces among the resisting elements; instead, these analyses assume a rigid diaphragm hypothesis. To evaluate the effect of diaphragm flexibility on the inelastic seismic response of this system, two case-study buildings were selected, representative of low-rise and mid-rise TWRCB in Colombia. The buildings were analyzed in OpenSees using MVLEM-3D elements for the walls and shell elements for the slabs, so that the coupling effect of the diaphragm is included in the nonlinear behaviour. Three cases are considered: a) models without a slab, b) models with rigid slabs, and c) models with flexible slabs. Incremental static (pushover) and nonlinear dynamic analyses were carried out using the set of 44 far-field ground motions of FEMA P-695, scaled by factors of 1.0 and 1.5 to consider the probability of collapse for the design basis earthquake (DBE) and the maximum considered earthquake (MCE), according to the site locations and hazard zone of the archetypes in the Colombian NSR-10. Base shear capacity, maximum roof displacement, individual wall base shear demands and probabilities of collapse were calculated to evaluate the effect of the absent, rigid and flexible slabs on the nonlinear behaviour of the archetype buildings. The pushover results show that the buildings exhibit an overstrength between 1.1 and 2 when the slab is considered explicitly, depending on the plan configuration of the structural walls; additionally, the nonlinear behaviour obtained without a slab is more conservative than when the slab is represented. Including the flexible slab in the analysis highlights the importance of considering the slab contribution to the distribution of shear forces between structural elements according to their design resistance and rigidity. The dynamic analysis revealed that including the slab reduces the collapse probability of this system, because it yields lower displacements and deformations, enhancing the safety of residents and the seismic performance. Including the slab in the modelling is therefore important to capture its real effect on the distribution of shear forces in the walls due to coupling, to estimate the correct nonlinear behaviour of this system, and to proportion the correct resistance and rigidity of the elements in design, reducing the possibility of damage to the elements during an earthquake.
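To illustrate what an incremental static (pushover) analysis produces, the sketch below pushes an idealized single-degree-of-freedom system with an elastic-perfectly-plastic base shear response under displacement control and reports the resulting overstrength. The stiffness, strength and target drift values are hypothetical placeholders, and the sketch is not the OpenSees MVLEM-3D model used in this study.

```python
import numpy as np

# Hypothetical SDOF idealization of a wall building (placeholder values).
k_elastic = 180_000.0      # lateral stiffness (kN/m)
v_yield = 2_400.0          # base shear at yield (kN)
v_design = 1_500.0         # design base shear (kN), used for overstrength
target_roof_disp = 0.15    # target roof displacement (m)

def base_shear(disp):
    """Elastic-perfectly-plastic capacity: linear up to yield, flat afterwards."""
    return min(k_elastic * disp, v_yield)

# Displacement-controlled pushover: increase roof displacement step by step.
disps = np.linspace(0.0, target_roof_disp, 301)
shears = np.array([base_shear(d) for d in disps])

overstrength = shears.max() / v_design
print(f"Peak base shear: {shears.max():.0f} kN")
print(f"Overstrength factor (Vmax / Vdesign): {overstrength:.2f}")
```

In the full building models, the same capacity-curve idea applies, but the base shear comes from the assembled nonlinear wall and slab elements rather than a single idealized spring.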

Keywords: thin wall reinforced concrete buildings, coupling slab, rigid diaphragm, flexible diaphragm

Procedia PDF Downloads 45
179 Overview of Research Contexts about XR Technologies in Architectural Practice

Authors: Adeline Stals

Abstract:

The transformation of architectural design practices has been underway for almost forty years due to the development and democratization of computer technology. New and more efficient tools are constantly being proposed to architects, amplifying a technological wave that sometimes stimulates them and sometimes overwhelms them, depending essentially on their digital culture and the context (socio-economic, structural, organizational) in which they work on a daily basis. Our focus is on VR, AR, and MR technologies dedicated to architecture. The commercialization of affordable headsets like the Oculus Rift and the HTC Vive, or more low-tech options like the Google Cardboard, makes these technologies more accessible. In that regard, researchers report a growing interest in these tools among architects, given the new perspectives they open up in terms of workflow, representation, collaboration, and client involvement. However, studies rarely discuss how the sample studied affects the results. Our research provides an overview of VR, AR, and MR research in a corpus of papers selected from conferences and journals. A closer look at the sample of these research projects highlights the necessity of taking the context of studies into consideration in order to develop tools truly dedicated to the real practices of specific architect profiles. This literature review formalizes milestones for future challenges to address. The methodology applied is based on a systematic review of two sources of publications. The first one is the Cumincad database, which gathers publications from conferences dedicated exclusively to digital technologies in architecture. The second part of the corpus is based on journal publications. Journals have been selected considering their ranking on Scimago. Among the journals in the predefined category ‘architecture’ and in Quartile 1 for 2018 (last update when consulted), we have retained the ones related to the architectural design process: Design Studies, CoDesign, Architectural Science Review, Frontiers of Architectural Research and Archnet-IJAR. Besides those journals, IJAC, not classified in the ‘architecture’ category, was selected by the author for its adequacy with architecture and computing. For all requests, the search terms were ‘virtual reality’, ‘augmented reality’, and ‘mixed reality’ in the title and/or keywords, for papers published between 2015 and 2019 (included). This time frame was chosen considering the fast evolution of these technologies in the past few years. Accordingly, the systematic review covers 202 publications. The literature review on studies about XR technologies establishes the state of the art of the current situation. It highlights that studies are mostly based on experimental contexts with controlled conditions (e.g., pedagogical) or on practices established in large architectural offices of international renown. However, few studies focus on the strategies and practices developed by offices of smaller size, which represent the largest part of the market. Indeed, a European survey studying the architectural profession in Europe in 2018 reveals that 99% of offices are composed of fewer than ten people, and 71% of only one person. The study also showed that the number of medium-sized offices is continuously decreasing in favour of smaller structures. As a result, a frontier seems to remain between the worlds of research and practice, especially for the majority of small architectural practices that make modest use of technology.
This paper constitutes a reference for the next step of the research and for further worldwide research by facilitating its contextualization.
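As an illustration of how such a corpus can be screened programmatically, the following sketch filters a hypothetical publication table by year and by the three search terms in the title or keywords. The file name and column names are assumptions for illustration, not the actual Cumincad or journal exports used in the review.

```python
import pandas as pd

# Hypothetical export of candidate publications (columns are assumed for illustration).
pubs = pd.read_csv("publications.csv")   # expects: title, keywords, year, source

terms = ["virtual reality", "augmented reality", "mixed reality"]

def mentions_xr(row):
    """True if any search term appears in the title or keywords (case-insensitive)."""
    text = f"{row['title']} {row['keywords']}".lower()
    return any(term in text for term in terms)

in_window = pubs["year"].between(2015, 2019)          # 2015–2019 inclusive
selected = pubs[in_window & pubs.apply(mentions_xr, axis=1)]

print(f"Corpus size: {len(selected)} publications")
print(selected.groupby(["source", "year"]).size())    # breakdown by source and year
```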

Keywords: architectural design, literature review, SME, XR technologies

Procedia PDF Downloads 90
178 Foucault and Governmentality: International Organizations and State Power

Authors: Sara Dragisic

Abstract:

Using the theoretical analysis of the birth of biopolitics that Foucault carried out through the history of liberalism and neoliberalism, this paper tries to show how, precisely through problematizing the role of international institutions, the model of governance differs from previous ways of objectifying body and life. Are the state and its mechanisms still a Leviathan to fight against, or can the state even be the driver of resistance against the proponents of modern governance and biopolitical power? Do paradigmatic examples of biopolitics still appear through sovereignty and (international) law, or is it precisely this sphere that shows a significant dose of incompetence and powerlessness in relation not only to the economic sphere (Foucault’s critique of neoliberalism) but also to the new politics of freedom? Have the struggle for freedom and human rights, as well as the war on terrorism, opened a new spectrum of biopolitical processes, manifested precisely through new international institutions and humanitarian discourse? We will try to answer these questions in the following way. On the one hand, we will show that the views of authors such as Agamben and Hardt and Negri, for whom the state and sovereignty are seen as enemies to be defeated or overcome, fail to see how such attempts can translate into the politicization of life, as happens in many examples of the doctrine of liberal interventionism and humanitarianism. On the other hand, we will point out that it is precisely the humanitarian discourse and the defense of the right to intervention that can be the incentive and basis for the politicization of the category of life and lead to the selective application of human rights. Zizek’s example of the killing of United Nations workers and doctors in a village during the Vietnam War, who were targeted even before police or soldiers because they were seen precisely as a powerful instrument of American imperialism (as they were sincerely trying to help the population), will be the focus of this part of the analysis. We will ask whether such an interpretation is a kind of liquidation of the extreme left of the political (Laclau), or whether it can at least partly explain the need to review the functioning of international organizations, ranging from those dealing with humanitarian aid (and humanitarian military interventions) to those dealing with the protection and security of the population, primarily from growing terrorism. Based on the above examples, we will also explain how the discourse of terrorism itself plays a dual role: it can appear as a tool of liberal biopolitics, although, more superficially, it mostly appears as an enemy that wants to destroy the liberal system and its values. This brings us to the basic problem that this paper will tackle: do the mechanisms of institutional struggle for human rights and freedoms, which are often seen as opposed to the security mechanisms of the state, serve the governance of citizens in such a way that the latter themselves participate in producing biopolitical governmental practices? Is freedom today "nothing but the correlative development of apparatuses of security" (Foucault)? Or we can continue this line of Foucault’s argumentation and extend the interpretation with the important question of what today precisely reflects the change in the rationality of governance through which society is transformed from a passive object into a subject of its own production. 
Finally, in order to understand the skills of biopolitical governance in modern civil society, it is necessary to pay attention to the status of international organizations, which seem to have become a significant place for the implementation of global governance. In this sense, the power of sovereignty can turn out to be an insufficiently strong power of security policy, which can go hand in hand with freedom policies, through neoliberal governmental techniques.

Keywords: neoliberalism, Foucault, sovereignty, biopolitics, international organizations, NGOs, Agamben, Hardt & Negri, Zizek, security, state power

Procedia PDF Downloads 180
177 Assessment and Forecasting of the Impact of Negative Environmental Factors on Public Health

Authors: Nurlan Smagulov, Aiman Konkabayeva, Akerke Sadykova, Arailym Serik

Abstract:

Introduction. Adverse environmental factors do not immediately lead to pathological changes in the body. They can promote the growth of pre-pathology characterized by shifts in physiological, biochemical, immunological and other indicators of the body's state. These disorders are unstable, reversible and indicative of body reactions, so there is an opportunity to judge objectively the internal structure of the adaptive body reactions at the level of individual organs and systems. In order to obtain a stable response of the body to the chronic effects of unfavorable environmental factors of low intensity (compared to production environment factors), a time called the «lag time» is needed. Results obtained without considering this factor distort reality and, for the most part, cannot serve as a reliable basis for the main conclusions of any work. A technique is needed to reduce methodological errors and to combine mathematical logic, statistical methods and a medical point of view, which ultimately affects the obtained results and avoids false correlations. Objective. Development of a methodology for assessing and predicting the impact of environmental factors on population health considering the «lag time». Methods. Research objects: environmental indicators and population morbidity indicators. The database on the environmental state was compiled from the monthly newsletters of Kazhydromet. Data on population morbidity were obtained from regional statistical yearbooks. When processing the statistical data, a time interval (lag) was determined for each «argument-function» pair, that is, the required interval after which the effect of the harmful factor (argument) fully manifests itself in the indicators of the organism's state (function). The lag value was determined by the cross-correlation functions of the arguments (environmental indicators) with the functions (morbidity). Correlation coefficients (r) and their reliability (t), Fisher's criterion (F) and the influence share (R2) of the main factor (argument) on the indicator (function), as a percentage, were calculated. Results. The ecological situation of an industrially developed region has an impact on health indicators, but with some nuances. Fundamentally different results were obtained when the «lag time» was considered in the mathematical data processing. Namely, a pronounced correlation was revealed after the two databases (ecology and morbidity) were shifted relative to each other. For example, the lag period was 4 years for dust concentration and general morbidity, and 3 years for childhood morbidity. These periods accounted for the maximum values of the correlation coefficients and the largest percentage contribution of the influencing factor. Similar results were observed in relation to the concentration of soot, dioxide, etc. Comprehensive statistical processing using multiple correlation-regression and variance analysis confirms the correctness of the above statement. This method provided an integrated approach to predicting the degree of pollution of the main environmental components and to identifying the most dangerous combinations of concentrations of the leading negative environmental factors. Conclusion. The method of assessing the «environment-public health» system considering the «lag time» is qualitatively different from the traditional one (without considering the «lag time»). The results differ significantly and are more amenable to a logical explanation of the obtained dependencies. The method allows the quantitative and qualitative dependence within the «environment-public health» system to be presented in a different way.
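The lag-selection step described above can be illustrated with a short Python sketch that shifts an annual exposure series against a morbidity series and picks the lag with the strongest correlation. The two series below are synthetic placeholders, not the Kazhydromet or morbidity data analysed in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic annual series (placeholders): exposure drives morbidity 4 years later.
years = 25
exposure = rng.normal(100, 15, size=years)            # e.g. dust concentration
morbidity = np.empty(years)
morbidity[:4] = rng.normal(50, 5, size=4)
morbidity[4:] = 0.6 * exposure[:-4] + rng.normal(0, 3, size=years - 4)

def lagged_r(x, y, lag):
    """Pearson correlation between x and y with y lagging x by `lag` years."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

correlations = {lag: lagged_r(exposure, morbidity, lag) for lag in range(0, 8)}
best_lag = max(correlations, key=lambda k: abs(correlations[k]))

for lag, r in correlations.items():
    print(f"lag {lag} yr: r = {r:+.2f}")
print(f"Selected lag: {best_lag} years (|r| maximal)")
```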

Keywords: ecology, morbidity, population, lag time

Procedia PDF Downloads 57
176 Pedagogical Opportunities of Physics Education Technology Interactive Simulations for Secondary Science Education in Bangladesh

Authors: Mohosina Jabin Toma, Gerald Tembrevilla, Marina Milner-Bolotin

Abstract:

Science education in Bangladesh is losing its appeal at an alarming rate due to the lack of science laboratory equipment, excessive teacher-student ratios, and outdated teaching strategies. Research-based educational technologies aim to address some of the problems faced by teachers who have limited access to laboratory resources, like many Bangladeshi teachers. The Physics Education Technology (PhET) research team has been developing science and mathematics interactive simulations to help students develop deeper conceptual understanding. Still, PhET simulations are rarely used in Bangladesh. The purpose of this study is to explore Bangladeshi teachers’ challenges in learning to implement PhET-enhanced pedagogies and to examine teachers’ views on PhET’s pedagogical opportunities in secondary science education. Since it is a new technology for Bangladesh, seven workshops on PhET were conducted in Dhaka city for 129 in-service and pre-service teachers in the winter of 2023, prior to data collection. This study followed an explanatory mixed-method approach that included pre- and post-workshop surveys and five semi-structured interviews. Teachers participated in the workshops voluntarily and shared their experiences at the end. Teachers’ challenges were also identified from workshop discussions and observations. The interviews took place three to four weeks after the workshops and shed light on teachers’ experiences of using PhET in actual classroom settings. The results suggest that teachers had difficulty handling the new technology; hence, they recommended preparing a booklet and Bengali YouTube videos on PhET to assist them in overcoming their struggles. Teachers also faced challenges in using any inquiry-based learning approach due to the content-loaded curriculum and exam-oriented education system, as well as limited experience with inquiry-based education. The short duration of classes makes it difficult for them to design PhET activities. Furthermore, considering the limited access to computers and the internet in schools, teachers think PhET simulations can bring positive changes if used in homework activities. Teachers also think they lack the pedagogical skills and sound content knowledge needed to take full advantage of PhET. They highly appreciated the workshops and proposed that the government design teacher training modules on how to incorporate PhET simulations. Despite all the challenges, teachers believe PhET can enhance student learning, ensure student engagement and increase student interest in STEM education. Considering the lack of science laboratory equipment, teachers recognized the potential of PhET as a supplement to hands-on activities for secondary science education in Bangladesh. They believed that if PhET develops more curriculum-relevant simulations, it will bring revolutionary changes to how Bangladeshi students learn science. All the participating teachers in this study came from two organizations, and all the workshops took place in urban areas; therefore, the findings cannot be generalized to all secondary science teachers. A nationwide study is required to include teachers from diverse backgrounds. A further study could shed light on how building a professional learning community can lessen teachers’ challenges in incorporating PhET-enhanced pedagogy in their teaching.

Keywords: educational technology, inquiry-based learning, PhET interactive simulations, PhET-enhanced pedagogies, science education, science laboratory equipment, teacher professional development

Procedia PDF Downloads 62
175 Role of Toll Like Receptor-2 in Female Genital Tuberculosis Disease Infection and Its Severity

Authors: Swati Gautam, Salman Akhtar, S. P. Jaiswar, Amita Jain

Abstract:

Background: FGTB is now a major global health problem, mostly in developing countries including India. In humans, Mycobacterium tuberculosis (M.tb) is the causative agent of the infection. A high index of suspicion is required for early diagnosis due to the asymptomatic presentation of FGTB disease. In macrophages, Toll-Like Receptor-2 (TLR-2) is one of the receptors that mediate the host's immune response to M.tb. The expression of TLR-2 on macrophages is important in determining the fate of innate immune responses to M.tb. TLR-2 plays a dual role: on one hand, its high expression on macrophages worsens the outcome of infection; on the other hand, it keeps M.tb in its dormant stage and avoids activation of M.tb from the latent phase. Single Nucleotide Polymorphisms (SNPs) of the TLR-2 gene play an important role in susceptibility to TB among different populations and, subsequently, in the development of infertility. Methodology: This case-control study was carried out in the Department of Obstetrics and Gynaecology and the Department of Microbiology at King George's Medical University, U.P., Lucknow, India. A total of 300 subjects (150 cases and 150 controls) were enrolled in the study, only after fulfilling the given inclusion and exclusion criteria. Inclusion criteria: age 20-35 years, menstrual irregularities, positive on Acid-Fast Bacilli (AFB), TB-PCR, (LJ/MGIT) culture in Endometrial Aspiration (EA). Exclusion criteria: active Koch's disease, on ATT, PCOS, endometriosis and fibroid, positive for gonococcal and chlamydial infection. Blood samples were collected in EDTA tubes from cases and healthy control women (HCW), and genomic DNA extraction was carried out by the salting-out method. Genotyping of the TLR-2 genetic variants (Arg753Gln and Arg677Trp) was performed using the amplification refractory mutation system (ARMS) PCR technique. PCR products were analyzed by electrophoresis on 1.2% agarose gel and visualized by gel-doc. Statistical analysis of the data was performed using the SPSS 16.3 software, computing odds ratios (OR) with 95% CI. Linkage Disequilibrium (LD) analysis was done with the SNPStats online software. Results: For the TLR-2 (Arg753Gln) polymorphism, a significant risk of FGTB was observed with the GG homozygous mutant genotype (OR=13, CI=0.71-237.7, p=0.05) and the AG heterozygous mutant genotype (OR=13.7, CI=0.76-248.06, p=0.03); however, the G allele (OR=1.09, CI=0.78-1.52, p=0.67) individually was not associated with FGTB. For the TLR-2 (Arg677Trp) polymorphism, a significant risk of FGTB was observed with the TT homozygous mutant genotype (OR=0.020, CI=0.001-0.341, p < 0.001), the CT heterozygous mutant genotype (OR=0.53, CI=0.33-0.86, p=0.014) and the T allele (OR=0.463, CI=0.32-0.66, p < 0.001). The TT mutant genotype was only found in FGTB cases, and the frequency of the CT heterozygous genotype was higher in the control group than in the FGTB group; thus, the CT genotype acted as a protective variant against FGTB susceptibility. In the haplotype analysis of the TLR-2 genetic variants, four possible combinations, i.e. G-T, A-C, G-C, and A-T, were obtained. The frequency of haplotype A-C was significantly higher in FGTB cases (0.32); the control group did not show the A-C haplotype, which was found only in FGTB cases. Conclusion: In conclusion, the study showed a significant association of both TLR-2 genetic variants with FGTB disease. Moreover, the presence of the specific associated genotypes/alleles suggests the possibility of assessing disease severity and of a clinical approach aimed at preventing extensive damage by the disease; it may also be helpful for early detection of the disease.
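As a minimal illustration of the genotype-association statistics used above, the sketch below computes an odds ratio and its 95% confidence interval (Woolf's log method) from a 2x2 genotype-by-status table. The counts are hypothetical placeholders, not the actual case/control counts of this study.

```python
import math

# Hypothetical 2x2 table (placeholder counts, NOT the study data):
#                 genotype present   genotype absent
# FGTB cases              a                 b
# Controls                c                 d
a, b, c, d = 30, 120, 12, 138

odds_ratio = (a * d) / (b * c)

# Woolf's method: 95% CI on the log odds ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```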

Keywords: ARMS, EDTA, FGTB, TLR

Procedia PDF Downloads 271
176 Official Game Account Analysis: Factors Influencing Users' Judgments in Limited-Word Posts

Authors: Shanhua Hu

Abstract:

Social media, as a critical propagandizing channel for films, video games, and digital products, has received substantial research attention, but several critical barriers remain: (1) few studies explore the internal and external connections of a product as part of the multimodal context that gives rise to readability and commercial return; (2) there is a lack of multimodal analysis of game publishers' official product accounts and their impact on users' behaviors, including purchase intention, social media engagement, and playing time; (3) no standardized, ecologically valid data spanning game types can be used to study the complexity of official accounts' postings within a time period. This proposed research helps to tackle these limitations in order to develop a model of readability study that is more ecologically valid, robust, and thorough. To accomplish this objective, this paper provides a more diverse dataset comprising different visual elements and messages collected from the official Twitter accounts of the Top 20 best-selling games of 2021. Video game companies target potential users through social media; a popular approach is to set up an official account to maintain exposure. Typically, major game publishers create an official account on Twitter months before the game's release date to post updates on the game's development, announce collaborations, and reveal spoilers. Analyses of tweets from those official Twitter accounts would assist publishers and marketers in identifying how to efficiently and precisely deploy advertising to increase game sales. The purpose of this research is to determine how official game accounts use Twitter to attract new customers, specifically which types of messages are most effective at increasing sales. The dataset includes the number of days between each Twitter post and the actual release date, the readability of the post (Flesch Reading Ease Score, FRES), the number of emojis used, the number of hashtags, the number of followers of the mentioned users, the categorization of the posts (i.e., spoilers, collaborations, promotions), and the number of video views. The timeline of Twitter postings from official accounts will be compared to the history of pre-orders and sales figures to determine the potential impact of social media posts. This study aims to determine how the above-mentioned characteristics of official accounts' Twitter postings influence the sales of the game and to examine the possible causes of this influence. The outcome will provide researchers with a list of potential aspects that could influence people's judgments in limited-word posts. With increased average time spent online, users adapt more quickly than before to online information exchange and reading habits, such as word choice, sentence length, and the use of emojis or hashtags. The study of the promotion of official game accounts will not only enable publishers to create more effective promotion techniques in the future but also provide ideas for future research on the influence of social media posts with a limited number of words on consumers' purchasing decisions. Future research can focus on more specific linguistic aspects, such as precise word choice in advertising.
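For reference, the Flesch Reading Ease Score (FRES) mentioned above is computed as 206.835 - 1.015 * (total words / total sentences) - 84.6 * (total syllables / total words). The sketch below implements this formula with a crude vowel-group syllable counter, which is only an approximation and not necessarily the exact tool used in the study.

```python
import re

def count_syllables(word):
    """Rough syllable count: runs of vowels, with a small correction for silent 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # FRES = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

tweet = "Pre-order now and join the launch event. New gameplay trailer drops tomorrow!"
print(f"FRES = {flesch_reading_ease(tweet):.1f}")   # higher scores mean easier to read
```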

Keywords: engagement, official account, promotion, twitter, video game

Procedia PDF Downloads 53
173 Thermally Stable Crystalline Triazine-Based Organic Polymeric Nanodendrites for Mercury(2+) Ion Sensing

Authors: Dimitra Das, Anuradha Mitra, Kalyan Kumar Chattopadhyay

Abstract:

Organic polymers, constructed from light elements like carbon, hydrogen, nitrogen, oxygen, sulphur, and boron atoms, are an emergent class of non-toxic, metal-free, environmentally benign advanced materials. Covalent triazine-based polymers with a functional triazine group are a significant class of organic materials due to their remarkable stability arising out of strong covalent bonds. They can conventionally form hydrogen bonds, favour π–π contacts, and they were recently revealed to be involved in interesting anion–π interactions. The present work mainly focuses upon the development of a single-crystalline, highly cross-linked, triazine-based, nitrogen-rich organic polymer with nanodendritic morphology and significant thermal stability. The polymer has been synthesized through hydrothermal treatment of melamine and ethylene glycol, resulting in cross-polymerization via a condensation-polymerization reaction. The crystal structure of the polymer has been evaluated by employing the Rietveld whole-profile fitting method. The polymer has been found to be composed of monoclinic melamine having space group P21/a. A detailed insight into the chemical structure of the as-synthesized polymer has been elucidated by Fourier Transform Infrared Spectroscopy (FTIR) and Raman spectroscopic analysis. X-Ray Photoelectron Spectroscopic (XPS) analysis has also been carried out for further understanding of the different types of linkages required to create the backbone of the polymer. The unique rod-like morphology of the triazine-based polymer has been revealed from the images obtained from Field Emission Scanning Electron Microscopy (FESEM) and Transmission Electron Microscopy (TEM). Interestingly, this polymer has been found to selectively detect mercury (Hg²⁺) ions at an extremely low concentration through fluorescence quenching, with a detection limit as low as 0.03 ppb. The high toxicity of mercury ions (Hg²⁺) arises from their strong affinity towards the sulphur atoms of biological building blocks. Even a trace quantity of this metal is dangerous for human health. Furthermore, owing to their small ionic radius and high solvation energy, Hg²⁺ ions remain encapsulated by water molecules, making their detection a challenging task. There are some existing reports on fluorescence-based heavy metal ion sensors using covalent organic frameworks (COFs), but reports on mercury sensing using triazine-based polymers remain rather undeveloped. Thus, ultra-trace detection of Hg²⁺ ions with a high level of selectivity and sensitivity has contemporary significance. A plausible sensing mechanism of the polymer has been proposed to understand the applicability of the material as a potential sensor. The impressive sensitivity of the polymer sample towards Hg²⁺ constitutes the very first report of a highly crystalline triazine-based polymer (without the introduction of any sulphur groups or functionalization) used for mercury ion detection through the photoluminescence quenching technique. Being cheap, non-toxic and scalable, this crystalline metal-free organic polymer has current relevance and could be a promising candidate for Hg²⁺ ion sensing at the commercial level.
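Fluorescence-quenching sensors of this kind are commonly characterized with a Stern-Volmer plot and a 3σ/slope detection limit. The sketch below shows how such a calibration could be processed; the intensity data are entirely synthetic rather than the measurements reported here, and the Stern-Volmer/3σ approach is a standard convention, not the authors' stated procedure.

```python
import numpy as np

# Synthetic calibration data (placeholders, NOT the reported measurements).
conc_ppb = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])                 # Hg2+ concentration
intensity = np.array([1000.0, 960.0, 925.0, 860.0, 750.0, 590.0])   # fluorescence counts

f0 = intensity[0]
sv_ratio = f0 / intensity - 1.0            # Stern-Volmer: F0/F - 1 = Ksv * [Q]
ksv, _ = np.polyfit(conc_ppb, sv_ratio, 1) # slope ~ quenching constant (per ppb)

# 3*sigma/slope detection limit, using the blank's standard deviation.
blank_sd = 2.5                             # assumed std. dev. of repeated blank readings
slope_intensity, _ = np.polyfit(conc_ppb, intensity, 1)
lod_ppb = 3 * blank_sd / abs(slope_intensity)

print(f"Stern-Volmer constant Ksv ~ {ksv:.3f} per ppb")
print(f"Estimated detection limit ~ {lod_ppb:.3f} ppb")
```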

Keywords: fluorescence quenching, mercury ion sensing, single-crystalline, triazine-based polymer

Procedia PDF Downloads 104
172 Flexural Response of Sandwiches with Micro Lattice Cores Manufactured via Selective Laser Sintering

Authors: Emre Kara, Ali Kurşun, Halil Aykul

Abstract:

Lightweight sandwiches built with various core materials such as foams, honeycombs and lattice structures, which have a high energy-absorbing capacity and a high strength-to-weight ratio, are suitable for several applications in the transport industry (automotive, aerospace, shipbuilding), where fuel savings, increased load-carrying capacity, vehicle safety and reduced emission of harmful gases are very important aspects. While sandwich structures with foams and honeycombs have been applied for many years, there is growing interest in a new generation of sandwiches with micro lattice cores. In order to produce these core structures, various production methods have been created with the development of the technology. One of these production technologies is an additive manufacturing technique called selective laser sintering/melting (SLS/SLM), which is very popular nowadays because it saves production time and enables the production of complex topologies. The static bending and dynamic low-velocity impact tests of sandwiches with carbon fiber/epoxy skins and micro lattice cores produced via SLS/SLM have been reported in only a few studies. The goal of this investigation was the analysis of the flexural response of sandwiches consisting of glass fiber reinforced plastic (GFRP) skins and micro lattice cores manufactured via SLS under thermo-mechanical loads, in order to compare the results in terms of peak load and absorbed energy with respect to the effect of core cell size, temperature and support span length. The micro lattice cores were manufactured using SLS technology, which builds the product from a 3D computer-aided design (CAD) model. The lattice cores, designed as a body-centred cubic (BCC) model with two different cell sizes (d = 2 and 2.5 mm) and a strut diameter of 0.3 mm, were produced using titanium alloy (Ti6Al4V) powder. During the production of all the core materials, the same production parameters, such as laser power, laser beam diameter and building direction, were kept constant. The Vacuum Infusion (VI) method was used to produce the skin materials, made of [0°/90°] woven S-Glass prepreg laminates. The core and skins were combined under VI. Three-point bending tests were carried out on a servo-hydraulic test machine with different support span distances (L = 30, 45, and 60 mm) at various temperatures (T = 23, 40 and 60 °C) in order to analyze the influence of support span and temperature. The failure mode of the collapsed sandwiches has been investigated using 3D computed tomography (CT), which allows a three-dimensional reconstruction of the analyzed object. The main results of the bending tests are the load-deflection curves, peak force and absorbed energy values. The results were compared according to the effect of cell size, support span and temperature. The obtained results have particular importance for applications that require lightweight structures with a high capacity of energy dissipation, such as the transport industry, where problems of collision and crash have increased in recent years.
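Since the main bending-test outputs are load-deflection curves, peak force and absorbed energy, the sketch below shows how these quantities might be extracted from a recorded curve by numerical integration. The synthetic curve here is a placeholder, not data from the tested sandwiches.

```python
import numpy as np

# Synthetic load-deflection record (placeholder data, not the tested sandwiches).
deflection_mm = np.linspace(0.0, 6.0, 200)
load_kN = 1.8 * deflection_mm * np.exp(-0.35 * deflection_mm)   # rise, peak, softening

peak_load_kN = load_kN.max()
deflection_at_peak = deflection_mm[load_kN.argmax()]

# Absorbed energy = area under the load-deflection curve (kN*mm = J)
absorbed_energy_J = np.trapz(load_kN, deflection_mm)

print(f"Peak load: {peak_load_kN:.2f} kN at {deflection_at_peak:.2f} mm")
print(f"Absorbed energy: {absorbed_energy_J:.2f} J")
```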

Keywords: light-weight sandwich structures, micro lattice cores, selective laser sintering, transport application

Procedia PDF Downloads 316
171 Big Data Applications for Transportation Planning

Authors: Antonella Falanga, Armando Cartenì

Abstract:

"Big data" refers to extremely vast and complex sets of data, encompassing extraordinarily large and intricate datasets that require specific tools for meaningful analysis and processing. These datasets can stem from diverse origins like sensors, mobile devices, online transactions, social media platforms, and more. The utilization of big data is pivotal, offering the chance to leverage vast information for substantial advantages across diverse fields, thereby enhancing comprehension, decision-making, efficiency, and fostering innovation in various domains. Big data, distinguished by its remarkable attributes of enormous volume, high velocity, diverse variety, and significant value, represent a transformative force reshaping the industry worldwide. Their pervasive impact continues to unlock new possibilities, driving innovation and advancements in technology, decision-making processes, and societal progress in an increasingly data-centric world. The use of these technologies is becoming more widespread, facilitating and accelerating operations that were once much more complicated. In particular, big data impacts across multiple sectors such as business and commerce, healthcare and science, finance, education, geography, agriculture, media and entertainment and also mobility and logistics. Within the transportation sector, which is the focus of this study, big data applications encompass a wide variety, spanning across optimization in vehicle routing, real-time traffic management and monitoring, logistics efficiency, reduction of travel times and congestion, enhancement of the overall transportation systems, but also mitigation of pollutant emissions contributing to environmental sustainability. Meanwhile, in public administration and the development of smart cities, big data aids in improving public services, urban planning, and decision-making processes, leading to more efficient and sustainable urban environments. Access to vast data reservoirs enables deeper insights, revealing hidden patterns and facilitating more precise and timely decision-making. Additionally, advancements in cloud computing and artificial intelligence (AI) have further amplified the potential of big data, enabling more sophisticated and comprehensive analyses. Certainly, utilizing big data presents various advantages but also entails several challenges regarding data privacy and security, ensuring data quality, managing and storing large volumes of data effectively, integrating data from diverse sources, the need for specialized skills to interpret analysis results, ethical considerations in data use, and evaluating costs against benefits. Addressing these difficulties requires well-structured strategies and policies to balance the benefits of big data with privacy, security, and efficient data management concerns. Building upon these premises, the current research investigates the efficacy and influence of big data by conducting an overview of the primary and recent implementations of big data in transportation systems. Overall, this research allows us to conclude that big data better provide to enhance rational decision-making for mobility choices and is imperative for adeptly planning and allocating investments in transportation infrastructures and services.

Keywords: big data, public transport, sustainable mobility, transport demand, transportation planning

Procedia PDF Downloads 39
170 Female Entrepreneurship in the Creative Industry: The Antecedents of Their Ventures' Performance

Authors: Naoum Mylonas, Eugenia Petridou

Abstract:

Objectives: The objectives of this research are, firstly, to develop an integrated model of factors predicting new venture performance, taking into account certain issues and specificities related to the creative industry and female entrepreneurship based on prior research; secondly, to determine the appropriate measures of venture performance in a creative industry context, drawing upon previous surveys; and thirdly, to illustrate the importance of entrepreneurial orientation, networking ties, environmental dynamism and access to financial capital for new venture performance. Prior Work: An extensive review of the creative industry literature highlights the special nature of entrepreneurship in this field. Entrepreneurs in the creative industry share certain specific characteristics and intentions, such as to produce something aesthetic, to enrich their talents and their creativity, and to combine their entrepreneurial with their artistic orientation. Thus, assessing venture performance and success in the creative industry entails an examination of how creative people or artists conceptualize success. Moreover, female entrepreneurs manifest more positive attitudes towards sectors primarily based on creativity, rather than innovation, where males predominate. As creative industry entrepreneurship is based mainly on the creative personality of the creator/artist, there is high interest in examining female entrepreneurship in the creative industry. Hypotheses development: H1a: Female entrepreneurs who are more entrepreneurially-oriented show a higher financial performance. H1b: Female entrepreneurs who are more artistically-oriented show a higher creative performance. H2: Female entrepreneurs who have a more creative personality perform better. H3: Female entrepreneurs who participate in or belong to networks perform better. H4: Female entrepreneurs who have been consulted by a mentor perform better. H5a: Female entrepreneurs who are motivated more by pull factors perform better. H5b: Female entrepreneurs who are motivated more by push factors perform worse. Approach: A mixed-method triangulation design has been adopted for the collection and analysis of data. The data are collected through a structured questionnaire for the quantitative part and through semi-structured interviews for the qualitative part. The sample is 293 Greek female entrepreneurs in the creative industry. Main findings: All research hypotheses are accepted. The majority of creative industry entrepreneurs evaluate themselves in creative performance terms rather than financial ones. Individuals closely related to traditional arts sectors show no entrepreneurial orientation (EO) but also evaluate themselves highly in terms of venture performance. The creative personality of creators appears to be the most important predictor of venture performance. In accordance with our hypotheses, pull factors lead to higher levels of performance than push factors. Networking and mentoring are viewed as very important, particularly now during the turbulent economic environment in Greece. Implications-Value: Our research provides an integrated model with several moderating variables to predict venture performance in the creative industry, also taking into account the complicated nature of the arts and the way artists and creators define success. Finally, the findings may be used for the appropriate design of educational programs in creative industry entrepreneurship. 
This research has been co-financed by the European Union (European Social Fund – ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: Heracleitus II. Investing in knowledge society through the European Social Fund.

Keywords: venture performance, female entrepreneurship, creative industry, networks

Procedia PDF Downloads 242
169 Physiological Effects during Aerobatic Flights on Science Astronaut Candidates

Authors: Pedro Llanos, Diego García

Abstract:

Spaceflight is considered the last frontier in terms of science, technology, and engineering. But it is also the next frontier in terms of human physiology and performance. Having evolved for more than 200,000 years under Earth's gravity and atmospheric conditions, humans face in spaceflight environmental stresses to which their physiology is not adapted. Hypoxia, accelerations, and radiation are among such stressors; our research involves suborbital flights aiming to develop effective countermeasures in order to assure a sustainable human presence in space. The physiologic baseline of spaceflight participants is subject to great variability driven by age, gender, fitness, and metabolic reserve. The objective of the present study is to characterize different physiologic variables in a population of STEM practitioners during an aerobatic flight. Cardiovascular and pulmonary responses were determined in Science Astronaut Candidates (SACs) during unusual attitude aerobatic flight indoctrination. Physiologic data recordings from 20 subjects participating in high-G flight training were analyzed. These recordings were registered by a wearable sensor vest that monitored electrocardiographic tracings (ECGs) and signs of dysrhythmias or other electrical disturbances throughout the flight. The same cardiovascular parameters were also collected approximately 10 min pre-flight, during each high-G/unusual attitude maneuver and 10 min after the flights. The ratio (pre-flight/in-flight/post-flight) of the cardiovascular responses was calculated for comparison of inter-individual differences. The resulting tracings depicting the cardiovascular responses of the subjects were compared against the G-loads (Gs) during the aerobatic flights to analyze cardiovascular variability aspects and fluid/pressure shifts due to the high Gs. In-flight ECG revealed cardiac variability patterns associated with rapid Gs onset, in terms of reduced heart rate (HR) and some scattered dysrhythmic patterns (15% premature ventricular contraction-type), some of which were considered triggered physiological responses to high-G/unusual attitude training while others were considered instrument artifacts. Variation events were observed in subjects during the +Gz and –Gz maneuvers, and these may be due to sudden shifts in preload and afterload. Our data reveal that aerobatic flight influenced the subjects' breathing rate, due in part to varying levels of energy expenditure caused by increased muscle work during these maneuvers. Noteworthy was the high heterogeneity in the different physiological responses among a relatively small group of SACs exposed to similar aerobatic flights with similar Gs exposures. The cardiovascular responses clearly demonstrated that SACs were subjected to significant flight stress. Routine ECG monitoring during high-G/unusual attitude flight training is recommended to capture pathology underlying dangerous dysrhythmias and to support suborbital flight safety. More research is currently being conducted to further facilitate the development of robust medical screening, medical risk assessment approaches, and suborbital flight training in the context of the evolving commercial human suborbital spaceflight industry. A more mature and integrative medical assessment method is required to understand the physiological state and response variability among highly diverse populations of prospective suborbital flight participants.
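As an illustration of the kind of post-processing applied to such ECG recordings, the sketch below derives a mean heart rate and a simple heart-rate-variability index (RMSSD) from a series of R-R intervals. The interval values are synthetic placeholders, not the candidates' flight recordings.

```python
import numpy as np

# Synthetic R-R intervals in milliseconds (placeholder data, not flight recordings).
rr_ms = np.array([820, 815, 790, 760, 730, 745, 770, 800, 830, 840, 825, 810])

mean_hr_bpm = 60_000.0 / rr_ms.mean()            # mean heart rate from mean R-R interval

# RMSSD: root mean square of successive R-R differences, a short-term HRV index.
successive_diff = np.diff(rr_ms)
rmssd_ms = np.sqrt(np.mean(successive_diff ** 2))

print(f"Mean heart rate: {mean_hr_bpm:.1f} bpm")
print(f"RMSSD: {rmssd_ms:.1f} ms")
```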

Keywords: g force, aerobatic maneuvers, suborbital flight, hypoxia, commercial astronauts

Procedia PDF Downloads 104