Search results for: honeycomb sandwich panel
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1059

69 Highly Conducting Ultra Nanocrystalline Diamond Nanowires Decorated ZnO Nanorods for Long Life Electronic Display and Photo-Detectors Applications

Authors: A. Saravanan, B. R. Huang, C. J. Yeh, K. C. Leou, I. N. Lin

Abstract:

A new class of ultra-nanocrystalline diamond–graphite nano-hybrid (DGH) composite materials containing nano-sized diamond needles was developed via a low-temperature process. These diamond–graphite nano-hybrid composite nanowires exhibit high electrical conductivity and excellent electron field emission (EFE) properties. A few earlier reports mention that the addition of N2 gas to the growth plasma requires a high growth temperature (800°C) to activate the dopants and generate conductivity in the films; such a high growth temperature is not compatible with Si-based device fabrication. We used a novel bias-enhanced growth (BEG) MPECVD process to grow diamond films at a low substrate temperature (450°C). We observed that the BEG-N/UNCD films thus obtained possess a conductivity of σ = 987 S/cm, the highest ever reported for diamond films, along with excellent EFE properties. TEM investigation indicated that these films contain needle-like diamond grains about 5 nm in diameter and hundreds of nanometers in length, each encased in graphitic layers tens of nanometers thick. These material properties suit specific applications: high conductivity for electron field emitters, high robustness for microplasma cathodes, and high electrochemical activity for electrochemical sensing. Furthermore, the highly conducting DGH films were coated on vertically aligned ZnO nanorods; no prior nucleation or seeding step was needed thanks to the BEG method. Such a composite structure provides a significant enhancement in the field emission characteristics of the cold cathode, with an ultralow turn-on field of 1.78 V/μm and a high EFE current density of 3.68 mA/cm² (at 4.06 V/μm) due to the decoration of the DGH material on the ZnO nanorods.
The DGH/ZNRs-based device sustains stable emission for a much longer duration (562 min) than bare ZNRs (104 min) without any current degradation, because the diamond coating protects the ZNRs from ion bombardment when they are used as the cathode for microplasma devices. The potential application of these materials is demonstrated by plasma illumination measurements, which ignited the plasma at a minimum voltage of 290 V. The DGH/ZNRs-based photodetectors exhibit a much higher photoresponse (Iphoto/Idark = 1202) than bare ZNRs (229). Because electron transport from the ZNRs to the DGH through the graphitic layers is easy, the EFE properties of these materials are comparable to those of other commonly used field emitters such as carbon nanotubes and graphene. The DGH/ZNRs composite also offers potential for use in flat-panel displays, microplasma devices, and vacuum microelectronics.

Keywords: bias-enhanced nucleation and growth, ZnO nanorods, electrical conductivity, electron field emission, photo-detectors

Procedia PDF Downloads 344
68 Partisan Agenda Setting in Digital Media World

Authors: Hai L. Tran

Abstract:

Previous research on agenda setting effects has often focused on the top-down influence of the media at the aggregate level, while overlooking the capacity of audience members to select media and content to fit their individual dispositions. The decentralized characteristics of online communication and digital news create more choices and greater user control, thereby enabling each audience member to seek out a unique blend of media sources, issues, and elements of messages and to mix them into a coherent individual picture of the world. This study examines how audiences use media differently depending on their prior dispositions, thereby making sense of the world in ways that are congruent with their preferences and cognitions. The current undertaking is informed by theoretical frameworks from two distinct lines of scholarship. According to the ideological migration hypothesis, individuals choose to live in communities with ideologies like their own to satisfy their need to belong: one tends to move away from ZIP codes that are incongruent and toward those that are more aligned with one's ideological orientation. This geographical division along ideological lines has been documented in social psychology research. As an extension of agenda setting, the agendamelding hypothesis argues that audiences seek out information in attractive media and blend it into a coherent narrative that fits with a common agenda shared by others who think as they do and communicate with them about issues of public concern. In other words, individuals, through their media use, identify themselves with a group or community that they want to join. Accordingly, the present study hypothesizes that because ideology plays a role in pushing people toward a physical community that fits their need to belong, it also leads individuals to receive an idiosyncratic blend of media and to be influenced by such selective exposure in deciding which issues are more relevant.
Consequently, the individualized focus of media choices impacts how audiences perceive political news coverage and what they know about political issues. The research project utilizes recent data from The American Trends Panel survey conducted by Pew Research Center to explore the nuanced nature of agenda setting at the individual level and amid heightened polarization. Hypothesis testing is performed with both nonparametric and parametric procedures, including regression and path analysis. This research explores the media-public relationship from a bottom-up approach, considering the ability of active audience members to select among media in a larger process that entails agenda setting. It encourages agenda-setting scholars to further examine effects at the individual, rather than aggregate, level. In addition to its theoretical contributions, the study's findings are useful for media professionals in building and maintaining relationships with the audience, in light of changes in market share due to the spread of digital and social media.

Keywords: agenda setting, agendamelding, audience fragmentation, ideological migration, partisanship, polarization

Procedia PDF Downloads 31
67 Dys-Regulation of Immune and Inflammatory Response in in vitro Fertilization Implantation Failure Patients under Ovarian Stimulation

Authors: Amruta D. S. Pathare, Indira Hinduja, Kusum Zaveri

Abstract:

Implantation failure (IF), even after good-quality embryo transfer (ET) into a physiologically normal endometrium, is the main obstacle in in vitro fertilization (IVF). Various microarray studies have been performed worldwide to elucidate the genes requisite for endometrial receptivity. These studies have covered populations across different phases of the menstrual cycle, in both natural and stimulated cycles, in normal fertile women. Literature is also available on recurrent implantation failure patients versus oocyte donors in natural cycles. However, for the first time, we aim to study the genomics of endometrial receptivity in IF patients under controlled ovarian stimulation (COS), during which ET is generally practised in IVF. Endometrial gene expression profiles in IF patients (n=10) and oocyte donors (n=8) were compared during the window of implantation under COS by whole-genome microarray (Illumina platform). Enrichment analysis of the microarray data was performed to determine dysregulated biological functions and pathways using the Database for Annotation, Visualization and Integrated Discovery, v6.8 (DAVID), and enrichment mapping was performed with the Cytoscape software. Microarray results were validated by real-time PCR. Localization of genes related to immune response (Progestagen-Associated Endometrial Protein (PAEP), Leukaemia Inhibitory Factor (LIF), and Interleukin-6 Signal Transducer (IL6ST)) was detected by immunohistochemistry. The study revealed 418 genes downregulated and 519 genes upregulated in IF patients compared to healthy fertile controls. The gene ontology, pathway analysis, and enrichment mapping revealed significant downregulation of the activation and regulation of immune and inflammatory responses in IF patients under COS.
The lower expression of PAEP, LIF, and IL6ST in cases compared to controls, shown by real-time PCR and immunohistochemistry, suggests the functional importance of these genes. The study proved useful in uncovering a probable reason for implantation failure in our group of subjects: an imbalance of immune and inflammatory regulation. Based on the present findings, a panel of significantly dysregulated genes related to immune and inflammatory pathways needs to be further substantiated in a larger cohort, in natural as well as stimulated cycles. These genes could then be screened in IF patients during the window of implantation (WOI) before embryo transfer or any other immunological treatment, which would help estimate the regulation of the specific immune response during the WOI in a given patient. Appropriate treatment, whether activation or suppression of the immune response, could then be attempted in IF patients to enhance the receptivity of the endometrium.

Keywords: endometrial receptivity, immune and inflammatory response, gene expression microarray, window of implantation

Procedia PDF Downloads 124
66 Belarus Rivers Runoff: Current State, Prospects

Authors: Aliaksandr Volchak, Maryna Barushka

Abstract:

The territory of Belarus is well studied in terms of hydrology, but runoff fluctuations over time require more detailed research in order to forecast changes in river runoff in the future. Generally, river runoff is shaped by natural climatic factors, but anthropogenic impact has lately grown so large that it is comparable to natural processes in forming runoff. In Belarus, heavy anthropogenic pressure on the environment was caused by large-scale land reclamation in the 1960s. The lands of southern Belarus were reclaimed most extensively, which contributed to changes in runoff. Global warming also influences runoff: today we observe increases in air temperature, decreases in precipitation, and changes in wind velocity and direction. These result from cyclic climate fluctuations and, to some extent, from the growing concentration of greenhouse gases in the air. Climate change affects Belarus's water resources in many ways, with implications for the hydropower industry, other water-consuming industries, water transportation, agriculture, and flood risks. In this research, we assessed river runoff according to the scenarios of climate change and the global climate forecasts presented in the 4th and 5th Assessment Reports of the Intergovernmental Panel on Climate Change (IPCC), later specified and adjusted by experts from Vilnius Gediminas Technical University using a regional climatic model. In order to forecast changes in climate and runoff, we analyzed their changes from 1962 up to the present. This period is divided into two parts: the changes from 1986 to the present are compared with those observed from 1961 to 1985, a division that is common worldwide practice. The assessment revealed that, on average, changes in runoff are insignificant all over the country, apart from a minor increase of 0.5–4.0% in the catchments of the Western Dvina River and the north-eastern part of the Dnieper River.
However, changes in runoff have become more irregular, both across catchment areas and in inter-annual distribution over seasons and along river lengths. Rivers in southern Belarus (the Pripyat, the Western Bug, the Dnieper, the Neman) experience a reduction in runoff all year round except in winter, when their runoff increases; the Western Bug catchment is an exception, as its runoff decreases all year round. Significant changes are observed in spring: the runoff of spring floods decreases, but the flood arrives much earlier. Trends in runoff changes differ between spring, summer, and autumn. In summer in particular, we observe runoff reduction in the south and west of Belarus, with growth in the north and north-east. Our runoff forecast up to 2035 confirms the trend revealed in 1961–2015. According to it, there will in the future be a strong contrast between northern and southern Belarus and between small and large rivers. Although we predict only minor overall changes in runoff, it is quite possible that they will be uneven across seasons and particular months. In the south of Belarus, runoff may change especially in summer but decrease in the remaining seasons, whereas in the northern part runoff is predicted to change insignificantly.
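The abstract does not name the statistical procedure behind its trend assessment; a common nonparametric choice for annual hydrological series of this kind is the Mann–Kendall test. The sketch below is a hypothetical illustration on synthetic runoff data, not the authors' actual method.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Two-sided Mann-Kendall trend test (no tie correction, for brevity).
    Returns (S statistic, z score, p value)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    # variance of S under the null hypothesis of no trend
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)   # continuity correction
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

# synthetic annual runoff series, 1961-2015, with a weak upward drift
rng = np.random.default_rng(0)
years = np.arange(1961, 2016)
runoff = 100 + 0.05 * (years - 1961) + rng.normal(0, 2, len(years))
s, z, p = mann_kendall(runoff)
```

A significant positive z would indicate an increasing runoff trend over the period; in practice a tie correction and seasonal variants of the test are often added.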

Keywords: assessment, climate fluctuation, forecast, river runoff

Procedia PDF Downloads 101
65 A Strategy to Reduce Salt Intake: The Use of a Seasoning Obtained from Wine Pomace

Authors: María Luisa Gonzalez-SanJose, Javier Garcia-Lomillo, Raquel Del Pino, Miriam Ortega-Heras, Maria Dolores Rivero-Perez, Pilar Muñiz-Rodriguez

Abstract:

One of the most pressing problems related to the diet of Western societies is high salt intake. In Spain, salt intake is almost twice that recommended by the World Health Organization (WHO). Many negative health effects of high sodium intake have been described, with hypertension and cardiovascular and coronary diseases among the most important. For this reason, governments and other institutions are working on a gradual reduction of this consumption. Meat products have been described as the main processed products bringing salt into the diet, followed by snacks and savory crackers. Fortunately, the food industry has also become aware of this problem and is working intensely on it; in recent years it has attempted to reduce the salt content of processed products and is developing special lines with low sodium content. It is important to consider that processed foods are the main source of sodium in Western countries. One possible strategy to reduce the salt content of food is to find substitutes that emulate its taste properties without adding much sodium, or products that mask or replace salty sensations with other flavors and aromas. Multiple products have been proposed and used to this end. Potassium salts produce similar salty sensations without adding sodium; however, their intake should also be limited for health reasons, and some potassium salts show bitter notes. Other alternatives are the use of flavor enhancers, spices, aromatic herbs, sea-plant-derived products, etc. Wine pomace is rich in potassium salts and contains organic acids and other flavor-active substances; it could therefore be an interesting raw material for derived products useful as alternative 'seasonings'. Considering the above, the main aim of this study was to evaluate the possible use of a natural seasoning made from red wine pomace in two different foods, crackers and burgers.
The seasoning was made in the food technology pilot plant of the University of Burgos, where the studied crackers and burgers were also made. Various members of the University (students, teaching staff, and administrative personnel) tasted the products, and a trained panel evaluated salty intensity. In addition to potassium, the seasoning contains significant levels of dietary fiber and phenolic compounds, which also makes it interesting as a functional ingredient. Both burgers and crackers made with the seasoning showed better taste than those made without salt. They obviously had a lower sodium content than the normal formulations and were richer in potassium, antioxidants, and fiber; consequently, they showed lower Na/K ratios. All these features make the products healthier, especially for people with hypertension and other coronary dysfunctions.

Keywords: healthy foods, low salt, seasoning, wine pomace

Procedia PDF Downloads 251
64 Qualitative Narrative Framework as Tool for Reduction of Stigma and Prejudice

Authors: Anastasia Schnitzer, Oliver Rehren

Abstract:

Mental health has become an increasingly important topic in society in recent years, not least due to the challenges posed by the coronavirus pandemic. Along with this, the public has become increasingly aware that a lack of awareness and proper coping mechanisms may carry a notable risk of developing mental disorders. Yet there are still many biases against those affected, which are further connected to issues of stigmatization and societal exclusion. One of the main strategies to combat these forms of prejudice and stigma is to induce intergroup contact. More specifically, Intergroup Contact Theory states that engaging in certain types of contact with members of marginalized groups may be an effective way to improve attitudes towards these groups. However, due to persistent prejudice and stigmatization, affected individuals often do not dare to speak openly about their mental disorders, so intergroup contact often goes unnoticed. As a result, many people only experience conscious contact with individuals with a mental disorder through media. As an analogue to Intergroup Contact Theory, the Parasocial Contact Hypothesis proposes that repeated exposure to positive media representations of outgroup members can reduce negative prejudices and attitudes towards this outgroup. While there is a growing body of research on the merit of this mechanism, measurements often only consist of 'positive' or 'negative' parasocial contact conditions (or examine the valence or quality of previous contact with the outgroup), while more specific conditions are often neglected. The current study aims to tackle this shortcoming. By scrutinizing the potential of contemporary series as a narrative framework of high quality, we strive to elucidate more detailed aspects of beneficial parasocial contact, for the sake of reducing prejudice and stigma towards individuals with mental disorders.
Thus, a two-factorial between-subjects online panel study with three measurement points was conducted (N = 95). Participants were randomly assigned to one of two groups and watched episodes of a series with a narrative framework of either high (Quality-TV) or low quality (Continental-TV), with a one-week interval between episodes. Suitable series were determined with the help of a pretest. Prejudice and stigma towards people with mental disorders were measured at the beginning of the study, before and after each episode, and in a final follow-up one week after the last two episodes. Additionally, parasocial interaction (PSI), quality of contact (QoC), and transportation were measured several times. Based on these data, multivariate multilevel analyses were performed in R using the lavaan package. Latent growth models showed moderate to high increases in QoC and PSI as well as small to moderate decreases in stigma and prejudice over time. Multilevel path analysis with individual and group levels further revealed that a high-quality narrative framework leads to a higher-quality contact experience, which in turn leads to lower prejudice and stigma, with effects ranging from moderate to high.

Keywords: prejudice, quality of contact, parasocial contact, narrative framework

Procedia PDF Downloads 62
63 R&D Diffusion and Productivity in a Globalized World: Country Capabilities in an MRIO Framework

Authors: S. Jimenez, R.Duarte, J.Sanchez-Choliz, I. Villanua

Abstract:

There is a certain consensus in the economic literature about the factors that have influenced the historical differences in growth rates observed between developed and developing countries. However, it is less clear which elements have marked the different growth paths of developed economies in recent decades. R&D has always been seen as one of the major sources of technological progress and of productivity growth, which is directly influenced by technological developments. Following recent literature, we can say that 'innovation pushes the technological frontier forward' and also encourages future innovation through the creation of externalities. In other words, the productivity benefits of innovation are not fully appropriated by innovators; they also spread through the rest of the economy, encouraging absorptive capacities, which have become especially important in a context of increasing fragmentation of production. This paper aims to contribute to this literature in two ways: first, by exploring alternative indexes of R&D flows embodied in inter-country, inter-sectoral flows of goods and services (as an approximation to technology spillovers) that capture structural and technological characteristics of countries, and second, by analyzing the impact of direct and embodied R&D on the evolution of labor productivity at the country/sector level in recent decades. The traditional calculation through a multiregional input-output framework assumes that all countries have the same capability to absorb technology, but this is not the case: each country has different structural features and, as part of the literature claims, this implies different capabilities. In order to capture these differences, we propose to use weights based on specialization-structure indexes: one related to the specialization of countries in high-tech sectors and the other based on a dispersion index.
We propose these two measures because, in our understanding, country capabilities can be captured in different ways: through countries' specialization in knowledge-intensive sectors, such as Chemicals or Electrical Equipment, or through an intermediate technology effort spread across different sectors. Results suggest that country capabilities become increasingly important as trade openness increases. Moreover, if we focus on the country rankings, we observe that with high-tech-weighted embodied R&D, countries such as China, Taiwan, and Germany rise into the top five despite not having the highest R&D expenditure intensities, showing the importance of country capabilities. Additionally, through a fixed-effects panel data model, we show that embodied R&D is indeed important in explaining labor productivity increases, even more so than direct R&D investment. This indicates that globalization is more important than has been acknowledged until now. Admittedly, almost all related analyses consider the effect of direct R&D intensity at t-1 on economic growth. From our point of view, however, R&D evolves as a delayed flow, and some time must pass before its effects on the economy become visible, as some authors have already claimed. Our estimations tend to corroborate this hypothesis, obtaining a lag of 4-5 years.
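As a toy illustration of the embodied-R&D logic described above (propagating direct R&D intensities through the Leontief inverse, with a hypothetical capability weight applied to each sector), the following sketch uses an invented 3-sector economy; all coefficients are illustrative and are not data from the study.

```python
import numpy as np

# Hypothetical 3-sector economy: the technical coefficients A, the direct
# R&D intensities, the capability weights, and the final demand vector
# are all invented for illustration.
A = np.array([[0.10, 0.05, 0.00],
              [0.20, 0.10, 0.10],
              [0.05, 0.15, 0.05]])
rd_intensity = np.array([0.040, 0.010, 0.002])   # direct R&D per unit of output
capability   = np.array([1.20, 1.00, 0.80])      # specialization-based weights
final_demand = np.array([100.0, 200.0, 150.0])

# Leontief inverse: total (direct + indirect) output requirements
L = np.linalg.inv(np.eye(3) - A)
output = L @ final_demand                        # gross output needed for final demand

# R&D embodied in each sector's final demand, unweighted and capability-weighted
embodied          = (rd_intensity @ L) * final_demand
embodied_weighted = ((rd_intensity * capability) @ L) * final_demand
```

Summing `embodied` over sectors recovers `rd_intensity @ output`, i.e. the direct R&D applied to all the gross output the final demand requires; the weighted variant inflates or deflates each sector's contribution by the assumed capability index.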

Keywords: economic growth, embodied, input-output, technology

Procedia PDF Downloads 107
62 Social and Economic Aspects of Unlikely but Still Possible Welfare to Work Transitions from Long-Term Unemployed

Authors: Andreas Hirseland, Lukas Kerschbaumer

Abstract:

In Germany, over the past years there have constantly been about one million long-term unemployed who did not benefit from the prospering labor market, while most short-term unemployed did. Instead, they remain continuously dependent on welfare and occasional precarious short-term employment, experiencing in-work poverty. Long-term unemployment thus becomes a main obstacle to regular employment, especially if accompanied by other impediments such as low education (school/vocational), poor health (especially chronic illness), advanced age (older than fifty), immigrant status, motherhood, or caring for other relatives. Almost two-thirds of all welfare recipients have multiple impediments that hinder a successful transition from welfare back to sustainable and sufficient employment; hiring them is often considered too risky an investment by employers. Formal application schemes based on qualification certificates and vocational biographies might reduce employers' risks, but at the same time they are not helpful for the long-term unemployed and welfare recipients. The panel survey 'Labor Market and Social Security' (PASS; ~15,000 respondents in ~10,000 households), carried out by the Institute for Employment Research (the research institute of the German Federal Labor Agency), shows that their chances of getting back to work tend to fall to nil: only 66 cases of such unlikely transitions could be observed. In a sequential explanatory mixed-methods study, these very scarce 'success stories' of unlikely transitions from long-term unemployment to work were explored by qualitative inquiry: in-depth interviews with a focus on biography, accompanied by qualitative network techniques, in order to gain more detailed insight into the relevant actors involved in the processes that promote the transition from welfare receipt to work.
There is strong evidence that sustainable transitions are influenced by biographical resources, such as habits of network use, a set of informal skills, and particularly a resilient way of dealing with obstacles, combined with contextual factors, rather than by the job-placement procedures promoted by job centers according to activation rules or by following formal application paths. On the employers' side, small and medium-sized enterprises are often found to give job opportunities to a wider variety of applicants, often on the basis of a slowly but steadily growing relationship that leads to employment. These results make it possible to show and discuss some limitations of (German) activation policies targeting welfare dependency and long-term unemployment. Based on these findings, more supportive small-scale measures in the field of labor-market policy are suggested to help long-term unemployed people with multiple impediments to overcome their situation.

Keywords: against-all-odds, economic sociology, long-term unemployment, mixed-methods

Procedia PDF Downloads 214
61 Trafficking of Women and Children and Solutions to Combat It: The Case of Nigeria

Authors: Olatokunbo Yakeem

Abstract:

Human trafficking is a crime involving gross violations of human rights. Trafficking in persons is a severe socio-economic problem with both national and international dimensions. Human trafficking, or modern-day slavery, emanated from slavery and has been in existence since before the 6th century. Today, no country is exempt from the dehumanization of human beings, and as a result it has become an international issue. The United Nations (UN) presented an international protocol to fight human trafficking worldwide, which established the international definition of human trafficking. The protocol aims to prevent, suppress, and punish trafficking in persons, especially women and children, and links trafficking to transnational organised crime rather than migration. More than one hundred and fifty countries have enacted criminal and penal code trafficking legislation derived from the UN trafficking protocol. Sex trafficking is the most common form of exploitation of women and children; other forms of this crime involve exploiting vulnerable victims through forced labour, child involvement in warfare, domestic servitude, debt bondage, and organ removal for transplantation. Trafficking of women and children into sexual exploitation is more prevalent than any other type of exploitation. Trafficking of women and children can happen either internally or across borders, and it affects all kinds of people regardless of race, social class, culture, religion, and level of education. However, it is largely a gender-based issue directed against females. Furthermore, human trafficking can lead to life-threatening infections, mental disorders, lifetime trauma, and even the victim's death. The significance of this study is to explore how the root causes of trafficking of women and children in Nigeria are rooted in poverty, the entrusting of children to relatives and friends, corruption, globalization, weak legislation, and ignorance.
The importance of this study is to establish how national, regional, and international organisations are using the '3 Ps' (Protection, Prevention, and Prosecution) to tackle human trafficking. The methodological approach for this study will be a qualitative paradigm; the rationale behind this selection is that a qualitative method can identify the phenomenon and interpret the findings comprehensively. Data collection will take the form of semi-structured in-depth interviews by telephone and email, and the researcher will analyse the data with a descriptive thematic analysis using complete coding. In summary, this study aims to recommend that the Nigerian federal government include human trafficking as a subject in the educational curriculum, as an early intervention to prevent children from being coerced by criminal gangs. The research also aims to identify the root causes of trafficking of women and children and to examine the effectiveness of the strategies in place to eradicate human trafficking globally. In the same vein, a further objective is to investigate how anti-trafficking bodies such as law enforcement agencies and NGOs collaborate to tackle the upsurge in human trafficking.

Keywords: children, Nigeria, trafficking, women

Procedia PDF Downloads 163
60 Measurement of Influence of the COVID-19 Pandemic on Efficiency of Japan’s Railway Companies

Authors: Hideaki Endo, Mika Goto

Abstract:

The global outbreak of the COVID-19 pandemic has seriously affected railway businesses. The number of railway passengers decreased due to a decline in the number of commuters and business travelers, who avoided crowded trains, and a sharp drop in inbound tourists visiting Japan. This has affected not only railway businesses but also related businesses, including hotels, leisure businesses, and retail businesses in station buildings. In 2021, the companies were divided into profitable and loss-making ones; this division suggests that railway companies, particularly loss-making ones, needed to reduce operational inefficiency. To measure the impact of COVID-19 and discuss sustainable management strategies for railway companies, we examine the cost inefficiency of Japanese listed railway companies by applying stochastic frontier analysis (SFA) to their operational and financial data. First, we employ the stochastic frontier cost function approach to measure inefficiency. The cost frontier function is formulated as a Cobb–Douglas type, and we estimate its parameters and the inefficiency terms. This study uses panel data comprising 26 Japanese listed railway companies from 2005 to 2020. This period includes several events that deteriorated the business environment, such as the financial crisis of 2007-2008 and the Great East Japan Earthquake of 2011, and we compare their impacts with that of the COVID-19 pandemic after 2020. Second, we identify the characteristics of the best-practice railway companies and examine the drivers of cost inefficiency. Third, we analyze the factors influencing cost inefficiency by comparing the profiles of the top 10 railway companies and the others before and during the pandemic. Finally, we examine the relationship between cost inefficiency and the implementation of efficiency measures at each railway company. We obtained the following four findings.
First, most Japanese railway companies showed their lowest cost inefficiency (most efficient) in 2014 and their highest in 2020 (least efficient), during the COVID-19 pandemic. The second-worst year was 2009, reflecting the financial crisis. However, we did not observe a significant impact of the 2011 Great East Japan Earthquake, because no railway company except JR-EAST had the affected region in its operating area. Second, the best-practice railway companies are KEIO and TOKYU; the main reason for their good performance is that both operate in and near the densely populated Tokyo metropolitan area. Third, we found that non-best-practice companies had a larger decrease in passenger-kilometers than best-practice companies, indicating that passengers made fewer long-distance trips as they refrained from inter-prefectural travel during the pandemic. Finally, we found that companies that implemented more efficiency-improvement measures had higher cost efficiency and made effective use of their customer databases through proactive DX (digital transformation) investments in marketing and asset management.
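As an illustration of the cost-frontier idea, the sketch below fits a Cobb–Douglas cost function on synthetic data and recovers cost-efficiency scores via corrected OLS, a simple stand-in for the maximum-likelihood SFA estimation the abstract describes; all variable names and numbers are invented, not the study's specification.

```python
import numpy as np

# Corrected-OLS approximation to a Cobb-Douglas cost frontier:
#   ln C = b0 + b1*ln Q + b2*ln w + u + v,
# with u >= 0 one-sided inefficiency and v symmetric noise.
rng = np.random.default_rng(42)
n = 200
ln_q = rng.normal(5.0, 1.0, n)         # log output (e.g., passenger-km)
ln_w = rng.normal(2.0, 0.3, n)         # log input price
u = np.abs(rng.normal(0, 0.2, n))      # inefficiency (half-normal)
v = rng.normal(0, 0.05, n)             # statistical noise
ln_c = 1.0 + 0.8 * ln_q + 0.5 * ln_w + u + v   # synthetic log cost

# OLS fit, then shift the fitted line down to the best observation
X = np.column_stack([np.ones(n), ln_q, ln_w])
beta, *_ = np.linalg.lstsq(X, ln_c, rcond=None)
resid = ln_c - X @ beta
frontier_resid = resid - resid.min()   # distance above the shifted frontier
efficiency = np.exp(-frontier_resid)   # cost-efficiency score in (0, 1]
```

Unlike full SFA, corrected OLS attributes the entire residual above the frontier to inefficiency rather than decomposing it into noise and inefficiency, so it is only a rough sketch of the approach.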

Keywords: COVID-19 pandemic, stochastic frontier analysis, railway sector, cost efficiency

Procedia PDF Downloads 36
59 Gender Quotas in Italy: Effects on Corporate Performance

Authors: G. Bruno, A. Ciavarella, N. Linciano

Abstract:

The proportion of women in boardrooms has traditionally been low around the world. Over the last decades, several jurisdictions opted for active intervention, which triggered tangible progress in female representation. In Europe, many countries have implemented boardroom diversity policies in the form of legal quotas (Norway, Italy, France, Germany) or governance code amendments (United Kingdom, Finland). Policy actions rest, among other things, on the assumption that gender-balanced boards result in improved corporate governance and performance. The investigation of the relationship between female boardroom representation and firm value is therefore key on policy grounds. The evidence gathered so far, however, has not produced conclusive results, partly because empirical studies on the impact of voluntary female board representation had to tackle endogeneity, due either to differences in unobservable characteristics across firms that may affect their gender policies and governance choices, or to potential reverse causality. In this paper, we study the relationship between the presence of female directors and corporate performance in Italy, where Law 120/2011, which introduced mandatory quotas, constitutes an exogenous shock to board composition that may make it possible to overcome reverse causality. Our sample comprises firms listed on the Italian Stock Exchange and the members of their boards of directors over the period 2008–2016. The study relies on two different databases, both drawn from CONSOB, referring respectively to directors' and companies' characteristics. On methodological grounds, information on directors is treated at the individual level, by matching each company with its directors every year. This allows us to identify all time-invariant, possibly correlated, elements of latent heterogeneity that vary across firms and board members, such as a firm's immaterial assets and the directors' skills and commitment.
Moreover, we estimate dynamic panel data specifications, thus accommodating non-instantaneous adjustments of firm performance and gender diversity to institutional and economic changes. In all cases, robust inference is carried out taking into account the two-dimensional clustering of observations over companies and over directors. The study shows the existence of a U-shaped impact of the percentage of women in the boardroom on profitability, as measured by Return on Equity (ROE) and Return on Assets (ROA). Female representation yields a positive impact when it exceeds a certain threshold, ranging between about 18% and 21% of the board members, depending on the specification. Given the average board size, i.e., around ten members over the period considered, this implies that a significant effect of gender diversity on corporate performance starts to emerge when at least two women hold a seat. This evidence supports the idea underpinning critical mass theory, i.e., the hypothesis that women may influence corporate outcomes only once their number on the board reaches a critical mass.
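The U-shaped relationship can be illustrated with a minimal sketch: regress ROE on the share of women W and its square, and the point where the marginal effect turns positive is the vertex of the parabola, -b/(2c). The coefficients below are purely hypothetical, chosen only so that the turning point falls inside the 18–21% range reported above; they are not estimates from the study.

```python
# Hypothetical quadratic specification: ROE = a + b*W + c*W**2
# (W = share of women on the board, in percent).
a, b, c = 8.0, -0.40, 0.01  # illustrative coefficients, not the study's estimates

def roe(w):
    """Predicted ROE at a given female board share w (percent)."""
    return a + b * w + c * w ** 2

# The marginal effect d(ROE)/dW = b + 2*c*W changes sign at the vertex.
threshold = -b / (2 * c)
print(threshold)  # 20.0, inside the 18-21% range discussed in the abstract
```

With a board of ten members, a threshold near 20% corresponds to the two-seat critical mass the abstract describes.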

Keywords: gender diversity, quotas, firm performance, corporate governance

Procedia PDF Downloads 146
58 The Readaptation of the Subscale 3 of the NLit-IT (Nutrition Literacy Assessment Instrument for Italian Subjects)

Authors: Virginia Vettori, Chiara Lorini, Vieri Lastrucci, Giulia Di Pisa, Alessia De Blasi, Sara Giuggioli, Guglielmo Bonaccorsi

Abstract:

The design of the Nutrition Literacy Assessment Instrument (NLit) responds to the need for a tool to adequately assess the construct of nutrition literacy (NL), which is strictly connected to diet quality and nutritional health status. The NLit was originally developed and validated in the US context and was recently validated for Italian people as well (NLit-IT), involving a sample of N = 74 adults. The results of the cross-cultural adaptation of the tool confirmed its validity, since the level of NL contributed to predicting the level of adherence to the Mediterranean Diet (convergent validity). Additionally, the results proved that the internal consistency and reliability of the NLit-IT were good (Cronbach's alpha (ρT) = 0.78; 95% CI, 0.69–0.84; Intraclass Correlation Coefficient (ICC) = 0.68; 95% CI, 0.46–0.85). However, Subscale 3 of the NLit-IT, "Household Food Measurement", showed lower values of ρT and ICC (ρT = 0.27; 95% CI, 0.1–0.55; ICC = 0.19; 95% CI, 0.01–0.63) than the entire instrument. Subscale 3 includes nine items, each consisting of a written question and a picture of the corresponding meal. In particular, items 2, 3, and 8 of Subscale 3 had the lowest share of correct answers. The purpose of the present study was to identify the factors that influenced the internal consistency and reliability of Subscale 3 of the NLit-IT using a focus group methodology. A panel of seven experts was formed, involving professionals in the fields of public health nutrition, dietetics, and health promotion, all of whom were trained on the concepts of nutrition literacy and food appearance. A member of the group led the discussion, which focused on identifying the reasons for the low levels of reliability and internal consistency. The members of the group discussed the comprehensibility of the items and how they could be readapted.
From the discussion, it emerged that the written questions were clear and easy to understand, but the representations of the meals needed to be improved. Firstly, it was decided to introduce a fork or a spoon as a size reference to make the food portion size easier to judge (items 1, 4, and 8). Additionally, the flat plate in items 3 and 5 should be replaced with a soup plate because, in the Italian national context, pasta and rice are commonly eaten from this kind of plate. Secondly, specific measures should be considered for some kinds of food, such as a brick of yogurt instead of a cup of yogurt (items 1 and 4). Lastly, it was decided to retake the photos of the meals using professional photographic techniques. In conclusion, we noted that the graphical representation of the items strongly influenced participants' comprehension of the questions; moreover, the research group agreed that the level of knowledge about nutrition and food portion sizes is low in the general population.

Keywords: nutritional literacy, cross cultural adaptation, misinformation, food design

Procedia PDF Downloads 139
57 Nanoparticle Exposure Levels in Indoor and Outdoor Demolition Sites

Authors: Aniruddha Mitra, Abbas Rashidi, Shane Lewis, Jefferson Doehling, Alexis Pawlak, Jacob Schwartz, Imaobong Ekpo, Atin Adhikari

Abstract:

Working or living close to demolition sites can increase the risk of dust-related health problems. Demolition of concrete buildings may produce crystalline silica dust, which is associated with a broad range of respiratory diseases, including silicosis and lung cancer. Previous studies demonstrated significant associations between demolition dust exposure and an increased incidence of mesothelioma, or asbestos cancer. Dust is a generic term for minute solid particles, typically <500 µm in diameter. Dust particles at demolition sites vary over a wide range of sizes. Larger particles tend to settle out of the air, whereas smaller and lighter particles remain dispersed for long periods and pose sustained exposure risks. Submicron ultrafine particles and nanoparticles are respirable deep into the alveoli, beyond the body's natural respiratory clearance mechanisms such as cilia and mucous membranes, and are likely to be retained in the lower airways. To our knowledge, how various demolition tasks release nanoparticles is largely unknown, as previous studies mostly focused on coarse dust, PM2.5, and PM10. The general belief is that the dust generated during demolition tasks consists mostly of large particles formed through crushing, grinding, or sawing of concrete and wooden structures. Therefore, little consideration has been given to the generated submicron ultrafine and nanoparticles and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated the cytotoxicity of nanoparticles on lung epithelial cells. The above-described knowledge gaps were addressed in this study using a newly developed nanoparticle monitor at two adjacent indoor and outdoor building demolition sites in southern Georgia.
Nanoparticle levels were measured (n = 10) with a TSI NanoScan SMPS Model 3910 at four distances (5, 10, 15, and 30 m) from the work location as well as at control sites. Temperature and relative humidity levels were recorded. Indoor demolition work included acetylene torch cutting, masonry drilling, ceiling panel removal, and other miscellaneous tasks, whereas outdoor demolition work included acetylene torch cutting and skid-steer loader use to remove an HVAC system. Concentration ranges of nanoparticles of 13 particle sizes at the indoor demolition site were: 11.5 nm: 63 – 1,054/cm³; 15.4 nm: 170 – 1,690/cm³; 20.5 nm: 321 – 730/cm³; 27.4 nm: 740 – 3,255/cm³; 36.5 nm: 1,220 – 17,828/cm³; 48.7 nm: 1,993 – 40,465/cm³; 64.9 nm: 2,848 – 58,910/cm³; 86.6 nm: 3,722 – 62,040/cm³; 115.5 nm: 3,732 – 46,786/cm³; 154 nm: 3,022 – 21,506/cm³; 205.4 nm: 12 – 15,482/cm³; 273.8 nm:

Keywords: demolition dust, industrial hygiene, aerosol, occupational exposure
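The size-resolved maxima reported above can be tabulated to locate the modal size bin. The numbers below are the upper-bound indoor concentrations transcribed from the abstract; the bins from 273.8 nm onward are truncated in the source and are therefore omitted.

```python
# Upper-bound indoor concentrations (particles/cm^3) keyed by bin midpoint (nm),
# transcribed from the abstract; truncated bins (273.8 nm onward) are omitted.
max_conc = {
    11.5: 1054, 15.4: 1690, 20.5: 730, 27.4: 3255, 36.5: 17828,
    48.7: 40465, 64.9: 58910, 86.6: 62040, 115.5: 46786,
    154.0: 21506, 205.4: 15482,
}

# Modal bin: the particle size with the highest observed peak concentration.
mode_nm = max(max_conc, key=max_conc.get)
print(mode_nm, max_conc[mode_nm])  # 86.6 62040 -> peak in the ~65-115 nm range
```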

Procedia PDF Downloads 404
56 Inhibition of Influenza Replication through the Restrictive Factors Modulation by CCR5 and CXCR4 Receptor Ligands

Authors: Thauane Silva, Gabrielle do Vale, Andre Ferreira, Marilda Siqueira, Thiago Moreno L. Souza, Milene D. Miranda

Abstract:

The exposure of A(H1N1)pdm09-infected epithelial cells (HeLa) to HIV-1 viral particles, or to its gp120, enhanced the content of interferon-induced transmembrane protein 3 (IFITM3), a viral restriction factor (RF), resulting in a decrease in influenza replication. gp120 binds to the CCR5 (R5) or CXCR4 (X4) cell receptors during HIV-1 infection. It is therefore possible that the endogenous ligands of these receptors also modulate the expression of IFITM3 and other cellular factors that restrict influenza virus replication. Thus, the aim of this study is to analyze the role of the cellular receptors R5 and X4 in modulating RFs so as to inhibit replication of the influenza virus. A549 cells were treated with 2x the effective dose (ED50) of the endogenous R5 or X4 receptor agonists CCL3 (20 ng/mL), CCL4 (10 ng/mL), CCL5 (10 ng/mL), and CXCL12 (100 ng/mL), or with the exogenous agonists gp120 Bal-R5, gp120 IIIB-X4, and their mutants (5 µg/mL). Interferon α (10 ng/mL) and oseltamivir (60 nM) were used as controls. At 24 h after agonist exposure, the cells were infected with influenza A(H3N2) virus at an MOI (multiplicity of infection) of 2 for 1 h. Then, 24 h post infection, the supernatant was harvested and the viral titre was evaluated by qRT-PCR. To evaluate IFITM3 and SAM and HD domain containing deoxynucleoside triphosphate triphosphohydrolase 1 (SAMHD1) protein levels, A549 cells were exposed to the agonists for 24 h, and the monolayer was lysed with Laemmli buffer for western blot (WB) assays or fixed for indirect immunofluorescence (IFI) assays. In addition, we analyzed the modulation of other RFs in A549 cells, 24 h after agonist exposure, with a customized RT² Profiler Polymerase Chain Reaction Array. We also performed a functional assay in which A549 cells knocked down for SAMHD1 by small interfering RNA (siRNA) were infected with A(H3N2). In addition, the cells were treated with guanosine to assess the regulatory role of dNTPs via SAMHD1.
We found that the R5 and X4 agonists inhibited influenza replication by 54 ± 9%. We observed a four-fold increase in SAMHD1 transcripts in the RF mRNA quantification panel. At 24 h after agonist exposure, we did not observe an increase in IFITM3 protein levels by WB or IFI assays, but we observed an upregulation of up to three-fold in the protein content of SAMHD1 in agonist-exposed A549 cells. Moreover, influenza replication was enhanced by 20% in cell cultures in which SAMHD1 was knocked down. Guanosine treatment of cells exposed to R5 ligands further inhibited influenza virus replication, suggesting that the inhibitory mechanism may involve activation of the deoxynucleotide triphosphohydrolase activity of SAMHD1. Thus, our data show for the first time a direct relationship between SAMHD1 and inhibition of influenza replication, and provide perspectives for new studies on modulating signaling through cellular receptors to induce proteins of great importance in the control of infections relevant to public health.

Keywords: chemokine receptors, gp120, influenza, virus restriction factors

Procedia PDF Downloads 106
55 Seismic Assessment of Flat Slab and Conventional Slab System for Irregular Building Equipped with Shear Wall

Authors: Muhammad Aji Fajari, Ririt Aprilin Sumarsono

Abstract:

Particular instability of a building structure under lateral load (e.g., earthquake) arises from irregularity in the vertical and horizontal directions, as stated in SNI 03-1726-2012. The conventional slab is considered to contribute little to the stability of the structure, unless a special slab system such as a flat slab is taken into account. In this paper, the flat slab system of Sequis Tower, located in South Jakarta, is assessed for its performance under earthquake loading. The building has six basement floors where the flat slab system is applied. The flat slab system is the main focus of this paper and is compared with a conventional slab system for its performance under earthquake loading. Regarding the floor plan of the Sequis Tower basement, the re-entrant corner of this building is 43.21%, which exceeds the 15% limit stated in ASCE 7-05. Based on that, horizontal irregularity is a concern for the analysis, whereas vertical irregularity does not exist for this building. A flat slab system is a system in which the slabs are supported by drop panels with shear heads instead of beams. The major advantages of the flat slab are a reduced structural dead load, the removal of beams so that the clear height can be maximized, and the provision of lateral resistance under lateral load. However, deflection at the middle strip and punching shear must be considered in detail. Torsion usually appears when a structural member under flexure, such as a beam or column, has an improper dimensional ratio; considering a flat slab as an alternative slab system helps keep collapse due to torsion in check. A common seismic load-resisting system in buildings is the shear wall. Installing shear walls makes the structural system stronger and stiffer, resulting in reduced displacement under earthquake loading.
The eccentric location of the shear walls in this building resolves the instability due to horizontal irregularity, so that the earthquake load can be absorbed. Linear dynamic analyses under earthquake load, such as response spectrum and time history analysis, are suitable given the irregularity, so that the performance of the structure can be properly observed. The response spectrum data for South Jakarta, with a PGA of 0.389 g, form the basis for idealizing the earthquake load in the several load combinations stated in SNI 03-1726-2012. The analysis yields basic seismic parameters such as the period, displacement, and base shear of the system, and the internal forces of the critical members are also presented. The predicted period of the structure under earthquake load is 0.45 s, but the period will differ between the two slab systems analyzed. The flat slab system is expected to perform better in terms of displacement than the conventional slab system, owing to its higher stiffness contribution to the overall building system. In line with the displacement, the slab deflection is expected to be smaller for the flat slab than for the conventional slab. Hence, the shear wall is more effective at strengthening the conventional slab system than the flat slab system.

Keywords: conventional slab, flat slab, horizontal irregularity, response spectrum, shear wall

Procedia PDF Downloads 168
54 Identifying Confirmed Resemblances in Problem-Solving Engineering, Both in the Past and Present

Authors: Colin Schmidt, Adrien Lecossier, Pascal Crubleau, Simon Richir

Abstract:

Introduction: The widespread availability of artificial intelligence, exemplified by Generative Pre-trained Transformers (GPT) relying on large language models (LLMs), has caused a seismic shift in the realm of knowledge. Everyone now has the capacity to learn quickly whether these models serve them well or not. Today, conversational AI such as ChatGPT is grounded in neural transformer models, a significant advance in natural language processing enabled by the emergence of renowned LLMs built on the transformer architecture. Inventiveness of an LLM: OpenAI's GPT-3 stands as a premier LLM, capable of handling a broad spectrum of natural language processing tasks without fine-tuning, reliably producing text that reads as if authored by humans. However, even with an understanding of how LLMs respond to questions, there may lurk behind OpenAI's seemingly endless responses an inventive model yet to be uncovered; some unforeseen reasoning may emerge from the interconnection of neural networks. Just as a Soviet researcher in the 1940s questioned the existence of common factors in inventions, enabling an understanding of how and according to what principles humans create them, it is equally legitimate today to explore whether the solutions provided by LLMs to complex problems also share common denominators. Theory of Inventive Problem Solving (TRIZ): We revisit some fundamentals of TRIZ and how Genrich Altshuller was inspired by the idea that inventions and innovations are essential means of solving societal problems. It is crucial to note that traditional problem-solving methods often fall short of discovering innovative solutions. The design team is frequently hampered by psychological barriers stemming from confinement within a highly specialized knowledge domain that is difficult to question. We presume that ChatGPT utilizes the TRIZ 40 inventive principles.
Hence, the objective of this research is to decipher the inventive model of LLMs, particularly that of ChatGPT, through a comparative study. This will enhance the efficiency of sustainable innovation processes and shed light on how the construction of a solution to a complex problem is devised. Description of the Experimental Protocol: To confirm or reject our main hypothesis, namely whether ChatGPT uses TRIZ, we follow a stringent protocol, which we detail, drawing on insights from a panel of two TRIZ experts. Conclusion and Future Directions: In this endeavor, we sought to comprehend how an LLM like GPT addresses complex challenges. Our goal was to analyze the inventive model of the responses provided by an LLM, specifically ChatGPT, by comparing it to an existing standard model: TRIZ 40. Problem solving remains the main focus of our endeavours.

Keywords: artificial intelligence, TRIZ, ChatGPT, inventiveness, problem-solving

Procedia PDF Downloads 34
53 Climate Change Law and Transnational Corporations

Authors: Manuel Jose Oyson

Abstract:

The Intergovernmental Panel on Climate Change (IPCC) warned in its most recent report that the entire world must "both mitigate and adapt to climate change if it is to effectively avoid harmful climate impacts." The IPCC observed "with high confidence" a more rapid rise in total anthropogenic greenhouse gas (GHG) emissions from 2000 to 2010 than in the past three decades, which "were the highest in human history"; if left unchecked, this will entail a continuing process of global warming and can alter the climate system. Current efforts to respond to the threat of global warming, such as the United Nations Framework Convention on Climate Change and the Kyoto Protocol, have focused on states and fail to involve transnational corporations (TNCs), which are responsible for a vast amount of GHG emissions. Involving TNCs in the search for solutions to climate change is consistent with the acknowledgment by contemporary international law of an international role for other international persons, including TNCs, and departs from the traditional "state-centric" response to climate change. Shifting the focus on GHG emissions away from states recognises that the activities of TNCs "are not bound by national borders" and that the international movement of goods meets the needs of consumers worldwide. Although there is no legally binding instrument that covers TNC activities or legal responsibilities generally, TNCs have increasingly been held legally responsible under international law for violations of human rights, exploitation of workers, and environmental damage, but not for climate change damage. Imposing on TNCs a legally binding obligation to reduce their GHG emissions, or legal liability for climate change damage, is arguably formidable and unlikely in the absence of a recognisable source of obligation in international or municipal law.
Instead, recourse to "soft law" and non-legally binding instruments may be a way forward for TNCs to reduce their GHG emissions and help address climate change. Various studies have noted the positive effects of voluntary approaches, and TNCs have in recent decades voluntarily committed to "soft law" international agreements. This development reflects a growing recognition among corporations in general, and TNCs in particular, of their corporate social responsibility (CSR). While CSR used to be the domain of "small, offbeat companies", it has now become part of mainstream organizational practice. The paper argues that TNCs must voluntarily commit to reducing their GHG emissions and to helping address climate change as part of their CSR. First, as a serious "global commons problem", climate change requires international cooperation from multiple actors, including TNCs. Second, TNCs are not innocent bystanders but are responsible for a large share of GHG emissions across their vast global operations. Third, TNCs have the capability to help solve the problem of climate change. Even assuming, arguendo, that TNCs did not strongly contribute to the problem of climate change, society would have valid expectations that they use their capabilities, knowledge base, and advanced technologies to help address the problem. It would seem unthinkable for TNCs to do nothing while the global environment fractures.

Keywords: climate change law, corporate social responsibility, greenhouse gas emissions, transnational corporations

Procedia PDF Downloads 326
52 Implementation of Deep Neural Networks for Pavement Condition Index Prediction

Authors: M. Sirhan, S. Bekhor, A. Sidess

Abstract:

In-service pavements deteriorate over time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in serviceability and structural performance. Consequently, proper maintenance and rehabilitation (M&R) are necessary to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes the roads most in need of maintenance and rehabilitation and recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. Pavement performance and condition are usually quantified and evaluated by different types of roughness-based and distress-based indices. Examples of such indices are the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMSs as an indicator of the extent of distresses on the pavement surface. PCI values range between 0 and 100, where 0 and 100 represent a highly deteriorated pavement and a newly constructed pavement, respectively. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANNs), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, owing to their efficiency in capturing non-linear relationships and dealing with large amounts of uncertain data.
Typical regression models, which require a pre-defined relationship, can be replaced by ANNs, which have been found to be an appropriate tool for predicting the various pavement performance indices as functions of different factors. Accordingly, the objective of the present study is to develop and train an ANN model that predicts PCI values. The model's input consists of the percentage areas of 11 damage types: alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off, each at three severity levels (low, medium, high). The developed model was trained on 536,000 samples and tested on 134,000 samples, collected and prepared by the National Transport Infrastructure Company. The predicted results showed satisfactory agreement with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into the PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are damages related to alligator cracking, swelling, rutting, and potholes.
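The input/output mapping described above (11 damage types x 3 severity levels = 33 features in, one bounded PCI value out) can be sketched as a small feedforward network. This is a minimal illustration with untrained random weights, not the study's trained model; the layer sizes and activation choices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 11 damage types x 3 severity levels = 33 input features (percent of area).
N_IN, N_HID = 33, 16

# Illustrative (untrained) weights; the study's trained parameters are not public.
W1 = rng.normal(0, 0.1, (N_IN, N_HID))
b1 = np.zeros(N_HID)
W2 = rng.normal(0, 0.1, (N_HID, 1))
b2 = np.zeros(1)

def predict_pci(damage_pct):
    """One forward pass: damage percentage areas in, PCI-like score (0-100) out."""
    h = np.maximum(0, damage_pct @ W1 + b1)       # ReLU hidden layer
    # Sigmoid output scaled to the PCI range, so predictions stay in (0, 100).
    return 100 / (1 + np.exp(-(h @ W2 + b2).ravel()))

x = rng.uniform(0, 5, (4, N_IN))  # four example pavement sections
print(predict_pci(x))             # four values, each inside (0, 100)
```

In a real setting the weights would of course be fitted to the labeled samples (e.g. by backpropagation); the sketch only shows the shape of the mapping.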

Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction

Procedia PDF Downloads 108
51 Impact of Ethiopia's Productive Safety Net Program on Household Dietary Diversity and Child Nutrition in Rural Ethiopia

Authors: Tagel Gebrehiwot, Carolina Castilla

Abstract:

Food insecurity and child malnutrition are among the most critical issues in Ethiopia. Accordingly, different reform programs have been carried out to improve household food security. The Food Security Program (FSP), among others, was introduced to combat the persistent food insecurity problem in the country. The FSP includes a safety net component called the Productive Safety Net Program (PSNP), started in 2005. The goal of the PSNP is to offer multi-annual transfers, such as food, cash, or a combination of both, to chronically food-insecure households in order to break the cycle of food aid. Food or cash transfers are the main elements of the PSNP. The case for cash transfers builds on Sen's analysis of 'entitlement to food', in which he argues that restoring access to food by improving demand is a more effective and sustainable response to food insecurity than food aid. Cash-based schemes offer greater choice in the use of the transfer and can allow a greater diversity of food choices. Dietary diversity has been shown to be positively associated with the key pillars of food security and is therefore considered a measure of a household's capacity to access a variety of food groups. Studies of dietary diversity among Ethiopian rural households are rare, and there is still a dearth of evidence on the impact of the PSNP on household dietary diversity. In this paper, we examine the impact of Ethiopia's PSNP on household dietary diversity and child nutrition using panel household surveys, employing different methodologies for identification. We exploit the exogenous increase in kebeles' PSNP budgets to identify the effect of the change in the amount households received in transfers between 2012 and 2014 on the change in dietary diversity. We use three different approaches to identify this effect: two-stage least squares, reduced-form IV, and generalized propensity score matching using a continuous treatment.
The results indicate that the increase in PSNP transfers between 2012 and 2014 had no effect on household dietary diversity. Estimates for different household dietary indicators reveal that the effect of the change in the cash transfer received by the household is statistically and economically insignificant. This finding is robust to different identification strategies and to the inclusion of control variables that determine eligibility to become a PSNP beneficiary. To identify the effect of PSNP participation on children's height-for-age and stunting, we use a difference-in-differences approach. We use children between 2 and 5 years of age in 2012 as a baseline because, by that age, long-term growth failure has already manifested. The treatment group comprises children aged 2 to 5 in 2014 in PSNP participant households. While changes in height-for-age take time, two years of additional transfers among children who were not yet born, or were under the age of 2–3, in 2012 have the potential to make a considerable impact on reducing the prevalence of stunting. The results indicate that participation in the PSNP had no effect on child nutrition measured as height-for-age or the probability of being stunted, suggesting that the PSNP should be designed in a more nutrition-sensitive way.
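The difference-in-differences logic used for the height-for-age outcome can be sketched on toy numbers; all four means below are invented purely to show the mechanics, not data from the study.

```python
# Mean height-for-age z-scores by group and year; all values are illustrative.
means = {
    ("psnp",    "2012"): -1.90,  # treated children, baseline
    ("psnp",    "2014"): -1.70,  # treated children, follow-up
    ("control", "2012"): -1.85,
    ("control", "2014"): -1.68,
}

def diff_in_diff(m):
    """DiD estimate: change among the treated minus change among controls."""
    return (m[("psnp", "2014")] - m[("psnp", "2012")]) - \
           (m[("control", "2014")] - m[("control", "2012")])

print(round(diff_in_diff(means), 2))  # 0.03: close to zero, i.e. no detectable effect
```

Subtracting the control group's change nets out time trends common to both groups, which is what lets the remaining difference be read as the program effect.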

Keywords: continuous treatment, dietary diversity, impact, nutrition security

Procedia PDF Downloads 303
50 Defining the Tipping Point of Tolerance to CO₂-Induced Ocean Acidification in Larval Dusky Kob Argyrosomus japonicus (Pisces: Sciaenidae)

Authors: Pule P. Mpopetsi, Warren M. Potts, Nicola James, Amber Childs

Abstract:

Increased CO₂ production and the consequent ocean acidification (OA) have been identified as among the greatest threats to both calcifying and non-calcifying marine organisms. Traditionally, marine fishes, as non-calcifying organisms, were considered to have a higher tolerance to near-future OA conditions owing to their well-developed ion regulatory mechanisms. However, recent studies provide evidence to suggest that they may not be as resilient to near-future OA conditions as previously thought. In addition, the early life stages of marine fishes are thought to be less tolerant than juveniles and adults of the same species, as they lack well-developed ion regulatory mechanisms for maintaining homeostasis. This study focused on the effects of near-future OA on larval Argyrosomus japonicus, an estuarine-dependent marine fish species, in order to identify the tipping point of tolerance for the larvae of this species. Larval A. japonicus were reared from the egg up to 22 days after hatching (DAH) under three treatments: (pCO₂ 353 µatm; pH 8.03), (pCO₂ 451 µatm; pH 7.93), and (pCO₂ 602 µatm; pH 7.83), corresponding to levels predicted to occur in the years 2050, 2068, and 2090, respectively, under the Intergovernmental Panel on Climate Change (IPCC) Representative Concentration Pathway (RCP) 8.5 scenario. Size-at-hatch, growth, development, and metabolic responses (standard and active metabolic rates and metabolic scope) were assessed and compared between the three treatments throughout the rearing period. Five early larval life stages (hatchling to flexion/post-flexion) were identified by the end of the experiment. There were no significant differences in size-at-hatch (p > 0.05), development, active metabolic rate (p > 0.05), or metabolic scope (p > 0.05) among the three treatments throughout the study.
However, the standard metabolic rate was significantly higher in the year-2068 treatment, but only at the flexion/post-flexion stage, which could be attributed to differences in developmental rates (including the development of the gills) between the 2068 treatment and the other two treatments. Overall, the metabolic scope was narrowest in the 2090 treatment but varied according to life stage. Although not significantly different, metabolic scope in the 2090 treatment was noticeably lower at the flexion stage compared to the other two treatments, and development appeared slower, suggesting that this could be the stage most prone to OA. The study concluded that, in isolation, OA levels predicted to occur between 2050 and 2090 will not negatively affect size-at-hatch, growth, development, or the metabolic responses of larval A. japonicus up to 22 DAH (flexion/post-flexion stage). The present study also identified the tipping point of tolerance (where negative impacts will begin) in larvae of this species to lie between the years 2090 and 2100.
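The metabolic scope assessed above is simply the difference between the active and standard metabolic rates measured for each treatment. A minimal sketch of that calculation; the oxygen-uptake values below are invented for illustration and are not the study's measurements:

```python
# Metabolic (aerobic) scope = active metabolic rate (AMR) - standard
# metabolic rate (SMR). A higher SMR at the same AMR narrows the scope.
def metabolic_scope(active_rate: float, standard_rate: float) -> float:
    """Absolute metabolic scope from AMR and SMR (same units, e.g. mg O2/kg/h)."""
    if active_rate < standard_rate:
        raise ValueError("active rate should not be below standard rate")
    return active_rate - standard_rate

# Illustrative (not measured) values per treatment, as (AMR, SMR):
treatments = {
    "2050 (353 uatm)": (420.0, 150.0),
    "2068 (451 uatm)": (415.0, 168.0),  # an elevated SMR narrows the scope
    "2090 (602 uatm)": (405.0, 155.0),
}
for label, (amr, smr) in treatments.items():
    print(label, metabolic_scope(amr, smr))
```

This is why a significantly higher standard metabolic rate at one stage (as reported for the 2068 treatment) can narrow the energy budget available for growth and activity even when the active rate is unchanged.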

Keywords: climate change, ecology, marine, ocean acidification

Procedia PDF Downloads 113
49 Development of a Risk Governance Index and Examination of Its Determinants: An Empirical Study in Indian Context

Authors: M. V. Shivaani, P. K. Jain, Surendra S. Yadav

Abstract:

Risk management has been gaining extensive focus from international organizations such as the Committee of Sponsoring Organizations and the Financial Stability Board, and the foundation of an effective and efficient risk management system lies in a strong risk governance structure. In view of this, an attempt (perhaps a first of its kind) has been made to develop a risk governance index that could serve as a proxy for the quality of risk governance structures. The index (a normative framework) is based on eleven variables, namely, size of board, board diversity in terms of gender, proportion of executive directors, executive/non-executive status of chairperson, proportion of independent directors, CEO duality, chief risk officer (CRO), risk management committee, mandatory committees, voluntary committees and existence/non-existence of a whistle-blower policy. These variables are scored on a scale of 1 to 5, with the exception of the status of the chairperson and CEO duality, which are scored on a dichotomous scale (a score of 3 or 5). Where there is a legal/statutory requirement in respect of one of these variables and the firm does not comply with it, a score of 1 is assigned. Although there was no legal requirement, for the larger part of the study period, regarding the CRO, the risk management committee and the whistle-blower policy, a score of 1 is likewise assigned where these are absent: recognizing the importance of these variables to risk governance, and given that the study focuses on risk governance, their absence has been equated to non-compliance with a legal/statutory requirement. On this basis, the minimum possible score is 15 and the maximum is 55. In addition, an attempt has been made to explore the determinants of this index. For this purpose, the sample consists of the 429 non-financial companies that constitute the S&P CNX 500 index.
The study covers a 10-year period from April 1, 2005 to March 31, 2015. Given the panel nature of the data, the Hausman test was applied, and it suggested that fixed effects regression would be appropriate. The results indicate that the age and size of firms have a significant positive impact on their risk governance structures. Further, the post-recession period (2009-2015) witnessed significant improvement in the quality of governance structures. In contrast, profitability (positive relationship), leverage (negative relationship) and growth (negative relationship) do not have a significant impact on the quality of risk governance structures. The value of rho indicates that about 77.74% of the variation in risk governance structures is due to firm-specific factors. Given that each firm is unique in terms of its risk exposure, risk culture, risk appetite, and risk tolerance levels, it appears reasonable to assume that the specific conditions and circumstances that a company is beset with could be the biggest determinants of its risk governance structures. Given the recommendations put forth in the paper (particularly for regulators and companies), the study is expected to be of immense utility in an important yet neglected aspect of risk management.
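The scoring scheme described above implies the stated 15-55 range: nine variables scored 1 to 5, plus two dichotomous variables scored 3 or 5. A sketch of how such an index might be computed; the variable names and function are our own reconstruction, not the authors' code:

```python
# Nine variables scored on a 1-5 scale (1 = non-compliance/absence).
SCALED_VARS = [
    "board_size", "gender_diversity", "executive_proportion",
    "independent_proportion", "cro", "risk_committee",
    "mandatory_committees", "voluntary_committees", "whistle_blower_policy",
]
# Two dichotomous variables scored 3 or 5.
DICHOTOMOUS_VARS = ["chairperson_status", "ceo_duality"]

def risk_governance_index(scores: dict) -> int:
    """Sum the eleven variable scores into a single governance index."""
    total = 0
    for var in SCALED_VARS:
        assert 1 <= scores[var] <= 5, f"{var} must be scored 1-5"
        total += scores[var]
    for var in DICHOTOMOUS_VARS:
        assert scores[var] in (3, 5), f"{var} must be scored 3 or 5"
        total += scores[var]
    return total

# Range check: an all-minimum firm scores 9*1 + 2*3 = 15, an all-maximum
# firm scores 11*5 = 55, matching the abstract's stated bounds.
worst = {v: 1 for v in SCALED_VARS} | {v: 3 for v in DICHOTOMOUS_VARS}
best = {v: 5 for v in SCALED_VARS + DICHOTOMOUS_VARS}
print(risk_governance_index(worst), risk_governance_index(best))  # 15 55
```

The arithmetic also makes the abstract's bounds explicit: the minimum of 15 arises because the two dichotomous variables can never score below 3.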

Keywords: corporate governance, ERM, risk governance, risk management

Procedia PDF Downloads 227
48 Accounting and Prudential Standards of Banks and Insurance Companies in EU: What Stakes for Long Term Investment?

Authors: Sandra Rigot, Samira Demaria, Frederic Lemaire

Abstract:

The starting point of this research is a contemporary capitalist paradox: there is a real scarcity of long term investment despite the boom of potential long term investors. This gap represents a major challenge: there are important needs for long term financing in developed and emerging countries in strategic sectors such as energy, transport infrastructure, and information and communication networks. Moreover, the recent financial and sovereign debt crises, which have respectively reduced the ability of financial banking intermediaries and governments to provide long term financing, raise the questions of which actors are able to provide long term financing, their methods of financing and the most appropriate forms of intermediation. The issue of long term financing is deemed to be very important by the EU Commission, which issued a 2013 Green Paper (GP) on long-term financing of the EU economy. Among other topics, the paper discusses the impact of the recent regulatory reforms on long-term investment, both in terms of accounting (in particular fair value) and prudential standards for banks. For banks, prudential and accounting standards are crucial. Fair value is indeed well adapted to the trading book in a short term view, but this method hardly suits a medium- or long-term portfolio. Banks' ability to finance the economy and long term projects depends on their ability to distribute credit, and the way credit is valued (fair value or amortised cost) leads to different banking strategies. Furthermore, in the banking industry, accounting standards are directly connected to prudential standards, as the regulatory requirements of Basel III use accounting figures with a prudential filter to define capital needs and to compute regulatory ratios. The objective of these regulatory requirements is to prevent insolvency and financial instability. At the same time, they can represent regulatory constraints on long term investing.
The balance between financial stability and the need to stimulate long term financing is a key question raised by the EU GP. Does fair value accounting contribute to short-termism in investment behaviour? Should prudential rules be "appropriately calibrated" and "progressively implemented" so as not to prevent banks from providing long-term financing? These issues raised by the EU GP lead us to question to what extent the main regulatory requirements incite or constrain banks to finance long term projects. To that purpose, we study the 292 responses received by the EU Commission during the public consultation. We analyze these contributions, focusing on particular questions related to fair value accounting and prudential norms, through a two-stage content analysis of the responses. First, we performed qualitative coding to identify the arguments of respondents; subsequently, we ran quantitative coding in order to conduct statistical analyses. This paper provides a better understanding of the positions that a large panel of European stakeholders hold on these issues. Moreover, it adds to the debate on fair value accounting and its effects on prudential requirements for banks. This analysis allows us to identify some short term bias in banking regulation.
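The two-stage coding described above (qualitative codes assigned to each response, then tallied quantitatively) can be sketched as follows; the response identifiers and code labels are invented for illustration, not taken from the consultation:

```python
from collections import Counter

# Stage 1 (qualitative): each consultation response gets a list of argument
# codes assigned by a human coder. Stage 2 (quantitative): tally the codes
# across responses so shares and cross-tabulations can be computed.
responses = {
    "resp_001": ["fair_value_short_termism", "prudential_too_strict"],
    "resp_002": ["fair_value_adequate"],
    "resp_003": ["prudential_too_strict", "calibration_needed"],
}
tally = Counter(code for codes in responses.values() for code in codes)
share = {code: n / len(responses) for code, n in tally.items()}
print(tally.most_common(2))
```

In practice the qualitative stage is the labour-intensive step; the quantitative stage reduces it to frequencies that standard statistical tests can be run on.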

Keywords: basel 3, fair value, securitization, long term investment, banks, insurers

Procedia PDF Downloads 267
47 Coupled Field Formulation – A Unified Method for Formulating Structural Mechanics Problems

Authors: Ramprasad Srinivasan

Abstract:

Engineers create inventions and put their ideas in concrete terms to design new products. Design drivers must be established, which requires, among other things, a complete understanding of the product design, load paths, etc. For aerospace vehicles, weight/strength ratio, strength, stiffness and stability are the important design drivers. A complex built-up structure is made up of an assemblage of primitive structural forms of arbitrary shape, which include 1D structures like beams and frames, 2D structures like membranes, plates and shells, and 3D solid structures. Justification through simulation involves a check of all the quantities of interest, namely stresses, deformation, frequencies, and buckling loads, and is normally achieved through the finite element (FE) method. Over the past few decades, fiber-reinforced composites have been fast replacing traditional metallic structures in the weight-sensitive aerospace and aircraft industries due to their high specific strength, high specific stiffness, anisotropic properties, design freedom for tailoring, etc. Composite panel constructions are used in aircraft to design primary structural components like wings, empennage, ailerons, etc., while thin-walled composite beams (TWCB) are used to model slender structures like stiffened panels and helicopter and wind turbine rotor blades. TWCBs demonstrate many non-classical effects like torsional and constrained warping, transverse shear, coupling effects, heterogeneity, etc., which make the analysis of composite structures far more complex. Conventional FE formulations for 1D structures suffer from many limitations like shear locking (particularly in slender beams), lower convergence rates due to material coupling in composites, and the inability to satisfy equilibrium in the domain and natural boundary conditions (NBC).
For 2D structures, the limitations of conventional displacement-based FE formulations include the inability to satisfy NBC explicitly and many pathological problems such as shear and membrane locking, spurious modes, stress oscillations, and lower convergence due to mesh distortion. This mandates frequent re-meshing just to achieve an acceptable mesh (one satisfying stringent quality metrics) for analysis, leading to significant cycle time. Besides, separate (u/p) formulations are currently needed to model incompressible materials, and a single unified formulation is missing in the literature. Hence, the coupled field formulation (CFF) is a unified formulation proposed by the author for the solution of complex 1D and 2D structures, addressing the gaps in the literature mentioned above. The salient features of CFF and its many advantages over conventional methods are presented in this paper.

Keywords: coupled field formulation, kinematic and material coupling, natural boundary condition, locking free formulation

Procedia PDF Downloads 48
46 Understanding the Role of Social Entrepreneurship in Building Mobility of a Service Transportation Models

Authors: Liam Fassam, Pouria Liravi, Jacquie Bridgman

Abstract:

Introduction: The way we travel is rapidly changing: car ownership and use are declining among young people and residents of urban areas, and the increasing role and popularity of sharing economy companies like Uber highlight a movement towards consuming transportation solutions as a service [Mobility of a Service]. This research looks to bridge the knowledge gap that exists between city mobility, smart cities, the sharing economy and social entrepreneurship business models. Understanding of this subject is crucial for smart city design, as access to affordable transport has been identified as a contributing factor to social isolation, leading to issues around health and wellbeing. Methodology: To explore the current fit between transportation business models and social impact, this research undertook a comparative analysis between a systematic literature review and a Delphi study. The systematic literature review was undertaken to gain an appreciation of current academic thinking on 'social entrepreneurship and smart city mobility'. The second phase of the research initiated a Delphi study across a group of 22 participants to review future opinion on how social entrepreneurship can assist city mobility sharing models. The Delphi delivered an initial 220 results, which, once cross-checked for duplication, were reduced to 130. These 130 answers were sent back to participants to score for importance on a 5-point Likert scale, enabling a top-10 listing of areas for shared user transport in society to be gleaned. A fourth and final round identified no change in the coefficient of variation, so no further rounds were required. Findings: The initial literature search returned 1,021 journal articles using the search criteria 'social entrepreneurship and smart city mobility'. Filtering by peer review, date, region and Chartered Association of Business Schools ranking yielded a final list of 75 articles.
Of these, 58 focused on smart city design, 9 on social enterprise in cityscapes, 6 on smart city network design and 3 on social impact, with no articles making the case for allying social entrepreneurship to city mobility. The future inclusion factors from the Delphi expert panel indicated that smart cities need to include shared economy models in their strategies, that social isolation borne of infrastructure costs needs addressing through holistic, apolitical social enterprise models, and that a better understanding of social benefit measurement is needed. Conclusion: In investigating the collaboration between key public transportation stakeholders, a theoretical model of social enterprise transportation was formed that positively impacts the smart city needs of reduced transport poverty and social isolation. As such, the research has identified how a revised Mobility of a Service business model, allied to social entrepreneurship, can deliver measurable social benefits that extend existing research on smart city design.
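The Delphi stopping rule described above (rounds end once the coefficient of variation of an item's ratings stops changing) can be sketched as below; the Likert ratings are illustrative, not the panel's data:

```python
from statistics import mean, stdev

# Coefficient of variation of one item's 5-point Likert ratings: the usual
# Delphi consensus/stability statistic (sample SD divided by the mean).
def coeff_of_variation(ratings):
    return stdev(ratings) / mean(ratings)

round3 = [4, 5, 4, 4, 5, 4]
round4 = [4, 5, 4, 4, 5, 4]  # same spread as the previous round
delta = abs(coeff_of_variation(round4) - coeff_of_variation(round3))
print(round(delta, 4))  # no change between rounds -> stop iterating
```

When `delta` is zero (or below a pre-agreed threshold) for the items under review, a further round adds no information, which is the criterion the study used to end at round four.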

Keywords: social enterprise, collaborative transportation, new models of ownership, transport social impact

Procedia PDF Downloads 121
45 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test

Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston

Abstract:

The Alinity i TBI test is Therapeutic Goods Administration (TGA) registered and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The performance of the Alinity i TBI test was evaluated in a multi-center pivotal study to demonstrate its capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild traumatic brain injury (TBI) and a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem; an estimated 69 million people globally experience a TBI annually. Blood-based biomarkers such as GFAP and UCH-L1 have shown utility in predicting acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected, archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study, and testing was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of these 120 specimens had a positive TBI interpretation (sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (specificity 40.1%; 95% CI: 37.8%, 42.4%). The negative predictive value (NPV) of the test was 99.4% (713/717; 95% CI: 98.6%, 99.8%).
The analytical measuring interval (AMI) extends from the lower limit of quantitation (LoQ) to the upper LoQ and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall within-laboratory imprecision (20-day) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1, including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting its utility in assisting in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.
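The reported sensitivity, specificity and NPV can be recomputed directly from the published counts (120 CT-positive subjects with 116 positive tests; 1779 CT-negative subjects with 713 negative tests). A brief check; the paper's confidence intervals are not reproduced here:

```python
# 2x2 diagnostic table reconstructed from the abstract's counts.
tp, fn = 116, 4          # CT-positive subjects: test positive / test negative
tn = 713                 # CT-negative subjects with a negative test
fp = 1779 - tn           # 1066 CT-negative subjects with a positive test

sensitivity = tp / (tp + fn)   # 116/120
specificity = tn / (tn + fp)   # 713/1779
npv = tn / (tn + fn)           # 713/717: negative tests that were truly negative
print(f"{sensitivity:.1%} {specificity:.1%} {npv:.1%}")  # 96.7% 40.1% 99.4%
```

The numbers match the abstract, and they illustrate the intended trade-off for a rule-out test: specificity is sacrificed (40.1%) so that very few CT-positive patients receive a negative result (NPV 99.4%).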

Keywords: biomarker, diagnostic, neurology, TBI

Procedia PDF Downloads 37
44 Ahmad Sabzi Balkhkanloo, Motahareh Sadat Hashemi, Seyede Marzieh Hosseini, Saeedeh Shojaee-Aliabadi, Leila Mirmoghtadaie

Authors: Elyria Kemp, Kelly Cowart, My Bui

Abstract:

According to the National Institute of Mental Health, an estimated 31.9% of adolescents have had an anxiety disorder. Several environmental factors may contribute to high levels of anxiety and depression in young people (i.e., Generation Z, Millennials). As young people negotiate life on social media, they may begin to evaluate themselves using excessively high standards and adopt self-perfectionism tendencies. Broadly defined, self-perfectionism involves very critical evaluations of the self. Perfectionism may also come from others and manifest as socially prescribed perfectionism, and young adults are reporting higher levels of socially prescribed perfectionism than previous generations. This rising perfectionism is associated with anxiety, greater physiological reactivity, and a sense of social disconnection. However, theories from psychology suggest that improvement in emotion regulation can contribute to enhanced psychological and emotional well-being. Emotion regulation refers to the ways people manage how and when they experience and express their emotions. Cognitive reappraisal and expressive suppression are common emotion regulation strategies. Cognitive reappraisal involves construing a potentially emotion-eliciting situation in a way that changes its emotional impact; by contrast, expressive suppression involves inhibiting the behavioral expression of emotion. The purpose of this research is to examine the efficacy of social marketing initiatives that promote emotion regulation strategies to help young adults regulate their emotions. In Study 1, a single-factor (emotion regulation strategy: cognitive reappraisal, expressive suppression, control) between-subjects design was conducted using an online, non-student consumer panel (n=96). Sixty-eight percent of participants were male, and 32% were female.
Study participants belonged to the Millennial and Gen Z cohorts, ranging in age from 22 to 35 (M=27). Participants were first told to spend at least three minutes writing about a public speaking appearance that made them anxious; the purpose of this exercise was to induce anxiety. Next, participants viewed one of three randomly assigned advertisements, promoting cognitive reappraisal, expressive suppression, or content non-emotional in nature. After exposure to one of the ads, participants responded to a two-item measure to assess their emotional state and the efficacy of the messages in fostering emotion management. Findings indicated that individuals in the cognitive reappraisal condition (M=3.91) exhibited the most positive feelings and more effective emotion regulation than those in the expressive suppression (M=3.39) and control conditions (M=3.72, F(1,92) = 3.3, p<.05). Results from this research can be used by institutions (e.g., schools) in taking a leadership role in addressing anxiety and other mental health issues. Social stigmas regarding mental health can be removed, and a more proactive stance can be taken in promoting healthy coping behaviors and strategies to manage negative emotions.
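A between-subjects comparison of condition means like the one reported is a one-way ANOVA; a from-scratch sketch with synthetic ratings (not the study's data):

```python
from statistics import mean

# One-way between-subjects ANOVA F statistic: between-group mean square
# divided by within-group mean square.
def one_way_anova_f(*groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Synthetic 5-point ratings, one list per condition:
reappraisal = [4, 5, 4, 4, 3, 5, 4, 4]
suppression = [3, 3, 4, 3, 2, 4, 3, 3]
control     = [4, 3, 4, 4, 3, 4, 3, 4]
print(round(one_way_anova_f(reappraisal, suppression, control), 2))  # 5.51
```

An F this large on df (2, 21) falls below the 0.05 significance threshold; in practice one would obtain the p-value from an F distribution (e.g. `scipy.stats.f_oneway`) rather than by hand.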

Keywords: emotion regulation, anxiety, social marketing, generation z

Procedia PDF Downloads 180
43 Reliability and Validity of a Portable Inertial Sensor and Pressure Mat System for Measuring Dynamic Balance Parameters during Stepping

Authors: Emily Rowe

Abstract:

Introduction: Balance assessments can be used to help evaluate a person’s risk of falls, determine causes of balance deficits and inform intervention decisions. It is widely accepted that instrumented quantitative analysis can be more reliable and specific than semi-qualitative ordinal scales or itemised scoring methods. However, the uptake of quantitative methods is hindered by expense, lack of portability, and set-up requirements. During stepping, foot placement is actively coordinated with the kinematics of the body centre of mass (COM) during pre-initiation. Based on this, the potential to use COM velocity just prior to foot-off, together with foot placement error, as an outcome measure of dynamic balance is currently being explored using complex 3D motion capture. Inertial sensors and pressure mats might be more practical technologies for measuring these parameters in clinical settings. Objective: The aim of this study was to test the criterion validity and test-retest reliability of a synchronised inertial sensor and pressure mat-based approach to measuring foot placement error and COM velocity while stepping. Methods: Trials were held with 15 healthy participants, each attending two sessions. The task was to step onto one of four targets (two for each foot) multiple times in a random, unpredictable order. The stepping target was cued using an auditory prompt and electroluminescent panel illumination. Data were collected using 3D motion capture and a combined inertial sensor-pressure mat system simultaneously in both sessions. To assess the reliability of each system, ICC estimates and their 95% confidence intervals were calculated based on a mean-rating (k = 2), absolute-agreement, 2-way mixed-effects model. To test the criterion validity of the combined inertial sensor-pressure mat system against the motion capture system, multi-factorial two-way repeated-measures ANOVAs were carried out.
Results: Foot placement error was not reliably measured between sessions by either system (ICC 95% CIs; motion capture: 0 to >0.87 and pressure mat: <0.53 to >0.90). This could be due to genuine within-subject variability given the nature of the stepping task, and brings into question the suitability of average foot placement error as an outcome measure. Additionally, the results suggest the pressure mat is not a valid measure of this parameter, since it was statistically significantly different from and much less precise than the motion capture system (p=0.003). The inertial sensor was found to be a moderately reliable (ICC 95% CIs >0.46 to >0.95) but not a valid measure of anteroposterior and mediolateral COM velocities (AP velocity: p=0.000; ML velocity, targets 1 to 4: p=0.734, 0.001, 0.000 & 0.376). However, it is thought that with further development the validity of the COM velocity measure could be improved. Possible options include investigating the effect of inertial sensor placement relative to pelvic marker placement, or implementing more complex data-processing methods to manage inherent accelerometer and gyroscope limitations. Conclusion: The pressure mat is not a suitable alternative for measuring foot placement error. The inertial sensors have potential for measuring COM velocity; however, further development work is needed.
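The reliability model named in the methods is the two-way, absolute-agreement, mean-of-k-raters ICC (ICC(A,k) in McGraw and Wong's notation, with k = 2 sessions). A from-scratch sketch of that point estimate; the session values are invented, and the study's confidence intervals are not reproduced:

```python
import numpy as np

# ICC(A,k): (MS_rows - MS_error) / (MS_rows + (MS_cols - MS_error) / n),
# from the mean squares of a two-way ANOVA on a subjects x sessions matrix.
def icc_a_k(x: np.ndarray) -> float:
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    ss_err = ((x - x.mean(axis=1, keepdims=True)
                 - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

# Rows = subjects, columns = the two sessions (synthetic COM velocities, m/s):
sessions = np.array([[0.42, 0.44], [0.55, 0.52], [0.31, 0.33],
                     [0.47, 0.49], [0.60, 0.57]])
print(round(icc_a_k(sessions), 2))
```

With session means close together and consistent subject ordering, the estimate is high; a systematic shift between sessions inflates the column mean square and pulls the absolute-agreement ICC down, which is exactly why this form (rather than a consistency ICC) suits test-retest designs.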

Keywords: dynamic balance, inertial sensors, portable, pressure mat, reliability, stepping, validity, wearables

Procedia PDF Downloads 116
42 Consumers Attitude toward the Latest Trends in Decreasing Energy Consumption of Washing Machine

Authors: Farnaz Alborzi, Angelika Schmitz, Rainer Stamminger

Abstract:

Reducing water temperatures in the wash phase of a washing programme and increasing the overall cycle duration are the latest trends in decreasing the energy consumption of washing programmes. Since the implementation of the new energy efficiency classes in 2010, manufacturers seem to apply this strategy of lower temperatures combined with longer programme durations extensively to realise the energy savings needed to meet the requirements of the highest possible energy efficiency class. A semi-representative on-line survey in eleven European countries (Czech Republic, Finland, France, Germany, Hungary, Italy, Poland, Romania, Spain, Sweden and the United Kingdom) was conducted by Bonn University in 2015 to shed light on consumer opinion and behaviour regarding the effects of lower washing temperatures and longer cycle durations on consumers’ acceptance of the programme. The risk of the long wash cycle is that consumers might not use the energy-efficient Standard programmes, finding this option inconvenient and therefore switching to shorter but more energy-consuming programmes. Furthermore, washing at a lower temperature may lead to the problem of cross-contamination. The washing behaviour of over 5,000 households was studied in this survey to provide support and guidance for manufacturers and policy designers. Qualified households were chosen following a predefined quota: substantial involvement in laundry washing; more than 50% female; age groups of 20–39, 40–59 and 60–74 years; and household sizes of 1, 2, 3, 4 and more than 4 people. Furthermore, Eurostat data for each country were used to calculate the population distribution in the respective age class and household size as quotas for the survey’s distribution in each country. Before starting the analyses, the validity of each dataset was checked with the aid of control questions.
After excluding the outlier data, the panel diminished from 5,100 to 4,843 households. The primary outcome of the study is that European consumers are willing to save water and energy in laundry washing but are reluctant to use long programme cycles, since they do not believe that long cycles can be energy-saving. However, the results of our survey do not confirm a relation between the frequency of using Standard cotton (Eco) or Energy-saving programmes and the duration of those programmes. This might be explained by the fact that the majority of washing programmes used by consumers do not take very long; perhaps consumers simply choose an additional time-reduction option when selecting those programmes, and this finding might change if the Energy-saving programmes took longer. Therefore, it may be assumed that introducing programme duration as a new measure on a revised energy label would strongly influence the consumer at the point of sale. Furthermore, the results of the survey confirm that consumers are more willing to use lower-temperature programmes to save energy than to accept longer programme cycles, and the majority of them accept deviation from the nominal temperature of the programme as long as the results are good.

Keywords: duration, energy-saving, standard programmes, washing temperature

Procedia PDF Downloads 201
41 Effect of Rolling Shear Modulus and Geometric Make up on the Out-Of-Plane Bending Performance of Cross-Laminated Timber Panel

Authors: Md Tanvir Rahman, Mahbube Subhani, Mahmud Ashraf, Paul Kremer

Abstract:

Cross-laminated timber (CLT) is made from layers of timber boards orthogonally oriented in the thickness direction; because of this, CLT can withstand bi-axial bending, in contrast with most other engineered wood products such as laminated veneer lumber (LVL) and glued laminated timber (GLT). Wood is cylindrically anisotropic in nature and is characterised by significantly lower elastic and shear moduli in the planes perpendicular to the fibre direction. It is therefore classified as an orthotropic material and is characterised by nine elastic constants: three elastic moduli (in the longitudinal, tangential and radial directions), three shear moduli (in the longitudinal-tangential, longitudinal-radial and radial-tangential planes) and three Poisson’s ratios. For simplification, timber is generally assumed to be transversely isotropic, reducing the number of characterising elastic properties to five, with the longitudinal and radial planes assumed to be planes of symmetry. The validity of this assumption was investigated through numerical modelling of CLT with both orthotropic and transversely isotropic material properties for three softwood species (Norway spruce, Douglas fir and radiata pine) and three hardwood species (Victorian ash, beech and aspen) subjected to uniformly distributed loading under simply supported boundary conditions. It was concluded that assuming the timber to be transversely isotropic results in a negligible error, of the order of 1 percent. It was also observed that, along with the longitudinal elastic modulus, the ratio of the longitudinal shear modulus (GL) to the rolling shear modulus (GR) has a significant effect on the deflection of CLT panels with lower span-to-depth ratios.
For softwoods such as Norway spruce and radiata pine, the ratio of longitudinal shear modulus (GL) to rolling shear modulus (GR) is reported in the literature to be of the order of 12 to 15. This results in shear flexibility in the transverse layers, leading to increased deflection under out-of-plane loading. The rolling shear modulus of hardwoods has been found to be significantly higher than that of softwoods, with GL/GR ratios as low as 4. This has prompted significant research into manufacturing CLT entirely from hardwood, as well as from combinations of softwood and hardwood. The beam theories commonly used to analyse the performance of CLT panels under out-of-plane loads are the shear analogy method, the gamma method, and the k-method; the shear analogy method has been found to be the most effective where shear deformation is significant. The effect of the ratio of the longitudinal shear modulus to the rolling shear modulus of the cross-layers on the deflection of CLT under uniformly distributed load, with respect to its span-to-depth ratio, was investigated using the shear analogy method. It was observed that shear deflection is reduced significantly as the ratio of the shear modulus of the longitudinal layers to the rolling shear modulus of the cross-layers decreases. This indicates that there is significant room for improving the bending performance of CLT by developing hybrid CLT from a mix of softwood and hardwood.
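The bending-plus-shear deflection logic behind the discussion above can be illustrated with a simplified Timoshenko-style formula for a simply supported strip under a uniformly distributed load; this is not the full shear analogy method, and all section properties below are invented for illustration:

```python
# Mid-span deflection of a simply supported strip under UDL w:
# bending term 5*w*L^4/(384*EI) plus shear term w*L^2/(8*GA), where GA is
# an effective shear stiffness dominated by the cross-layer rolling shear.
def midspan_deflection(w, L, EI, GA):
    bending = 5 * w * L**4 / (384 * EI)
    shear = w * L**2 / (8 * GA)
    return bending + shear

w = 5e3        # N/m, uniformly distributed load
L = 4.0        # m, span
EI = 2.0e6     # N*m^2, assumed effective bending stiffness
GL = 650e6     # Pa, assumed longitudinal shear modulus
for ratio in (15, 4):             # softwood-like vs hardwood-like GL/GR
    GR = GL / ratio               # rolling shear modulus of the cross-layers
    GA = GR * 0.2 * 0.1           # Pa x assumed 0.2 m x 0.1 m shear area -> N
    print(f"GL/GR = {ratio}: "
          f"{midspan_deflection(w, L, EI, GA) * 1000:.2f} mm")
```

With these assumed numbers the bending term is identical in both cases, so the difference in total deflection comes entirely from the shear term: the hardwood-like GL/GR = 4 case deflects markedly less, which is the effect driving interest in hybrid softwood-hardwood CLT.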

Keywords: rolling shear modulus, shear deflection, ratio of longitudinal shear modulus to rolling shear modulus, timber

Procedia PDF Downloads 98
40 Climate Change and Rural-Urban Migration in Brazilian Semiarid Region

Authors: Linda Márcia Mendes Delazeri, Dênis Antônio Da Cunha

Abstract:

Over the past few years, the evidence that human activities have altered the concentration of greenhouse gases in the atmosphere has become stronger, indicating that this accumulation is the most likely cause of the climate change observed so far. The risks associated with climate change, although uncertain, have the potential to increase social vulnerability, exacerbating existing socioeconomic challenges. Developing countries are potentially the most affected by climate change, since they have less capacity to adapt and are the most dependent on agricultural activities, one of the sectors in which the greatest negative impacts are expected. In Brazil, specifically, the localities that form the semiarid region are expected to be among the most affected, due to the existing irregularity in rainfall and high temperatures, in addition to economic and social factors endemic to the region. Given the limited strategies available to handle the environmental shocks caused by climate change, an alternative adopted in response to these shocks is migration. Understanding the specific features of migration flows, such as duration, destination, and composition, is essential to understand the impacts of migration on origin and destination locations and to develop appropriate policies. Thus, this study aims to examine whether climatic factors have contributed to rural-urban migration in semiarid municipalities in the recent past and how these migration flows will be affected by future climate change scenarios. The study was based on the microeconomic theory of utility maximization, in which the individual decides to leave the countryside for the urban area in order to maximize his or her utility. Analytically, an econometric model was estimated using fixed-effects modelling, and the results confirmed the expectation that climatic drivers are crucial for the occurrence of rural-urban migration.
Other drivers of the migration process, such as economic, social, and demographic factors, were also important. Additionally, predictions of rural-urban migration motivated by variations in temperature and precipitation under the climate change scenarios RCP 4.5 and RCP 8.5 were made for the periods 2016-2035 and 2046-2065, as defined by the Intergovernmental Panel on Climate Change (IPCC). The results indicate that rural-urban migration in the semiarid region will increase in both scenarios and in both periods. In general, the results of this study reinforce the need to formulate public policies that avert migration for climatic reasons, such as policies that support income-generating productive activities in rural areas. By providing greater incentives for family agriculture and expanding sources of credit, farmers will be better positioned to face climatic adversities and to remain in rural areas. Ultimately, if migration becomes necessary, policies should be adopted that pursue the organized and planned development of urban areas, treating migration as an adaptation strategy to adverse climate effects. Thus, policies that absorb migrants in urban areas and ensure their access to the basic services offered to the urban population would help reduce the social costs of climate variability.
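The fixed-effects (within) estimator underlying this kind of panel analysis can be illustrated on simulated data; the municipalities, coefficients, and noise below are entirely hypothetical and are not the study's dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: n municipalities observed over T years. The outcome is a
# rural-urban migration rate; the regressors are temperature and precipitation
# anomalies. All values are simulated for illustration only.
n, T = 50, 10
muni = np.repeat(np.arange(n), T)                  # municipality index per obs
temp = rng.normal(0, 1, n * T)
rain = rng.normal(0, 1, n * T)
alpha = rng.normal(0, 1, n)                        # unobserved municipality effects
migr = 0.8 * temp - 0.5 * rain + alpha[muni] + rng.normal(0, 0.3, n * T)

def within_transform(x, groups):
    """Demean a variable within each group: the fixed-effects transformation
    that sweeps out time-invariant municipality characteristics."""
    means = np.bincount(groups, weights=x) / np.bincount(groups)
    return x - means[groups]

X = np.column_stack([within_transform(temp, muni), within_transform(rain, muni)])
y = within_transform(migr, muni)
beta = np.linalg.lstsq(X, y, rcond=None)[0]        # within (FE) estimator
print(beta)  # recovers estimates near the simulated effects (0.8, -0.5)
```

Demeaning within each municipality removes the fixed effect alpha, so the regression identifies the climate coefficients even though the municipality effects are correlated with nothing observed; this is the logic of the fixed-effects model the abstract describes.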

Keywords: climate change, migration, rural productivity, semiarid region

Procedia PDF Downloads 320