Search results for: affective mechanisms index
593 Efficiency and Equity in Italian Secondary School
Authors: Giorgia Zotti
Abstract:
This research investigates the interplay among school performance, individual backgrounds, and regional disparities in Italian secondary education. Leveraging data from the INVALSI 2021-2022 database, the analysis examines two fundamental distributions of educational achievement: standardized Invalsi test scores and official grades in Italian and Mathematics, focusing on final-year secondary school students in Italy. The study first employs Data Envelopment Analysis (DEA) to assess school performance, constructing a production function with inputs (hours spent at school) and outputs (Invalsi scores in Italian and Mathematics, along with official grades in the same subjects). The DEA approach is applied in both its traditional and conditional versions; the latter incorporates environmental variables such as school type, size, demographics, technological resources, and socio-economic indicators. The analysis then examines regional disparities through the Theil Index, which separates inequality within regions from inequality between them. Moreover, within the framework of inequality of opportunity theory, the study quantifies the inequality of opportunity in students' educational achievements, using the Parametric Approach in its ex-ante version and considering circumstances such as parental education and occupation, gender, school region, birthplace, and language spoken at home. A Shapley decomposition is then applied to measure how much each circumstance affects the outcomes. The investigation identifies pivotal determinants of school performance, notably the influence of school type (Liceo) and socioeconomic status.
The research also reveals regional disparities, identifying instances where specific schools outperform others in official grades but not in Invalsi scores, shedding light on the intricate nature of regional educational inequalities. Furthermore, it finds greater inequality of opportunity in the distribution of Invalsi test scores than in official grades, underscoring pronounced disparities at the student level. The analysis provides insights for policymakers, educators, and stakeholders, fostering a nuanced understanding of the complexities within Italian secondary education.
Keywords: inequality, education, efficiency, DEA approach
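The within/between decomposition via the Theil Index that the abstract relies on can be sketched as follows. This is an illustrative implementation with invented region labels and scores, not the INVALSI data; overall inequality splits exactly into a between-region and a within-region component.

```python
import numpy as np

def theil(x):
    """Theil T index of a positive-valued array."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    return np.mean((x / mu) * np.log(x / mu))

def theil_decomposition(scores, regions):
    """Return (total, between, within) Theil components over region groups."""
    scores = np.asarray(scores, dtype=float)
    regions = np.asarray(regions)
    mu, n = scores.mean(), len(scores)
    between, within = 0.0, 0.0
    for r in np.unique(regions):
        grp = scores[regions == r]
        share = (len(grp) / n) * (grp.mean() / mu)  # score share of region r
        between += share * np.log(grp.mean() / mu)
        within += share * theil(grp)
    return between + within, between, within

# fabricated test scores for two hypothetical regions
scores = [180, 200, 220, 150, 160, 170, 210, 230]
regions = ["N", "N", "N", "S", "S", "S", "N", "N"]
total, between, within = theil_decomposition(scores, regions)
```

The decomposition is exact: `total` equals the Theil index computed directly on the pooled scores, so the between and within shares can be reported as fractions of overall inequality.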
Procedia PDF Downloads 75
592 Impact of Material Chemistry and Morphology on Attrition Behavior of Excipients during Blending
Authors: Sri Sharath Kulkarni, Pauline Janssen, Alberto Berardi, Bastiaan Dickhoff, Sander van Gessel
Abstract:
Blending is a common process in the production of pharmaceutical dosage forms, where high shear is used to obtain a homogeneous dosage. The shear required can lead to uncontrolled attrition of excipients and affect APIs. This has an impact on the performance of the formulation, as it can alter the structure of the mixture; it is therefore important to understand the driving mechanisms of attrition. The aim of this study was to increase the fundamental understanding of the attrition behavior of excipients. Attrition behavior was evaluated using a high shear blender (Procept Form-8, Zele, Belgium). Twelve pure excipients were tested, with morphologies varying from crystalline (sieved) and granulated to spray dried (round to fibrous). The materials included lactose, microcrystalline cellulose (MCC), di-calcium phosphate (DCP), and mannitol. The rotational speed of the blender was set at 1370 rpm to give the highest shear, with a Froude (Fr) number of 9. Blending times of 2-10 min were used. After blending, the excipients were analyzed for changes in particle size distribution (PSD), determined (n = 3) by dry laser diffraction (Helos/KR, Sympatec, Germany). Attrition was found to be a surface phenomenon occurring in the first minutes of the high shear blending process: blending times beyond 2 min showed no further change in particle size distribution. Material chemistry was identified as a key driver of differences in attrition behavior between excipients, mainly through proneness to fragmentation, which is known to be higher for materials such as DCP and mannitol than for lactose and MCC. Morphology was identified as a second driver of the degree of attrition. Granular products with irregular surfaces showed the highest reduction in particle size, due to the weak solid bonds created between primary particles during the granulation process.
Granular DCP and mannitol show a reduction of 80-90% in x10 (µm), compared to a 20-30% drop for granular lactose (monohydrate and anhydrous). Apart from granular lactose, all the remaining lactose morphologies (spray dried-round, sieved-tomahawk, milled) show little change in particle size, as does spray-dried fibrous MCC. These morphologies have few irregular or sharp surfaces and are thereby less prone to fragmentation. Products containing brittle materials such as mannitol and DCP are therefore more prone to fragmentation when exposed to shear; granular products with irregular surfaces show increased attrition, while spherical, crystalline, or fibrous morphologies are less affected during high shear blending. These changes in size affect functional attributes of the formulation, such as flow, API homogeneity, tableting, and formation of dust. Hence it is important for formulators to fully understand the excipients to make the right choices.
Keywords: attrition, blending, continuous manufacturing, excipients, lactose, microcrystalline cellulose, shear
Procedia PDF Downloads 111
591 An Exploratory Study of Changing Organisational Practices of Third-Sector Organisations in Mandated Corporate Social Responsibility in India
Authors: Avadh Bihari
Abstract:
Corporate social responsibility (CSR) has become a global parameter for defining corporates' ethical and responsible behaviour. In India it was a voluntary practice, driven by various guidelines, until 2013; it has been mandatory since 2014 under the Companies Act, 2013. This has compelled corporates to redesign their CSR strategies, bringing structure, planning, accountability, and transparency into their processes under a mandate to 'comply or explain'. Based on the author's M.Phil. dissertation, this paper presents the changes in organisational practices and institutional mechanisms of third-sector organisations (TSOs), using the theoretical frameworks of institutionalism and co-optation. India is an interesting case as the only country with a law on CSR that mandates not only reporting but also spending. The space of CSR in India is changing rapidly and affecting multiple institutions, in the context of the changing roles of the state, market, and TSOs. Several factors, such as stringent regulation of foreign funding, mandatory CSR pushing corporates to look for NGOs, and the dependency of Indian NGOs on CSR funds, have come to the fore almost simultaneously, making this an important area of study. The paper also aims to address the gap in the literature on the effects of mandated CSR on the functioning of TSOs through the study's empirical and theoretical findings. The author adopted an interpretivist position to explore changes in organisational practices from the participants' experiences. Data were collected through in-depth interviews with five corporate officials, eleven officials from six TSOs, and two academicians, located in Mumbai and Delhi, India. The findings show that the legislation has institutionalised CSR, and that TSOs get co-opted in the process of implementing mandated CSR.
Seventy percent of corporates in India implement their CSR projects through TSOs, which has affected the organisational practices of TSOs to a large extent. They are compelled to recruit an expert workforce, create new departments for monitoring and evaluation and for communications, and adopt corporates' project-implementation management practices. These are attempts to institutionalise the TSOs so that they can produce the calculated results demanded by corporates. In this process, TSOs get co-opted in a struggle to secure funds and lose their autonomy. The normative, coercive, and mimetic isomorphisms of institutionalism come into play as corporates are mandated to take up CSR, thereby influencing the organisational practices of TSOs. These results suggest that corporates and TSOs require an understanding of each other's work culture to develop mutual respect and work towards the goal of sustainable development of communities. Further, TSOs need to retain their autonomy and their understanding of ground realities, without which they become an extension of the corporate funder. A successful CSR project requires engagement from the corporate beyond funding, through involvement rather than interference. CSR-led community development can be structured by management practices to an extent, but these cannot overshadow the knowledge and experience of TSOs.
Keywords: corporate social responsibility, institutionalism, organisational practices, third-sector organisations
Procedia PDF Downloads 114
590 Ascidian Styela rustica Proteins' Structural Domains Predicted to Participate in the Tunic Formation
Authors: M. I. Tyletc, O. I. Podgornya, T. G. Shaposhnikova, S. V. Shabelnikov, A. G. Mittenberg, M. A. Daugavet
Abstract:
Ascidiacea is the most numerous class of the subphylum Tunicata. A distinctive anatomical feature of these chordates is the tunic, which consists of cellulose fibrils, protein molecules, and single cells. The mechanisms of tunic formation are not known in detail; tunic formation could serve as a model system for studying the interaction of cells with the extracellular matrix. Our model species is the ascidian Styela rustica, which is prevalent in benthic communities of the White Sea. As previously shown, tunic formation involves morula blood cells, which contain the major 48 kDa protein p48. The participation of p48 in tunic formation was demonstrated using antibodies against the protein, but the nature of the protein and its function remain unknown. The current research aims to determine the amino acid sequence of p48 and to clarify its role in tunic formation. The peptides that make up the p48 amino acid sequence were determined by mass spectrometry. A search for the peptides in protein sequence databases identified sequences homologous to p48 in Styela clava, Styela plicata, and Styela canopus, with 81-87% similarity according to sequence alignment. The corresponding sequence of the ascidian Styela canopus was used for further analysis. The Styela rustica p48 sequence begins with a signal peptide, which suggests that the protein is secretory; this is consistent with the experimental observation that the contents of morula cells are secreted into the tunic matrix. The isoelectric point of p48 is 9.77, consistent with the experimental results of acid electrophoresis of morula cell proteins. However, the molecular weight inferred from the Styela canopus amino acid sequence is 103 kDa, so p48 of Styela rustica is a shorter homolog. A search for conserved functional domains revealed two Ca-binding EGF-like domains, a thrombospondin (TSP1) domain, and a tyrosinase domain.
The p48 peptides determined by mass spectrometry fall into the region of the sequence corresponding to the last two domains and carry amino acid substitutions relative to the Styela canopus homolog. The tyrosinase domain (pfam00264) is known to be part of the phenoloxidase enzyme, which participates in melanization processes and the immune response. The thrombospondin domain (smart00209) interacts with a wide range of proteins and is involved in several biological processes, including coagulation, cell adhesion, modulation of intercellular and cell-matrix interactions, angiogenesis, wound healing, and tissue remodeling. It can be assumed that the tyrosinase domain in p48 plays the role of the phenoloxidase enzyme, while TSP1 provides a link between the extracellular matrix and cell surface receptors and may also be responsible for repair of the tunic. These inferences are consistent with the experimental data on p48. The domain organization suggests that p48 is an enzyme involved in tunic tanning and an important regulator of the organization of the extracellular matrix.
Keywords: ascidian, p48, thrombospondin, tyrosinase, tunic, tanning
Procedia PDF Downloads 115
589 Enhancing Food Quality and Safety Management in Ethiopia's Food Processing Industry: Challenges, Causes, and Solutions
Authors: Tuji Jemal Ahmed
Abstract:
Food quality and safety challenges are prevalent in Ethiopia's food processing industry, which can have adverse effects on consumers' health and wellbeing. The country is known for its diverse range of agricultural products, which are essential to its economy. However, poor food quality and safety policies and management systems in the food processing industry have led to several health problems, foodborne illnesses, and economic losses. This paper aims to highlight the causes and effects of food safety and quality issues in the food processing industry of Ethiopia and discuss potential solutions to address these issues. One of the main causes of poor food quality and safety in Ethiopia's food processing industry is the lack of adequate regulations and enforcement mechanisms. The absence of comprehensive food safety and quality policies and guidelines has led to substandard practices in the food manufacturing process. Moreover, the lack of monitoring and enforcement of existing regulations has created a conducive environment for unscrupulous businesses to engage in unsafe practices that endanger the public's health. The effects of poor food quality and safety are significant, ranging from the loss of human lives, increased healthcare costs, and loss of consumer confidence in the food processing industry. Foodborne illnesses, such as diarrhea, typhoid fever, and cholera, are prevalent in Ethiopia, and poor food quality and safety practices contribute significantly to their prevalence. Additionally, food recalls due to contamination or mislabeling often result in significant economic losses for businesses in the food processing industry. To address these challenges, the Ethiopian government has begun to take steps to improve food quality and safety in the food processing industry. 
One of the most notable initiatives is the Ethiopian Food and Drug Administration (EFDA), which was established in 2010 to regulate and monitor the quality and safety of food and drug products in the country. The EFDA has implemented several measures to enhance food safety, such as conducting routine inspections, monitoring the importation of food products, and enforcing strict labeling requirements. Another potential solution to improve food quality and safety in Ethiopia's food processing industry is the implementation of food safety management systems (FSMS). An FSMS is a set of procedures and policies designed to identify, assess, and control food safety hazards throughout the food manufacturing process. Implementing an FSMS can help businesses in the food processing industry identify and address potential hazards before they cause harm to consumers. Additionally, the implementation of an FSMS can help businesses comply with existing food safety regulations and guidelines. In conclusion, improving food quality and safety policies and management systems in Ethiopia's food processing industry is critical to protecting public health and enhancing the country's economy. Addressing the root causes of poor food quality and safety and implementing effective solutions, such as the establishment of regulatory agencies and the implementation of food safety management systems, can help to improve the overall safety and quality of the country's food supply.
Keywords: food quality, food safety, policy, management system, food processing industry
Procedia PDF Downloads 85
588 A Comparative Study of the Impact of Membership in International Climate Change Treaties and the Environmental Kuznets Curve (EKC) in Line with Sustainable Development Theories
Authors: Mojtaba Taheri, Saied Reza Ameli
Abstract:
In this research, we calculate the effect of membership in international climate change treaties for 20 developed countries, selected on the basis of the Human Development Index (HDI), and compare this effect with the pollutant-reduction process described by Environmental Kuznets Curve (EKC) theory. Real GDP per capita at constant 2010 prices is taken from the World Development Indicators (WDI) database. The Ecological Footprint (ECOFP), the amount of biologically productive land needed to meet human needs and absorb carbon dioxide emissions, is measured in global hectares (gha) and retrieved from the Global Ecological Footprint (2021) database. We proceed step by step through several series of targeted statistical regressions, examining the effects of different control variables. Energy Consumption Structure (ECS), the share of fossil fuel consumption in total energy consumption, is extracted from the United States Energy Information Administration (EIA) (2021) database. Energy Production (EP), the total production of primary energy by all energy-producing enterprises in a country at a given time, is a comprehensive indicator of a country's energy production capacity; its 2021 data, like the Energy Consumption Structure, come from the EIA. Financial development (FND) is defined as the ratio of private credit to GDP, and to some extent based on stock market value, also as a ratio to GDP, taken from the WDI (2021 version). Trade Openness (TRD) is the sum of exports and imports of goods and services measured as a share of GDP, again from the WDI (2021 version). Urbanization (URB) is defined as the share of the urban population in the total population, also from the WDI (2021 version).
Descriptive statistics for all the investigated variables are presented in the results section. Among the theories of sustainable development considered, the Environmental Kuznets Curve (EKC) proves the most significant over the study period. We use more than fourteen targeted statistical regressions to isolate the net effects of each approach and examine the results.
Keywords: climate change, globalization, environmental economics, sustainable development, international climate treaty
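The core EKC regression the abstract describes can be sketched as a quadratic fit of the ecological footprint on GDP per capita plus controls. The data below are synthetic placeholders (not the WDI/EIA series): an inverted-U holds when the linear coefficient is positive and the quadratic one negative, with the turning point at -b1 / (2 * b2).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
gdp = rng.uniform(5, 60, n)              # GDP per capita, thousands USD (fabricated)
urb = rng.uniform(0.4, 0.95, n)          # urbanization share, a control variable
# synthetic inverted-U relationship plus noise
ecofp = 1.0 + 0.30 * gdp - 0.004 * gdp**2 + 2.0 * urb + rng.normal(0, 0.5, n)

# OLS with intercept, GDP, GDP squared, and the control
X = np.column_stack([np.ones(n), gdp, gdp**2, urb])
beta, *_ = np.linalg.lstsq(X, ecofp, rcond=None)
b0, b1, b2, b_urb = beta
turning_point = -b1 / (2 * b2)           # income level where footprint peaks
print(f"b1={b1:.3f}, b2={b2:.4f}, turning point ~ {turning_point:.1f}k USD")
```

In the study itself, further controls (ECS, EP, FND, TRD) and a treaty-membership indicator would be added as extra columns of the design matrix, which is how the "series of targeted regressions" can purify the net effect of each variable.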
Procedia PDF Downloads 71
587 Mindful Self-Compassion Training to Alleviate Work Stress and Fatigue in Community Workers: A Mixed Method Evaluation
Authors: Catherine Begin, Jeanne Berthod, Manon Truchon
Abstract:
In Quebec, there are more than 8,000 community organizations throughout the province, representing more than 72,000 jobs. Working in a community setting involves several particularities (e.g., contact with the suffering of users, feelings of powerlessness, institutional pressure, unstable funding, etc.), which can put workers at risk of fatigue, burnout, and psychological distress. A 2007 study shows that 52% of community workers surveyed have a high psychological distress index. The Ricochet project, founded in 2019, is an initiative aimed at providing various care and services to community workers in the Quebec City region, with a global health approach. Within this program, mindful self-compassion training (MSC) is offered at a low cost. MSC is one of the effective strategies proposed in the literature to help prevent and reduce burnout. Self-compassion is the recognition that suffering, failure, and inadequacies are inherent in the human experience and that everyone, including oneself, deserves compassion. MSC training targets several behavioral, cognitive, and emotional learnings (e.g., motivating oneself with caring, better managing difficult emotions, promoting resilience, etc.). A mixed-method evaluation was conducted with the participants in order to explore the effects of the training on community workers in the Quebec City region. The participants were community workers (management or caregiver). 15 participants completed satisfaction and perceived impact surveys, and 30 participated in structured interviews. Quantitative results showed that participants were generally completely satisfied or satisfied with the training (94%) and perceived that the training allowed them to develop new strategies for dealing with stress (87%). Participants perceived effects on their mood (93%), their contact with others (80%), and their stress level (67%). Some of the barriers raised were scheduling constraints, length of training, and guilt about taking time for oneself. 
The qualitative results show that individuals experienced long-term benefits, as they were able to apply the tools they received during the training in their daily lives. Some barriers were noted, such as difficulty in getting away from work or problems with the employer, which prevented enrollment. Overall, the results of this evaluation support the use of mindful self-compassion (MSC) training among community workers. Future research could build on this evaluation by using a rigorous design and developing innovative ways to overcome the barriers raised.
Keywords: mindful self-compassion, community workers, work stress, burnout, wellbeing at work
Procedia PDF Downloads 119
586 Qualitative Profiling in Practice: The Italian Public Employment Services Experience
Authors: L. Agneni, F. Carta, C. Micheletta, V. Tersigni
Abstract:
A qualitative method for profiling jobseekers is needed to improve the quality of the Public Employment Services (PES) in Italy. This is why the National Agency for Active Labour Market Policies (ANPAL) decided to introduce a Qualitative Profiling Service into the activities carried out by local employment offices' operators. The service gathers information and data on the jobseeker's personal transition status through a semi-structured questionnaire administered to PES clients during the guidance interview. The questionnaire responses allow PES staff to identify, for each client, appropriate activities and policy measures to support reintegration into the labour market. The data and information gathered by the qualitative profiling tool are the following: frequency, modalities, and motivations for clients to apply to local employment offices; clients' expectations and skills; difficulties faced during previous working experiences; and strategies, actions undertaken, and channels activated for the job search. These data are used to assess jobseekers' personal and career characteristics and to measure their employability level (qualitative profiling index), in order to develop and deliver tailor-made action programmes for each client. This paper illustrates the use of the qualitative profiling service across the national territory and provides an overview of the main findings of the survey concerning the difficulties unemployed people face in finding a job and their perception of different aspects of the transition in the labour market. The survey involved over 10,000 jobseekers registered with the PES, most of them beneficiaries of the 'citizens' income', a specific active labour policy and social inclusion measure.
Furthermore, the data analysis allows jobseekers to be classified into groups of clients with similar features and behaviours, on the basis of socio-demographic variables, clients' expectations, needs, and the skills required for the profession in which they seek employment. Finally, the survey collects PES staff opinions and comments concerning clients' difficulties in finding a new job, as well as their strengths. This is a starting point for PES operators to define adequate strategies to facilitate jobseekers' access or reintegration into the labour market.
Keywords: labour market transition, public employment services, qualitative profiling, vocational guidance
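The grouping of jobseekers with similar features described above is commonly done with a clustering algorithm such as k-means on standardized profiling variables. The sketch below is purely illustrative: the three variables and all values are invented placeholders, not ANPAL survey data, and the choice of four clusters is arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 300
# hypothetical profiling variables: age, months unemployed, self-rated skill (1-5)
X = np.column_stack([
    rng.normal(40, 10, n),
    rng.exponential(12, n),
    rng.integers(1, 6, n).astype(float),
])

# standardize so no variable dominates the Euclidean distance, then cluster
X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_std)
print(np.bincount(labels))  # number of jobseekers assigned to each cluster
```

Cluster centroids (in the original units) could then be inspected to characterize each client group and match it to suitable active labour market measures.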
Procedia PDF Downloads 140
585 Electro-Hydrodynamic Effects Due to Plasma Bullet Propagation
Authors: Panagiotis Svarnas, Polykarpos Papadopoulos
Abstract:
Atmospheric-pressure cold plasmas continue to gain interest for various applications due to their unique properties: cost-efficient production, high chemical reactivity, low gas temperature, adaptability, and so on. Numerous designs have been proposed for producing these plasmas, in terms of electrode configuration, driving voltage waveform, and working gas(es). However, to exploit most of the advantages of these systems, the majority of designs are based on dielectric-barrier discharges (DBDs) in either the filamentary or the glow regime. A special category of DBD-based atmospheric-pressure cold plasmas is the so-called plasma jets, in which a carrier noble gas is guided by the dielectric barrier (usually a hollow cylinder) and left to flow into the atmospheric air, where a complicated hydrodynamic interplay takes place. Although it is now well established that these plasmas are generated by ionizing waves reminiscent in many ways of streamer propagation, they exhibit distinct characteristics better captured by the terms 'guided streamers' or 'plasma bullets'. These 'bullets' travel with supersonic velocities both inside the dielectric barrier and along the channel formed by the noble gas as it penetrates the air. The present work is devoted to the interpretation of the electro-hydrodynamic effects that take place downstream of the dielectric barrier opening, i.e., in the noble gas-air mixing area, where plasma bullets propagate under the influence of local electric fields in regions of variable noble gas concentration. Herein, we focus on the role of the local space charge and the residual ionic charge left behind after bullet propagation in modifying the gas flow field. The study communicates both experimental and numerical results, coupled in a comprehensive manner.
The plasma bullets are produced here by a custom device with a quartz tube as the dielectric barrier and two external ring-type electrodes driven by a sinusoidal high voltage at 10 kHz. Helium is fed to the tube, and schlieren photography is employed to map the flow field downstream of the tube orifice. The mixture mass, momentum, and energy (temperature) conservation equations are solved simultaneously with a helium transport equation, revealing the physical mechanisms that govern the experimental results. Namely, we deal with electro-hydrodynamic effects arising mainly from momentum transfer from atomic ions to neutrals: the atomic ions are left behind as residual charge after bullet propagation and gain energy from the locally created electric field. The electro-hydrodynamic force is eventually evaluated.
Keywords: atmospheric-pressure plasmas, dielectric-barrier discharges, schlieren photography, electro-hydrodynamic force
Procedia PDF Downloads 139
584 Pioneering Conservation of Aquatic Ecosystems under Australian Law
Authors: Gina M. Newton
Abstract:
Australia's Environment Protection and Biodiversity Conservation Act (EPBC Act) is the premier national law under which species and 'ecological communities' (i.e., ecosystem-like units) can be formally recognised and 'listed' as threatened across all jurisdictions. The listing process involves assessment against a range of criteria (similar to the IUCN process) to demonstrate conservation status (vulnerable, endangered, critically endangered, etc.) based on the best available science. Over the past decade in Australia, there has been a transition from almost solely terrestrial listings to the first aquatic threatened ecological community (TEC, or ecosystem) listings (e.g., River Murray, Macquarie Marshes, Coastal Saltmarsh, Salt-wedge Estuaries). All constitute large areas, and some span multiple state jurisdictions. Development of these conservation and listing advices has enabled, for the first time, a more forensic analysis of three key factors across a range of aquatic and coastal ecosystems: the contribution of invasive species to conservation status; how to demonstrate decline in 'ecological integrity' and attribute it to conservation status; and the identification of related priority conservation actions for management. There is increasing global recognition of the disproportionate degree of biodiversity loss within aquatic ecosystems. In Australia, legislative protection at the Commonwealth or State level remains one of the strongest conservation measures. Such laws have associated compliance mechanisms for breaches of protected status, and they trigger the need for environmental impact statements in applications for major developments (which may be denied). However, not all jurisdictions have such laws in place.
There remains much opposition to the listing of freshwater systems: for example, the River Murray (Australia's largest river) and the Macquarie Marshes (an internationally significant wetland) were both disallowed by parliament four months after formal listing, mainly due to a change of government, dissent from a major industry sector, and a 'loophole' in the law. In Australia, at least over immediate to medium-term time frames, invasive species (aliens, native pests, pathogens, etc.) appear to be the number one biotic threat to the biodiversity and the ecological function and integrity of our aquatic ecosystems; consequently, they should be a current priority for research, conservation, and management actions. Another key outcome of this analysis was the recognition that drawing together multiple lines of evidence to form a 'conservation narrative' is a more useful approach to assigning conservation status. This also helps address a glaring gap in long-term ecological data sets in Australia, which often precludes a more empirical, data-driven approach. An important lesson also emerged: while conservation must be underpinned by the best available scientific evidence, it remains a 'social and policy' goal rather than a 'scientific' goal. Communication, engagement, and 'politics' necessarily play a significant role in achieving conservation goals and need to be managed and resourced accordingly.
Keywords: aquatic ecosystem conservation, conservation law, ecological integrity, invasive species
Procedia PDF Downloads 132
583 Steel Concrete Composite Bridge: Modelling Approach and Analysis
Authors: Kaviyarasan D., Satish Kumar S. R.
Abstract:
India, vast in area and population and with great scope for international business, can expect strong growth of its roadway and railway networks. Numerous rail-cum-road bridges have been constructed across many major rivers in India, and some are getting very old, so there is a growing need to repair existing bridges or build new ones. Analysis and design of such bridges typically follow conventional procedures and end up with heavy, uneconomical sections. Such heavy steel bridges, when subjected to strong seismic shaking, are more likely to fail through instability because their members are rigid and stocky rather than flexible enough to dissipate energy. This work is a collective study of research on truss bridges and steel-concrete composite truss bridges, presenting methods of analysis and tools for numerical and analytical modelling that evaluate their seismic behaviour and collapse mechanisms. To ascertain the inelastic, nonlinear behaviour of such structures, static pushover analysis is generally adopted at the research level. Although static pushover analysis is now used extensively for framed steel and concrete buildings to study their lateral behaviour, findings obtained for buildings cannot be applied directly to bridges, because bridges have completely different performance requirements, behaviour, and typology. Long-span steel bridges are mostly truss bridges. Because a truss bridge comprises many members and connections, the system does not fail suddenly with a single event or the failure of one member; failure usually initiates in one member and progresses gradually to the next, and so on, under further loading.
This kind of progressive collapse of a truss bridge depends on many factors, of which the live load distribution and the span-to-length ratio are the most significant. Ultimate collapse is in any case governed by buckling of the compression members. For regular bridges, single-step pushover analysis gives results close to those of nonlinear dynamic analysis. But for a complicated bridge, such as a heavy-class steel bridge, a skewed bridge, or a bridge with complicated dynamic behaviour, a nonlinear analysis capturing the progressive yielding and collapse pattern is mandatory. With knowledge of the post-elastic behaviour of bridges and advancements in computational facilities, the current state of bridge analysis and design has moved to ascertaining the performance levels of bridges based on the damage caused by seismic shaking. This is because building performance levels deal mostly with life safety and collapse prevention, whereas for bridges the concern is mostly the extent of damage and how quickly it can be repaired, with or without disturbing traffic, after a strong earthquake event. The paper compiles the wide spectrum of approaches, from modeling to analysis, for steel-concrete composite truss bridges in general.
Keywords: bridge engineering, performance based design of steel truss bridge, seismic design of composite bridge, steel-concrete composite bridge
Procedia PDF Downloads 185
582 Cancer Survivor's Adherence to Healthy Lifestyle Behaviours; Meeting the World Cancer Research Fund/American Institute for Cancer Research Recommendations, a Systematic Review and Meta-Analysis
Authors: Daniel Nigusse Tollosa, Erica James, Alexis Hurre, Meredith Tavener
Abstract:
Introduction: Lifestyle behaviours such as a healthy diet, regular physical activity, and maintaining a healthy weight are essential for cancer survivors to improve quality of life and longevity. However, no study has synthesized cancer survivors' adherence to healthy lifestyle recommendations. The purpose of this review was to collate existing data on the prevalence of adherence to healthy behaviours and produce pooled estimates among adult cancer survivors. Method: Multiple databases (Embase, Medline, Scopus, Web of Science and Google Scholar) were searched for relevant articles published since 2007 reporting cancer survivors' adherence to more than two lifestyle behaviours based on the WCRF/AICR recommendations. The pooled prevalence of adherence to single and multiple behaviours (operationalized as adherence to more than 75% (3/4) of the health behaviours included in a particular study) was calculated using a random effects model. Subgroup analysis of adherence to multiple behaviours was undertaken according to mean survival years and year of publication. Results: A total of 3322 articles were retrieved through our search strategies. Of these, 51 studies matched our inclusion criteria, presenting data from 2,620,586 adult cancer survivors. The highest prevalence of adherence was observed for smoking (pooled estimate: 87%, 95% CI: 85%, 88%) and alcohol intake (pooled estimate: 83%, 95% CI: 81%, 86%), and the lowest was for fiber intake (pooled estimate: 31%, 95% CI: 21%, 40%). Thirteen studies reported the proportion of cancer survivors adherent to multiple healthy behaviours (all used a simple summative index method), whereby the prevalence of adherence ranged from 7% to 40% (pooled estimate: 23%, 95% CI: 17% to 30%).
Subgroup analysis suggests that short-term survivors (< 5 years survival time) had relatively better adherence to multiple behaviours (pooled estimate: 31%, 95% CI: 27%, 35%) than long-term (> 5 years survival time) cancer survivors (pooled estimate: 25%, 95% CI: 14%, 36%). Pooling of estimates according to the year of publication (since 2007) also suggests an increasing trend of adherence to multiple behaviours over time. Conclusion: Overall, adherence to multiple lifestyle behaviours was poor, and it is a greater concern for long-term than for short-term cancer survivors. Cancer survivors need to comply with healthy lifestyle recommendations related to physical activity, fruit and vegetable, fiber, red/processed meat and sodium intake.
Keywords: adherence, lifestyle behaviours, cancer survivors, WCRF/AICR
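The pooled estimates above come from a random effects model. As an illustration of how such a pooled prevalence could be computed, the sketch below implements the DerSimonian-Laird estimator for proportions; the abstract does not name the exact estimator used, so that choice is an assumption here, and the example study counts are invented.

```python
import math

def pooled_prevalence(studies):
    """Random-effects pooled prevalence via the DerSimonian-Laird estimator.

    studies: list of (adherent_count, sample_size) tuples, one per study.
    Returns (pooled_estimate, (ci_low, ci_high)) with a 95% Wald interval.
    """
    p = [e / n for e, n in studies]                            # per-study prevalence
    v = [pi * (1 - pi) / n for pi, (_, n) in zip(p, studies)]  # within-study variance
    w = [1 / vi for vi in v]                                   # fixed-effect weights
    p_fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    q = sum(wi * (pi - p_fixed) ** 2 for wi, pi in zip(w, p))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(studies) - 1)) / c)              # between-study variance
    w_re = [1 / (vi + tau2) for vi in v]                       # random-effects weights
    p_re = sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return p_re, (p_re - 1.96 * se, p_re + 1.96 * se)

# Hypothetical data: three studies reporting adherent counts out of n
est, (ci_lo, ci_hi) = pooled_prevalence([(30, 100), (55, 200), (20, 80)])
```

The pooled estimate always lies between the smallest and largest per-study prevalences, and the between-study variance tau² widens the interval when studies disagree.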
Procedia PDF Downloads 183
581 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language
Authors: Ghazal Faraj, András Micsik
Abstract:
Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, provides well-defined shapes expressed as RDF graphs, named "shape graphs". These shape graphs validate other resource description framework (RDF) graphs, which are called "data graphs". The structural features of SHACL permit generating a variety of conditions to evaluate string matching patterns, value types, and other constraints. Moreover, the SHACL framework supports high-level validation by expressing more complex conditions in languages such as the SPARQL Protocol and RDF Query Language (SPARQL). SHACL has two parts: SHACL Core and SHACL-SPARQL. SHACL Core includes all shapes that cover the most frequent constraint components, while SHACL-SPARQL is an extension that allows SHACL to express more complex customized constraints. Validating the efficacy of dataset mapping is an essential component of data reconciliation mechanisms, as enhancing the linking of different datasets is an ongoing process. The conventional validation methods are the semantic reasoner and SPARQL queries. The former checks formalization errors and data type inconsistency, while the latter validates data contradictions. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert. However, this methodology is time-consuming and inaccurate, as it does not test the mapping model comprehensively. Therefore, there is a serious need for a new methodology that covers all validation aspects for linking and mapping diverse datasets. Our goal is to develop a new approach that achieves optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) conceptual reference model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both source and target ontologies was required.
Subsequently, the proper environment to run SHACL and its shape graphs was determined. As a case study, we applied SHACL to a CIDOC-CRM dataset after running the Pellet reasoner via the Protégé program. The applied validation falls under multiple categories: a) data type validation, which checks whether the source data is mapped to the correct data type, for instance, whether a birthdate is typed as xsd:dateTime and linked to a Person entity via the crm:P82a_begin_of_the_begin property; b) data integrity validation, which detects inconsistent data, for instance, inspecting whether a person's birthdate occurred before any of the linked event creation dates. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for the various categories of validation tasks. The next step is to establish a comprehensive validation model and generate SHACL shapes automatically.
Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping
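A shape graph for the data type check described above might look like the following sketch. It paraphrases the abstract's own example (a birthdate typed xsd:dateTime reached via crm:P82a_begin_of_the_begin); the shape name and the ex: namespace are hypothetical, and the property path is deliberately simplified rather than taken from the actual mapping.

```turtle
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/shapes/> .

# Data type validation: each E21_Person's birthdate value must be an
# xsd:dateTime literal (simplified path, following the abstract's example).
ex:PersonBirthdateShape
    a sh:NodeShape ;
    sh:targetClass crm:E21_Person ;
    sh:property [
        sh:path crm:P82a_begin_of_the_begin ;
        sh:datatype xsd:dateTime ;
        sh:severity sh:Violation ;
    ] .
```

A SHACL processor run against the data graph would then report each person whose birthdate is missing a datatype or typed, say, as a plain string.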
Procedia PDF Downloads 253
580 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: Gaelle Candel, David Naccache
Abstract:
t-SNE is an embedding method that the data science community has widely adopted. It supports two main tasks: displaying results by coloring items according to item class or feature value, and forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where a cluster's area is proportional to its size in number of items, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is computed but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of the data. While this approach is highly scalable, points could be mapped at the exact same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once with the newly obtained embedding.
The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity would be reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing observation of the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets' dynamics.
Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning
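The complexity and memory figures above can be checked with a few lines of arithmetic. The sketch below is illustrative only (the function name and example numbers are invented) and assumes n is divisible by k:

```python
def split_embedding_cost(n, k):
    """Pairwise-affinity work and memory for embedding n points at once
    versus sequentially in k subsets of n/k points each."""
    m = n // k                       # subset size (assumes n divisible by k)
    full_work, full_mem = n * n, n * n
    split_work = k * m * m           # k runs of (n/k)^2 pairs = n^2 / k total
    split_mem = 2 * m * m            # current + support embedding affinities
    return full_work, split_work, full_mem, split_mem

# e.g. n = 10,000 points in k = 10 subsets: work drops from 10^8 to 10^7
# pairwise affinities, and peak memory from 10^8 to 2x10^6 entries.
```

This mirrors the abstract's claim: per-embedding cost is unchanged, total work falls by a factor of k, and memory doubles relative to a single small embedding.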
Procedia PDF Downloads 144
579 Automatic Segmentation of 3D Tomographic Images Contours at Radiotherapy Planning in Low Cost Solution
Authors: D. F. Carvalho, A. O. Uscamayta, J. C. Guerrero, H. F. Oliveira, P. M. Azevedo-Marques
Abstract:
The creation of vector contour slices (ROIs) on body silhouettes of oncologic patients is an important step in radiotherapy planning in clinics and hospitals to ensure the accuracy of oncologic treatment. The radiotherapy planning of patients is performed by complex software packages focused on the analysis of tumor regions, protection of organs at risk (OARs), and calculation of radiation doses for anomalies (tumors). These packages are supplied by a few manufacturers and run on sophisticated workstations with vector processing, costing approximately twenty thousand dollars. The Brazilian project SIPRAD (Radiotherapy Planning System) presents a proposal adapted to the reality of emerging countries, which generally do not have the monetary means to acquire radiotherapy planning workstations, resulting in waiting queues for new patients' treatment. The SIPRAD project is composed of a set of integrated and interoperable software tools able to execute all stages of radiotherapy planning on simple personal computers (PCs) in place of the workstations. The goal of this work is to present a computationally feasible image processing technique able to perform automatic contour delineation of patient body silhouettes (SIPRAD-Body). The SIPRAD-Body technique operates on grayscale tomography slices, extending their use with a greedy algorithm in three dimensions. SIPRAD-Body creates an irregular polyhedron with an adapted Canny edge algorithm, without the use of preprocessing filters such as contrast and brightness adjustment. Compared with existing solutions, the SIPRAD-Body technique reaches a contour similarity of at least 78%. Four criteria are used for this comparison: contour area, contour length, difference between the mass centers, and the Jaccard index. SIPRAD-Body was tested on a set of oncologic exams provided by the Clinical Hospital of the University of Sao Paulo (HCRP-USP).
The exams came from patients of different ethnicities, ages, tumor severities, and body regions. Even in services that already have workstations, it is possible to run SIPRAD on PCs alongside them, because the two systems communicate through the DICOM protocol, which improves workflow. Therefore, the conclusion is that the SIPRAD-Body technique is feasible, given its degree of similarity, both for new radiotherapy planning services and for existing ones.
Keywords: radiotherapy, image processing, DICOM RT, Treatment Planning System (TPS)
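Of the four comparison criteria, the Jaccard index is the one most easily sketched in code. The example below is a generic illustration (not the SIPRAD implementation), representing each contour's interior as a set of pixel coordinates:

```python
def jaccard_index(region_a, region_b):
    """Jaccard similarity of two contoured regions, each given as a set
    of (row, col) pixel coordinates lying inside the contour."""
    union = region_a | region_b
    if not union:                    # two empty regions: treat as identical
        return 1.0
    return len(region_a & region_b) / len(union)

# Hypothetical regions overlapping in two pixels:
a = {(0, 0), (0, 1), (1, 0)}
b = {(0, 0), (1, 0), (1, 1)}
# jaccard_index(a, b) -> 2 shared / 4 total = 0.5
```

A value of 1.0 means the automatically delineated contour encloses exactly the same pixels as the reference contour; the 78% similarity reported above would correspond to substantial but imperfect overlap under such overlap-style measures.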
Procedia PDF Downloads 296
578 Purification of Bacillus Lipopeptides for Diverse Applications
Authors: Vivek Rangarajan, Kim G. Clarke
Abstract:
Bacillus lipopeptides are biosurfactants with wide-ranging applications in the medical, food, agricultural, environmental and cosmetic industries. They are produced as a mix of three families, surfactin, iturin and fengycin, each comprising a large number of homologues of varying functionalities. Consequently, the method and degree of purification of the lipopeptide cocktail become particularly important if the functionality of the lipopeptide end-product is to be maximized for the specific application. However, downstream processing of Bacillus lipopeptides is particularly challenging due to the subtle variations observed among the different lipopeptide homologues and isoforms. To date, the most frequently used lipopeptide purification operations have been acid precipitation, solvent extraction, membrane ultrafiltration, adsorption and size exclusion. RP-HPLC (reverse phase high pressure liquid chromatography) also has potential for fractionation of the lipopeptide homologues. In the studies presented here, membrane ultrafiltration and RP-HPLC were evaluated for lipopeptide purification to different degrees of purity for maximum functionality. Batch membrane ultrafiltration using 50 kDa polyether sulphone (PES) membranes resulted in lipopeptide recovery of about 68% for surfactin and 82% for fengycin. The recovery was further improved to 95% by using size-conditioned lipopeptide micelles. Conditioning of the lipopeptides with Ca2+ ions resulted in uniformly sized micelles with an average size of 96.4 nm and a polydispersity index of 0.18. The size conditioning also facilitated removal of impurities (molecular weight ranging between 2335-3500 Da) through operation of the system in dia-filtration mode, in a way similar to salt removal from proteins by dialysis. The resulting purified lipopeptide was devoid of macromolecular impurities and could ideally suit applications in the cosmetic and food industries.
Enhanced purification using RP-HPLC was carried out in an analytical C18 column, with the aim of fractionating the lipopeptides into their constituent homologues. The column was eluted with a mobile phase comprising acetonitrile and water, over an acetonitrile gradient of 35%-80% in 70 minutes. The gradient elution program resulted in as many as 41 fractions of individual lipopeptide homologues. Efficacy tests of these fractions against fungal phytopathogens showed that the first 21 fractions, identified as homologues of iturins and fengycins, displayed the highest antifungal activities, suitable for biocontrol in the agricultural industry. Thus, in the current study, the downstream processing of lipopeptides leading to tailor-made products for selective applications was demonstrated using two major downstream unit operations.
Keywords: bacillus lipopeptides, membrane ultrafiltration, purification, RP-HPLC
Procedia PDF Downloads 205
577 Interfacial Reactions between Aromatic Polyamide Fibers and Epoxy Matrix
Authors: Khodzhaberdi Allaberdiev
Abstract:
In order to understand the interactions at the interface between polyamide fibers and epoxy matrix in fiber-reinforced composites, the industrial aramid fibers armos, svm, and terlon were investigated using individual epoxy matrix components. The epoxies were diglycidyl ether of bisphenol A (DGEBA) and tri- and diglycidyl derivatives of m,p-amino-, m,p-oxy-, and o,m,p-carboxybenzoic acids; the models were the curing agent aniline and N-di(oxyethylphenoxy)aniline, a compound that depicts the structure of the primary addition reaction of the amine to the epoxy resin. The chemical structure of the surface of untreated and treated polyamide fibers was analyzed using Fourier transform infrared spectroscopy (FTIR). The impregnation of fibers with epoxy matrix components and N-di(oxyethylphenoxy)aniline was carried out by heating at 150˚C (6 h). The optimum fiber loading is 65%. The result of thermal treatment is the formation of covalent bonds, derived from combined homopolymerization and crosslinking mechanisms in the interfacial region between the epoxy resin and the fiber surface. The reactivity of epoxy resins at the interface in microcomposites (MC) also depends on processing aids applied to the fiber surface and on absorbed moisture. The influence of these factors is evidenced by the epoxy group conversion values of terlon samples impregnated with DGEBA: industrial, dried (in vacuum) and purified samples gave 5.20%, 4.65% and 14.10%, respectively. The same tendency is observed for svm and armos fibers. The changes in surface composition of these MC were monitored by X-ray photoelectron spectroscopy (XPS). In the case of the purified fibers, the functional groups of the fibers act as both a catalyst and a curing agent for the epoxy resin. It is found that the epoxy group conversion value for reinforced formulations depends on the nature of the aromatic polyamide and decreases in the order: armos > svm > terlon. This difference is due to the structural characteristics of the fibers.
Interfacial interactions were also examined between polyglycidyl esters of substituted benzoic acids and polyamide fibers in the MC. It is found that the structure and isomerism of the epoxides also influence the interfacial interactions in these systems. The IR spectrum of fibers impregnated with aniline showed that the polyamide fibers do not react appreciably with aniline. FTIR results for fibers treated with N-di(oxyethylphenoxy)aniline revealed dramatic changes in the IR characteristics of the OH groups of the amino alcohol. These observations indicate hydrogen bonding and covalent interactions between the amino alcohol and functional groups of the fibers. This result is also confirmed by the appearance of an exo peak on the Differential Scanning Calorimetry (DSC) curve of the MC. Finally, a theoretical evaluation of non-covalent interactions between individual epoxy matrix components and fibers has been performed using benzanilide and its derivative containing the benzimidazole moiety as models of terlon and of svm and armos, respectively. Quantum-topological analysis also demonstrated the existence of hydrogen bonds between the amide group of the models and epoxy matrix components. All the results indicate that not only covalent but also non-covalent interactions exist at the interface between polyamide fibers and epoxy matrix during the preparation of MC.
Keywords: epoxies, interface, modeling, polyamide fibers
Procedia PDF Downloads 266
576 The Power-Knowledge Relationship in the Italian Education System between the 19th and 20th Century
Authors: G. Iacoviello, A. Lazzini
Abstract:
This paper focuses on the development of the study of accounting in the Italian education system between the 19th and 20th centuries. It also focuses on the subsequent formation of a scientific and experimental forma mentis that would prepare students for administrative and managerial activities in industry, commerce and public administration. From a political perspective, the period was characterized by two dominant movements - liberalism (1861-1922) and fascism (1922-1945) - that deeply influenced accounting practices and the entire Italian education system. The materials used in the study include both primary and secondary sources. The primary sources used to inform this study are numerous original documents issued from 1890-1935 by the government and maintained in the Historical Archive of the State in Rome. The secondary sources have supported both the development of the theoretical framework and the definition of the historical context. This paper assigns to the educational system the role of cultural producer. Foucauldian analysis identifies the problem confronted by the critical intellectual in finding a way to deploy knowledge through a 'patient labour of investigation' that highlights the contingency and fragility of the circumstances that have shaped current practices and theories. Education can be considered a powerful and political process providing students with values, ideas, and models that they will subsequently use to discipline themselves, remaining as close to them as possible. It is impossible for power to be exercised without knowledge, just as it is impossible for knowledge not to engender power. The power-knowledge relationship can be usefully employed for explaining how power operates within society, how mechanisms of power affect everyday lives. Power is employed at all levels and through many dimensions including government. Schools exercise ‘epistemological power’ – a power to extract a knowledge of individuals from individuals. 
Because knowledge is a key element in the operation of power, the procedures applied to the formation and accumulation of knowledge cannot be considered neutral instruments for the presentation of the real. Consequently, the same institutions that produce and spread knowledge can be considered part of the 'power-knowledge' interrelation. Individuals have become both objects and subjects in the development of knowledge. Just as education plays a fundamental role in shaping all aspects of communities, the structural changes resulting from economic, social and cultural development affect educational systems. Analogously, the important changes related to social and economic development required legislative intervention to regulate the functioning of different areas of society. Knowledge can become a means of social control used by the government to manage populations. It can be argued that the evolution of Italy's education system is coherent with the idea that power and knowledge do not exist independently but instead are coterminous. This research aims to reduce the gap in the literature by analysing the role of the state in the development of accounting education in Italy.
Keywords: education system, government, knowledge, power
Procedia PDF Downloads 139
575 Internal Concept of Integrated Health by Agrarian Society in Malagasy Highlands for the Last Century
Authors: O. R. Razanakoto, L. Temple
Abstract:
Living in a least developed country, Malagasy society has a weak capacity to internalize progress, including health concerns. From the arrival in the fifteenth century of the Arabic script called Sorabe, which was mainly reserved for the aristocracy, until the colonial era beginning at the end of the nineteenth century, which popularized the current script of occidental civilization, manuscripts dealing with scientific, or at least academic, issues have emerged only slowly. As a result, the way of life of Malagasy communities is not yet well enough documented to allow a precise understanding of the major concerns, reasons, and purposes of the farmers who compose them. A question arises from the literature: how does the Malagasy community, dominated by agrarian society, conceive the conservation of its wellbeing? This study aims to emphasize the scope and limits of the 'One Health' concept, or Health Integrated Approach (HIA), which evolves at a global scale, with regard to the specific context of local Malagasy smallholder farms. It is expected to identify how this society represents the linked risks and mechanisms between human health, animal health, plant health, and ecosystem health within the last 100 years. To do so, the framework for conducting systematic reviews in agricultural research has been deployed to access the available literature. This task has been coupled with the reading of articles that are not indexed by online scientific search engines but that mention parts of the history of agriculture and of farmers in Madagascar. This literature review has documented the interactions between human illnesses and those affecting animals and plants (bred or wild), along with any unexpected event (ecological or economic) that has modified the equilibrium of the ecosystem or disturbed the livelihoods of agrarian communities.
Besides, drivers that may either accentuate or attenuate the devastating effects of these illnesses and changes were revealed. The study has established that the reasons for human worries are not only physiological. Among the factors that regulate global health, the food system and contemporary medicine have contributed to the improvement of life expectancy from 55 to 63 years in Madagascar during the last 50 years. However, threats to global health still occur. New human or animal illnesses and livestock or plant pathologies or pests may also appear, whereas ancient illnesses that were supposed to have disappeared may return. This study has highlighted how important the risks associated with the impact of unmanaged externalities that weaken community life are. Many risks, and also solutions, come from abroad and have long-term effects even when they occur as one-off events. Thus, a constructivist strategy is suggested for the global 'One Health' concept through the recording of local facts. This approach should facilitate the exploration of methodological pathways and the identification of relevant indicators for research related to HIA.
Keywords: agrarian system, health integrated approach, history, Madagascar, resilience, risk
Procedia PDF Downloads 110
574 Histological Grade Concordance between Core Needle Biopsy and Corresponding Surgical Specimen in Breast Carcinoma
Authors: J. Szpor, K. Witczak, M. Storman, A. Orchel, D. Hodorowicz-Zaniewska, K. Okoń, A. Klimkowska
Abstract:
Core needle biopsy (CNB) is well established as an important diagnostic tool in diagnosing breast cancer, and it is now considered the initial method of choice for diagnosing breast disease. In comparison to fine needle aspiration (FNA), CNB provides more architectural information, allowing for the evaluation of prognostic and predictive factors for breast cancer, including histological grade, one of three prognostic factors used to calculate the Nottingham Prognostic Index. Several studies have previously described the concordance rate between CNB and surgical excision specimens in the determination of histological grade (HG). The concordance rate previously ascribed to overall grade varies widely across the literature, ranging from 59% to 91%. The aim of this study is to see how the data look in material at the authors' institution and how the results compare to those described in previous literature. The study population included 157 women with a breast tumor who underwent a core needle biopsy for breast carcinoma and a subsequent surgical excision of the tumor. Both materials were evaluated for the determination of histological grade (on a scale from 1 to 3). HG was assessed only in core needle biopsies containing at least 10 well-preserved HPF with invasive tumor. The degree of concordance between CNB and surgical excision specimen in the determination of tumor grade was assessed by Cohen's kappa coefficient. The level of agreement between core needle biopsy and surgical resection specimen for overall histologic grading was 73% (113 of 155 cases). CNB correctly predicted the grade of the surgical excision specimen in 21 cases for grade 1 tumors (kappa coefficient κ = 0.525, 95% CI (0.3634; 0.6818)), 52 cases for grade 2 (kappa coefficient κ = 0.5652, 95% CI (0.458; 0.667)) and 40 cases for grade 3 tumors (kappa coefficient κ = 0.6154, 95% CI (0.4862; 0.7309)). The highest level of agreement was observed in grade 3 malignancies.
In 9 of 42 (21%) discordant cases, the grade was higher in the CNB than in the surgical excision, corresponding to 6% of all cases. These results correspond to those noted in the literature, showing that underestimation occurs more frequently than overestimation. This study shows that the authors' institution's histologic grading of CNBs and surgical excisions shows a fairly good correlation and is consistent with findings in previous reports. Despite the inevitable limitations of CNB, CNB is an effective method for diagnosing breast cancer and managing treatment options. Assessment of tumour grade by CNB is useful for the planning of treatment, so in the authors' opinion it is worth implementing in daily practice.
Keywords: breast cancer, concordance, core needle biopsy, histological grade
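For readers unfamiliar with the agreement statistic used above, Cohen's kappa can be computed from paired grades in a few lines. The sketch below is a generic illustration with invented example grades, not the study's data:

```python
from collections import Counter

def cohens_kappa(grades_a, grades_b):
    """Cohen's kappa for paired ratings, e.g. CNB grade vs. excision grade.

    kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected
    is the chance agreement implied by each rater's marginal frequencies.
    """
    n = len(grades_a)
    p_obs = sum(a == b for a, b in zip(grades_a, grades_b)) / n
    freq_a, freq_b = Counter(grades_a), Counter(grades_b)
    p_exp = sum(freq_a[g] * freq_b[g] for g in freq_a.keys() | freq_b.keys()) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Invented example: four tumors graded by biopsy and then by excision
kappa = cohens_kappa([1, 2, 2, 3], [1, 2, 3, 3])  # -> 7/11, about 0.64
```

Kappa corrects the raw agreement percentage for agreement expected by chance, which is why the 73% raw agreement above translates into kappa values in the 0.5-0.6 range.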
Procedia PDF Downloads 229
573 Analysis of Scholarly Communication Patterns in Korean Studies
Authors: Erin Hea-Jin Kim
Abstract:
This study aims to investigate scholarly communication patterns in Korean studies, which focus on all aspects of Korea, including history, culture, literature, politics, society, economics, religion, and so on. The field is called 'national study' or 'home study' when the subject of study is one's own country, whereas it is called 'area study' when the subject is another country, i.e., studied from outside of Korea. Understanding the structure of scholarly communication in Korean studies is important, since the motivations, procedures, results, or outcomes of individual studies may be affected by the cooperative relationships that appear in the communication structure. To this end, we collected 1,798 articles with the (author or index) keyword 'Korean' published in 2018 from the Scopus database and extracted the institution and country of the authors using a text mining technique. A total of 96 countries, including South Korea, were identified. We then constructed a co-authorship network based on the countries identified. The indicators of social network analysis (SNA), co-occurrences, and cluster analysis were used to measure the activity and connectivity of participation in collaboration in Korean studies. As a result, the highest frequencies of collaboration appear in the following order: S. Korea with the United States (603), S. Korea with Japan (146), S. Korea with China (131), S. Korea with the United Kingdom (83), and China with the United States (65). This means that the most active participants are S. Korea and the USA. The highest ranks in the role of mediator, measured by betweenness centrality, appear in the following order: United States (0.165), United Kingdom (0.045), China (0.043), Japan (0.037), Australia (0.026), and South Africa (0.023). These results show that these countries contribute to connectivity in Korean studies. We found two major communities in the co-authorship network.
Asian countries and America belong to one community, and the United Kingdom and European countries belong to the other. Korean studies have a long history, and the field has grown since the Japanese colonization period. However, Korean studies have never been investigated by digital content analysis. The contributions of this study are an analysis of co-authorship in Korean studies from a global perspective based on digital content, which, to our knowledge, has not been attempted so far, and the suggestion of ideas on how to analyze humanities disciplines such as history, literature, or Korean studies by text mining. The limitation of this study is that the scholarly data we collected did not cover all domestic journals, because we only gathered scholarly data from Scopus. There are thousands of domestic journals not indexed in Scopus that would be relevant to national studies but could not be collected.
Keywords: co-authorship network, Korean studies, Koreanology, scholarly communication
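The country-pair collaboration counts reported above can be derived from per-paper author country lists by counting unordered pairs. The sketch below is a generic illustration (the function name and sample data are invented, not taken from the study):

```python
from collections import Counter
from itertools import combinations

def country_pair_counts(papers):
    """Count co-authorship links between countries.

    papers: iterable of per-paper country lists (one entry per author).
    Returns a Counter mapping sorted country pairs to the number of
    papers on which both countries appear together.
    """
    edges = Counter()
    for countries in papers:
        # deduplicate so several authors from one country count once per paper
        for pair in combinations(sorted(set(countries)), 2):
            edges[pair] += 1
    return edges

# Invented sample: three papers with author affiliations by country code
papers = [["KR", "US"], ["KR", "US", "JP"], ["KR", "JP"]]
# country_pair_counts(papers) -> KR-US: 2, JP-KR: 2, JP-US: 1
```

The resulting weighted edge list is exactly what network measures such as betweenness centrality and community detection are then computed on.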
Procedia PDF Downloads 158
572 Investigation of the Effects of 10-Week Nordic Hamstring Exercise Training and Subsequent Detraining on Plasma Viscosity and Oxidative Stress Levels in Healthy Young Men
Authors: H. C. Ozdamar, O. Kilic-Erkek, H. E. Akkaya, E. Kilic-Toprak, M. Bor-Kucukatay
Abstract:
Nordic hamstring exercise (NHE) is used to increase hamstring muscle strength and to prevent injuries. The aim of this study was to reveal the acute and long-term effects of 10 weeks of NHE, followed by 5 and 10 weeks of detraining, on anthropometric measurements, flexibility, anaerobic power, muscle architecture, damage, fatigue, oxidative stress, plasma viscosity (PV), and blood lactate levels. Forty sedentary, healthy male volunteers underwent 10 weeks of progressive NHE followed by 5 and 10 weeks of detraining. Muscle architecture was determined by ultrasonography and stiffness by strain elastography. Anaerobic power was assessed by double-foot standing long jump and vertical jump tests, and flexibility by sit-and-reach and hamstring flexibility tests. Creatine kinase activity and oxidant/antioxidant parameters were measured from venous blood with commercial kits, whereas PV was determined using a cone-plate viscometer. Blood lactate was measured from the fingertip. NHE led to weight loss, an effect reversed by 5 weeks of detraining. Exercise caused an increase in knee angles measured by goniometer, which was not affected by detraining. The 10-week NHE caused an increase in anaerobic performance that was partially reversed upon detraining. NHE resulted in increases in biceps femoris long head area and pennation angle, which were reversed by 10 weeks of detraining. Blood lactate levels, muscle pain, and fatigue were increased after each exercise session. NHE did not change oxidant/antioxidant parameters; 5 weeks of detraining resulted in increases in total oxidant capacity (TOC) and oxidative stress index (OSI), and 10 weeks of detraining reduced these parameters. Acute exercise caused a reduction in PV at weeks 1 through 10. Pre-exercise PV measured in the 10th week was lower than the basal value. Detraining caused an increase in PV. These results may guide the selection of exercise type to increase performance and muscle strength.
Knowing how much of the gains will be lost after a period of detraining can help raise awareness of the importance of exercise continuity. This work was supported by the PAU Scientific Research Projects Coordination Unit (Project number: 2018SABE034).
Keywords: anaerobic power, detraining, Nordic hamstring exercise, oxidative stress, plasma viscosity
571 Molecular Dynamics Simulation of Realistic Biochar Models with Controlled Microporosity
Authors: Audrey Ngambia, Ondrej Masek, Valentina Erastova
Abstract:
Biochar is an amorphous, carbon-rich material generated from the pyrolysis of biomass, with diverse properties and functionality. Biochar has proven applications in the treatment of flue gas and of organic and inorganic pollutants in soil and water/wastewater, as a result of its multiple surface functional groups and porous structure. These properties have also shown potential in energy storage and carbon capture. The availability of diverse sources of biomass for producing biochar has increased interest in it as a sustainable and environmentally friendly material. The properties and porous structure of biochar vary depending on the type of biomass and the high heat treatment temperature (HHT). Biochars produced at HHT between 400 °C and 800 °C generally have lower H/C and O/C ratios, and higher porosities, larger pore sizes, and higher surface areas with increasing temperature. While all of this is known experimentally, there is little knowledge of the role porous structure and functional groups play in processes occurring at the atomistic scale, which are extremely important for optimizing biochar for applications, especially the adsorption of gases. Atomistic simulation methods have shown potential for generating such amorphous materials; however, most available models are composed only of carbon atoms or graphitic sheets, which are either very dense or contain simple slit pores, all of which ignore the important role of heteroatoms such as O, N, and S, and of pore morphology. Hence, developing realistic models that integrate these parameters is important for understanding their role in governing adsorption mechanisms, which will help guide the design and optimization of biochar materials for target applications. In this work, molecular dynamics simulations in the isobaric ensemble are used to generate realistic biochar models, taking into account experimentally determined H/C, O/C, and N/C ratios, aromaticity, micropore size range, micropore volumes, and true densities of biochars.
A pore-generation approach was developed using virtual atoms, each a Lennard-Jones sphere of varying van der Waals radius and softness. Its interaction with the biochar matrix via a soft-core potential allows the creation of pores with rough surfaces, while varying the van der Waals radius parameter gives control over the pore-size distribution. We focused on microporosity, creating average pore sizes of 0.5 – 2 nm in diameter and pore volumes in the range of 0.05 – 1 cm³/g, which corresponds to experimental gas-adsorption micropore sizes of amorphous porous biochars. Realistic biochar models with surface functionalities, micropore size distributions, and pore morphologies were developed, and they could aid the study of adsorption processes in confined micropores.
Keywords: biochar, heteroatoms, micropore size, molecular dynamics simulations, surface functional groups, virtual atoms
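The key ingredient of such a virtual-atom pore generator is a soft-core pair potential that stays finite at zero separation. The sketch below contrasts it with plain Lennard-Jones; the Beutler-style functional form and all parameter values are assumptions for illustration, not the parameters used in the study.

```python
def lj(r, eps=1.0, sigma=1.0):
    """Standard 12-6 Lennard-Jones potential: diverges as r -> 0."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def softcore_lj(r, eps=1.0, sigma=1.0, lam=0.5, alpha=0.5):
    """Soft-core variant (Beutler-style form, assumed here): finite even at
    r = 0, so a virtual sphere can be grown inside the dense biochar matrix
    to carve a pore without numerical blow-up. lam couples the interaction
    (lam = 1 recovers plain LJ); alpha sets the 'softness' of the core."""
    denom = alpha * (1.0 - lam) ** 2 + (r / sigma) ** 6
    return 4.0 * eps * lam * (1.0 / denom ** 2 - 1.0 / denom)
```

Tuning sigma shifts the resulting pore size, while ramping lam toward 1 gradually inflates the repulsive sphere, which is one common way such virtual-particle generators are driven.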
570 Nanoparticle Exposure Levels in Indoor and Outdoor Demolition Sites
Authors: Aniruddha Mitra, Abbas Rashidi, Shane Lewis, Jefferson Doehling, Alexis Pawlak, Jacob Schwartz, Imaobong Ekpo, Atin Adhikari
Abstract:
Working or living close to demolition sites can increase the risk of dust-related health problems. Demolition of concrete buildings may produce crystalline silica dust, which is associated with a broad range of respiratory diseases, including silicosis and lung cancer. Previous studies demonstrated significant associations between demolition dust exposure and increases in the incidence of mesothelioma, or asbestos cancer. Dust is a generic term for minute solid particles, typically <500 µm in diameter. Dust particles at demolition sites vary over a wide range of sizes. Larger particles tend to settle out of the air, whereas the smaller and lighter solid particles remain dispersed in the air for long periods and pose sustained exposure risks. Submicron ultrafine particles and nanoparticles are respirable deep into the alveoli, beyond the body’s natural respiratory cleaning mechanisms such as cilia and mucous membranes, and are likely to be retained in the lower airways. To our knowledge, how various demolition tasks release nanoparticles is largely unknown, and previous studies have mostly focused on coarse dust, PM2.5, and PM10. The general belief is that the dust generated during demolition tasks consists mostly of large particles formed through crushing, grinding, or sawing of concrete and wooden structures. Therefore, little consideration has been given to the generated submicron ultrafine and nanoparticles and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated the cytotoxicity of nanoparticles to lung epithelial cells. The above-described knowledge gaps were addressed in this study using a newly developed nanoparticle monitor at two adjacent indoor and outdoor building demolition sites in southern Georgia.
Nanoparticle levels were measured (n = 10) by a TSI NanoScan SMPS Model 3910 at four distances (5, 10, 15, and 30 m) from the work location, as well as at control sites. Temperature and relative humidity levels were recorded. Indoor demolition work included acetylene torch cutting, masonry drilling, ceiling panel removal, and other miscellaneous tasks, whereas outdoor demolition work included acetylene torch cutting and the use of a skid-steer loader to remove an HVAC system. Concentration ranges of nanoparticles of 13 particle sizes at the indoor demolition site were: 11.5 nm: 63 – 1,054/cm³; 15.4 nm: 170 – 1,690/cm³; 20.5 nm: 321 – 730/cm³; 27.4 nm: 740 – 3,255/cm³; 36.5 nm: 1,220 – 17,828/cm³; 48.7 nm: 1,993 – 40,465/cm³; 64.9 nm: 2,848 – 58,910/cm³; 86.6 nm: 3,722 – 62,040/cm³; 115.5 nm: 3,732 – 46,786/cm³; 154 nm: 3,022 – 21,506/cm³; 205.4 nm: 12 – 15,482/cm³; 273.8 nm:
569 Analysis of Delays during Initial Phase of Construction Projects and Mitigation Measures
Authors: Sunaitan Al Mutairi
Abstract:
A perfect start is a key factor for project completion on time. This study examined the effects of delayed mobilization of resources during the initial phases of a project. The paper highlights the identification and categorization of all delays during the initial construction phase, together with root-cause analysis and corrective/control measures, for Kuwait Oil Company oil and gas projects. A relatively large percentage of the delays identified during project execution (from contract award to the end of the defects liability period) were attributed to mobilization/preliminary activity delays. Data analysis demonstrated a significant increase in average project delay during the last five years compared to the previous period. Contractors had delays/issues during the initial phase which resulted in slippages that progressively increased, leading to time and cost overruns. Delays/issues not mitigated on time during the initial phase had a very high impact on project completion. Data on the delays of the past five years were analyzed using trend charts, scatter plots, process maps, box plots, the relative importance index, and Pareto charts. Construction of any project inside the Gathering Centers involves complex management of the workforce, materials, plant, machinery, new technologies, etc. Delay affects the completion of projects and compromises the quality, schedule, and budget of project deliverables. Works executed as planned during the initial phase and start-up duration of construction activities resulted in only minor slippages/delays in project completion. In addition, a good working environment between client and contractor resulted in better project execution and management. In particular, contractors that were proactive in project execution had minimal or no delays during the initial and construction periods.
Hence, a perfect start during the initial construction phase has a positive influence on project success. This paper studies each type of delay, with real examples supported by statistical results, and suggests mitigation measures. Detailed analysis was carried out with all stakeholders, based on the impact and occurrence of delays, to reach a practical and effective outcome for mitigating the delays. The key to improvement is to have proper control measures and periodic evaluations/audits to ensure implementation of the mitigation measures. The focus of this research is to reduce the delays encountered during the initial construction phase of the project life cycle.
Keywords: construction activities delays, delay analysis for construction projects, mobilization delays, oil & gas projects delays
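The relative importance index used in the delay ranking can be sketched as follows; the survey ratings and cause names are hypothetical, for illustration only, not the study's data.

```python
def relative_importance_index(ratings, scale_max=5):
    """RII = sum(W) / (A * N): W is each respondent's rating of a delay
    cause, A the highest possible rating, N the number of respondents.
    Values near 1 mark the most important delay causes."""
    return sum(ratings) / (scale_max * len(ratings))

# Hypothetical 1-5 importance ratings for two delay causes
causes = {
    "late mobilization of resources": [5, 4, 5, 4, 5],
    "material delivery delays": [3, 2, 4, 3, 3],
}

# Rank causes from most to least important, as feeds a Pareto chart
ranking = sorted(causes, key=lambda c: relative_importance_index(causes[c]),
                 reverse=True)
```

Sorting causes by RII and plotting cumulative shares is exactly what a Pareto chart of delay causes does.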
568 Variability of the X-Ray Sun during Descending Period of Solar Cycle 23
Authors: Zavkiddin Mirtoshev, Mirabbos Mirkamalov
Abstract:
We have analyzed time series of full-disk integrated soft X-ray (SXR) and hard X-ray (HXR) emission from the solar corona from 2004 January 1 to 2009 December 31, covering the descending phase of solar cycle 23. We employed the daily X-ray index (DXI) derived from X-ray observations of the Solar X-ray Spectrometer (SOXS) mission in four energy bands: 4 – 5.5 and 5.5 – 7.5 keV (SXR), and 15 – 20 and 20 – 25 keV (HXR). Application of the Lomb-Scargle periodogram technique to the DXI time series observed by the silicon detector reveals several short and intermediate periodicities of the X-ray corona. The DXI explicitly shows periods of 13.6 days, 26.7 days, 128.5 days, 151 days, 180 days, 220 days, 270 days, 1.24 years, and 1.54 years in both the SXR and HXR energy bands. Although all periods are above the 70% confidence level in all energy bands, they show stronger power in HXR emission than in SXR emission. These periods are distinctly clear in three bands but not unambiguously clear in the 5.5 – 7.5 keV band. This might be due to the presence of Fe and Fe/Ni line features, which frequently vary with small-scale flares such as micro-flares. The regular 27-day rotation and the 13.5-day period of sunspots on the invisible side of the Sun are found to be stronger in the HXR band than in the SXR band. However, the flare-activity Rieger periods (150 and 180 days) and the near-Rieger period of 220 days are very strong in HXR emission, which is very much expected. On the other hand, our study reveals a strong 270-day periodicity in SXR emission, which may be connected with the tachocline, similar to a fundamental rotation period of the Sun. The 1.24-year and 1.54-year periodicities found in this work are well observed in both the SXR and HXR channels.
These long-term periodicities must also be connected with the tachocline and should be regarded as a consequence of variation in rotational modulation over long time scales. The 1.24-year and 1.54-year periods are also of great importance and significance for the formation and evolution of life on Earth, and therefore have astrobiological importance. We gratefully acknowledge support from the Indian Centre for Space Science and Technology Education in Asia and the Pacific (CSSTEAP; the Centre is affiliated with the United Nations) and the Physical Research Laboratory (PRL) at Ahmedabad, India. This work was done under the supervision of Prof. Rajmal Jain, and the paper includes material from the pilot project and research part of the M.Tech program carried out during the Space and Atmospheric Science Course.
Keywords: corona, flares, solar activity, X-ray emission
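The Lomb-Scargle periodogram used to extract these periodicities can be sketched in pure Python; the synthetic 27-day signal below stands in for the actual DXI series, and the normalization constant of the classic formulation is omitted for brevity.

```python
import math

def lomb_scargle(t, y, freqs):
    """Classic Lomb-Scargle periodogram: spectral power of a (possibly
    unevenly sampled) series at each trial frequency in cycles per day."""
    n = len(t)
    mean = sum(y) / n
    yc = [v - mean for v in y]
    power = []
    for f in freqs:
        w = 2.0 * math.pi * f
        # the time offset tau makes the sine and cosine terms orthogonal
        tau = math.atan2(sum(math.sin(2.0 * w * ti) for ti in t),
                         sum(math.cos(2.0 * w * ti) for ti in t)) / (2.0 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        cterm = sum(v * ci for v, ci in zip(yc, c)) ** 2 / sum(ci * ci for ci in c)
        sterm = sum(v * si for v, si in zip(yc, s)) ** 2 / sum(si * si for si in s)
        power.append(0.5 * (cterm + sterm))
    return power

# Synthetic daily index carrying a 27-day rotation signal
t = list(range(200))
y = [math.sin(2.0 * math.pi * ti / 27.0) for ti in t]
trial_periods = [10.0, 13.5, 20.0, 27.0, 40.0]
p = lomb_scargle(t, y, [1.0 / pd for pd in trial_periods])
```

On this synthetic series the power peaks at the 27-day trial period, which is how the rotation and Rieger-type periods are picked out of the real DXI data.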
567 Various Shaped ZnO and ZnO/Graphene Oxide Nanocomposites and Their Use in Water Splitting Reaction
Authors: Sundaram Chandrasekaran, Seung Hyun Hur
Abstract:
Exploring strategies for oxygen-vacancy engineering under mild conditions, and understanding the relationship between dislocations and photoelectrochemical (PEC) cell performance, are challenging issues for designing high-performance PEC devices. It is therefore very important to understand how oxygen vacancies (VO) or other defect states affect the performance of a photocatalyst in photoelectric transfer. So far, it has been found that defects in nano- or micro-crystals can affect PEC performance in two ways. First, an electron-hole pair produced at the interface of the photoelectrode and electrolyte can recombine at defect centers under illumination, thereby reducing PEC performance. On the other hand, defects can lead to higher light absorption in the longer-wavelength region and may act as energy centers for the water-splitting reaction, improving PEC performance. Even though the dislocation growth of ZnO has been verified by full density functional theory (DFT) and local density approximation (LDA) calculations, further studies are required to correlate ZnO structures with PEC performance. Exploring hybrid structures composed of graphene oxide (GO) and ZnO nanostructures offers not only a vision of how complex structures form from simple starting materials but also tools to improve PEC performance by understanding the underlying mechanisms of their mutual interactions. As there are few studies of ZnO growth with other materials, and the growth mechanism in those cases has not been clearly explored, it is very important to understand the fundamental growth process of nanomaterials with the specific materials, so that rational and controllable syntheses of efficient ZnO-based hybrid materials can be designed to prepare nanostructures with significant PEC performance.
Herein, we fabricated various ZnO nanostructures, such as hollow spheres, bucky bowls, nanorods, and triangles, investigated their pH-dependent growth mechanisms, and correlated their PEC performance with them. In particular, the origin of well-controlled dislocation-driven growth and the transformation mechanism of ZnO nanorods into triangles on the GO surface are discussed in detail. Surprisingly, adding GO during synthesis not only tunes the morphology of the ZnO nanocrystals but also creates more oxygen vacancies (oxygen defects) in the ZnO lattice, which suggests that the oxygen vacancies are created by a redox reaction between GO and ZnO, in which surface oxygen is extracted from ZnO by the functional groups of GO. On the basis of our experimental and theoretical analysis, the detailed mechanism for the formation of specific structural shapes and oxygen vacancies via dislocations, and their impact on PEC performance, are explored. In water-splitting performance, the maximum photocurrent density of the GO-ZnO triangles was 1.517 mA/cm² (under UV light, ~360 nm) vs. RHE, with a high incident photon-to-current conversion efficiency (IPCE) of 10.41%, the highest among all samples fabricated in this study and also one of the highest IPCE values reported so far for a GO-ZnO triangular-shaped photocatalyst.
Keywords: dislocation driven growth, zinc oxide, graphene oxide, water splitting
566 Comparative Effects of Resveratrol and Energy Restriction on Liver Fat Accumulation and Hepatic Fatty Acid Oxidation
Authors: Iñaki Milton-Laskibar, Leixuri Aguirre, Maria P. Portillo
Abstract:
Introduction: Energy restriction is an effective approach to preventing liver steatosis. However, for social and economic reasons, among others, compliance with this treatment protocol is often very poor, especially in the long term. Resveratrol, a natural polyphenolic compound of the stilbene group, has been widely reported to mimic the effects of energy restriction. Objective: To analyze the effects of resveratrol, under normoenergetic feeding conditions and under mild energy restriction, on liver fat accumulation and hepatic fatty acid oxidation. Methods: Thirty-six male six-week-old rats were fed a high-fat, high-sucrose diet for 6 weeks to induce steatosis. The rats were then divided into four groups and fed a standard diet for 6 additional weeks: a control group (C), a resveratrol group (RSV; resveratrol 30 mg/kg/d), a restricted group (R; 15% energy restriction), and a combined group (RR; 15% energy restriction and resveratrol 30 mg/kg/d). Liver triacylglycerol (TG) and total cholesterol contents were measured using commercial kits. Carnitine palmitoyltransferase 1a (CPT1a) and citrate synthase (CS) activities were measured spectrophotometrically. Mitochondrial transcription factor A (TFAM) and peroxisome proliferator-activated receptor alpha (PPARα) protein contents, as well as the ratio of acetylated to total peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC1α), were analyzed by Western blot. Statistical analysis was performed using one-way ANOVA with the Newman-Keuls post-hoc test. Results: No differences were observed among the four groups in liver weight and cholesterol content, but the three treated groups showed reduced TG compared with the control group, with the restricted groups showing the lowest values (and no differences between them). Higher CPT1a and CS activities were observed in the groups supplemented with resveratrol (RSV and RR), with no difference between them.
The acetylated PGC1α/total PGC1α ratio was lower in the treated groups (RSV, R, and RR) than in the control group, with no differences among them. As far as TFAM protein expression is concerned, only the RR group reached a higher value. Finally, no changes were observed in PPARα protein expression. Conclusions: Resveratrol administration is an effective intervention for reducing liver triacylglycerol content, but mild energy restriction is even more effective. The mechanisms of action of these two strategies differ: resveratrol, but not energy restriction, seems to act by increasing fatty acid oxidation, although mitochondriogenesis does not seem to be induced. When the two treatments (resveratrol administration and mild energy restriction) were combined, no additive or synergistic effects were observed. Acknowledgements: MINECO-FEDER (AGL2015-65719-R), Basque Government (IT-572-13), University of the Basque Country (ELDUNANOTEK UFI11/32), Institute of Health Carlos III (CIBERobn). Iñaki Milton holds a fellowship from the Basque Government.
Keywords: energy restriction, fat, liver, oxidation, resveratrol
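The one-way ANOVA step of the statistical analysis can be sketched as follows. The group values are hypothetical, and only the F statistic is computed; the Newman-Keuls post-hoc comparison that follows a significant F is omitted.

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: ratio of between-group to
    within-group mean squares, with (k - 1, n - k) degrees of freedom."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ssb = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ssw = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_b, df_w = k - 1, n - k
    return (ssb / df_b) / (ssw / df_w), (df_b, df_w)

# Hypothetical liver TG values (mg/g) for the C, RSV, R and RR groups
f_stat, dof = one_way_anova_f([25.0, 24.0, 26.0], [20.0, 21.0, 19.0],
                              [15.0, 16.0, 14.0], [15.5, 14.5, 15.0])
```

A large F relative to the F distribution with the returned degrees of freedom indicates that at least one group mean differs, after which a post-hoc test such as Newman-Keuls identifies which pairs differ.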
565 Environmental Resilience in Sustainability Outcomes of Spatial-Economic Model Structure on the Topology of Construction Ecology
Authors: Moustafa Osman Mohammed
Abstract:
The resilience and sustainability of construction ecology are essential to the world’s socio-economic development. Environmental resilience is crucial in relating construction ecology to the topology of a spatial-economic model. Sustainability in the spatial-economic model gives attention to green business, to comply with the Earth’s system of naturally exchanging patterns among ecosystems. Systems ecology has consistent, periodic cycles that preserve the flow of energy and materials in the Earth’s system. When the model structure influences the communication of internal and external features in system networks, it postulates the valence of first-level spatial outcomes (i.e., project compatibility success). These instrumentalities depend on second-level outcomes (i.e., participant security satisfaction). The model’s outcomes are based on measuring database efficiency from 2015 to 2025. The model topology reflects the state of the art in value-orientation impact and corresponds to the complexity of sustainability issues (e.g., building a consistent database necessary to approach the spatial structure; constructing the spatial-economic model; developing a set of sustainability indicators associated with the model; allowing quantification of social, economic, and environmental impacts; using value-orientation as a set of important sustainability policy measures), and demonstrates environmental resilience. The model manages and develops schemes for multiple pollutant sources through input-output criteria. These criteria evaluate the effects of external insertions by conducting Monte Carlo simulations and analyses using matrices in a unique spatial structure. Balanced ‘equilibrium patterns’, such as collective biosphere features, have a composite index of distributed feedback flows. These feedback flows have a dynamic structure, with physical and chemical properties, for the gradual prolongation of incremental patterns.
While these structures argue from systems ecology, static loads are not decisive from an artistic/architectural perspective. The popularity of system resilience in systems structures related to ecology has not been achieved without generating confusion and vagueness. However, this topic is relevant for forecasting future scenarios in which industrial regions will need to keep dealing with the impact of relative environmental deviations. The model attempts to unify the analytic and analogical structure of urban environments, using database software to integrate sustainability outcomes, in a process based on the systems topology of construction ecology.
Keywords: system ecology, construction ecology, industrial ecology, spatial-economic model, systems topology
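The input-output criteria with Monte Carlo perturbation can be illustrated with a toy two-sector Leontief model; every coefficient below is hypothetical, chosen only to show how external demand shocks propagate through the matrix structure.

```python
import random

def leontief_output(a, d):
    """Solve x = (I - A)^-1 d for a two-sector input-output model via the
    closed-form 2x2 inverse; x is the gross output meeting final demand d."""
    m00, m01 = 1.0 - a[0][0], -a[0][1]
    m10, m11 = -a[1][0], 1.0 - a[1][1]
    det = m00 * m11 - m01 * m10
    return [(m11 * d[0] - m01 * d[1]) / det,
            (-m10 * d[0] + m00 * d[1]) / det]

# Hypothetical technical coefficients and final demand for two sectors
A = [[0.2, 0.3], [0.1, 0.25]]
d = [100.0, 50.0]
baseline = leontief_output(A, d)

# Monte Carlo: perturb final demand by +-10% to propagate the effect of
# external insertions through the matrix structure
random.seed(0)
runs = [leontief_output(A, [di * random.uniform(0.9, 1.1) for di in d])
        for _ in range(1000)]
mean_x0 = sum(r[0] for r in runs) / len(runs)
```

The spread of the sampled outputs around the baseline is what a Monte Carlo evaluation of external-insertion effects quantifies; in a pollution-extended model, emission coefficients would multiply the same gross-output vector.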
564 Numerical Reproduction of Hemodynamic Change Induced by Acupuncture to ST-36
Authors: Takuya Suzuki, Atsushi Shirai, Takashi Seki
Abstract:
Acupuncture therapy is one of the treatments of traditional Chinese medicine. Recently, some reports have shown the effectiveness of acupuncture; however, its full acceptance has been hindered by a lack of understanding of the mechanism of the therapy. Acupuncture applied to Zusanli (ST-36) enhances blood flow volume in the superior mesenteric artery (SMA), yielding peripheral-vascular-resistance-regulated blood flow in the SMA dominated by the parasympathetic system, with inhibition of the sympathetic system. In this study, a lumped-parameter approximation model of blood flow in the systemic arteries was developed. The model was extremely simple, consisting of the aorta, the carotid arteries, the arteries of the four limbs, and the SMA, together with their peripheral vascular resistances. Each individual artery was simplified to a tapered tube, and the resistances were modeled as linear resistances. We numerically investigated the contribution of the peripheral vascular resistance of the SMA to the systemic blood distribution using this model. At the upstream end of the model, which corresponds to the left ventricle, two types of boundary condition were applied: mean left ventricular pressure, which correlates with blood pressure (BP), and mean cardiac output, which corresponds to the cardiac index (CI). We examined whether the model could reproduce the experimentally obtained hemodynamic change, in terms of the ratios of the aforementioned hemodynamic parameters to their initial values before acupuncture, by regulating the peripheral vascular resistances and the upstream boundary condition. First, only the peripheral vascular resistance of the SMA was changed, to show the contribution of this resistance to the change in blood flow volume in the SMA, expecting to reproduce the experimentally obtained change. It was found, however, that this was not enough to reproduce the experimental result. We therefore also changed the resistances of the other arteries, together with the value given at the upstream boundary.
Here, the resistances of the other arteries were changed simultaneously by the same amount. Consequently, we successfully reproduced the hemodynamic change, finding that regulation of the upstream boundary condition to the value experimentally obtained after stimulation is necessary for the reproduction, even though statistically significant changes in BP and CI were not observed in the experiment. It is generally known that sympathetic and parasympathetic tone take part in regulating the whole systemic circulation, including cardiac function. The present result indicates that stimulation of ST-36 could induce vasodilation of the peripheral circulation of the SMA and vasoconstriction of that of the other arteries. In addition, it implies that the experimentally obtained small changes in BP and CI induced by acupuncture may be involved in the therapeutic response.
Keywords: acupuncture, hemodynamics, lumped-parameter approximation, modeling, systemic vascular resistance
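A toy version of the linear-resistance flow distribution in such a lumped model is sketched below; the resistance values are arbitrary illustrative units, not the paper's fitted parameters, and SMA vasodilation is modeled simply as halving its peripheral resistance.

```python
def peripheral_flows(pressure, resistances):
    """Flow through each parallel peripheral bed under a common driving
    pressure: Q_i = P / R_i (linear resistances, as in the lumped model)."""
    return {name: pressure / r for name, r in resistances.items()}

# Hypothetical peripheral resistances for three vascular beds;
# acupuncture-induced vasodilation of the SMA bed halves its resistance.
base = {"SMA": 8.0, "carotid": 4.0, "limbs": 2.0}
P = 100.0
before = peripheral_flows(P, base)
after = peripheral_flows(P, {**base, "SMA": 4.0})

# Under the fixed-pressure boundary condition, SMA flow doubles and total
# outflow (cardiac output) rises; keeping BP and CI near their measured
# values therefore requires also adjusting the other resistances and the
# upstream boundary, which is what the study found numerically.
```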