Search results for: constructive quantum field theory
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12998

1268 Properties and Microstructure of Scaled-Up MgO Concrete Blocks Incorporating Fly Ash or Ground Granulated Blast-Furnace Slag

Authors: L. Pu, C. Unluer

Abstract:

MgO cements have the potential to sequester CO2 in construction products and can partially or completely replace Portland cement (PC) in concrete. Construction blocks are a promising application for reactive MgO cements. The main advantages of blocks are: (i) suitability for sequestering CO2 due to their initially porous structure; (ii) lack of need for in-situ treatment, as carbonation can take place during fabrication; and (iii) high potential for commercialization. Both the strength gain and the carbon sequestration of MgO cements depend on the carbonation process. Fly ash and ground granulated blast-furnace slag (GGBS) are pozzolanic materials and have been shown to improve many performance characteristics of concrete, such as strength, workability, permeability, durability and corrosion resistance. A very limited amount of work has been reported so far on the production of MgO blocks on a large scale. A much more extensive study, in which blocks with different mix designs are produced, is needed to verify the feasibility of commercial production. The changes in the performance of the samples were evaluated by compressive strength testing. The properties of the carbonation products were identified by X-ray diffraction (XRD) and scanning electron microscopy (SEM)/field emission scanning electron microscopy (FESEM), and the degree of carbonation was obtained by thermogravimetric analysis (TGA), XRD and energy dispersive X-ray (EDX) analysis. The results of this study enabled an understanding of the relationship between lab-scale samples and scaled-up blocks based on their mechanical performance and microstructure. Results indicate that for both scaled-up and lab-scale samples, MgO samples always had the highest strength, followed by MgO-fly ash samples, while MgO-GGBS samples had the lowest strength. The lower strength of the MgO-fly ash/GGBS samples at early stages is related to the relatively slow hydration process of pozzolanic materials. Lab-scale cubic samples were observed to have higher strength than scaled-up samples: the large size of the scaled-up samples made it more difficult for CO2 to reach the inner part of the samples, so fewer carbonation products formed. XRD, TGA and FESEM/EDX results indicate the existence of brucite and hydrated magnesium carbonates (HMCs) in the MgO samples, M-S-H and hydrotalcite in the MgO-fly ash samples, and C-S-H and hydrotalcite in the MgO-GGBS samples. The formation of hydration products (M-S-H, C-S-H, hydrotalcite) and carbonation products (hydromagnesite, dypingite) increased with curing duration, which explains the increase in strength. This study verifies the advantage of large-scale MgO blocks over common PC blocks and the feasibility of commercial production of MgO blocks.
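
As an illustrative aside (standard reaction chemistry for reactive MgO cements, not text from the abstract), the hydration and carbonation steps that produce the brucite and hydromagnesite phases identified by XRD can be sketched as

\[
\mathrm{MgO + H_2O \rightarrow Mg(OH)_2\;(brucite)}
\]
\[
\mathrm{5\,Mg(OH)_2 + 4\,CO_2 \rightarrow Mg_5(CO_3)_4(OH)_2\cdot 4H_2O\;(hydromagnesite)}
\]

The progressive growth of such hydrated magnesium carbonates within the pore structure is what links the degree of carbonation to the strength gain reported above.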

Keywords: reactive MgO, fly ash, ground granulated blast-furnace slag, carbonation, CO₂

Procedia PDF Downloads 193
1267 Computational and Experimental Study of the Mechanics of Heart Tube Formation in the Chick Embryo

Authors: Hadi S. Hosseini, Larry A. Taber

Abstract:

In the embryo, the heart is initially a simple tubular structure that undergoes complex morphological changes as it transforms into a four-chambered pump. This work focuses on the mechanisms that create the heart tube (HT). The early embryo is composed of three relatively flat primary germ layers called the endoderm, mesoderm, and ectoderm. Precardiac cells located within bilateral regions of the mesoderm called heart fields (HFs) fold and fuse along the embryonic midline to create the HT. The right and left halves of this plate fold symmetrically to bring their upper edges into contact along the midline, where they fuse. In a region near the fusion line, these layers then separate to generate the primitive HT and foregut, which then extend vertically. The anterior intestinal portal (AIP) is the opening at the caudal end of the foregut, which descends as the HT lengthens. The biomechanical mechanisms that drive this folding are poorly understood. Our central hypothesis is that folding is caused by differences in growth between the endoderm and mesoderm, while subsequent extension is driven by contraction along the AIP. The feasibility of this hypothesis is examined using experiments with chick embryos and finite-element modeling (FEM). Fertilized White Leghorn chicken eggs were incubated for approximately 22-33 hours until the appropriate Hamburger and Hamilton stage (HH5 to HH9) was reached. To inhibit contraction, embryos were cultured in media containing blebbistatin (a myosin II inhibitor) for 18 h. Three-dimensional models were created using ABAQUS (D. S. Simulia). The initial geometry consists of a flat plate including two layers representing the mesoderm and endoderm. The tissue was treated as a nonlinear elastic material, with growth and contraction (negative growth) simulated using a theory in which the total deformation gradient is given by F = F*·G, where G is the growth tensor and F* is the elastic deformation gradient tensor. In embryos exposed to blebbistatin, initial folding and AIP descent occurred normally. However, after the HFs had partially fused to create the upper part of the HT, fusion and AIP descent stopped, and the HT failed to grow longer. These results suggest that cytoskeletal contraction is required only for the later stages of HT formation. In the model, a larger biaxial growth rate in the mesoderm compared to the endoderm causes the bilayered plate to bend ventrally, as the upper edge moves toward the midline, where it 'fuses' with the other half. This folding creates the upper section of the HT, as well as the foregut pocket bordered by the AIP. After this phase is completed by stage HH7, contraction along the arch-shaped AIP pulls the lower edge of the plate downward, stretching the two layers. Results given by the model are in reasonable agreement with experimental data for the shape of the HT, as well as for patterns of stress and strain. In conclusion, the results of our study support our hypothesis for the creation of the heart tube.
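
As a brief explanatory note (a standard statement of the growth framework named above, with wording supplied here rather than quoted from the abstract), the multiplicative decomposition used in the model is

\[
\mathbf{F} = \mathbf{F}^{*}\,\mathbf{G},
\]

where the growth tensor G maps the unloaded reference state to a grown, generally incompatible intermediate state, and only the elastic part F* produces stress through the constitutive law. Prescribing larger in-plane growth in the mesodermal layer than in the endodermal layer therefore generates the residual stresses that bend the bilayer ventrally, which is the folding mechanism tested in the simulations.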

Keywords: heart tube formation, FEM, chick embryo, biomechanics

Procedia PDF Downloads 299
1266 Design and Integration of an Energy Harvesting Vibration Absorber for Rotating System

Authors: F. Infante, W. Kaal, S. Perfetto, S. Herold

Abstract:

In the last decade, the demand for wireless sensors and low-power electric devices for condition monitoring in mechanical structures has increased strongly. Networks of wireless sensors can potentially be applied in a huge variety of applications. Due to the reduction of both the size and the power consumption of electric components and the increasing complexity of mechanical systems, the interest in creating dense sensor networks has become very salient. Nevertheless, with the development of large sensor networks with numerous nodes, the critical problem of powering them is drawing more and more attention. Batteries are not a valid alternative when lifetime, size and the effort of replacing them are considered. Among possible alternative solutions for durable power sources usable in mechanical components, vibrations represent a suitable source for the amount of power required to feed a wireless sensor network. For this purpose, energy harvesting from structural vibrations has received much attention in the past few years. Suitable vibrations can be found in numerous mechanical environments, including moving automotive structures and household appliances, but also civil engineering structures such as buildings and bridges. Similarly, the dynamic vibration absorber (DVA) is one of the most used devices to mitigate unwanted vibration of structures. This device is used to transfer the primary structural vibration to an auxiliary system. Thus, the related energy is effectively localized in the secondary, less sensitive structure. The additional benefit of harvesting part of the energy can then be obtained by implementing dedicated components. This paper describes the design process of an energy harvesting tuned vibration absorber (EHTVA) for rotating systems using piezoelectric elements. The energy of the vibration is converted into electricity rather than dissipated. The proposed device is indeed designed to mitigate torsional vibrations as with a conventional rotational TVA, while harvesting energy as a power source for immediate use or storage. The resultant rotational multi-degree-of-freedom (MDOF) system is initially reduced to an equivalent single-degree-of-freedom (SDOF) system. Den Hartog's theory is used to evaluate the optimal mechanical parameters of the initial DVA for the SDOF system defined. The performance of the TVA is operationally assessed, and the vibration reduction at the original resonance frequency is measured. Then, the design is modified for the integration of active piezoelectric patches without detuning the TVA. In order to estimate the real power generated, a complex storage circuit is implemented. A DC-DC step-down converter is connected to the device through a rectifier to return a fixed output voltage. With a large capacitor introduced, the stored energy is measured at different frequencies. Finally, the electromechanical prototype is tested and validated, achieving the reduction and harvesting functions simultaneously.
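
To make the tuning step concrete (the classical Den Hartog fixed-points result for an undamped primary structure, quoted here as background rather than taken from this work), the optimal frequency ratio and absorber damping ratio for a mass ratio μ between absorber and primary mass are

\[
f_{\mathrm{opt}} = \frac{1}{1+\mu}, \qquad \zeta_{\mathrm{opt}} = \sqrt{\frac{3\mu}{8\,(1+\mu)^{3}}}.
\]

Relations of this kind fix the stiffness and damping of the equivalent SDOF absorber before the piezoelectric patches and the storage electronics are added.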

Keywords: energy harvesting, piezoelectricity, torsional vibration, vibration absorber

Procedia PDF Downloads 149
1265 Impact of Interventions on Brain Functional Connectivity in Young Male Basketball Players: A Comparative Study

Authors: Mohammad Khazaei, Reza Rostami, Hassan Gharayagh Zandi, Ruhollah Basatnia, Mahboubeh Ghayour Najafabadi

Abstract:

Introduction: This study delves into the influence of diverse interventions on brain functional connectivity among young male basketball players. Given the significance of understanding how interventions affect cognitive functions in athletes, particularly in the context of basketball, this research contributes to the growing body of knowledge in sports neuroscience. Methods: Three distinct groups were selected for comprehensive investigation: the Motivational Interview Group, Placebo Consumption Group, and Ritalin Consumption Group. The study involved assessing brain functional connectivity using various frequency bands (Delta, Theta, Alpha, Beta1, Beta2, Gamma, and Total Band) before and after the interventions. The participants were subjected to specific interventions corresponding to their assigned groups. Results: The findings revealed substantial differences in brain functional connectivity across the studied groups. The Motivational Interview Group exhibited optimal outcomes in PLI (Total Band) connectivity. The Placebo Consumption Group demonstrated a marked impact on PLV (Alpha) connectivity, and the Ritalin Consumption Group experienced a considerable enhancement in imCoh (Total Band) connectivity. Discussion: The observed variations in brain functional connectivity underscore the nuanced effects of different interventions on young male basketball players. The enhanced connectivity in specific frequency bands suggests potential cognitive and performance improvements. Notably, the Motivational Interview and Placebo Consumption groups displayed unique patterns, emphasizing the multifaceted nature of interventions. These findings contribute to the understanding of tailored interventions for optimizing cognitive functions in young male basketball players. Conclusion: This study provides valuable insights into the intricate relationship between interventions and brain functional connectivity in young male basketball players. Further research with expanded sample sizes and more sophisticated statistical analyses is recommended to corroborate and expand upon these initial findings. The implications of this study extend to the broader field of sports neuroscience, aiding in the development of targeted interventions for athletes in various disciplines.
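
For orientation (standard definitions of the connectivity metrics named above; the abstract does not specify the exact estimator settings used), for two channel signals with instantaneous phase difference Δφₙ over N samples and cross-spectrum S_xy, the metrics are commonly defined as

\[
\mathrm{PLV} = \Big|\tfrac{1}{N}\textstyle\sum_{n=1}^{N} e^{\,i\,\Delta\varphi_n}\Big|, \qquad
\mathrm{PLI} = \Big|\tfrac{1}{N}\textstyle\sum_{n=1}^{N} \operatorname{sign}\!\big(\sin\Delta\varphi_n\big)\Big|, \qquad
\mathrm{imCoh} = \frac{\operatorname{Im}\,S_{xy}}{\sqrt{S_{xx}\,S_{yy}}}.
\]

PLV is sensitive to any consistent phase relationship, whereas PLI and the imaginary part of coherency discount zero-lag coupling and are therefore more robust to volume conduction, which is relevant when comparing the group-specific effects reported above.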

Keywords: electroencephalography, Ritalin, placebo effect, motivational interview

Procedia PDF Downloads 69
1264 Approaches for Enhancing Nitrogen Fixation of Soybean under Saline and Water Stress Conditions

Authors: Ayman El Sabagh, AbdElhamid Omar, Dekoum Assaha, Khair Mohammad Youldash, Akihiro Ueda, Celaleddin Barutçular, Hirofumi Saneoka

Abstract:

Drought and salinity stress are a worldwide problem, seriously constraining global crop production, and soybean is susceptible to yield loss from water deficit and salinity stress. Therefore, different approaches have been suggested to address these issues. Osmoprotectants play an important role in protecting plants from various environmental stresses. Moreover, organic fertilization has several beneficial effects on agricultural fields. Presently, efforts to maximize nitrogen fixation in soybean are critical because of the widespread increase in soil degradation in Egypt. Therefore, a greenhouse experiment was conducted at the plant nutritional physiology laboratory, Hiroshima University, Japan, to assess the impact of exogenous osmoregulators and compost application in alleviating the adverse effects of salinity and water stress on soybean. Treatments included (i) water stress treatments (soil moisture levels of 100%, 75%, and 50% of field water-holding capacity); (ii) salinity concentrations (0 and 15 mM), applied at the fully developed trifoliolate leaf node stage (V1); (iii) compost treatments (0 and 24 t ha-1); and (iv) exogenous proline and glycine betaine (0 mM and 25 mM each), applied at two growth stages (V1 and R1). Seeds of the soybean cultivar Giza 111 were sown into a wooden basin (length 10 m, width 50 cm, height 50 cm and depth 350 cm) containing a soil mixture of granite regosol soil and perlite (2:1 v/v). The nitrogen-fixing activity was estimated by gas chromatography, and all measurements were made in three replicates. The results showed that water deficit and salinity stress reduced biological nitrogen fixation and specific nodule activity compared with normal irrigation conditions. Exogenous osmoprotectants improved biological nitrogen fixation and specific nodule activity, and compost application likewise improved biological nitrogen fixation and specific nodule activity relative to the stressed conditions. The combined application of compost fertilizer and exogenous osmoprotectants was more effective in alleviating the adverse effects of stress and improving the biological nitrogen fixation and specific nodule activity of soybean.

Keywords: abiotic stress, biological nitrogen fixation, compost, osmoprotectants, specific nodule activity, soybean

Procedia PDF Downloads 309
1263 Investigating the Impact of Individual Risk-Willingness and Group-Interaction Effects on Business Model Innovation Decisions

Authors: Sarah Müller-Sägebrecht

Abstract:

Today’s volatile environment challenges executives to make the right strategic decisions to gain sustainable success. Entrepreneurship scholars postulate mainly positive effects of environmental changes on entrepreneurship behavior, such as developing new business opportunities, promoting ingenuity, and satisfying resource voids. A strategic solution approach to overcome threatening environmental changes and catch new business opportunities is business model innovation (BMI). Although this research stream has gained further importance in the last decade, BMI research is still insufficient. In particular, BMI barriers, such as inefficient strategic decision-making processes, need to be identified. Strategic decisions strongly impact an organization's future and are, therefore, usually made in groups. Although groups draw on a more extensive information base than single individuals, group-interaction effects can influence the decision-making process in favorable but also in unfavorable ways. Decisions are characterized by uncertainty and risk, whose intensity is perceived differently by each individual. Individual risk-willingness influences which option people choose. The special nature of strategic decisions, such as those in BMI processes, is that they are not made individually but in groups due to their high organizational scope. These groups consist of different personalities whose individual risk-willingness can vary considerably. It is known from group decision theory that these individuals influence each other, which is observable in different group-interaction effects. The following research questions arise: i) What impact does individual risk-willingness have on BMI decisions? And ii) how do group-interaction effects impact BMI decisions? After conducting 26 in-depth interviews with executives from the manufacturing industry, the applied Gioia methodology reveals the following results: i) Risk-averse decision-makers have an increased need to be guided by facts. The more information available to them, the lower they perceive the uncertainty and the more willing they are to pursue a specific decision option. However, the results also show that social interaction does not change individual risk-willingness in the decision-making process. ii) Generally, it could be observed that during BMI decisions, group interaction serves primarily to increase the group's information base for making good decisions, rather than serving social interaction as such. Further, decision-makers mainly focus on information available to all decision-makers in the team, and less on personal knowledge. This work contributes to the strategic decision-making literature in two ways. First, it gives insights into how group-interaction effects influence an organization's strategic BMI decision-making. Second, it enriches risk-management research by highlighting how individual risk-willingness impacts organizational strategic decision-making. To date, BMI research has treated risk aversion as an internal BMI barrier. However, this study makes clear that it is not risk aversion that inhibits BMI. Instead, it is the lack of information that prevents risk-averse decision-makers from choosing a riskier option. Simultaneously, the results show that risk-averse decision-makers are not easily carried away by the higher risk-willingness of their team members. Instead, they use social interaction to gather missing information.
Therefore, executives need to provide sufficient information to all decision-makers to catch promising business opportunities.

Keywords: business model innovation, decision-making, group biases, group decisions, group-interaction effects, risk-willingness

Procedia PDF Downloads 97
1262 Medical Ethics in the Hospital: Towards Quality Ethics Consultation

Authors: Dina Siniora, Jasia Baig

Abstract:

During the past few decades, the healthcare system has undergone profound changes in its healthcare decision-making competencies and moral aptitudes due to the vast advancement in technology, clinical skills, and scientific knowledge. Healthcare decision-making deals with morally contentious dilemmas, ranging from illness to life-and-death judgments, that require sensitivity and awareness of the patient's preferences while taking into consideration medicine's abilities and boundaries. As the ever-evolving field of medicine continues to become more scientifically and morally multifarious, physicians and hospital administrators increasingly rely on ethics committees to resolve problems that arise in everyday patient care. The role and latitude of responsibilities of ethics committees, which include being dispute intermediaries, moral analysts, policy educators, counselors, advocates, and reviewers, suggest the importance and effectiveness of a fully integrated committee. Despite achievements on Integrated Ethics and progress in standards and competencies, there is an imminent necessity for further improvement in the quality of ethics consultation services in the areas of credentialing, professionalism and standards of quality, as well as in the quality of healthcare throughout the system. These concerns can be addressed first by collecting data about particular quality gaps and assessing the extent to which ethics committees are consistent with the newly published ASBH quality standards. Policymakers should pursue improvement strategies that target both the academic bioethics community and the major stakeholders at hospitals who directly influence ethics committees. This broader approach, oriented towards education and intervention outcomes in conjunction with preventive ethics, would address disparities in quality at a systemic level. Adopting tools for improving competencies and processes within ethics consultation by implementing a credentialing process, upholding the normative significance of the ASBH core competencies, advocating for a professional Code of Ethics, and further clarifying the internal structures will improve productivity, patient satisfaction, and institutional integrity. This cannot be systemically achieved without a written certification exam for HCEC practitioners, credentialing and privileging of HCEC practitioners at the hospital level, and accreditation of HCEC services at the institutional level.

Keywords: ethics consultation, hospital, medical ethics, quality

Procedia PDF Downloads 191
1261 Shaping Work Engagement through Intra-Organizational Coopetition: Case Study of the University of Zielona Gora in Poland

Authors: Marta Moczulska

Abstract:

One of the most important aspects of human management in an organization is work engagement. In spite of the different perspectives on engagement, it can be seen that it is expressed in the activity of the individual involved in the performance of tasks and in the functioning of the organization. At the same time, it is considered not only in the behavioural but also in the cognitive and emotional dimensions. Previous studies have addressed the sources, predictors and determinants of engagement, including organizational ones. Attention has been paid to the importance of needs (including belonging, success, development, and the sense of work), values (such as trust, honesty, respect, and justice) and interpersonal relationships, especially with the supervisor. Taking these into account, along with theories related to human action, behaviour in the organization and interactions, it was recognized that engagement can be shaped through cooperation and competition. It was assumed that, to shape work engagement, it is necessary to cooperate and compete simultaneously in order to reduce the weaknesses of each of these activities and strengthen their strengths. Combining cooperation and competition is defined as 'coopetition'. However, research conducted in this field is primarily concerned with relations between companies. Intra-organizational coopetition is mainly considered as competition between organizational branches or units (cross-functional coopetition). Less attention is paid to competing groups or individuals. It is also worth noting the ambiguity of the concepts of cooperation and rivalry. Taking into account the terms used and their meanings, different levels of cooperation and forms of competition can be distinguished. Thus, several types of intra-organizational coopetition can be identified. The article aims at defining the potential for shaping work engagement through intra-organizational coopetition. The aim of the research was to determine how levels of cooperation under conditions of competition influence engagement. It is assumed that rivalry (positive competition) between teams (the highest level of cooperation) is a type of coopetition that contributes to work engagement. Qualitative research will be carried out among students of the University of Zielona Gora carrying out various types of projects. The first research group will be students working in groups on one project for three months. The second research group will be composed of students working in groups on several projects in the same period (three months). Work engagement will be determined using the UWES questionnaire. Levels of cooperation will be determined using the author's research tool. As the research is ongoing, results will be presented in the final paper.

Keywords: competition, cooperation, intra-organizational coopetition, work engagement

Procedia PDF Downloads 146
1260 From Shelf to Shell - The Corporate Form in the Era of Over-Regulation

Authors: Chrysthia Papacleovoulou

Abstract:

The era of de-regulation, offshore and tax-haven jurisdictions, and shelf companies has come to an end. The usage of complex corporate structures involving trust instruments, special purpose vehicles, holding-subsidiary arrangements in offshore haven jurisdictions, and the exploitation of tax treaties is soaring. States which raced to introduce corporate-friendly legislation, tax incentives, and creative international trust law in order to attract greater FDI are now faced with regulatory challenges and are forced to revisit the corporate form and its tax treatment. The fiduciary services industry, which dominated over the last three decades, is now striving to keep up with the new regulatory framework resulting from a number of European and international legislative measures. This article considers the challenges to the company and the corporate form arising from the legislative measures on tax planning and tax avoidance: CRS reporting, FATCA, CFC rules, the OECD's BEPS, the EU Commission's new transparency rules for intermediaries that extend to the tax advisors, accountants, banks and lawyers who design and promote tax planning schemes for their clients, new EU rules to block artificial tax arrangements and new transparency requirements for financial accounts, tax rulings and multinationals' activities (DAC 6), the G20's decision on a global 15% minimum corporate tax, and banking regulation. As a result, states find themselves in a race of over-regulation and compliance. These legislative measures constitute a global upside-down tax harmonisation. Through the adoption of the OECD's BEPS, states agreed to an international collaboration to end tax avoidance and reform international taxation rules. Whilst the idea was to ensure that multinationals would pay their fair share of tax everywhere they operate, an indirect result of the aforementioned regulatory measures was to target private clients: individuals who, over the past three decades, used the international tax system and jurisdictions such as the Marshall Islands, the Cayman Islands, the British Virgin Islands, Bermuda, the Seychelles, St. Vincent, Jersey, Guernsey, Liechtenstein, Monaco, Cyprus, and Malta, to name but a few, to engage in legitimate tax planning and tax avoidance. Companies can no longer maintain bank accounts without satisfying a real-substance test. States override the incorporation doctrine and apply a real-seat or real-substance test in taxing companies and their activities, targeting even the beneficial owners personally with tax liability. Tax authorities in civil law jurisdictions lift the corporate veil through public UBO registries and trust registries. As a result, the corporate form and the doctrine of limited liability are challenged at their core. Lastly, this article identifies the development of new instruments, such as funds and private placement insurance policies, and the trend of digital nomad workers. The baffling question is whether industry and states can meet somewhere in the middle and exit this over-regulation frenzy.

Keywords: company, regulation, tax, corporate structure, trust vehicles, real seat

Procedia PDF Downloads 140
1259 Educational Path for Pedagogical Skills: A Football School Experience

Authors: A. Giani

Abstract:

The current pedagogical culture recognizes an educational scope within sports practices. It is widely accepted in the pedagogical culture that, thanks to the acquisition and development of motor skills, it is also possible to exercise abilities that concern the way of facing and managing the difficulties of everyday life. Sport is a peculiar educational environment: children have the opportunity to discover the possibilities of their body, to relate to their peers, and to learn how to manage rules and the relationship with authorities, such as coaches. The educational aspects of sport concern both non-formal and formal educational environments. Coaches play a critical role in the agonistic sphere: exactly like the competencies developed by the children, coaches have to work on their own skills to properly set up the educational scene. Facing these new educational tasks (which are not new per se, but new because they are brought back to awareness), a few questions arise: does the coach have adequate preparation? Is the training of the coach in this specific area appropriate? This contribution aims to explore the issue in depth by focusing on the reality of the Football School. Starting from a possible sense of pedagogical inadequacy detected during a series of meetings with several football clubs in Piedmont (Italy), some important educational needs have been highlighted within the professional training of sports coaches. It is indeed necessary for the coach to know the processes underlying the educational relationship in order to better understand the centrality of assessment during the educational intervention and to be able to manage the asymmetry in the coach-athlete relationship. In order to respond to these pedagogical needs, a formative plan has been designed to allow both an in-depth study of educational issues and a correct self-evaluation, led by the coach, of the control levels of certain pedagogical skills. This plan has been based on particular practices, the Educational Practices of Pre-test (EPP), a specific version of community practices designed for extracurricular activities. The above-mentioned practices, realized through the use of texts meant as pre-tests, promoted reflection within the group of coaches: they set up real and plausible sports experiences (in particular football), triggering a reflection about the object, spaces, and methods of the relationship. The characteristic aspect of pre-tests is that it is impossible to anticipate the reflection, as it is necessarily connected to personal experience and sensitivity, requiring strong interest and involvement by the participants: situations must be considered by the coaches as possible settings in which they could find themselves on the field.

Keywords: relational needs, values, responsibility, self-evaluation

Procedia PDF Downloads 120
1258 Evaluation of Buckwheat Genotypes to Different Planting Geometries and Fertility Levels in Northern Transition Zone of Karnataka

Authors: U. K. Hulihalli, Shantveerayya

Abstract:

Buckwheat (Fagopyrum esculentum Moench) is an annual crop belonging to the family Polygonaceae. The cultivated buckwheat species are notable for their exceptional nutritive value. Buckwheat is an important source of carbohydrates, fibre, and macro- and microelements such as K, Ca, Mg, Na and Mn, Zn, Se, and Cu. It also contains rutin, flavonoids, riboflavin, pyridoxine and many amino acids, which have beneficial effects on human health, including lowering both blood lipid and sugar levels. Rutin, quercetin and some other polyphenols are potent anticarcinogenic agents against colon and other cancers. Buckwheat thus has significant nutritive value and plenty of uses, yet its cultivation in the southern part of India is very meagre. Hence, a study was planned with the objective of evaluating the performance of buckwheat genotypes under different planting geometries and fertility levels. The field experiment was conducted at the Main Agricultural Research Station, University of Agricultural Sciences, Dharwad, India, during the 2017 kharif season. The experiment was laid out in a split-plot design with three replications, with three planting geometries as main plots, two genotypes as subplots and three fertility levels as sub-sub-plot treatments. The soil of the experimental site was a Vertisol. Standard procedures were followed to record the observations. The planting geometry of 30 × 10 cm recorded significantly higher seed yield (893 kg ha⁻¹), stover yield (1507 kg ha⁻¹), clusters plant⁻¹ (7.4), seeds cluster⁻¹ (7.9) and 1000-seed weight (26.1 g) as compared to the 40 × 10 cm and 20 × 10 cm planting geometries. Between the genotypes, significantly higher seed yield (943 kg ha⁻¹) and harvest index (45.1) were observed with genotype IC-79147 as compared to the PRB-1 genotype (687 kg ha⁻¹ and 34.2, respectively). However, the genotype PRB-1 recorded significantly higher stover yield (1344 kg ha⁻¹) than genotype IC-79147 (1173 kg ha⁻¹). The genotype IC-79147 also recorded significantly higher clusters plant⁻¹ (7.1), seeds cluster⁻¹ (7.9) and 1000-seed weight (24.5 g) as compared to PRB-1 (5.4, 5.8 and 22.3 g, respectively). Among the fertility levels tried, the fertility level of 60:30 NP kg ha⁻¹ recorded significantly higher seed yield (845 kg ha⁻¹) and stover yield (1359 kg ha⁻¹) as compared to 40:20 NP kg ha⁻¹ (808 and 1259 kg ha⁻¹, respectively) and 20:10 NP kg ha⁻¹ (793 and 1144 kg ha⁻¹, respectively). Within the treatment combinations, the IC-79147 genotype at 30 × 10 cm planting geometry with 60:30 NP kg ha⁻¹ recorded significantly higher seed yield (1070 kg ha⁻¹), clusters plant⁻¹ (10.3), seeds cluster⁻¹ (9.9) and 1000-seed weight (27.3 g) compared to the other treatment combinations.
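
For clarity (a standard agronomic definition, not stated in the abstract), the harvest index values quoted above follow from

\[
\mathrm{HI}\,(\%) = \frac{\text{seed yield}}{\text{seed yield} + \text{stover yield}} \times 100,
\]

so that, for example, IC-79147 gives 943/(943 + 1173) × 100 ≈ 44.6%, close to the reported value of 45.1.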

Keywords: buckwheat, planting geometry, genotypes, fertility levels

Procedia PDF Downloads 175
1257 Volatility Index, Fear Sentiment and Cross-Section of Stock Returns: Indian Evidence

Authors: Pratap Chandra Pati, Prabina Rajib, Parama Barai

Abstract:

Traditional finance theory neglects the role of the sentiment factor in asset pricing. However, the behavioral approach to asset pricing, based on the noise trader model and limits to arbitrage, includes investor sentiment as a priced risk factor in the asset pricing model. Investor sentiment has a stronger effect on stocks that are vulnerable to speculation, hard to value and risky to arbitrage. These include small stocks, high-volatility stocks, growth stocks, distressed stocks, young stocks and non-dividend-paying stocks. Since the introduction of the Chicago Board Options Exchange (CBOE) volatility index (VIX) in 1993, it has been used as a measure of future volatility in the stock market and also as a measure of investor sentiment. The CBOE VIX index, in particular, is often referred to as the 'investors' fear gauge' by the public media and prior literature. Upward spikes in the volatility index are associated with bouts of market turmoil and uncertainty. High levels of the volatility index indicate fear, anxiety and pessimistic expectations of investors about the stock market. On the contrary, low levels of the volatility index reflect a confident and optimistic attitude of investors. Based on the above discussion, we investigate whether the market-wide fear level measured by the volatility index is a priced factor in the standard asset pricing model for the Indian stock market. First, we investigate the performance and validity of the Fama and French three-factor model and the Carhart four-factor model in the Indian stock market. Second, we explore whether the India volatility index, as a proxy for a fear-based market sentiment indicator, affects the cross-section of stock returns after controlling for well-established risk factors such as market excess return, size, book-to-market, and momentum. Asset pricing tests are performed using monthly data on CNX 500 index constituent stocks listed on the National Stock Exchange of India Limited (NSE) over a sample period that extends from January 2008 to March 2017. To examine whether the India volatility index, as an indicator of fear sentiment, is a priced risk factor, changes in India VIX are included as an explanatory variable in the Fama-French three-factor model as well as in the Carhart four-factor model. For the empirical testing, we use three different sets of test portfolios as the dependent variable in the asset pricing regressions. The first portfolio set is the 4x4 sort on size and B/M ratio. The second portfolio set is the 4x4 sort on size and the sensitivity beta to changes in IVIX. The third portfolio set is the 2x3x2 independent triple sort on size, B/M and the sensitivity beta to changes in IVIX. We find evidence that size, value and momentum factors continue to exist in the Indian stock market. However, the VIX index does not constitute a priced risk factor in the cross-section of returns. The inseparability of volatility and jump risk in the VIX is a possible explanation of the findings of the study.
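
To make the testing framework explicit (a sketch of the augmented specification implied by the abstract, with the notation chosen here rather than quoted from the paper), the Carhart four-factor model extended with changes in India VIX can be written as

\[
R_{it} - R_{ft} = \alpha_i + \beta_{i,MKT}(R_{mt} - R_{ft}) + \beta_{i,SMB}\,SMB_t + \beta_{i,HML}\,HML_t + \beta_{i,MOM}\,MOM_t + \beta_{i,\Delta IVIX}\,\Delta IVIX_t + \varepsilon_{it},
\]

where the test portfolios described above serve as the dependent variables; a significant premium on the ΔIVIX loading across portfolios would indicate that fear sentiment is priced, whereas the study reports that it is not.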

Keywords: India VIX, Fama-French model, Carhart four-factor model, asset pricing

Procedia PDF Downloads 254
1256 Thermodynamic Analyses of Information Dissipation along the Passive Dendritic Trees and Active Action Potential

Authors: Bahar Hazal Yalçınkaya, Bayram Yılmaz, Mustafa Özilgen

Abstract:

Brain information transmission in the neuronal network occurs in the form of electrical signals. The neuronal network transmits information between neurons, or between neurons and target cells, by moving charged particles in a voltage field; a fraction of the energy utilized in this process is dissipated via entropy generation. Exergy loss and entropy generation models demonstrate the inefficiencies of communication along the dendritic trees. In this study, neurons of four different animals were analyzed with a one-dimensional cable model with N = 6 identical dendritic trees and M = 3 orders of symmetrical branching. Each branch bifurcates symmetrically in accordance with the 3/2 power law in an infinitely long cylinder with the usual core conductor assumptions, where the membrane potential is conserved in the core conductor at all branching points. In the model, exergy loss and entropy generation rates are calculated for each branch of equivalent cylinders of electrotonic length (L) ranging from 0.1 to 1.5 for four different dendritic branches: the input branch (BI), the sister branch (BS) and two cousin branches (BC-1 and BC-2). Thermodynamic analysis with the data coming from two different cat motoneuron studies shows that in both experiments nearly the same amount of exergy is lost while nearly the same amount of entropy is generated. The guinea pig vagal motoneuron loses twice as much exergy as the cat models, and the squid exergy loss and entropy generation were nearly tenfold those of the guinea pig vagal motoneuron model. The thermodynamic analysis shows that the energy dissipated in the dendritic trees is directly proportional to the electrotonic length, the exergy loss and the entropy generation. Entropy generation and exergy loss show variability not only between vertebrates and invertebrates but also within the same class. Concurrently, the Na+ ion load of a single action potential, the metabolic energy utilization and its thermodynamic aspects were evaluated for the squid giant axon and the mammalian motoneuron model. Energy demand is supplied to the neurons in the form of adenosine triphosphate (ATP). Exergy destruction and entropy generation upon ATP hydrolysis are calculated. ATP utilization, exergy destruction and entropy generation showed differences in each model depending on the variations in ion transport along the channels.
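
As background for the cable-model setup described above (standard Rall equivalent-cylinder relations, stated here for orientation rather than quoted from the paper), a dendritic tree whose branch diameters obey the 3/2 power law can be collapsed into a single equivalent cylinder, with

\[
d_{p}^{3/2} = \sum_{k} d_{k}^{3/2}, \qquad \lambda = \sqrt{\frac{R_m\,d}{4\,R_i}}, \qquad L = \frac{\ell}{\lambda},
\]

where d_p and d_k are the parent and daughter branch diameters, R_m and R_i are the specific membrane and axial resistivities, d and ℓ are the diameter and physical length of the equivalent cylinder, and L is the electrotonic length that is varied from 0.1 to 1.5 in the analysis.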

Keywords: ATP utilization, entropy generation, exergy loss, neuronal information transmittance

Procedia PDF Downloads 395
1255 Integrated Geophysical Approach for Subsurface Delineation in Srinagar, Uttarakhand, India

Authors: Pradeep Kumar Singh Chauhan, Gayatri Devi, Zamir Ahmad, Komal Chauhan, Abha Mittal

Abstract:

The application of geophysical methods to study the subsurface profile for site investigation is becoming popular globally. These methods are non-destructive and provide an image of the subsurface at shallow depths. The seismic refraction method is one of the most common and efficient methods used for civil engineering site investigations, particularly for determining the seismic velocity of the subsurface layers. The resistivity imaging technique is a geo-electrical method used to image the subsurface, water-bearing zones, bedrock and layer thickness. An integrated approach combining seismic refraction and 2D resistivity imaging provides a better and more reliable picture of the subsurface. These are economical and less time-consuming field surveys which provide high-resolution images of the subsurface. The geophysical surveys carried out in this study include the seismic refraction and 2D resistivity imaging methods for delineation of subsurface strata in different parts of Srinagar, Garhwal Himalaya, India. The aim of the survey was to map the shallow subsurface in terms of geological and geophysical properties, mainly P-wave velocity, resistivity, layer thickness, and lithology of the area. Both sides of the Alaknanda River, which flows through the centre of the city, were covered by taking two profiles on each side using both methods. Seismic and electrical surveys were carried out at the same locations to complement each other's results. The seismic refraction survey was carried out using an ABEM TeraLoc 24-channel seismograph, and the 2D resistivity imaging was performed using ABEM Terrameter LS equipment. The results show three distinct layers on both sides of the river up to a depth of 20 m. The subsurface is divided into three distinct layers, namely alluvium extending up to 3 m depth, a conglomerate zone lying between depths of 3 m and 15 m, and compacted pebbles and cobbles beyond 15 m. The P-wave velocity in the top layer is found in the range of 400-600 m/s, in the second layer it varies from 700 to 1100 m/s, and in the third layer it is 1500-3300 m/s. The resistivity results also show a similar pattern and are in good agreement with the seismic refraction results. The results obtained in this study were validated against an exposed river scar available at one site. The study established the efficacy of geophysical methods for subsurface investigations.
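
For context (a textbook two-layer refraction relation, not a formula quoted from the study), the depth to the first refractor can be estimated from the crossover distance x_c between the direct and refracted first arrivals and the layer velocities V1 < V2 as

\[
z_{1} = \frac{x_{c}}{2}\sqrt{\frac{V_{2}-V_{1}}{V_{2}+V_{1}}}.
\]

Relations of this kind, applied layer by layer to the travel-time curves, underlie the interpretation of the 400-600 m/s, 700-1100 m/s and 1500-3300 m/s layers reported above.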

Keywords: 2D resistivity imaging, P-wave velocity, seismic refraction survey, subsurface

Procedia PDF Downloads 260
1254 Education Quality Development for Excellence Performance with Higher Education by Using COBIT 5

Authors: Kemkanit Sanyanunthana

Abstract:

The purpose of this research is to study the information technology management systems that support education at five private universities in Thailand, based on case studies of institutions that have been developing their quality and standards of management and education through the provision of information technology services to support excellent performance. The concept of connecting information technology with a suitable system has been created by information technology administrators as a system that can be used throughout the organization to help obtain the utmost benefit from all resources. Hence, the researcher, as a person who has been performing these duties within higher education, was interested in conducting this research by selecting Control Objectives for Information and Related Technology 5 (COBIT 5) together with the Malcolm Baldrige National Quality Award (MBNQA) of the United States, a national award which applies the concept of Total Quality Management (TQM) to organizational evaluation. This evaluation, called the Education Criteria for Performance Excellence (EdPEx), focuses on studying and comparing education quality development for excellent performance using COBIT 5 in terms of information technology, in order to study the problems and obstacles of the investigation process for an information technology system, which is considered an instrument to drive organizations to reach excellent performance in information technology, and to serve as a model for evaluating and analyzing the process in accordance with the universities' strategic plans for information technology. This research is conducted in the form of descriptive and survey research based on the case studies. Data collection was carried out using questionnaires administered to administrators working in the information technology field, with research documents related to change management as the main study material. It can be concluded that the performance based on the APO (Align, Plan and Organise) domain processes of the COBIT 5 framework, which emphasizes concordant governance and management of the organizations' strategic plans, could reach only 95%. This might be because of some restrictions, such as organizational cultures; therefore, the researcher has studied and analyzed the management of information technology in the universities as a whole, under their organizational structures, in order to reach performance in accordance with the overall APO domain, which would affect the determined strategic plans so that they can be developed on the basis of excellent performance of information technology, and to apply a risk management system at the organizational level to every performance process, which would improve the effectiveness of information technology resource management and deliver the utmost benefits.

Keywords: COBIT 5, APO, EdPEx criteria, MBNQA

Procedia PDF Downloads 327
1253 Calcein Release from Liposomes Mediated by Phospholipase A₂ Activity: Effect of Cholesterol and Amphipathic Di and Tri Blocks Copolymers

Authors: Marco Soto-Arriaza, Eduardo Cena-Ahumada, Jaime Melendez-Rojel

Abstract:

Background: Liposomes have been widely used as a lipid bilayer model to study the physicochemical properties of biological membranes and the encapsulation, transport and release of different molecules. Furthermore, extensive research has focused on improving the efficiency of drug transport by developing tools that improve the release of the encapsulated drug from liposomes. In this context, the enzymatic activity of PLA₂, despite having been shown to be an effective tool to promote the release of drugs from liposomes, is still an open field of research. Aim: The aim of the present study is to explore the effect of cholesterol (Cho) and amphipathic di- and tri-block copolymers on calcein release mediated by the enzymatic activity of PLA₂ in dipalmitoylphosphatidylcholine (DPPC) liposomes under physiological conditions. Methods: Different dispersions of DPPC, cholesterol, the di-block POE₄₅-PCL₅₂ or the tri-block PCL₁₂-POE₄₅-PCL₁₂ were prepared by the extrusion method after five freezing/thawing cycles, in 10 mM phosphate buffer, pH 7.4, in the presence of calcein. Calcein-loaded DPPC liposomes were centrifuged at 15,000 rpm for 10 min to separate free calcein. Enzymatic activity assays of PLA₂ were performed at 37°C using TBS buffer, pH 7.4. The size distribution, polydispersity, zeta potential and calcein encapsulation of the DPPC liposomes were monitored. Results: PLA₂ activity showed slower kinetics of calcein release up to 20 mol% cholesterol, evidencing a minimum at 10 mol% and then a maximum at 18 mol%. Regardless of the percentage of cholesterol, up to 18 mol% a one hundred percent release of calcein was observed. At higher cholesterol concentrations, PLA₂ was shown to be inefficient or not to be involved in calcein release. In assays where copolymers were added at a concentration lower than their cmc, a behavior similar to that shown in the presence of Cho was observed, that is, slower calcein release kinetics. In both experimental approaches, one hundred percent calcein release was observed. PLA₂ was shown to be sensitive to the inhibitor 4-(4-octadecylphenyl)-4-oxobutenoic acid and to calcium, reducing the release of calcein to 0%. The viability of HeLa cells decreased by 7% in the presence of DPPC liposomes after 3 hours of incubation, and by 17% and 23% at 5 and 15 hours, respectively. Conclusion: Calcein release from DPPC liposomes mediated by PLA₂ activity depends on the percentage of cholesterol and the presence of copolymers. Both cholesterol up to 20 mol% and copolymers below their cmc could be applied to the regulation of the release kinetics of antitumoral drugs without inducing cell toxicity per se.
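
As a methodological note (a common normalization for calcein release assays, included here as an assumption since the abstract does not state the exact formula used), the percentage release at time t is typically computed as

\[
\%\,\mathrm{release} = 100 \times \frac{F_t - F_0}{F_{\mathrm{max}} - F_0},
\]

where F_0 is the initial fluorescence of the intact liposome suspension, F_t the fluorescence after PLA₂ addition, and F_max the fluorescence after complete lysis of the liposomes (for example, with a detergent).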

Keywords: amphipathic copolymers, calcein release, cholesterol, DPPC liposome, phospholipase A₂

Procedia PDF Downloads 166
1252 Recycling Waste Product for Metal Removal from Water

Authors: Saidur R. Chowdhury, Mamme K. Addai, Ernest K. Yanful

Abstract:

This research was performed to assess the potential of nickel smelter slag, an industrial waste, as an adsorbent for the removal of metals from aqueous solution. An investigation was carried out of arsenic (As), copper (Cu), lead (Pb) and cadmium (Cd) adsorption from aqueous solution. The smelter slag was obtained from Ni ore at the Vale Inco Ni smelter in Sudbury, Ontario, Canada. Batch experimental studies were conducted to evaluate the removal efficiencies of the smelter slag. The slag was characterized by surface analytical techniques and contained different iron oxides and iron-silicate-bearing compounds. In this study, the effects of pH, contact time, particle size, competition by other ions and slag dose, as well as the distribution coefficient, were evaluated to determine the optimum adsorption conditions of the slag as an adsorbent for As, Cu, Pb and Cd. The results showed 95-99% removal of As, Cu and Pb, and almost 50-60% removal of Cd, when batch experiments were conducted at an initial metal concentration of 5-10 mg/L, a slag dose of 10 g/L, a contact time of 10 hours, a shaking speed of 170 rpm and 25°C. The maximum removal of As, Cu and Pb was achieved at pH 5, while the maximum removal of Cd was found above pH 7. A column experiment was also conducted to evaluate adsorption depth and service time for metal removal. This study also determined the adsorption capacity, adsorption rate and mass transfer rate. The maximum adsorption capacity was found to be 3.84 mg/g for As, 4 mg/g for Pb, and 3.86 mg/g for Cu. The adsorption capacities of the nickel slag for the four test metals were in the decreasing order Pb > Cu > As > Cd. Modelling of the experimental data with Visual MINTEQ revealed saturation indices of < 0 in all cases, suggesting that the metals at this pH were under-saturated and thus in their aqueous forms. This confirms the absence of precipitation in the removal of these metals at these pH values. The experimental results also showed that Fe and Ni leaching from the slag during the adsorption process was very minimal, ranging from 0.01 to 0.022 mg/L, indicating the slag's potential as an adsorbent in the treatment industry. The study also revealed that the waste product (Ni smelter slag) can be reused about five times before disposal in a landfill or as a stabilization material. It also highlighted recycled slags as a potential reactive adsorbent in the field of remediation engineering and explored the benefits of using renewable waste products for the water treatment industry.
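
For reference (standard batch-adsorption definitions, with the symbols introduced here rather than taken from the abstract), the reported capacities and distribution coefficients follow from

\[
q_e = \frac{(C_0 - C_e)\,V}{m}, \qquad K_d = \frac{q_e}{C_e},
\]

where C_0 and C_e are the initial and equilibrium metal concentrations (mg/L), V is the solution volume (L) and m the slag mass (g), so that q_e is expressed in mg/g, consistent with the maximum capacities quoted above (e.g., 3.84 mg/g for As).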

Keywords: adsorption, industrial waste, recycling, slag, treatment

Procedia PDF Downloads 147
1251 Causation and Criminal Responsibility

Authors: László Schmidt

Abstract:

“Post hoc ergo propter hoc” means “after it, therefore because of it”. In other words: if event Y followed event X, then event Y must have been caused by event X. The question of causation has long been a central theme in philosophical thought, and many different theories have been put forward. However, causality is an essentially contested concept (ECC), as it has no universally accepted definition and is used differently in everyday, scientific, and legal thinking. In the field of law, the question of causality arises mainly in the context of establishing legal liability: in criminal law and in the rules of civil law on liability for damages arising either from breach of contract or from tort. In this study, some philosophical theories of causality are presented, together with how these theories correlate with legal causality. It is quite interesting when philosophical abstractions meet the pragmatic demands of jurisprudence. In Hungarian criminal judicial practice, the principle of the equivalence of conditions is the generally accepted and applicable standard of causation, where all necessary conditions are considered equivalent and thus a cause. The idea is that without the trigger, the subsequent outcome would not have occurred; all the conditions that led to the subsequent outcome are equivalent. Where the trigger that led to the result is accompanied by an additional intervening cause, including an accidental one independent of the perpetrator, the causal link is not broken; at most, the causal link becomes looser. The importance of the intervening causes in the outcome should be given due weight in the imposition of the sentence. According to court practice, if the conduct of the offender sets in motion the causal process which led to the result, the fact that other factors, such as the victim's illness, may have contributed to it does not exclude his criminal liability and does not interrupt the causal process. The concausa does not break the chain of causation, i.e. the existence of a causal link establishes the criminal liability of the offender. Courts have also held that an act is a cause of the result if the act cannot be omitted without the result also being omitted. This essentially assumes a hypothetical elimination procedure: the act must be omitted in thought, and it must then be examined whether the result would still occur or whether it would also be omitted. On the substantive side, the essential condition for establishing the offence is that the result must be demonstrably connected with the activity committed. The requirement that the facts be established beyond reasonable doubt must also apply to the causal link: that is to say, uncertainty about the causal link between the conduct and the result of the offence precludes the perpetrator from being held liable for the result. Sometimes, however, the courts do not specify in the reasons for their judgments what standard of causation they apply, i.e. on what basis they establish the existence of (legal) causation.

Keywords: causation, Hungarian criminal law, responsibility, philosophy of law

Procedia PDF Downloads 42
1250 Reasons for Lack of an Ideal Disinfectant after Dental Treatments

Authors: Ilma Robo, Saimir Heta, Rialda Xhizdari, Kers Kapaj

Abstract:

Background: The ideal disinfectant for surfaces, instruments, air, and skin, both in dentistry and in the fields of medicine, does not exist. This is for the sole reason that all the characteristics of the ideal disinfectant cannot be contained in one product: these are characteristics such that, if one of them is emphasized, it will conflict with another. A disinfectant must be stable and not be affected by changes in the environmental conditions where it is kept, which means that it should not be affected by an increase in the temperature or humidity of the environment. Both of these elements conflict with another element of the idea of an ideal disinfectant, as they disrupt the solubility ratios of the base substance of the disinfectant versus the diluent. Material and methods: The study aims to extract the constant of each disinfectant/antiseptic used in dental disinfection protocols, together with the side effects on the surface of the skin or mucosa where it is applied in the role of an antiseptic. Finally, attempts were made to draw conclusions about the best possible combination of disinfectants after a dental procedure, based on the data extracted from the basic literature covered during the development of the pharmacology module, as a module in the formation of a dentist, compared against data published in the literature. Results: The sensitivity of a disinfectant to changes in the atmospheric conditions of the environment where it is kept is a known fact. Care regarding this element is always accompanied by advice on the application of the specific disinfectant, in order to obtain the desired clinical result. The constants of the disinfectants, classified on the basis of the data collected and presented, are: alcohols, 70-120; glycols, 0.2; aldehydes, 30-200; phenols, 15-60; acids, 100; halogens (povidone-iodine), 5-75; halogens (hypochlorous acid), 150; halogens (sodium hypochlorite), 30-35; oxidants, 18-60; metals, 0.2-10. The halogens should be singled out, since specific results were obtained for the individual representatives of this class, and it is these representatives that find scope for clinical application in dentistry. Conclusions: The search for the "ideal", under conditions where its defining criteria are also established, not only for disinfectants but also for any medication or pharmaceutical product, is an ongoing search without any definitive results. In this mine of data in the published literature, if there is something fixed and calculable, such as the specific constant for disinfectants, the search for the ideal becomes more concrete. During disinfection protocols, different disinfectants are applied, since the fields of action are different, including water, air, aspiration devices and instruments, with the disinfectants used in full accordance with the manufacturers' indications.

Keywords: disinfectant, constant, ideal, side effects

Procedia PDF Downloads 72
1249 Culturally Relevant Education Challenges and Threats in the US Secondary Classroom

Authors: Owen Cegielski, Kristi Maida, Danny Morales, Sylvia L. Mendez

Abstract:

This study explores the challenges and threats US secondary educators experience in incorporating culturally relevant education (CRE) practices in their classrooms. CRE is a social justice pedagogical practice used to connect students' cultural references to academic skills and content, to promote critical reflection, to facilitate cultural competence, and to critique discourses of power and oppression. Empirical evidence on CRE demonstrates positive student educational outcomes in terms of achievement, engagement, and motivation. Additionally, due to the direct focus on uplifting diverse cultures through the curriculum, students experience greater feelings of belonging, increased interest in the subject matter, and stronger racial/ethnic identities. When these teaching practices are in place, educators develop deeper relationships with their students and appreciate the multitude of gifts they (and their families) bring to the classroom environment. Yet, educators regularly report being unprepared to incorporate CRE in their daily teaching practice and identify substantive gaps in their knowledge and skills in this area. Often, they were not exposed to CRE in their educator preparation program, nor do they receive adequate support through school- or district-wide professional development programming. Through a descriptive phenomenological research design, 20 interviews were conducted with a diverse set of secondary school educators to explore the challenges and threats they experience in incorporating CRE practices in their classrooms. The guiding research question for this study is: What are the challenges and threats US secondary educators face when seeking to incorporate CRE practices in their classrooms? Interviews were grounded in the theory of challenge and threat states, which highlights the ways in which challenges and threats are appraised and how resources factor into emotional valence and perception, as well as the potential to meet the task at hand. Descriptive phenomenological data analysis strategies were utilized to develop an essential structure of the educators' views of challenges and threats in regard to incorporating CRE practices in their secondary classrooms. The attitude of the phenomenological reduction method was adopted, and the data were analyzed through five steps: sense of the whole, meaning units, transformation, structure, and essential structure. The essential structure that emerged was that, while secondary educators display genuine interest in learning how to successfully incorporate CRE practices, they perceive it to be a challenge (and not a threat) due to a lack of exposure, which diminishes educator capacity, comfort, and confidence in employing CRE practices. These findings reveal the value of attending to emotional valence and perception of CRE in promoting this social justice pedagogical practice. Findings also reveal the importance of appropriately resourcing educators with CRE support to ensure they develop and utilize this practice.

Keywords: culturally relevant education, descriptive phenomenology, social justice practice, US secondary education

Procedia PDF Downloads 187
1248 Structure and Magnetic Properties of M-Type Sr-Hexaferrite with Ca, La Substitutions

Authors: Eun-Soo Lim, Young-Min Kang

Abstract:

M-type Sr-hexaferrite (SrFe₁₂O₁₉) has been studied during the past decades because it is the most utilized material in permanent magnets, owing to its low price, outstanding chemical stability, and appropriate hard magnetic properties. Many attempts have been made to improve the intrinsic magnetic properties of M-type Sr-hexaferrites (SrM), such as improving the saturation magnetization (MS) and crystalline anisotropy by cation substitution. It is well proved that Ca-La-Co substitution is one of the most successful approaches, leading to a significant enhancement in crystalline anisotropy without reducing MS, and thus Ca-La-Co-doped SrM has been commercialized in high-grade magnet products. In this research, the effect of the respective doping of Ca and La into the SrM lattice was studied under the assumption that these elements could substitute both the Fe and Sr sites. Hexaferrite samples of stoichiometric SrFe₁₂O₁₉ (SrM), Ca-substituted SrM with formulae Sr₁₋ₓCaₓFe₁₂Oₐ (x = 0.1, 0.2, 0.3, 0.4) and SrFe₁₂₋ₓCaₓOₐ (x = 0.1, 0.2, 0.3, 0.4), and La-substituted SrM with formulae Sr₁₋ₓLaₓFe₁₂Oₐ (x = 0.1, 0.2, 0.3, 0.4) and SrFe₁₂₋ₓLaₓOₐ (x = 0.1, 0.2, 0.3, 0.4) were prepared by conventional solid-state reaction processes. X-ray diffraction (XRD) with a Cu Kα radiation source (λ = 0.154056 nm) was used for phase analysis. Microstructural observation was conducted with a field emission scanning electron microscope (FE-SEM). M-H measurements were performed using a vibrating sample magnetometer (VSM) at 300 K. An almost pure M-type phase could be obtained in all series of hexaferrites calcined at > 1250 ºC. Small amounts of Fe₂O₃ phases were detected in the XRD patterns of the Sr₁₋ₓCaₓFe₁₂Oₐ (x = 0.2, 0.3, 0.4) and Sr₁₋ₓLaₓFe₁₂Oₐ (x = 0.1, 0.2, 0.3, 0.4) samples. Also, small amounts of unidentified secondary phases without the Fe₂O₃ phase were found in the SrFe₁₂₋ₓCaₓOₐ (x = 0.4) and SrFe₁₂₋ₓLaₓOₐ (x = 0.3, 0.4) samples. Although Ca substitution (x) into the SrM structure did not exhibit a clear tendency in the cell parameter change in either series of samples, Sr₁₋ₓCaₓFe₁₂Oₐ and SrFe₁₂₋ₓCaₓOₐ, the cell volume slightly decreased with Ca doping in the Sr₁₋ₓCaₓFe₁₂Oₐ samples and increased in the SrFe₁₂₋ₓCaₓOₐ samples. Considering the relative ion sizes of Sr²⁺ (0.113 nm), Ca²⁺ (0.099 nm), and Fe³⁺ (0.064 nm), these results imply that Ca substitutes both the Sr and Fe sites in SrM. A clear tendency of cell parameter change was observed in the case of La substitution into the Sr site of SrM (Sr₁₋ₓLaₓFe₁₂Oₐ); the cell volume decreased with increasing x. This is owing to the similar but smaller ion size of La³⁺ (0.106 nm) compared with that of Sr²⁺. In the case of SrFe₁₂₋ₓLaₓOₐ, the cell volume first decreased at x = 0.1 and then remained almost constant as x increased from 0.2 to 0.4. These results indicate that La only substitutes the Sr site in the SrM structure. In addition, the microstructure and magnetic properties of these samples, and the correlation between them, will be presented.
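For readers following the cell-volume argument, the volume of the hexagonal M-type cell follows directly from the refined lattice parameters as V = (√3/2)·a²·c. The sketch below applies this relation; the a and c values are placeholders of a typical magnitude for SrM, not the refined values of this study.

```python
# Illustrative calculation of the hexagonal unit-cell volume, V = (sqrt(3)/2) * a^2 * c,
# as used when comparing cell volumes of substituted samples. The lattice parameters
# below are placeholders of a typical magnitude for SrM, not this study's data.
import math

def hexagonal_cell_volume(a_nm: float, c_nm: float) -> float:
    """Unit-cell volume of a hexagonal lattice, in nm^3."""
    return math.sqrt(3.0) / 2.0 * a_nm ** 2 * c_nm

illustrative_params = {0.0: (0.5884, 2.3050), 0.2: (0.5880, 2.3030)}  # x: (a, c) in nm
for x, (a, c) in illustrative_params.items():
    print(f"x = {x}: V = {hexagonal_cell_volume(a, c):.4f} nm^3")
```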

Keywords: M-type hexaferrite, substitution, cell parameter, magnetic properties

Procedia PDF Downloads 213
1247 Recommendations for Teaching Word Formation for Students of Linguistics Using Computer Terminology as an Example

Authors: Svetlana Kostrubina, Anastasia Prokopeva

Abstract:

This research presents a comprehensive study of word formation processes in computer terminology in English and Russian and provides learners with a system of exercises for training these skills. Its originality lies in the comparative approach, which shows both general patterns and specific features of the formation of English and Russian computer terms. The key point is the development of a system of exercises for training computer terminology based on Bloom’s taxonomy. The data contain 486 units (228 English terms from the Glossary of Computer Terms and 258 Russian terms from the Terminological Dictionary-Reference Book). The objective is to identify the main affixation models in the formation of English and Russian computer terms and to develop exercises. To achieve this goal, the authors employed Bloom’s taxonomy as a methodological framework to create a systematic exercise program aimed at enhancing students’ cognitive skills in analyzing, applying, and evaluating computer terms. The exercises are appropriate for various levels of learning, from basic recall of definitions to higher-order thinking skills, such as synthesizing new terms and critically assessing their usage in different contexts. The methodology also includes: a method of scientific and theoretical analysis for systematizing linguistic concepts and clarifying the conceptual and terminological apparatus; a method of nominative and derivative analysis for identifying word-formation types; a method of word-formation analysis for organizing linguistic units; a classification method for determining structural types of abbreviations applicable to the field of computer communication; a quantitative analysis technique for determining the productivity of methods for forming abbreviations of computer vocabulary based on English and Russian computer terms; a technique of tabular data processing for a visual presentation of the results obtained; and a technique of interlingual comparison for identifying common and different features of abbreviations of computer terms in Russian and English. The research shows that affixation retains its productivity in the formation of English and Russian computer terms. Bloom’s taxonomy allows us to plan a training program and predict the effectiveness of the compiled program based on an assessment of the teaching methods used.
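As an illustration of the quantitative step (measuring the productivity of word-formation models), the sketch below counts how often a small, assumed inventory of affixation patterns occurs in a term list. The affix inventory and the sample terms are invented for this sketch and are not the study's data.

```python
# Illustrative count of affixation-model productivity in a term list. The affix
# inventory and the sample terms are invented for this sketch, not the study's data.
from collections import Counter

PREFIXES = ("re", "de", "un", "inter")   # assumed illustrative inventory
SUFFIXES = ("er", "ing", "tion", "ware")

def affix_model(term: str) -> str:
    """Assign a term to a simple prefix/suffix model (or 'other')."""
    term = term.lower()
    for prefix in PREFIXES:
        if term.startswith(prefix):
            return f"prefix: {prefix}-"
    for suffix in SUFFIXES:
        if term.endswith(suffix):
            return f"suffix: -{suffix}"
    return "other"

terms = ["compiler", "debugging", "reinstall", "encryption", "firmware", "interface", "cache"]
productivity = Counter(affix_model(t) for t in terms)
print(productivity.most_common())
```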

Keywords: word formation, affixation, computer terms, Bloom's taxonomy

Procedia PDF Downloads 18
1246 Effect of Different Phosphorus Levels on Vegetative Growth of Maize Variety

Authors: Tegene Nigussie

Abstract:

Introduction: Maize is the most domesticated of all the field crops. Wild maize has not been found to date, and there has been much speculation on its origin. Regardless of the validity of different theories, it is generally agreed that the center of origin of maize is Central America, primarily Mexico and the Caribbean. Maize in Africa is a recent introduction, although data suggest that it was present in Nigeria even before Columbus's voyages. After being taken to Europe in 1493, maize was introduced to Africa and spread through the continent by different routes. Maize is an important cereal crop in Ethiopia; it is the primary staple food, and rural households show a strong preference for it. For human food, the important constituents of the grain are carbohydrates (starch and sugars), protein, fat or oil (in the embryo), and minerals. About 75 percent of the kernel is starch (a range of 60-80 percent), but the protein content is low (8-15%). In Ethiopia, the introduction of modern farming techniques appears to be a priority. However, the adoption of modern inputs by peasant farmers has been found to be very slow; for example, the adoption rate of fertilizer, an input that is relatively widely adopted, is very slow. Differences in socio-economic factors, including input prices and marketing, lie behind the low rate of technological adoption. Objective: The aim of the study is to determine the optimum application rate or level of phosphorus fertilizer for the vegetative growth of maize and to identify the effect of different phosphorus rates on the growth and development of maize. Methods: The vegetative (above-ground) parameters were measured on five plants randomly sampled from the middle rows of each plot. Results: The interaction of nitrogen and maize variety showed a significant effect (p < 0.01) on plant height, obtained with the combined application of 60 kg/ha and the BH140 maize variety, and on root length, obtained with the application of 60 kg/ha of nitrogen and the BH140 variety. The highest mean number of leaves per plant (12.33) and mean number of nodes per plant (7.1) can be used as alternative indicators of better vegetative growth of maize. Conclusion and Recommendation: Maize is one of the popular and widely cultivated crops in Ethiopia. This study was conducted to investigate the best dosage of phosphorus for the vegetative growth, yield, and quality of the maize variety and to recommend a phosphorus rate and the variety best adapted to the specific soil condition or area.
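The reported interaction effect (fertilizer rate × variety, p < 0.01) is the kind of result typically obtained from a two-way analysis of variance. The abstract does not state which statistical procedure was used, so the sketch below is only one plausible way to test such an interaction, with a placeholder data set.

```python
# Illustrative two-way ANOVA for a rate x variety interaction on plant height.
# The data frame is a placeholder, not the trial data reported in the abstract.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "rate_kg_ha": [0, 0, 30, 30, 60, 60, 0, 0, 30, 30, 60, 60],
    "variety":    ["BH140"] * 6 + ["Local"] * 6,
    "plant_height_cm": [148, 151, 160, 158, 171, 169, 140, 143, 149, 151, 155, 158],
})

model = smf.ols("plant_height_cm ~ C(rate_kg_ha) * C(variety)", data=df).fit()
print(anova_lm(model, typ=2))  # F statistics and p-values for main effects and the interaction
```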

Keywords: leaf, carbohydrate, protein, adoption, sugar

Procedia PDF Downloads 18
1245 An Investigation of the Structural and Microstructural Properties of Zn1-xCoxO Thin Films Applied as Gas Sensors

Authors: Ariadne C. Catto, Luis F. da Silva, Khalifa Aguir, Valmor Roberto Mastelaro

Abstract:

Zinc oxide (ZnO), pure or doped, is one of the most promising metal oxide semiconductors for gas sensing applications due to its well-known high surface-to-volume ratio and surface conductivity. It has been shown that ZnO is an excellent gas-sensing material for different gases such as CO, O2, NO2, and ethanol. In this context, pure and doped ZnO exhibiting different morphologies and a high surface/volume ratio can be a good option regarding the limitations of current commercial sensors. Different studies have shown that doping ZnO with metals (e.g., Co, Fe, Mn) enhances its gas sensing properties. Motivated by these considerations, the aim of this study was to investigate the role of Co ions in the structural, morphological, and gas sensing properties of nanostructured ZnO samples. ZnO and Zn1-xCoxO (0 < x < 5 wt%) thin films were obtained via the polymeric precursor method. The gas sensing properties (sensitivity, selectivity, response time, and long-term stability) were investigated when the samples were exposed to different concentrations of ozone (O3) at different working temperatures. The gas sensing behaviour was probed by electrical resistance measurements. The long- and short-range order structure around the Zn and Co atoms was investigated by X-ray diffraction and X-ray absorption spectroscopy. X-ray photoelectron spectroscopy measurements were performed in order to identify the elements present on the film surface as well as to determine the sample composition. Microstructural characteristics of the films were analyzed by a field-emission scanning electron microscope (FE-SEM). The Zn1-xCoxO XRD patterns were indexed to the wurtzite ZnO structure, and no secondary phase was observed even at higher cobalt contents. Co K-edge XANES spectra revealed the predominance of Co2+ ions. XPS characterization revealed that the Co-doped ZnO samples possessed a higher percentage of oxygen vacancies than the ZnO samples, which also contributed to their excellent gas sensing performance. Gas sensor measurements pointed out that the ZnO and Co-doped ZnO samples exhibit good gas sensing performance in terms of reproducibility and a fast response time (around 10 s). Furthermore, the Co addition contributed to reducing the working temperature for ozone detection and improved the selective sensing properties.
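The electrical-resistance measurements mentioned above are usually condensed into a single response value. One common convention for an n-type oxide exposed to an oxidizing gas such as ozone is S = R_gas/R_air; the abstract does not state which definition the authors used, so the sketch below is an assumption for illustration only.

```python
# Illustrative sensor-response calculation from resistance measurements. One common
# convention for an n-type oxide and an oxidizing gas (ozone) is S = R_gas / R_air;
# the definition actually used in the study is not stated, and the values are made up.
def sensor_response(r_air_ohm: float, r_gas_ohm: float) -> float:
    """Response of an n-type sensor to an oxidizing gas: resistance rises under gas."""
    return r_gas_ohm / r_air_ohm

print(sensor_response(r_air_ohm=1.2e6, r_gas_ohm=8.5e6))  # ~7.1, illustrative values only
```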

Keywords: cobalt-doped ZnO, nanostructured, ozone gas sensor, polymeric precursor method

Procedia PDF Downloads 249
1244 Critical Understanding on Equity and Access in Higher Education Engaging with Adult Learners and International Students in the Context of Globalisation

Authors: Jin-Hee Kim

Abstract:

What distinguishes globalization from previous changes is the scope and intensity of the changes, which together affect many parts of a nation’s system. In this way, globalization relates to the concept of ‘internationalization’ in that a nation state formulates a set of strategies in many areas of its governance to actively react to it. In short, globalization is a ‘catalyst,’ and internationalization is a ‘response.’ In this regard, the field of higher education is one of the representative cases in which globalization has several consequences that change the terrain of national policy-making. Having started in, and long been dominated by, the Western world, internationalization has now expanded to ‘late movers’ such as Asia-Pacific countries. The case of the internationalization of Korean higher education therefore occupies a unique place in this arena. Korea is still one of the major countries sending its students to the so-called ‘first world.’ On the other hand, it has started its effort to recruit international students from around the world to its higher education system. Particularly after the new millennium, the internationalization of higher education was launched at full scale and has gradually become one of the important global policy agendas, working in both directions: opening its turf to foreign educational service providers and recruiting prospective students from other countries. The latter in particular, recruiting international students, has been highlighted under the government project named ‘Study Korea,’ launched in 2004. Not only global but also local issues and motivations underpinned the launch of this nationwide project. Bringing in international students promises various desirable economic outcomes, such as reducing the educational deficit and utilizing graduates in Korean industry after the completion of their studies, to name a few. In addition, in a similar vein, Korea's higher education institutes have started to receive a new group of adult learners. When it comes to questions regarding the quality of and access to education for these new learners, the answer is quite tricky. This study will investigate the different dimensions of education provision and the learning process needed to empower diverse groups regardless of nationality, race, class, and gender in Korea. Listening to the voices of international students and adult learners as non-traditional participants in a changing Korean higher educational space benefits not only the students themselves but also the Korean stakeholders who should try to accommodate more comprehensive and fair educational provision for increasingly diversifying groups of learners.

Keywords: education equity, access, globalisation, international students, adult learning, learning support

Procedia PDF Downloads 210
1243 Effect of Rolling Shear Modulus and Geometric Make up on the Out-Of-Plane Bending Performance of Cross-Laminated Timber Panel

Authors: Md Tanvir Rahman, Mahbube Subhani, Mahmud Ashraf, Paul Kremer

Abstract:

Cross-laminated timber (CLT) is made from layers of timber boards orthogonally oriented in the thickness direction, and due to this, CLT can withstand bi-axial bending, in contrast with most other engineered wood products such as laminated veneer lumber (LVL) and glued laminated timber (GLT). Wood is cylindrically anisotropic in nature and is characterized by significantly lower elastic and shear moduli in the planes perpendicular to the fibre direction. It is therefore classified as an orthotropic material and is characterized by nine elastic constants: three elastic moduli in the longitudinal, tangential, and radial directions; three shear moduli in the longitudinal-tangential, longitudinal-radial, and radial-tangential planes; and three Poisson's ratios. For simplification, timber is generally assumed to be transversely isotropic, reducing the number of elastic properties characterizing it to five, with the longitudinal and radial planes assumed to be planes of symmetry. The validity of this assumption was investigated through numerical modelling of CLT with both orthotropic and transversely isotropic material properties for three softwood species (Norway spruce, Douglas fir, Radiata pine) and three hardwood species (Victorian ash, Beech, Aspen) subjected to uniformly distributed loading under simply supported boundary conditions. It was concluded that assuming the timber to be transversely isotropic results in a negligible error, in the order of 1 percent. It was also observed that, along with the longitudinal elastic modulus, the ratio of longitudinal shear modulus (GL) to rolling shear modulus (GR) has a significant effect on the deflection of CLT panels with lower span-to-depth ratios. For softwoods such as Norway spruce and Radiata pine, the ratio of longitudinal shear modulus GL to rolling shear modulus GR is reported in the literature to be in the order of 12 to 15. This results in shear flexibility in the transverse layers, leading to increased deflection under out-of-plane loading. The rolling shear modulus of hardwoods has been found to be significantly higher than that of softwoods, with the ratio of longitudinal shear modulus to rolling shear modulus being as low as 4. This has resulted in a significant rise in research into the manufacturing of CLT entirely from hardwood, as well as from a combination of softwood and hardwood. The beam theories commonly used to analyze the performance of CLT panels under out-of-plane loads are the shear analogy method, the gamma method, and the k-method. The shear analogy method has been found to be the most effective where shear deformation is significant. The effect of the ratio of the longitudinal shear modulus to the rolling shear modulus of the cross-layer on the deflection of CLT under uniformly distributed load, with respect to its length-to-depth ratio, was investigated using the shear analogy method. It was observed that shear deflection is reduced significantly as the ratio of the shear modulus of the longitudinal layer to the rolling shear modulus of the cross-layer decreases. This indicates that there is significant room for improvement in the bending performance of CLT through the development of hybrid CLT from a mix of softwood and hardwood.
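A minimal sketch of the shear analogy calculation referred to above is given below: effective bending and shear stiffness of a three-layer CLT strip and the mid-span deflection of a simply supported span under uniformly distributed load, where the shear term scales with the rolling shear modulus of the cross layer. The layer properties, the assumed GL/GR ratio of 10, and the simplification E ≈ 0 for the cross layer are illustrative assumptions, not values from the study.

```python
# Illustrative shear analogy calculation for a 3-layer CLT strip under UDL.
# Layer properties and the simplification E = 0 for the cross layer are assumptions.
import math

b = 1.0                 # strip width, m
t = [0.04, 0.02, 0.04]  # layer thicknesses, m (longitudinal, cross, longitudinal)
E = [11.0e9, 0.0, 11.0e9]        # MOE per layer, Pa (cross layer neglected)
G = [0.69e9, 0.069e9, 0.69e9]    # shear modulus per layer, Pa (GR ~ GL/10 assumed)

h = sum(t)
z_mid = [sum(t[:i]) + t[i] / 2 - h / 2 for i in range(len(t))]  # layer centroid offsets

# Effective bending stiffness: own stiffness of each layer plus Steiner terms
EI_eff = sum(E[i] * b * t[i] ** 3 / 12 + E[i] * b * t[i] * z_mid[i] ** 2 for i in range(len(t)))

# Effective shear stiffness (shear analogy); a = distance between outer-layer centroids
a = h - t[0] / 2 - t[-1] / 2
flex = t[0] / (2 * G[0] * b) + sum(t[i] / (G[i] * b) for i in range(1, len(t) - 1)) + t[-1] / (2 * G[-1] * b)
GA_eff = a ** 2 / flex

# Mid-span deflection under UDL q on span L: bending term plus shear term
q, L = 3.0e3, 3.0       # N/m, m
w = 5 * q * L ** 4 / (384 * EI_eff) + q * L ** 2 / (8 * GA_eff)
print(f"EI_eff = {EI_eff:.3e} N*m^2, GA_eff = {GA_eff:.3e} N, w_mid = {w * 1000:.2f} mm")
```

Lowering the cross-layer shear modulus G[1] in this sketch increases only the second (shear) term, which is the trend the abstract describes for high GL/GR ratios.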

Keywords: rolling shear modulus, shear deflection, ratio of shear modulus and rolling shear modulus, timber

Procedia PDF Downloads 128
1242 Conceptualizing Personalized Learning: Review of Literature 2007-2017

Authors: Ruthanne Tobin

Abstract:

As our data-driven, cloud-based, knowledge-centric lives become ever more global, mobile, and digital, educational systems everywhere are struggling to keep pace. Schools need to prepare students to become critical-thinking, tech-savvy, life-long learners who are engaged and adaptable enough to find their unique calling in a post-industrial world of work. Recognizing that no nation can afford poor achievement or high dropout rates without jeopardizing its social and economic future, the thirty-two nations of the OECD are launching initiatives to redesign schools, generally under the banner of Personalized Learning or 21st Century Learning. Their intention is to transform education by situating students as co-enquirers and co-contributors with their teachers of what, when, and how learning happens for each individual. In this focused review of the 2007-2017 literature on personalized learning, the author sought answers to two main questions: “What are the theoretical frameworks that guide personalized learning?” and “What is the conceptual understanding of the model?” Ultimately, the review reveals that, although the research area is overly theorized and under-substantiated, it does provide a significant body of knowledge about this potentially transformative educational restructuring. For example, it addresses the following questions: a) What components comprise a PL model? b) How are teachers facilitating agency (voice and choice) in their students? c) What kinds of systems, processes, and procedures are being used to guide the innovation? d) How is learning organized, monitored, and assessed? e) What role do inquiry-based models play? f) How do teachers integrate the three types of knowledge: content, pedagogical, and technological? g) Which kinds of forces enable, and which impede, personalizing learning? h) What is the nature of the collaboration among teachers? i) How do teachers co-regulate differentiated tasks? One finding of the review shows that while technology can dramatically expand access to information, expectations of its impact on teaching and learning are often disappointing unless the technologies are paired with excellent pedagogies that address students’ needs, interests, and aspirations. This literature review fills a significant gap in this emerging field of research, as it serves to increase the conceptual clarity that has hampered both the theorizing and the classroom implementation of a personalized learning model.

Keywords: curriculum change, educational innovation, personalized learning, school reform

Procedia PDF Downloads 226
1241 Accurate Mass Segmentation Using U-Net Deep Learning Architecture for Improved Cancer Detection

Authors: Ali Hamza

Abstract:

Accurate segmentation of breast ultrasound images is of paramount importance in enhancing the diagnostic capabilities of breast cancer detection. This study presents an approach utilizing the U-Net architecture for segmenting breast ultrasound images, aimed at improving the accuracy and reliability of mass identification within the breast tissue. The proposed method encompasses a multi-stage process. Initially, preprocessing techniques are employed to refine image quality and diminish noise interference. Subsequently, the U-Net architecture, a deep learning convolutional neural network (CNN), is employed for pixel-wise segmentation of regions of interest corresponding to potential breast masses. The U-Net's distinctive architecture, characterized by a contracting and an expansive pathway, enables accurate boundary delineation and detailed feature extraction. To evaluate the effectiveness of the proposed approach, an extensive dataset of breast ultrasound images encompassing diverse cases is employed. Quantitative performance metrics such as the Dice coefficient, Jaccard index, sensitivity, specificity, and Hausdorff distance are used to comprehensively assess the segmentation accuracy. Comparative analyses against traditional segmentation methods showcase the superiority of the U-Net architecture in capturing intricate details and accurately segmenting breast masses. The outcomes of this study emphasize the potential of the U-Net-based segmentation approach in bolstering breast ultrasound image analysis. The method's ability to reliably pinpoint mass boundaries holds promise for aiding radiologists in precise diagnosis and treatment planning. However, further validation and integration within clinical workflows are necessary to ascertain its practical clinical utility and facilitate seamless adoption by healthcare professionals. In conclusion, leveraging the U-Net architecture for breast ultrasound image segmentation provides a robust framework that can significantly enhance diagnostic accuracy and advance the field of breast cancer detection. This approach represents a pivotal step towards empowering medical professionals with a more potent tool for early and accurate breast cancer diagnosis.
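Two of the evaluation metrics named above, the Dice coefficient and the Jaccard index, can be computed directly on binary masks. The sketch below shows one straightforward NumPy formulation with randomly generated stand-in masks; it is not the study's evaluation pipeline.

```python
# Illustrative Dice and Jaccard computation on binary segmentation masks.
# The random arrays stand in for a thresholded U-Net output and a ground-truth mask.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

def jaccard_index(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return (intersection + eps) / (union + eps)

pred = np.random.rand(256, 256) > 0.5    # stand-in for a predicted mask
truth = np.random.rand(256, 256) > 0.5   # stand-in for a ground-truth mask
print(dice_coefficient(pred, truth), jaccard_index(pred, truth))
```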

Keywords: image segmentation, U-Net, deep learning, breast cancer detection, diagnostic accuracy, mass identification, convolutional neural network

Procedia PDF Downloads 85
1240 Laminar Periodic Vortex Shedding over a Square Cylinder in Pseudoplastic Fluid Flow

Authors: Shubham Kumar, Chaitanya Goswami, Sudipto Sarkar

Abstract:

Pseudoplastic fluid flow (n < 1, n being the power-law index) can be found in the food, pharmaceutical, and process industries and exhibits very complex flow behaviour. To our knowledge, inadequate research has been done on this kind of flow, even at very low Reynolds numbers. In the present computation, we consider unsteady laminar flow over a square cylinder in a pseudoplastic flow environment. For Newtonian fluid flow, the laminar vortex shedding range lies between Re = 47 and 180. In this problem, we consider Re = 100 (Re = U∞a/ν, where U∞ is the free stream velocity, a is the side of the cylinder, and ν is the kinematic viscosity of the fluid). The pseudoplastic fluid range has been chosen from close to Newtonian (n = 0.8) to very high pseudoplasticity (n = 0.1). The flow domain is constructed using Gambit 2.2.30, and this software is also used to generate the mesh and to impose the boundary conditions. For all cases, the domain size is 36a × 16a, with 280 × 192 grid points in the streamwise and flow-normal directions, respectively. The domain and the grid points were selected after a thorough grid-independence study at n = 1.0. Fine and equal grid spacing is used close to the square cylinder to capture the upper and lower shear layers shed from the cylinder. Away from the cylinder, the grid is unequal in size and stretched out in all directions. A velocity inlet (u = U∞), a pressure outlet (Neumann condition), and symmetry (free-slip boundary condition, du/dy = 0, v = 0) at the upper and lower domain boundaries are used for this simulation. A wall boundary (u = v = 0) is imposed on the square cylinder surface. The fully conservative 2-D unsteady Navier-Stokes equations are discretized and then solved by Ansys Fluent 14.5 to understand the flow nature. The SIMPLE algorithm, implemented within the finite volume method, is selected for this purpose, as it is the default solver in Fluent. The result obtained for Newtonian fluid flow agrees well with previous work, supporting Fluent's usefulness in academic research. A detailed analysis of the instantaneous and time-averaged flow fields is performed for both Newtonian and pseudoplastic fluid flows. It is observed that the drag coefficient increases continuously as n is reduced. Also, the vortex shedding phenomenon changes at n = 0.4 due to flow instability. These are some of the remarkable findings for the laminar periodic vortex shedding regime in a pseudoplastic flow environment.
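The pseudoplastic behaviour referred to above is usually described by the power-law (Ostwald-de Waele) model, in which the apparent viscosity μ = K·γ̇^(n−1) decreases with shear rate for n < 1. The sketch below evaluates this relation for the power indices studied; the consistency index K and the shear-rate values are illustrative assumptions.

```python
# Illustrative power-law (Ostwald-de Waele) apparent viscosity: mu = K * gamma_dot**(n - 1).
# For n < 1 (pseudoplastic), viscosity falls as shear rate rises; n = 1 recovers Newtonian.
# K and the shear-rate range are assumed values for illustration only.
import numpy as np

def apparent_viscosity(gamma_dot, K=1.0, n=0.6):
    """Apparent viscosity of a power-law fluid, Pa*s; shear-thinning when n < 1."""
    return K * np.asarray(gamma_dot, dtype=float) ** (n - 1.0)

shear_rates = np.array([0.1, 1.0, 10.0, 100.0])   # 1/s
for n in (1.0, 0.8, 0.4, 0.1):
    print(n, apparent_viscosity(shear_rates, K=1.0, n=n))
```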

Keywords: Ansys Fluent, CFD, periodic vortex shedding, pseudoplastic fluid flow

Procedia PDF Downloads 207
1239 A Review of Digital Twins to Reduce Emission in the Construction Industry

Authors: Zichao Zhang, Yifan Zhao, Samuel Court

Abstract:

The carbon emission problem of the traditional construction industry has long been a pressing issue. With the growing emphasis on environmental protection and the advancement of science and technology, the organic integration of digital technology and emission reduction has gradually become a mainstream solution. Among various sophisticated digital technologies, digital twins, which involve creating virtual replicas of physical systems or objects, have gained enormous attention in recent years as tools to improve productivity, optimize management, and reduce carbon emissions. However, the relatively high implementation costs of digital twins, in terms of finance, time, and manpower, have limited their widespread adoption. As a result, most current applications are concentrated within a few industries. In addition, the creation of digital twins relies on a large amount of data and requires designers to possess exceptional skills in information collection, organization, and analysis. Unfortunately, these capabilities are often lacking in the traditional construction industry. Furthermore, as a relatively new concept, digital twins have different expressions and usage methods across different industries. This lack of standardized practice poses a challenge in creating a high-quality digital twin framework for construction. This paper first reviews the current academic studies and industrial practices focused on reducing greenhouse gas emissions in the construction industry using digital twins. Additionally, it identifies the challenges that may be encountered during the design and implementation of a digital twin framework specific to this industry and proposes potential directions for future research. This study shows that digital twins possess substantial potential and significance in enhancing the working environment within the traditional construction industry, particularly in their ability to support decision-making processes. It demonstrates that digital twins can improve the work efficiency and energy utilization of related machinery while helping this industry save energy and reduce emissions. This work will help scholars in this field better understand the relationship between digital twins and energy conservation and emission reduction, and it also serves as a conceptual reference for practitioners implementing related technologies.

Keywords: digital twins, emission reduction, construction industry, energy saving, life cycle, sustainability

Procedia PDF Downloads 106