Search results for: central government constraint and incentive mechanism
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9731

1061 Dependence of Densification, Hardness and Wear Behaviors of Ti6Al4V Powders on Sintering Temperature

Authors: Adewale O. Adegbenjo, Elsie Nsiah-Baafi, Mxolisi B. Shongwe, Mercy Ramakokovhu, Peter A. Olubambi

Abstract:

The sintering step in powder metallurgy (P/M) processes is very sensitive, as it determines to a large extent the properties of the final component produced. Spark plasma sintering has over the past decade been extensively used in consolidating a wide range of materials, including metallic alloy powders. This novel, non-conventional sintering method has proven advantageous over conventional sintering methods, offering full densification of materials, high heating rates, low sintering temperatures, and short sintering cycles. Ti6Al4V has been adjudged the most widely used α+β alloy due to its impressive mechanical performance in service environments, especially in the aerospace and automobile industries, being a light metal alloy with the capacity for the fuel efficiency needed in these industries. The P/M route has been a promising method for the fabrication of parts made from Ti6Al4V alloy due to its cost and material loss reductions and its ability to produce near-net and intricate shapes. However, the use of this alloy has been largely limited owing to its relatively poor hardness and wear properties. The effect of sintering temperature on the densification, hardness, and wear behaviors of spark plasma sintered Ti6Al4V powders was investigated in the present study. Sintering of the alloy powders was performed in the 650–850°C temperature range at a constant heating rate, applied pressure and holding time of 100°C/min, 50 MPa and 5 min, respectively. Density measurements were carried out according to Archimedes' principle, and microhardness tests were performed on sectioned as-polished surfaces at a load of 100 gf and a dwell time of 15 s. Dry sliding wear tests were performed at varied sliding loads of 5, 15, 25 and 35 N using the ball-on-disc tribometer configuration with WC as the counterface material. Microstructural characterization of the sintered samples and wear tracks was carried out using SEM and EDX techniques. 
The density and hardness of the sintered samples increased with increasing sintering temperature. Near-full densification (99.6% of the theoretical density) and a Vickers micro-indentation hardness of 360 HV were attained at 850°C. The coefficient of friction (COF) and wear depth improved significantly with increased sintering temperature under all the loading conditions examined except 25 N, indicating better mechanical properties at high sintering temperatures. Worn surface analyses showed that the wear mechanism was a synergy of adhesive and abrasive wear, with the former prevalent.
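Relative densities like the 99.6% quoted above follow directly from Archimedes' principle. A minimal sketch of the calculation, where the masses are invented for illustration and 4.43 g/cm³ is the commonly quoted theoretical density of Ti6Al4V (an assumption, not a figure from the abstract):

```python
# Archimedes density check for a sintered compact (illustrative values).
RHO_WATER = 1.000        # g/cm^3, approximate density of the immersion fluid
RHO_THEORETICAL = 4.43   # g/cm^3, commonly quoted for Ti6Al4V (assumption)

def archimedes_density(m_dry, m_suspended, rho_fluid=RHO_WATER):
    """Bulk density from dry mass and mass measured suspended in the fluid.

    The mass difference equals the mass of displaced fluid, so the sample
    volume is (m_dry - m_suspended) / rho_fluid.
    """
    return m_dry / (m_dry - m_suspended) * rho_fluid

rho = archimedes_density(m_dry=4.412, m_suspended=3.412)  # hypothetical masses
relative_density = rho / RHO_THEORETICAL
print(f"{relative_density:.1%}")  # 99.6%
```

With these illustrative masses the sample volume is 1.0 cm³, giving a bulk density of 4.412 g/cm³ and hence 99.6% of theoretical.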

Keywords: hardness, powder metallurgy, spark plasma sintering, wear

Procedia PDF Downloads 249
1060 Sustainable Development Approach for Coastal Erosion Problem in Thailand: Using Bamboo Sticks to Rehabilitate Coastal Erosion

Authors: Sutida Maneeanakekul, Dusit Wechakit, Somsak Piriyayota

Abstract:

Coastal erosion is a major problem in Thailand, on both the Gulf of Thailand and the Andaman Sea coasts. According to the Department of Marine and Coastal Resources, erosion occurred along 200 km of coastline at an average rate of 5 meters/year. Coastal erosion affects public and government properties as well as the socio-economy of the country, including emigration from coastal communities, loss of habitats, and decline in fishery production. To combat the problem, the Marine and Coastal Resources Department began projects using bamboo sticks for coastal defense against erosion in five areas in November 2010: Pak Klong Munharn, Samut Songkhram Province; Ban Khun Samutmaneerat, Pak Klong Pramong and Chao Matchu Shrine, Samut Sakhon Province; and Pak Klong Hongthong, Chachoengsao Province. In 2012, an evaluation of the effectiveness of this approach to the coastal erosion problem was carried out, with a focus on three aspects. Firstly, the change in physical and biological features after using the bamboo stick technique was assessed. Secondly, the participation of people in the community in managing the problem of coastal erosion was evaluated. The last aspect evaluated was the satisfaction of the community with this technique. The results showed that the amount of sediment behind the bamboo stick lines changed dramatically, increasing by about 23.50-56.20 centimeters during 2012-2013. In terms of biological features, mangrove forest areas increased, especially at Bang Ya Prak, Samut Sakhon Province, where average tree density was found to be about 4,167 trees per square meter. Additionally, an increase in fishery production was observed. 
At present, the evaluated physical features tend to improve in every aspect, as does the satisfaction of people in the community with the process of solving the erosion problem. People in the community are involved in the preparation, operation, monitoring and evaluation processes at a moderate level.

Keywords: bamboo sticks, coastal erosion, rehabilitation, Thailand, sustainable development approach

Procedia PDF Downloads 222
1059 A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Also, by extrapolating back from its current state, the universe at its early times is studied, known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion due to nuclear fusion led to a reduction in its temperature and density. This is evidenced through the cosmic microwave background and the universe structure at a large scale. However, extrapolating back further from this early state reaches singularity, which cannot be explained by modern physics, and the big bang theory is no longer valid. In addition, one can expect a nonuniform energy distribution across the universe from a sudden expansion. However, highly accurate measurements reveal an equal temperature mapping across the universe, which is contradictory to the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy so that an equal maximum temperature can be achieved across the early universe. Also, the evidence of quantum fluctuations of this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of universe creation. Therefore, a practical model capable of describing how the universe was initiated is needed. 
This research series aims at addressing the singularity issue by introducing a state of energy called a "neutral state," possessing an energy level referred to as the "base energy." The governing principles of base energy are discussed in detail in the second paper in the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this research paper, the mechanism which led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. The concept proposed in this research series thus provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy being one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution

Procedia PDF Downloads 77
1058 Evaluation of Australian Open Banking Regulation: Balancing Customer Data Privacy and Innovation

Authors: Suman Podder

Abstract:

As Australian ‘Open Banking’ allows customers to share their financial data with accredited Third-Party Providers (‘TPPs’), it is necessary to evaluate whether the regulators have achieved the balance between protecting customer data privacy and promoting data-related innovation. Recognising the need to increase customers’ influence on their own data, and the benefits of data-related innovation, the Australian Government introduced ‘Consumer Data Right’ (‘CDR’) to the banking sector through Open Banking regulation. Under Open Banking, TPPs can access customers’ banking data that allows the TPPs to tailor their products and services to meet customer needs at a more competitive price. This facilitated access and use of customer data will promote innovation by providing opportunities for new products and business models to emerge and grow. However, the success of Open Banking depends on the willingness of the customers to share their data, so the regulators have augmented the protection of data by introducing new privacy safeguards to instill confidence and trust in the system. The dilemma in policymaking is that, on the one hand, lenient data privacy laws will help the flow of information, but at the risk of individuals’ loss of privacy, on the other hand, stringent laws that adequately protect privacy may dissuade innovation. Using theoretical and doctrinal methods, this paper examines whether the privacy safeguards under Open Banking will add to the compliance burden of the participating financial institutions, resulting in the undesirable effect of stifling other policy objectives such as innovation. The contribution of this research is three-fold. In the emerging field of customer data sharing, this research is one of the few academic studies on the objectives and impact of Open Banking in the Australian context. 
Additionally, Open Banking is still in the early stages of implementation, so this research traces its evolution through policy debates regarding the desirability of customer data-sharing. Finally, the research not only focuses on customers' data privacy, juxtaposing it with the equally important objective of promoting innovation, but also highlights the critical issues facing the data-sharing regime. This paper argues that while it is challenging to develop a regulatory framework that protects data privacy without impeding innovation and jeopardising yet unknown opportunities, data privacy and innovation promote different aspects of customer welfare. It concludes that if the regulation is appropriately designed and implemented, the benefits of data-sharing will outweigh the cost of compliance with the CDR.

Keywords: consumer data right, innovation, open banking, privacy safeguards

Procedia PDF Downloads 128
1057 A Study on the Current Challenges Hindering Urban Park Development in Ulaanbaatar City, Mongolia

Authors: Bayarmaa Enkhbold, Kenichi Matsui

Abstract:

Urban parks are important assets to every community, providing space for health, cultural and leisure activities. However, Ulaanbaatar, the capital of Mongolia, faces a shortage of green spaces, particularly urban parks, due to overpopulation and haphazard growth. The city government has therefore planned to increase green space per person to 20 m² by 2020 and 30 m² by 2030 by establishing more urban parks throughout the city. However, analysis of the current status of implementation suggests these goals are highly unlikely to be reached, as green space per person still stands at 4 m². Past studies around the world show that city planners and scientists agree it is highly improbable to develop urban parks and maintain them sustainably without reflecting community perceptions and involving communities in park establishment. This research therefore aims to identify the challenges that stymie urban park development in Ulaanbaatar city and to recommend ways of dealing with them. To this end, community perceptions of the current challenges and of the need for urban parks were identified, and it was determined whether they differed between two types of residential areas (urban and suburban). The study also investigated international good practices in dealing with similar problems. The research methodology was based on a questionnaire survey among city residents, a document review regarding the involvement of stakeholders, and a literature review of relevant past studies. According to residents' perceptions, of the seven key challenges identified, the biggest was a lack of available land, followed by a lack of proper policy, planning, management, and maintenance. 
Of the six types of needs, the biggest community demand from urban parks was a playground for children, followed by recreation and relaxation. Based on these findings, the study proposes several recommendations for enhancement concerning the institutional and legal framework; park planning and management; a supportive environment; and monitoring, evaluation, and reporting.

Keywords: challenges of urban park planning and maintenance, community-based urban park establishment, community perceptions and participation, urban parks in Ulaanbaatar, Mongolia

Procedia PDF Downloads 105
1056 Diagnostic Performance of Mean Platelet Volume in the Diagnosis of Acute Myocardial Infarction: A Meta-Analysis

Authors: Kathrina Aseanne Acapulco-Gomez, Shayne Julieane Morales, Tzar Francis Verame

Abstract:

Mean platelet volume (MPV) is the most accurate measure of the size of platelets and is routinely measured by most automated hematological analyzers. Several studies have shown associations between MPV and cardiovascular risks and outcomes. Although its measurement may provide useful data, MPV remains a diagnostic tool yet to be included in routine clinical decision making. The aim of this systematic review and meta-analysis is to determine summary estimates of the diagnostic accuracy of mean platelet volume for the diagnosis of myocardial infarction among adult patients with angina and/or its equivalents, in terms of sensitivity, specificity, diagnostic odds ratio, and likelihood ratios, and to determine the difference in mean MPV values between those with MI and non-MI controls. The primary search was done through the electronic databases PubMed, Cochrane Review CENTRAL, HERDIN (Health Research and Development Information Network), Google Scholar, the Philippine Journal of Pathology, and the Philippine College of Physicians Philippine Journal of Internal Medicine. The reference lists of original reports were also searched. Cross-sectional, cohort, and case-control articles studying the diagnostic performance of mean platelet volume in the diagnosis of acute myocardial infarction in adult patients were included. Studies were included if: (1) the CBC was taken upon presentation to the ER or upon admission (within 24 hours of symptom onset); (2) myocardial infarction was diagnosed with serum markers, ECG, or according to guidelines accepted by the cardiology societies (American Heart Association (AHA), American College of Cardiology (ACC), European Society of Cardiology (ESC)); and (3) outcomes were measured as a significant difference and/or sensitivity and specificity. The authors independently screened all potential studies identified by the search for inclusion. 
Eligible studies were appraised using well-defined criteria. Any disagreement between the reviewers was resolved through discussion and consensus. The overall mean MPV value of those with MI (9.702 fl; 95% CI 9.07 – 10.33) was higher than that of the non-MI control group (8.85 fl; 95% CI 8.23 – 9.46). Interpretation of the calculated t-value of 2.0827 showed a significant difference between the mean MPV values of those with MI and those of the non-MI controls. The summary sensitivity (Se) and specificity (Sp) for MPV were 0.66 (95% CI; 0.59 - 0.73) and 0.60 (95% CI; 0.43 – 0.75), respectively. The pooled diagnostic odds ratio (DOR) was 2.92 (95% CI; 1.90 – 4.50). The positive likelihood ratio of MPV in the diagnosis of myocardial infarction was 1.65 (95% CI; 1.20 – 22.27), and the negative likelihood ratio was 0.56 (95% CI; 0.50 – 0.64). The intended role for MPV in the diagnostic pathway of myocardial infarction would perhaps be best as a triage tool. With a DOR of 2.92, MPV values can discriminate between those who have MI and those who do not. For a patient with angina presenting with an elevated MPV value, it is 1.65 times more likely that he has MI. Thus, the decision to treat a patient with angina or its equivalents as a case of MI could be supported by an elevated MPV value.
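The likelihood ratios and diagnostic odds ratio are simple functions of sensitivity and specificity, so the summary statistics above can be cross-checked against one another. A quick sketch (the pooled meta-analytic values, 0.56 and 2.92, differ marginally from those implied by the summary point estimates because pooling is done across studies rather than computed from the summary Se/Sp):

```python
# Cross-check of the summary diagnostic statistics (Se = 0.66, Sp = 0.60).
se, sp = 0.66, 0.60

lr_pos = se / (1 - sp)   # positive likelihood ratio: P(test+|MI) / P(test+|no MI)
lr_neg = (1 - se) / sp   # negative likelihood ratio: P(test-|MI) / P(test-|no MI)
dor = lr_pos / lr_neg    # diagnostic odds ratio

print(round(lr_pos, 2))  # 1.65
print(round(lr_neg, 2))  # 0.57
print(round(dor, 2))     # 2.91
```

The implied values (1.65, 0.57, 2.91) agree closely with the reported pooled estimates (1.65, 0.56, 2.92), which supports the internal consistency of the summary figures.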

Keywords: mean platelet volume, MPV, myocardial infarction, angina, chest pain

Procedia PDF Downloads 65
1055 On the Development of Evidential Contrasts in the Greater Himalayan Region

Authors: Marius Zemp

Abstract:

Evidentials indicate how the speaker obtained the information conveyed in a statement. Detailed diachronic-functional accounts of evidential contrasts found in the Greater Himalayan Region (GHR) reveal that contrasting evidentials are not only defined against each other but also that most of them once had different aspecto-temporal (TA) values which must have aligned when their contrast was conventionalized. Based on these accounts, the present paper sheds light on hitherto unidentified mechanisms of grammatical change. The main insights of the present study were facilitated by ‘functional reconstruction’, which (i) revolves around morphemes which appear to be used in divergent ways within a language and/or across different related languages, (ii) persistently devises hypotheses as to how these functional divergences may have developed, and (iii) retains those hypotheses which most plausibly and economically account for the data. Based on the dense and detailed grammatical literature on the Tibetic language family, the author of this study is able to reconstruct the initial steps by which its evidentiality systems developed: By the time Proto-Tibetan started to be spread across much of Central Asia in the 7th century CE, verbal concatenations with and without a connective -s had become common. As typical for resultative constructions around the globe, Proto-Tibetan *V-s-’dug ‘was there, having undergone V’ (employing the simple past of ’dug ‘stay, be there’) allowed both for a perfect reading (‘the state resulting from V holds at the moment of speech’) and an inferential reading (‘(I infer from its result that) V has taken place’). 
In Western Tibetic, *V-s-’dug grammaticalized in its perfect meaning as it became contrasted with perfect *V-s-yod ‘is there, having undergone V’ (employing the existential copula yod); that is, *V-s-’dug came to mean that the speaker directly witnessed the profiled result of V, whereas *V-s-yod came to mean that the speaker does not depend on direct evidence of the result, as s/he simply knows that it holds. In Eastern Tibetic, on the other hand, V-s-’dug grammaticalized in its inferential past meaning as it became contrasted with past *V-thal ‘went past V-ing’ (employing the simple past of thal ‘go past’); that is, *V-s-’dug came to mean that the profiled past event was inferred from its result, while *V-thal came to mean that it was directly witnessed. Hence, depending on whether it became contrasted with a perfect or a past construction, resultative V-s-’dug grammaticalized either its direct evidential perfect or its inferential past function. This means that in both cases, evidential readings of constructions with distinct but overlapping TA-values became contrasted, and in order for their contrasting meanings to grammaticalize, the constructions had to agree on their tertium comparationis, which was their shared TA-value. By showing that other types of evidential contrasts in the GHR are also TA-aligned, while no single markers (or privative contrasts) are found to have grammaticalized evidential functions, the present study suggests that, at least in this region of the world, evidential meanings grammaticalize only in equipollent contrasts, which always end up TA-aligned.

Keywords: evidential contrasts, functional-diachronic accounts, grammatical change, himalayan languages, tense/aspect-alignment

Procedia PDF Downloads 115
1054 Intellectual Property Rights Reforms and the Quality of Exported Goods

Authors: Gideon Ndubuisi

Abstract:

It is widely acknowledged that the quality of a country's exports matters more decisively than the quantity it exports. Hence, understanding the drivers of exported goods' quality is a relevant policy question. Among other things, product quality upgrading is a venture with considerable cost uncertainty that can be undertaken by an entrepreneur. Once a product is successfully upgraded, however, others can imitate it, and hence the returns to the pioneer entrepreneur are socialized. Along this line, a government policy such as intellectual property rights (IPRs) protection, which lessens the non-appropriability problem and incentivizes cost discovery investments, becomes both a remedy for the market failure and a sine qua non for an entrepreneur to engage in product quality upgrading. In addition, product quality upgrading involves complex tasks which often require considerable knowledge and technology sharing beyond the bounds of the firm, creating room for knowledge spillovers and imitation. Without an institution that protects upstream suppliers of knowledge and technology, technology masking occurs, which bids up marginal production cost so that product quality falls. Despite these clear associations between IPRs and product quality upgrading, the surging literature on the drivers of the quality of exported goods has proceeded almost in isolation from IPRs protection as a determinant. Consequently, the current study uses a difference-in-differences method to evaluate the effects of IPRs reforms on the quality of exported goods in 16 developing countries over the sample period 1984-2000. The study finds weak evidence that IPRs reforms increase the quality of all exported goods. 
When the industries are sorted into high and low-patent sensitive industries, however, we find strong indicative evidence that IPRs reform increases the quality of exported goods in high-patent sensitive sectors both in absolute terms and relative to the low-patent sensitive sectors in the post-reform period. We also obtain strong indicative evidence that it brought the quality of exported goods in the high-patent sensitive sectors closer to the quality frontier. Accounting for time-duration effects, these observed effects grow over time. The results are also largely consistent when we consider the sophistication and complexity of exported goods rather than just quality upgrades.
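The difference-in-differences comparison between high- and low-patent-sensitive sectors described above can be sketched in a few lines. The quality indices below are invented purely for illustration and are not the study's data:

```python
# Minimal difference-in-differences sketch: high-patent-sensitive sectors
# are "treated" by the IPR reform, low-patent-sensitive sectors serve as
# the comparison group. All numbers are hypothetical.
quality = {
    ("high_patent", "pre"): 0.42, ("high_patent", "post"): 0.58,
    ("low_patent", "pre"): 0.40, ("low_patent", "post"): 0.45,
}

def did(data, treated="high_patent", control="low_patent"):
    """(treated post - pre) minus (control post - pre)."""
    return ((data[(treated, "post")] - data[(treated, "pre")])
            - (data[(control, "post")] - data[(control, "pre")]))

print(round(did(quality), 2))  # 0.11
```

The estimator nets out any common post-reform trend (here 0.05) from the treated group's change (0.16), attributing the remaining 0.11 to the reform, under the usual parallel-trends assumption.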

Keywords: exports, export quality, export sophistication, intellectual property rights

Procedia PDF Downloads 105
1053 Problem Based Learning and Teaching by Example in Dimensioning of Mechanisms: Feedback

Authors: Nicolas Peyret, Sylvain Courtois, Gaël Chevallier

Abstract:

This article outlines the development of Project Based Learning (PBL) in the final year of a Bachelor's Degree. This form of pedagogy aims to involve the students more fully from the beginning of the module: the theoretical content is introduced during the project, in the course of solving a technological problem. The module in question is the mechanical dimensioning module at Supméca, a French engineering school which issues a Master's Degree. While the teaching methods used in primary and secondary education in France are frequently renewed at the instigation of teachers and inspectors, higher education remains relatively traditional in its practices. Recently, some colleagues have felt the need to put application back at the heart of their theoretical teaching. This need is induced by the difficulty of covering all the knowledge deductively before applying it. It is therefore tempting to make the students 'learn by doing', even if this does not cover some parts of the theoretical knowledge. The other argument supporting this type of learning is the students' lack of motivation for lecture courses. Role-play allows scenarios favoring interaction between students and teachers. However, this pedagogical form, known as 'pedagogy by project', is difficult to apply in the first years of university studies because of the students' low level of autonomy and individual responsibility. What the student actually learns from the initial program, as well as the evaluation of the competences acquired in this type of pedagogy, also remain open problems. We therefore propose to add to the project-based format a regressive component of teacher interventionism based on pedagogy by example. This pedagogical scenario is grounded in cognitive load theory and Bruner's constructivist theory. 
It was built on the six points of the scaffolding process defined by Bruner, with a concrete objective: to allow the students to go beyond the basic skills of dimensioning and acquire the more global skills of engineering. The implementation of project-based teaching coupled with pedagogy by example compensates for the lack of experience and autonomy of first-year students while involving them strongly in the first few minutes of the module. In this project, students were confronted with real dimensioning problems and were able to understand the links and influences between parameter variations and dimensioning, an objective we did not reach in classical teaching. It is this form of pedagogy that accelerates the mastery of basic skills, leaving more time for engineering skills, namely the convergence of each dimensioning step in order to obtain a validated mechanism. A self-evaluation of the project skills acquired by the students will also be presented.

Keywords: Bruner's constructivist theory, mechanisms dimensioning, pedagogy by example, problem based learning

Procedia PDF Downloads 176
1052 Development of Stretchable Woven Fabrics with Auxetic Behaviour

Authors: Adeel Zulifqar, Hong Hu

Abstract:

Auxetic fabrics are a special kind of textile material possessing a negative Poisson's ratio. Unlike most conventional fabrics, auxetic fabrics get bigger in the transversal direction when stretched and smaller when compressed. They are superior to conventional fabrics because of their counterintuitive properties, such as enhanced porosity under extension, excellent formability to a curved surface, and high energy absorption ability. To date, auxetic fabrics have been produced by two approaches. The first uses auxetic fibre or yarn and weaving technology to fabricate auxetic fabrics. The other fabricates auxetic fabrics from non-auxetic yarns; this method has attracted extraordinary interest from researchers in recent years. It is based on realizing auxetic geometries in the fabric structure. In a woven fabric structure, auxetic geometries can be realized by creating a differential shrinkage phenomenon in the fabric's structural unit cell. This phenomenon can be created by using loose and tight weave combinations within the unit cell of the interlacement pattern, along with elastic and non-elastic yarns. Upon relaxation, the unit cell of the interlacement pattern acquires a non-uniform shrinkage profile due to the different shrinkage properties of the loose and tight weaves in the designed pattern, and the auxetic geometry is realized. The development of uni-stretch and bi-stretch auxetic woven fabrics by this method has already been reported. This study reports the development of another kind of bi-stretch auxetic woven fabric. The fabric is first designed by transforming the auxetic geometry into an interlacement pattern, and then fabricated using available conventional weaving technology and non-auxetic elastic and non-elastic yarns. 
Tensile tests confirmed that the developed bi-stretch auxetic woven fabrics exhibit a negative Poisson's ratio over a wide range of tensile strain. It can therefore be concluded that the auxetic geometry can be realized in the woven fabric structure by creating the phenomenon of differential shrinkage, and that stretchable bi-stretch woven fabrics with auxetic behavior can be obtained from non-auxetic yarns. Acknowledgement: This work was supported by the Research Grants Council of the Hong Kong Special Administrative Region Government (grant number 15205514).
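A negative Poisson's ratio simply means that under axial extension the transverse strain is also positive (the sample widens). A minimal sketch of how the ratio is obtained from tensile-test strains, with illustrative sample dimensions that are not measurements from the study:

```python
# Poisson's ratio from measured strains. For an auxetic fabric the
# transverse strain is positive under axial extension, so the ratio
# comes out negative. Dimensions below are illustrative only.
def poissons_ratio(axial_strain, transverse_strain):
    """nu = -eps_transverse / eps_axial."""
    return -transverse_strain / axial_strain

# A 100 mm sample stretched to 110 mm (axial strain 0.10) that
# simultaneously widens from 50 mm to 52 mm (transverse strain 0.04):
nu = poissons_ratio(axial_strain=0.10, transverse_strain=0.04)
print(round(nu, 2))  # -0.4 (negative => auxetic)
```

A conventional fabric would instead narrow (transverse strain negative), giving the familiar positive ratio.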

Keywords: auxetic, differential shrinkage, negative Poisson's ratio, weaving, stretchable

Procedia PDF Downloads 139
1051 Assessing Empathy of Delinquent Adolescents

Authors: Stephens Oluyemi Adetunji, Nel Norma Margaret, Naidu Narainsamy

Abstract:

Empathy has been identified by researchers as a crucial factor in helping adolescents refrain from delinquent behavior. Adolescent delinquency is a social problem that has become a source of concern to parents, psychologists, educators, correctional services, researchers and governments. Empathy is a social skill that enables an individual to understand and share another's emotional state. An individual with a high level of empathy will avoid any act or behavior that affects another person negatively. The need for this study is predicated on the fact that adolescent delinquent behavior can lead to adult criminality, which in the long run risks increasing the crime rate and threatening public safety. It has therefore become imperative to explore the level of empathy of delinquent adolescents who have committed crimes and are awaiting trial. It is the conjecture of this study that knowledge of delinquent adolescents' empathy levels will provide an opportunity to design an intervention strategy to remediate any deficit. This study was therefore designed to determine the level of empathy of delinquent adolescents. In addition, it provides a better understanding of factors that may prevent adolescents from developing delinquent behavior, in this case their empathy levels. For participants with a low level of empathy, remediation strategies to improve their empathy would be designed. Two research questions were raised to guide this study. A mixed methods research design was employed. The sample consists of fifteen male adolescents awaiting trial, aged between 13 and 18 years with a mean age of 16.5 years. A non-probability sampling technique was used to obtain the sample for the quantitative study, while purposive sampling was used for the qualitative study. 
A self-report questionnaire and a structured interview were used to assess the participants' level of empathy. The quantitative data were analysed using simple percentages, while the qualitative data were transcribed and analysed. The results indicate that most of the participants have a low level of empathy. They also reveal a difference in empathy level between participants whose parents live together and those whose parents are separated. Based on the findings of this study, it is recommended that the empathy levels of participants be improved through training and that the importance of a stimulating family environment for children be emphasized. It is also recommended that programs such as youth mentoring and youth sheltering be established by the government of South Africa to address adolescent delinquency.

Keywords: adolescents, behavior, delinquents, empathy

Procedia PDF Downloads 449
1050 A Research on the Improvement of Small and Medium-Sized City in Early-Modern China (1895-1927): Taking Southern Jiangsu as an Example

Authors: Xiaoqiang Fu, Baihao Li

Abstract:

In 1895, China's defeat in the Sino-Japanese War prompted a trend of comprehensive and systematic study of Western models in China. In urban planning and construction, an urban reform movement sprang up slowly, which aimed at renovating and reconstructing traditional cities into modern cities similar to the concessions. During this movement, the Chinese traditional city began a process of modern urban planning toward its modernization. Meanwhile, the traditional planning morphology and system started to disintegrate, while Western forms and technologies became the paradigm. The improvement of existing cities thus became the prototype of urban planning in early modern China. Current research on the movement concentrates mainly on large cities, concessions, railway hub cities, and other special cities of that kind; systematic research on the large number of traditional small and medium-sized cities remains lacking. This paper takes the improvement constructions of small and medium-sized cities in the southern region of Jiangsu Province as its research object. First, the criterion for small and medium-sized cities is based on the administrative levels of the general office and cities at the county level. Second, Southern Jiangsu is a suitable research object. The southern area of Jiangsu Province, called Southern Jiangsu for short, was the most economically developed region in Jiangsu and one of the most economically developed and most urbanized regions in China. As one of the most developed agricultural areas in ancient China, Southern Jiangsu formed a large number of traditional small and medium-sized cities. In early modern times, aided by Shanghai's economic radiation, its geographical advantages, and a strong economic foundation, Southern Jiangsu became an important birthplace of Chinese national industry.
Furthermore, the strong business atmosphere promoted widespread urban improvement practices that were unmatched in other regions. Meanwhile, the examples of Shanghai, Zhenjiang, Suzhou, and other port cities became the improvement pattern for small and medium-sized cities in Southern Jiangsu. This paper analyzes the reform movement of the small and medium-sized cities in Southern Jiangsu (1895-1927), including its subjects, objects, laws, and technologies, as well as the political and social factors that influenced it. Finally, this paper reveals the formation mechanism and characteristics of the urban improvement movement in early modern China. According to the paper, the improvement of small and medium-sized cities was a kind of gestation of local city planning culture in early modern China, fusing imported and endogenous elements.

Keywords: early modern China, improvement of small-medium city, southern region of Jiangsu province, urban planning history of China

Procedia PDF Downloads 242
1049 Satellite Data to Understand Changes in Carbon Dioxide for Surface Mining and Green Zone

Authors: Carla Palencia-Aguilar

Abstract:

In order to attain the 2050 zero-emissions goal, it is necessary to know how carbon dioxide changes over time, from emissions in the mining industry to attenuation in green zones, so as to establish realistic goals and redirect efforts to reduce greenhouse effects. Two methods were used to compute the amount of CO2 in tons at specific mining zones in Colombia. The first used NPP from MODIS MOD17A3HGF for the years 2000 to 2021. The second used MODIS MYD021KM bands 33 to 36, with a maximum of 644 data points distributed over 7 sites corresponding to surface mining of coal, nickel, iron, and limestone. The green zones selected were located in the proximity of the studied sites, but farther than 1 km away to avoid information overlapping. The year 2012 was selected for the second method so the results could be compared with data provided by the Colombian government to determine the range of values. Some data were compared with 2022 MODIS energy values and converted to kilotons of CO2 using the EPA Greenhouse Gas Equivalencies Calculator. The results showed that nickel mining was the least pollutant, with 81 kton of CO2 eq. on average and a maximum of 102 kton of CO2 eq. per year, with green zones attenuating carbon dioxide by 103 kton of CO2 on average and 125 kton maximum per year over the last 22 years. Nickel was followed by coal, with an average of 152 kton of CO2 per year and a maximum of 188, values very similar to those of the subjacent green zones, whose average and maximum values were 157 and 190 kton of CO2, respectively. Iron had results similar to 3 limestone sites, with average values of 287 kton of CO2 for mining and 310 kton for green zones, and maximum values of 310 kton for iron mining and 356 kton for green zones.
One of the limestone sites exceeded the other sites with an average value of 441 kton per year and a maximum of 490 kton per year. Even though it had higher attenuation by green zones than a nearby limestone site (3.5 km apart), 371 kton versus 281 kton on average and a maximum of 416 kton versus 323 kton, such vegetation contribution is not enough, meaning that the manufacturing process should be improved at the most pollutant site. Comparing bands 33 to 36 for the years 2012 and 2022 from January to August shows that, on average, the ktons of CO2 were similar for mining sites and green zones, indicating an average yearly balance between carbon dioxide emissions and attenuation. However, efforts to improve manufacturing processes are needed to overcome the carbon dioxide effects, especially during emission peaks, because the surrounding vegetation cannot fully attenuate them.
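The NPP-based method above rests on a simple unit conversion: carbon fixed (or emitted) per unit area is scaled to CO2 mass by the molar-mass ratio 44/12. A minimal sketch, with invented site values for illustration (the function name and numbers are not from the study):

```python
# Hypothetical sketch: converting an annual MODIS NPP value (kg C / m^2 / yr)
# over a site area into kilotons of CO2 equivalent per year, one way to
# express the attenuation attributed to a green zone. Values are invented.

CO2_PER_C = 44.0 / 12.0  # molar-mass ratio converting carbon mass to CO2 mass

def npp_to_kton_co2(npp_kg_c_per_m2: float, area_km2: float) -> float:
    """Annual carbon uptake expressed as kilotons of CO2 equivalent."""
    area_m2 = area_km2 * 1e6
    kg_c = npp_kg_c_per_m2 * area_m2   # total carbon fixed per year
    kg_co2 = kg_c * CO2_PER_C          # convert C mass to CO2 mass
    return kg_co2 / 1e6                # kg -> kiloton

# Example: a 30 km^2 green zone with NPP of 0.9 kg C m^-2 yr^-1
print(round(npp_to_kton_co2(0.9, 30.0), 1))  # -> 99.0
```

The same arithmetic, run per pixel and summed, gives the kton-per-year figures compared between mining sites and green zones.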

Keywords: carbon dioxide, MODIS, surface mining, vegetation

Procedia PDF Downloads 85
1048 Natural Mexican Zeolite Modified with Iron to Remove Arsenic Ions from Water Sources

Authors: Maritza Estela Garay-Rodriguez, Mirella Gutierrez-Arzaluz, Miguel Torres-Rodriguez, Violeta Mugica-Alvarez

Abstract:

Arsenic is an element present in the earth's crust and is dispersed in the environment through natural processes and some anthropogenic activities. It is released naturally through the weathering and erosion of sulphide minerals, while activities such as mining and the use of pesticides or wood preservatives potentially increase the concentration of arsenic in air, water, and soil. The natural arsenic release from geological material is a threat to the world's drinking water sources. In the aqueous phase, arsenic is found in inorganic form, mainly as arsenate and arsenite; the contamination of groundwater by salts of this element gives rise to what is known as endemic regional hydroarsenicism. The International Agency for Research on Cancer (IARC) categorizes inorganic As within group I, as a substance with proven carcinogenic action in humans. The presence of As in groundwater has been found in several countries, such as Argentina, Mexico, Bangladesh, Canada, and the United States. Regarding the concentration of arsenic in drinking water, the World Health Organization (WHO) and the Environmental Protection Agency (EPA) establish a maximum concentration of 10 μg L⁻¹. In Mexico, in some states such as Hidalgo, Morelos, and Michoacán, arsenic concentrations around 1000 μg L⁻¹ have been found in bodies of water, well above what is allowed by the Mexican regulation NOM-127-SSA1-1994, which establishes a limit of 25 μg L⁻¹. Given this problem, this research proposes the use of a natural Mexican zeolite (clinoptilolite type), native to the district of Etla in the central valley region of Oaxaca, as an adsorbent for the removal of arsenic. The zeolite was conditioned with iron oxide by the precipitation-impregnation method with a 0.5 M iron nitrate solution in order to increase the natural adsorption capacity of the material.
The removal of arsenic was carried out in a column with a fixed bed of conditioned zeolite, since this combines the advantages of a conventional filter with those of a natural adsorbent medium, providing a continuous, low-cost treatment that is relatively easy to operate and can be implemented in marginalized areas. The zeolite was characterized by XRD, SEM/EDS, and FTIR before and after the arsenic adsorption tests. The results showed that the modification method used is adequate for preparing adsorbent materials, since it does not modify the zeolite structure, and that with a particle size of 1.18 mm, an initial As(V) concentration of 1 ppm, a pH of 7, and room temperature, a removal of 98.7% was obtained with an adsorption capacity of 260 μg As g⁻¹ zeolite. These results indicate that the conditioned zeolite is favorable for the elimination of arsenate in water containing up to 1000 μg As L⁻¹ and could be suitable for removing arsenate from water wells.
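The reported removal percentage and adsorption capacity follow from a standard mass balance, q = (C0 − Ce)·V/m. A small sketch with a hypothetical effluent concentration, treated volume, and bed mass, chosen only to illustrate the arithmetic (the abstract does not report these operating values):

```python
# Sketch of the adsorption mass balance. C0 = 1000 ug/L (1 ppm) is from the
# study; the effluent concentration, volume, and zeolite mass are invented
# so that the arithmetic reproduces figures of the reported magnitude.

def removal_percent(c0: float, ce: float) -> float:
    """Percentage of arsenic removed from solution."""
    return 100.0 * (c0 - ce) / c0

def adsorption_capacity(c0: float, ce: float, volume_l: float, mass_g: float) -> float:
    """Uptake q = (C0 - Ce) * V / m, in ug As per g of zeolite."""
    return (c0 - ce) * volume_l / mass_g

c0, ce = 1000.0, 13.0  # ug/L; effluent value is hypothetical
print(round(removal_percent(c0, ce), 1))                 # -> 98.7
print(round(adsorption_capacity(c0, ce, 1.0, 3.8)))      # roughly 260 ug/g
```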

Keywords: adsorption, arsenic, iron conditioning, natural zeolite

Procedia PDF Downloads 154
1047 Lattice Twinning and Detwinning Processes in Phase Transformation in Shape Memory Alloys

Authors: Osman Adiguzel

Abstract:

Shape memory effect is a peculiar property exhibited by certain alloy systems; it is based on martensitic transformation, and shape memory properties are closely related to the microstructure of the material. The effect is linked with martensitic transformation, a solid-state phase transformation that occurs through the cooperative movement of atoms by means of lattice invariant shears on cooling from the high-temperature parent phase. Lattice twinning and detwinning can be considered elementary processes activated during the transformation. Thermally induced martensite occurs as martensite variants in a self-accommodating manner and consists of lattice twins; this martensite is also called twinned or multivariant martensite. Deformation of shape memory alloys in the martensitic state proceeds through martensite variant reorientation: the variants turn into reoriented single variants with deformation, and this reorientation process is of great importance for shape memory behavior. Copper-based alloys exhibit this property in the metastable β-phase region, which has a DO3-type ordered lattice in the ternary case at high temperature; these structures transform martensitically into layered complex structures via a lattice twinning mechanism on cooling from the high-temperature parent phase region. The twinning occurs as martensite variants with lattice invariant shears in two opposite directions, <110>-type directions on {110}-type planes of the austenite matrix. The lattice invariant shear is not uniform in copper-based ternary alloys and gives rise to the formation of unusual layered structures, such as 3R, 9R, or 18R, depending on the stacking sequences on the close-packed planes of the ordered lattice. The unit cell and periodicity are completed through 18 atomic layers in the case of the 18R structure. On the other hand, the deformed material recovers its original shape on heating above the austenite finish temperature.
Meanwhile, the material returns to the twinned (thermally induced) martensite structure in the one-way (irreversible) shape memory effect on cooling below the martensite finish temperature, whereas it returns to the detwinned (deformed) martensite structure in the two-way (reversible) shape memory effect. In short, the microstructural mechanisms responsible for the shape memory effect are the twinning and detwinning processes as well as the martensitic transformation itself. In the present contribution, x-ray diffraction, transmission electron microscopy (TEM), and differential scanning calorimetry (DSC) studies were carried out on two copper-based ternary alloys, CuZnAl and CuAlMn.

Keywords: shape memory effect, martensitic transformation, twinning and detwinning, layered structures

Procedia PDF Downloads 418
1046 Assessment of Routine Health Information System (RHIS) Quality Assurance Practices in Tarkwa Sub-Municipal Health Directorate, Ghana

Authors: Richard Okyere Boadu, Judith Obiri-Yeboah, Kwame Adu Okyere Boadu, Nathan Kumasenu Mensah, Grace Amoh-Agyei

Abstract:

Routine health information system (RHIS) quality assurance has become an important issue, not only because of its significance in promoting a high standard of patient care but also because of its impact on government budgets for the maintenance of health services. A routine health information system comprises healthcare data collection, compilation, storage, analysis, report generation, and dissemination on a routine basis in various healthcare settings. The data from an RHIS represent health status, health services, and health resources; the sources of RHIS data are normally individual health records, records of services delivered, and records of health resources. Using reliable information from routine health information systems is fundamental to the healthcare delivery system. Quality assurance practices are measures put in place to ensure that the health data collected meet required quality standards; RHIS quality assurance practices ensure that the data generated by the system are fit for use. This study considered quality assurance practices in RHIS processes. Methods: A cross-sectional study was conducted in eight health facilities in the Tarkwa Sub-Municipal Health Service in the Western Region of Ghana. The study examined routine quality assurance practices among 90 health staff and managers, selected from facilities in the Tarkwa Sub-Municipality, who collected or used data routinely; data were collected from 24 December 2019 to 20 January 2020. Results: Generally, the Tarkwa Sub-Municipal Health Service appears to practice quality assurance during data collection, compilation, storage, analysis, and dissemination. The results show some achievement in quality control performance in report dissemination (77.6%), data analysis (68.0%), data compilation (67.4%), report compilation (66.3%), data storage (66.3%), and collection (61.1%).
Conclusions: Even though the Tarkwa Sub-Municipal Health Directorate engages in some control measures to ensure data quality, the process needs to be strengthened to achieve the targeted performance level (90.0%). There was a significant shortfall in quality assurance performance relative to expected performance, especially during data collection.

Keywords: quality assurance practices, assessment of routine health information system quality, routine health information system, data quality

Procedia PDF Downloads 56
1045 Analysis of Constraints and Opportunities in Dairy Production in Botswana

Authors: Som Pal Baliyan

Abstract:

Dairy enterprises have been a major source of employment and income generation in most economies worldwide. The Botswana government has identified dairy as one of the agricultural sectors for diversifying the country's mineral-dependent economy. The huge gap between local demand and supply of milk and milk products indicates that not only constraints but also opportunities exist in this sub-sector of agriculture. Therefore, this study attempted to identify constraints and opportunities in the dairy production industry in Botswana, along with possible ways to mitigate the constraints. The findings should assist stakeholders, especially policy makers, in formulating effective policies for the growth of the dairy sector in the country. This quantitative study adopted a survey research design: a pilot survey followed by a final survey was conducted for data collection. The purpose of the pilot survey was to collect basic information on the nature and extent of the constraints, opportunities, and ways to mitigate the constraints in dairy production. Based on information from the pilot survey, a four-point Likert-type questionnaire was constructed, validated, and tested for reliability. The data for the final survey were collected from twenty-five purposively selected dairy farms, and descriptive statistical tools were employed to analyze the data. Among the twelve constraints identified, high feed costs, feed shortage and availability, lack of technical support, lack of skilled manpower, high prevalence of pests and diseases, and lack of dairy-related technologies were the six major constraints on dairy production. Grain feed production, roughage feed production, manufacturing of dairy feed, establishment of a milk processing industry, and development of transportation systems were the five major opportunities among the eight identified.
Increasing local production of animal feed, increasing local roughage feed production, providing subsidies on animal feed, easing access to sufficient financial support, training farmers, and effectively controlling pests and diseases were identified as the six major ways to mitigate the constraints. It is recommended that the identified constraints, opportunities, and mitigation measures be carefully considered by stakeholders, especially policy makers, during the formulation and implementation of policies for the development of the dairy sector in Botswana.
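The descriptive ranking behind the "major constraints" lists can be sketched as ordering items by mean Likert score; the items shown and all ratings below are invented for illustration, not the survey's data:

```python
# Hypothetical sketch of the ranking step: constraints rated on a 4-point
# Likert scale (1 = not a constraint, 4 = severe) ordered by mean score.
from statistics import mean

ratings = {
    "high feed costs":           [4, 4, 3, 4],
    "feed shortage":             [4, 3, 4, 3],
    "lack of technical support": [3, 3, 3, 2],
}

# Sort constraint names by descending mean rating
ranked = sorted(ratings, key=lambda c: mean(ratings[c]), reverse=True)
print(ranked[0])  # -> high feed costs
```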

Keywords: dairy enterprise, milk production, opportunities, production constraints

Procedia PDF Downloads 378
1044 Multimodal Ophthalmologic Evaluation Can Detect Retinal Injuries in Asymptomatic Patients With Primary Antiphospholipid Syndrome

Authors: Taurino S. R. Neto, Epitácio D. S. Neto, Flávio Signorelli, Gustavo G. M. Balbi, Alex H. Higashi, Mário Luiz R. Monteiro, Eloisa Bonfá, Danieli C. O. Andrade, Leandro C. Zacharias

Abstract:

Purpose: To perform a multimodal evaluation, including optical coherence tomography angiography (OCTA), in patients with primary antiphospholipid syndrome (PAPS) without ocular complaints, and to compare them with healthy individuals. Methods: A complete structural and functional ophthalmological evaluation using OCTA and microperimetry (MP) was performed in patients with PAPS followed at a tertiary rheumatology outpatient clinic. All ophthalmologic manifestations were recorded, and statistical analysis was then performed for comparative purposes; p < 0.05 was considered statistically significant. Results: 104 eyes of 52 subjects (26 patients with PAPS without ocular complaints and 26 healthy individuals) were included. Among PAPS patients, 21 were female (80.8%) and 21 (80.8%) were Caucasian. Thrombotic PAPS was the main clinical criterion manifestation (100%); 65.4% had venous and 34.6% had arterial thrombosis. Obstetrical criteria were present in 34.6% of all thrombotic PAPS patients, and lupus anticoagulant was present in all patients. 19.2% of PAPS patients presented ophthalmologic findings, against none of the healthy individuals. The most common retinal change was paracentral acute middle maculopathy (PAMM) (3 patients, 5 eyes), followed by drusen-like deposits (1 patient, 2 eyes) and pachychoroid pigment epitheliopathy (1 patient, 1 eye). Systemic hypertension and hyperlipidemia were present in 100% of the PAPS patients with PAMM, while only six patients (26.1%) with PAPS without PAMM presented these two risk factors together. In the quantitative OCTA evaluation, we found significant differences between PAPS patients and controls in both the superficial vascular complex (SVC) and deep vascular complex (DVC) in the high-speed protocol, as well as in the SVC in the high-resolution protocol.
In the analysis of the foveal avascular zone (FAZ) parameters, the PAPS group had a larger area of FAZ in the DVC using the high-speed method compared to the control group (p=0.047). In the quantitative analysis of the MP, the PAPS group had lower central (p=0.041) and global (p<0.001) retinal sensitivity compared to the control group, as well as in the sector analysis, with the exception of the inferior sector. In the quantitative evaluation of fixation stability, there was a trend towards worse stability in the PAPS subgroup with PAMM in both studied methods. Conclusions: PAMM was observed in 11.5% of PAPS patients with no previous ocular complaints. Systemic hypertension concomitant with hyperlipidemia was the most commonly associated risk factor for PAMM in patients with PAPS. PAPS patients present lower vascular density and retinal sensitivity compared to the control group, even in patients without PAMM.

Keywords: antiphospholipid syndrome, optical coherence tomography angiography, optical coherence tomography, retina

Procedia PDF Downloads 64
1043 Agrowastes to Edible Hydrogels through Bio Nanotechnology Interventions: Bioactive from Mandarin Peels

Authors: Niharika Kaushal, Minni Singh

Abstract:

Citrus fruits contain an abundance of phytochemicals that can promote health. A substantial amount of agrowaste, primarily peels and seeds, is produced by the juice processing industries. This leftover agrowaste is a reservoir of nutraceuticals, particularly bioflavonoids, which render it antioxidant and potentially anticancerous. It is therefore favorable to utilize this biomass and contribute towards sustainability by deriving value-added products from it, nutraceuticals in this study. However, the pre-systemic metabolism of flavonoids in the gastric phase limits the effectiveness of the bioflavonoids derived from mandarin biomass. In this study, 'kinnow' mandarin (Citrus nobilis X Citrus deliciosa) biomass was explored for its flavonoid profile. The work entails supercritical fluid extraction and identification of bioflavonoids from mandarin biomass. Furthermore, to overcome the limitations of these flavonoids in the gastrointestinal tract, a double-layered vehicular mechanism comprising the fabrication of nanoconjugates and edible hydrogels was adopted. Total flavonoids in the mandarin peel extract were estimated by the aluminum chloride complexation method and found to be 47.3 ± 1.06 mg/ml rutin equivalents. Mass spectral analysis revealed an abundance of polymethoxyflavones (PMFs), with nobiletin and tangeretin the major flavonoids in the extract, followed by hesperetin and naringenin. Furthermore, the antioxidant potential was analyzed by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) method, which showed an IC50 of 0.55 μg/ml. Nanoconjugates were fabricated via the solvent evaporation method and further impregnated into hydrogels. Additionally, the release characteristics of the nanoconjugate-laden hydrogels in a simulated gastrointestinal environment were studied. The PLGA-PMF nanoconjugates exhibited a particle size between 200-250 nm and a smooth, spherical shape, as revealed by FE-SEM.
The impregnated alginate hydrogels offered a dense network that ensured retention of the PLGA-PMF nanoconjugates, as confirmed by Cryo-SEM images. Rheological studies revealed the shear-thinning behavior of the hydrogels and their high resistance to deformation. Gastrointestinal studies showed a negligible 4.0% release of flavonoids in the gastric phase, followed by a sustained release over the following hours in the intestinal environment. The enormous potential of recovering nutraceuticals from agro-processing wastes, further augmented by nanotechnological interventions that enhance the bioefficacy of these compounds, thus lays the foundation for developing value-added products and contributing to the sustainable use of agrowaste.

Keywords: agrowaste, gastrointestinal, hydrogel, nutraceuticals

Procedia PDF Downloads 76
1042 Predictability of Kiremt Rainfall Variability over the Northern Highlands of Ethiopia on Dekadal and Monthly Time Scales Using Global Sea Surface Temperature

Authors: Kibrom Hadush

Abstract:

Countries like Ethiopia, whose economy depends mainly on rain-fed agriculture, are highly vulnerable to climate variability and weather extremes. Sub-seasonal (monthly) and dekadal forecasts are hence critical for crop production and water resource management. This study was therefore conducted to examine the predictability and variability of Kiremt rainfall over the northern half of Ethiopia on monthly and dekadal time scales in association with global sea surface temperature (SST) at different lag times. Trends in rainfall were analyzed on annual, seasonal (Kiremt), monthly, and dekadal (June-September) time scales based on the rainfall records of 36 meteorological stations distributed across four homogeneous zones of the northern half of Ethiopia for the period 1992-2017. The results from the progressive Mann-Kendall trend test and Sen's slope method show no significant trend in annual, Kiremt, monthly, or dekadal rainfall totals at most of the stations studied. Moreover, rainfall in the study area varies spatially and temporally, increasing from the northeastern rift valley to the northwestern highlands. Graphical correlation and a multiple linear regression model are employed to investigate the association between global SSTs and Kiremt rainfall over the homogeneous rainfall zones and to predict monthly and dekadal (June-September) rainfall using SST predictors. The results show that, in general, SST in the equatorial Pacific Ocean is the main source of predictive skill for Kiremt rainfall variability over the northern half of Ethiopia, with regional SSTs in the Atlantic and Indian Oceans also contributing.
Moreover, the correlation analysis showed that the decline of monthly and dekadal Kiremt rainfall over most of the homogeneous zones of the study area is associated with the corresponding persistent warming of the SST in the eastern and central equatorial Pacific Ocean during the period 1992-2017. It was also found that monthly and dekadal Kiremt rainfall over the northern and northwestern highlands and northeastern lowlands of Ethiopia is positively correlated with SST in the western equatorial Pacific and the eastern and tropical northern Atlantic Ocean. Furthermore, SSTs in the western equatorial Pacific and Indian Oceans are positively correlated with Kiremt season rainfall in the northeastern highlands. Overall, the prediction models using combined SSTs from various ocean regions (equatorial and tropical) performed reasonably well in predicting monthly and dekadal rainfall (with R² ranging from 30% to 65%), and they are recommended for efficient prediction of Kiremt rainfall over the study area to aid systematic and informed decision-making within the agricultural sector.
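The correlation step underlying these results can be sketched as a plain Pearson correlation between an SST index and zone-averaged Kiremt rainfall; for a single predictor, the regression R² is simply r². Both series below are invented for illustration, not observed data:

```python
# Minimal sketch of the correlation analysis: Pearson r between a seasonal
# SST anomaly index (e.g. an equatorial Pacific index) and Kiremt rainfall
# totals for one homogeneous zone. All values are invented.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

sst_anom = [0.5, -0.3, 1.2, -0.8, 0.1, 0.9]   # degC anomalies (invented)
rain_mm  = [420, 510, 350, 560, 470, 380]     # Kiremt totals, mm (invented)

r = pearson_r(sst_anom, rain_mm)
print(r < 0)  # warm equatorial Pacific coincides with reduced Kiremt rain
```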

Keywords: dekadal, Kiremt rainfall, monthly, Northern Ethiopia, sea surface temperature

Procedia PDF Downloads 129
1041 The Application of Raman Spectroscopy in Olive Oil Analysis

Authors: Silvia Portarena, Chiara Anselmi, Chiara Baldacchini, Enrico Brugnoli

Abstract:

Extra virgin olive oil (EVOO) is a complex matrix mainly composed of fatty acids and other minor compounds, among which carotenoids are well known for their antioxidative function, a key mechanism of protection against cancer, cardiovascular diseases, and macular degeneration in humans. EVOO composition in terms of these constituents is generally the result of a complex combination of genetic, agronomical, and environmental factors. To selectively improve the quality of EVOOs, the role of each factor in the oil's biochemical composition needs to be investigated. By selecting fruits from four different cultivars similarly grown and harvested, it was demonstrated that Raman spectroscopy, combined with chemometric analysis, is able to discriminate the cultivars, also as a function of harvest date, based on the relative content and composition of fatty acids and carotenoids. In particular, a correct classification of up to 94.4% of samples, according to cultivar and maturation stage, was obtained. Moreover, using gas chromatography and high-performance liquid chromatography as reference techniques, the Raman spectral features further allowed models to be built, based on partial least squares regression, that predicted the relative amounts of the main fatty acids and the main carotenoids in EVOO with high coefficients of determination. Besides genetic factors, climatic parameters, such as light exposure, distance from the sea, temperature, and amount of precipitation, can strongly influence EVOO composition in both major and minor compounds. This suggests that Raman spectra could act as a specific fingerprint for the geographical discrimination and authentication of EVOO. To understand the influence of environment on EVOO Raman spectra, samples from seven regions along the Italian coasts were selected and analyzed.
In particular, a dual approach was used, combining Raman spectroscopy and isotope ratio mass spectrometry (IRMS) with principal component and linear discriminant analysis. A correct classification of 82% of EVOOs based on their regional geographical origin was obtained. Raman spectra were obtained with a Super Labram spectrometer equipped with an argon laser (514.5 nm wavelength). Analyses of stable isotope ratios were performed using an isotope ratio mass spectrometer connected to an elemental analyzer and to a pyrolysis system. These studies demonstrate that Raman spectroscopy is a valuable and useful technique for the analysis of EVOO. In combination with statistical analysis, it makes possible the assessment of specific sample contents and allows oils to be classified according to their geographical and varietal origin.
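As a toy stand-in for the chemometric classification step (the actual pipeline uses principal component analysis followed by linear discriminant analysis on full Raman spectra), a nearest-centroid rule over short invented "spectral" feature vectors illustrates the idea; the region labels and numbers are hypothetical:

```python
# Toy sketch: assign a sample to the regional class whose mean feature
# vector is closest in Euclidean distance. Training vectors are invented
# 3-feature stand-ins for real Raman spectra; regions are illustrative.
from math import dist

training = {
    "Liguria": [[1.0, 0.20, 0.50], [1.1, 0.25, 0.45]],
    "Puglia":  [[0.6, 0.80, 0.90], [0.55, 0.85, 0.95]],
}

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(col) / len(col) for col in zip(*vectors)]

centroids = {region: centroid(v) for region, v in training.items()}

def classify(spectrum):
    """Return the region whose centroid is nearest to the sample."""
    return min(centroids, key=lambda r: dist(spectrum, centroids[r]))

print(classify([1.05, 0.22, 0.48]))  # -> Liguria
```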

Keywords: authentication, chemometrics, olive oil, raman spectroscopy

Procedia PDF Downloads 315
1040 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution

Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino

Abstract:

This paper presents a design methodology in which stakeholders are assisted in exploring a so-called negotiation space, aiming at the maximization of both group social welfare and each stakeholder's perceived utility. The outcome is fewer design iterations needed for design convergence together with higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken at this stage carry delayed costs. Hence, a clear definition of the problem under analysis is necessary, especially in the initial definition, and this can be obtained through robust generation and exploration of design alternatives. This process must consider that design usually involves various individuals who take decisions affecting one another, so effective coordination among these decision-makers is critical: finding a mutually agreed solution reduces the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology that aims to speed up the maturation of the mission concept. This speed-up is obtained through a guided exploration of the negotiation space, which involves autonomous exploration and optimization of trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method infused with game theory and multi-attribute utility theory. In particular, game theory models the negotiation process to reach equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process to search efficiently and rapidly for the Pareto equilibria among stakeholders.
Finally, the concept of utility constitutes the mechanism that bridges the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs while guaranteeing the effectiveness of the selected mission concept thanks to its robustness and valuable changeability. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
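The negotiation mechanics described above can be illustrated with a minimal sketch: each stakeholder scores candidate designs with a weighted additive multi-attribute utility, and only Pareto-nondominated designs survive as candidate equilibria. The attributes, weights, and designs below are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of the negotiation-space filtering step: each
# stakeholder scores a design with a weighted additive multi-attribute
# utility, and only Pareto-nondominated designs are kept as candidate
# equilibria. Attribute names, weights, and designs are illustrative.

def utility(design, weights):
    """Weighted additive multi-attribute utility (attributes scaled to [0, 1])."""
    return sum(weights[attr] * design[attr] for attr in weights)

def pareto_front(designs, stakeholder_weights):
    """Keep the designs not dominated across every stakeholder's utility."""
    scores = [tuple(utility(d, w) for w in stakeholder_weights)
              for d in designs]
    front = []
    for i, s in enumerate(scores):
        dominated = any(all(t[k] >= s[k] for k in range(len(s))) and t != s
                        for t in scores)
        if not dominated:
            front.append(designs[i])
    return front

# Two stakeholders weighting the same attributes differently.
weights = [{"coverage": 0.7, "affordability": 0.3},
           {"coverage": 0.2, "affordability": 0.8}]
designs = [{"coverage": 0.9, "affordability": 0.2},
           {"coverage": 0.5, "affordability": 0.9},
           {"coverage": 0.4, "affordability": 0.3}]  # dominated by the first
front = pareto_front(designs, weights)
```

In the full methodology an evolutionary algorithm would generate and refine the candidate designs; the filter above only shows how the multi-stakeholder utilities define nondominance.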

Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization

Procedia PDF Downloads 117
1039 Effectiveness of the Lacey Assessment of Preterm Infants to Predict Neuromotor Outcomes of Premature Babies at 12 Months Corrected Age

Authors: Thanooja Naushad, Meena Natarajan, Tushar Vasant Kulkarni

Abstract:

Background: The Lacey Assessment of Preterm Infants (LAPI) is used in clinical practice to identify premature babies at risk of neuromotor impairments, especially cerebral palsy. This study attempted to establish the validity of the Lacey assessment for predicting neuromotor outcomes of premature babies at 12 months corrected age and to compare its predictive ability with that of brain ultrasound. Methods: This prospective cohort study included 89 preterm infants (45 females and 44 males) born below 35 weeks gestation who were admitted to the neonatal intensive care unit of a government hospital in Dubai. Initial assessment was done using the Lacey assessment after the babies reached 33 weeks postmenstrual age. Follow-up assessment of neuromotor outcomes was done at 12 months (± 1 week) corrected age using two standardized outcome measures, the Infant Neurological International Battery and the Alberta Infant Motor Scale. Brain ultrasound data were collected retrospectively. Data were statistically analyzed, and the diagnostic accuracy of the Lacey Assessment of Preterm Infants (LAPI) was calculated when used alone and in combination with brain ultrasound. Results: In comparison with brain ultrasound, the Lacey assessment showed superior specificity (96% vs. 77%), a higher positive predictive value (57% vs. 22%), and a higher positive likelihood ratio (18 vs. 3) for predicting neuromotor outcomes at one year of age. The sensitivity of the Lacey assessment was lower than that of brain ultrasound (66% vs. 83%), whereas specificity was similar (97% vs. 98%). Combining Lacey assessment and brain ultrasound results yielded higher sensitivity (80%), positive (66%) and negative (98%) predictive values, positive likelihood ratio (24), and test accuracy (95%) than Lacey assessment alone in predicting neurological outcomes. The negative predictive value of the Lacey assessment was similar to that of its combination with brain ultrasound (96%).
Conclusion: The results of this study suggest that the Lacey Assessment of Preterm Infants can be used as a supplementary assessment tool for premature babies in the neonatal intensive care unit. Due to its high specificity, the Lacey assessment can identify those babies at low risk of abnormal neuromotor outcomes at a later age. When used along with the findings of brain ultrasound, the Lacey assessment has better sensitivity for identifying preterm babies at particular risk. These findings have applications in identifying premature babies who may benefit from early intervention services.
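The predictive statistics reported above (sensitivity, specificity, predictive values, likelihood ratio, accuracy) all derive from a standard 2x2 confusion matrix. As a hedged illustration, the arithmetic can be sketched as follows; the counts are hypothetical and are not the study's data.

```python
# Hedged sketch: diagnostic-accuracy metrics from a 2x2 confusion
# matrix (tp = true positives, fp = false positives, etc.).
# The example counts are hypothetical, not the study's data.

def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)              # true-positive rate
    specificity = tn / (tn + fp)              # true-negative rate
    ppv = tp / (tp + fp)                      # positive predictive value
    npv = tn / (tn + fn)                      # negative predictive value
    lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "lr_pos": lr_pos, "accuracy": accuracy}

# Hypothetical counts for a cohort of 89 infants.
metrics = diagnostic_metrics(tp=8, fp=3, fn=4, tn=74)
```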

Keywords: brain ultrasound, lacey assessment of preterm infants, neuromotor outcomes, preterm

Procedia PDF Downloads 125
1038 Characterization of Double Shockley Stacking Fault in 4H-SiC Epilayer

Authors: Zhe Li, Tao Ju, Liguo Zhang, Zehong Zhang, Baoshun Zhang

Abstract:

In-grown stacking faults (IGSFs) in 4H-SiC epilayers can cause increased leakage current and reduce the blocking voltage of 4H-SiC power devices. The double Shockley stacking fault (2SSF) is a common type of IGSF with double slips on the basal planes. In this study, a 2SSF in a 4H-SiC epilayer grown by chemical vapor deposition (CVD) is characterized. The nucleation site of the 2SSF is discussed, and a model for 2SSF nucleation is proposed. Homo-epitaxial 4H-SiC is grown on a commercial 4-degree off-cut substrate by home-built hot-wall CVD. Defect-selective etching (DSE) is conducted with molten KOH at 500 degrees Celsius for 1-2 min. Room-temperature cathodoluminescence (CL) is conducted at a 20 kV acceleration voltage. Low-temperature photoluminescence (LTPL) is conducted at 3.6 K with the 325 nm He-Cd laser line. In the CL image, a triangular area with bright contrast is observed. Two partial dislocations (PDs) with a 20-degree angle between them show linear dark contrast at the edges of the IGSF. CL and LTPL spectra are acquired to verify the IGSF's type. The CL spectrum shows maximum photoemission at 2.431 eV and negligible bandgap emission. In the LTPL spectrum, four phonon replicas are found at 2.468 eV, 2.438 eV, 2.420 eV and 2.410 eV, respectively. The Egx is estimated to be 2.512 eV. A shoulder red-shifted from the main peak in CL, and a slight protrusion at the same wavelength in LTPL, are identified as the so-called Egx lines. Based on the CL and LTPL results, the IGSF is identified as a 2SSF. Back etching by neutral loop discharge and DSE are conducted to track the origin of the 2SSF, and the nucleation site is found to be a threading screw dislocation (TSD) in this sample. A nucleation mechanism model is proposed for the formation of the 2SSF. Steps introduced by the off-cut and by the TSD on the surface are both suggested to be two C-Si bilayers in height.
The intersections of these two types of steps lie along the [11-20] direction from the TSD, with a four-bilayer step at each intersection. The nucleation of the 2SSF during growth is proposed as follows. Firstly, the upper two bilayers of the four-bilayer step grow down and block the lower two at one intersection, and an IGSF is generated. Secondly, the step flow successively grows over the IGSF and forms an AC/ABCABC/BA/BC stacking sequence. A 2SSF is thus formed and extends by step-flow growth. In conclusion, a triangular IGSF is characterized by the CL approach. Based on the CL and LTPL spectra, the estimated Egx is 2.512 eV and the IGSF is identified as a 2SSF. By back etching, the 2SSF nucleation site is found to be a TSD. A model for 2SSF nucleation from an intersection of off-cut- and TSD-introduced steps is proposed.
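The Egx estimate follows from adding the corresponding phonon energy back to each phonon-replica peak. A minimal sketch of that arithmetic is shown below; the phonon energies and replica assignments are literature-typical assumptions for 4H-SiC, not values reported in this abstract (only the peak positions are).

```python
# Hedged sketch: estimate the exciton gap E_gx by adding an assumed
# 4H-SiC phonon energy back to each phonon-replica peak (all in eV).
# The phonon energies below are literature-typical values and their
# assignment to the peaks is an assumption, not from this study.

replica_peaks_ev = [2.468, 2.438, 2.420, 2.410]    # reported LTPL peaks
phonon_energies_ev = [0.046, 0.077, 0.095, 0.104]  # assumed assignments

estimates = [peak + phonon
             for peak, phonon in zip(replica_peaks_ev, phonon_energies_ev)]
e_gx = sum(estimates) / len(estimates)  # average over all replicas
```

A consistent assignment is one where every replica yields nearly the same Egx; here the four estimates cluster within a few meV.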

Keywords: cathodoluminescence, defect-selective etching, double Shockley stacking fault, low-temperature photoluminescence, nucleation model, silicon carbide

Procedia PDF Downloads 296
1037 Teachers’ Role and Principal’s Administrative Functions as Correlates of Effective Academic Performance of Public Secondary School Students in Imo State, Nigeria

Authors: Caroline Nnokwe, Iheanyi Eneremadu

Abstract:

Teachers and principals are vital and integral parts of the educational system. For educational objectives to be met, the roles of teachers and the functions of principals cannot be overlooked. However, the inability of teachers and principals to carry out their roles effectively has impacted students' performance outcomes. The study, therefore, examined teachers' roles and principals' administrative functions as correlates of effective academic performance of public secondary school students in Imo State, Nigeria. Four research questions and two hypotheses guided the study. The study adopted a correlational research design. The sample size was 5,438 respondents, determined via the Yaro-Yamane technique and consisting of 175 teachers, 13 principals and 5,250 students selected using the proportional stratified random sampling technique. The instruments for data collection were a researcher-made questionnaire titled Teachers' Role/Principals' Administrative Functions Questionnaire (TRPAFQ), with a Cronbach alpha coefficient of .82, and students' internal results obtained from the school authorities. Data collected were analyzed using the Pearson product-moment correlation coefficient and simple linear regression. Research questions were answered using Pearson product-moment correlation statistics, while the hypotheses were tested at the 0.05 level of significance using regression analysis. The findings of the study showed that teachers' educational qualification, organizing, and planning correlated with students' academic performance to a great extent, while the availability and proper use of instructional materials by teachers correlated with students' academic performance to a very high extent.
The findings also revealed a significant relationship between teachers' roles, principals' administrative functions, and the academic performance of public secondary school students in Imo State. The study recommended, among others, that government, through the ministry of education and education authorities, adequately staff their supervisory departments in order to carry out proper supervision of secondary school teachers, and also provide adequate instructional materials to ensure greater academic performance among secondary school students of Imo State, Nigeria.
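The Pearson product-moment statistic used to answer the research questions can be sketched as follows; the scores below are invented for illustration and are not the study's data.

```python
# Hedged sketch of the Pearson product-moment correlation between a
# predictor (e.g., a teacher-role score) and student performance.
# The data points are invented for illustration.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

teacher_role = [3.1, 3.8, 2.5, 4.2, 3.6]   # hypothetical questionnaire scores
performance = [55, 68, 48, 75, 64]         # hypothetical exam percentages
r = pearson_r(teacher_role, performance)
```

An r near +1 would correspond to the "very high extent" correlations reported; in practice the study would also test r for significance via regression.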

Keywords: instructional materials, principals’ administrative functions, students’ academic performance, teacher role

Procedia PDF Downloads 68
1036 Using Lean-Six Sigma Philosophy to Enhance Revenues and Improve Customer Satisfaction: Case Studies from Leading Telecommunications Service Providers in India

Authors: Senthil Kumar Anantharaman

Abstract:

Providing telecommunications-based network services in a developing country like India, with a population of roughly 1.4 billion people, so that these services reach every individual, is one of the greatest challenges the country has been facing in its journey towards economic growth and development. With a growing number of telecommunications service providers in the country, a constant challenge these providers face is delivering not only quality but also a delightful customer experience while simultaneously generating enhanced revenues and profits. Thus, the role played by process improvement methodologies like Six Sigma cannot be overlooked; in telecom service provider operations specifically, it has delivered substantial benefits, quite comparable to its applications and advantages in other sectors like manufacturing, financial services, information technology-based services, and healthcare services. One of the key reasons this methodology has reaped great benefits in the telecommunications sector is that it has been combined with complementary process improvement techniques like the Theory of Constraints, Lean, and Kaizen to give maximum benefit to the service providers, thereby creating a winning combination of organized process improvement methods for operational excellence leading to business excellence. This paper discusses some of the key projects and areas in the end-to-end 'Quote to Cash' process at the big three Indian telecommunications companies that have been greatly assisted by applying Six Sigma along with other process improvement techniques. While the telecommunications companies considered are primarily in India and run by both private operators and government-based setups, the methodology can be applied equally well in any other part of the developing world having a similar context.
This study also compares the enhanced revenues that can arise from appropriate opportunities in emerging-market scenarios when Six Sigma, as a philosophy and methodology, is applied with vigour and robustness. Finally, the paper presents a winning framework combining the Six Sigma methodology with Kaizen, Lean, and the Theory of Constraints that enhances both the top line and the bottom line while providing customers a delightful experience.
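One standard piece of Six Sigma arithmetic underlying such projects is converting a measured defect rate, expressed as defects per million opportunities (DPMO), into a short-term sigma level. A hedged, generic sketch follows (the conventional 1.5-sigma shift is assumed; this is textbook Six Sigma, not a method taken from the projects described).

```python
# Hedged sketch: convert defects per million opportunities (DPMO)
# into a short-term sigma level using the conventional 1.5-sigma
# shift. Generic Six Sigma arithmetic, not this paper's data.
from statistics import NormalDist

def sigma_level(dpmo):
    """Inverse-normal of the yield, plus the conventional 1.5 shift."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

three_sigma = sigma_level(66_807)  # classic 3-sigma defect rate
six_sigma = sigma_level(3.4)       # classic 6-sigma defect rate
```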

Keywords: emerging markets, lean, process improvement, six sigma, telecommunications, theory of constraints

Procedia PDF Downloads 148
1035 A Systematic Review of Business Strategies Which Can Make District Heating a Platform for Sustainable Development of Other Sectors

Authors: Louise Ödlund, Danica Djuric Ilic

Abstract:

Sustainable development includes many challenges related to energy use, such as (1) developing flexibility on the demand side of electricity systems due to an increased share of intermittent electricity sources (e.g., wind and solar power), (2) overcoming economic challenges related to an increased share of renewable energy in the transport sector, (3) increasing the efficiency of biomass use, and (4) increasing the utilization of industrial excess heat (approximately two thirds of the energy currently used in the EU is lost in the form of excess and waste heat). The European Commission has recognized district heating (DH) technology as being of essential importance for reaching sustainability. Flexibility in the fuel mix, possibilities for industrial waste heat utilization, combined heat and power (CHP) production, and energy recovery through waste incineration are only some of the benefits that characterize DH technology. The aim of this study is to provide an overview of possible business strategies that would enable DH to play an important role in future sustainable energy systems. The methodology used in this study is a systematic literature review. The study takes a systematic approach in which DH is seen as part of an integrated system that also comprises the transport, industrial, and electricity sectors. DH technology can play a decisive role in overcoming the sustainability challenges related to our energy use. The introduction of biofuels in the transport sector can be facilitated by integrating biofuel and DH production in local DH systems. This would enable the development of local biofuel supply chains and reduce biofuel production costs. In this way, DH can also promote the development of biofuel production technologies that are not yet mature.
Converting the energy used to run industrial processes from fossil fuels and electricity to DH (above all biomass- and waste-based DH), and delivering excess heat from industrial processes to the local DH systems, would make industry less dependent on fossil fuels and fossil fuel-based electricity, as well as increase the energy efficiency of the industrial sector and reduce production costs. The electricity sector would also benefit from these measures. Reducing electricity use in the industrial sector while at the same time increasing CHP production in the local DH systems would replace fossil-based electricity production with electricity from biomass- or waste-fueled CHP plants and reduce the capacity requirements on the national electricity grid (i.e., it would reduce the pressure on the bottlenecks in the grid). Furthermore, by operating their centrally controlled heat pumps and CHP plants in response to variations in intermittent electricity production, DH companies may enable an increased share of intermittent electricity production in the national electricity grid.

Keywords: energy system, district heating, sustainable business strategies, sustainable development

Procedia PDF Downloads 157
1034 Using Lysosomal Immunogenic Cell Death to Target Breast Cancer via Xanthine Oxidase/Micro-Antibody Fusion Protein

Authors: Iulianna Taritsa, Kuldeep Neote, Eric Fossel

Abstract:

Lysosome-induced immunogenic cell death (LIICD) is a powerful mechanism for targeting cancer cells that kills circulating malignant cells and primes the host's immune cells against future recurrence. Current immunotherapies for cancer are limited in preventing recurrence, a gap that can be bridged by training the immune system to recognize cancer neoantigens. Lysosomal leakage can be induced therapeutically to traffic antigens from dying cells to dendritic cells, which can later present those tumorigenic antigens to T cells. Previous research has shown that oxidative agents administered in the tumor microenvironment can initiate LIICD. We generated a fusion protein between an oxidative agent known as xanthine oxidase (XO) and a micro-antibody specific for EGFR/HER2-sensitive breast tumor cells. The anti-EGFR single-domain antibody fragment is uniquely sourced from llama and is functional without the presence of a light chain. These llama micro-antibodies have been shown to penetrate tissues better and to have improved physicochemical stability compared to traditional monoclonal antibodies. We demonstrate that the fusion protein created is stable and can induce early markers of immunogenic cell death in a human breast cancer cell line (SkBr3) in vitro. Specifically, we measured overall cell death, as well as surface-expressed calreticulin, extracellular ATP release, and HMGB1 production; these markers are consensus indicators of ICD. Flow cytometry, luminescence assays, and ELISA were used, respectively, to quantify biomarker levels in treated versus untreated cells. We also included a positive control group of SkBr3 cells dosed with doxorubicin (a known inducer of LIICD) and a negative control dosed with cisplatin (a known inducer of cell death, but not of the immunogenic variety). We examined each marker at various time points after the cancer cells were treated with the XO/antibody fusion protein, doxorubicin, or cisplatin.
Upregulated biomarkers after treatment with the fusion protein indicate an immunogenic response. We thus show the potential of this fusion protein to induce an anticancer effect paired with an adaptive immune response against EGFR/HER2+ cells. Our research in human cell lines provides evidence for the potential success of the same therapeutic method in patients and serves as a gateway to developing a new treatment approach against breast cancer.

Keywords: apoptosis, breast cancer, immunogenic cell death, lysosome

Procedia PDF Downloads 187
1033 Current Status and Influencing Factors of Transition Status of Newly Graduated Nurses in China: A Multi-center Cross-sectional Study

Authors: Jia Wang, Wanting Zhang, Yutong Xv, Zihan Guo, Weiguang Ma

Abstract:

Background: Before becoming qualified nurses, newly graduated nurses (NGNs) must go through a painful transition period, and even transition shocks. Transition shocks are a public health issue. To address the transition issue of NGNs, many programs and interventions have been developed and implemented. However, no studies have assessed the transition state of newly graduated nurses from work to life, and from external abilities to internal emotions. Aims: To assess the transition status of newly graduated nurses in China and identify the factors influencing it. Methods: A multi-center cross-sectional study design was adopted. From May 2022 to June 2023, 1,261 newly graduated nurses in hospitals were surveyed online with the Demographic Questionnaire and the Transition Status Scale for Newly Graduated Nurses. SPSS 26.0 was used for data entry and statistical analysis. Descriptive statistics were used to evaluate the demographic characteristics and transition status of NGNs. Independent-samples t-tests, analysis of variance, and multiple regression analysis were used to explore the factors influencing transition status. Results: The total average score on the Transition Status Scale for Newly Graduated Nurses was 4.00 (SD = 0.61). Among the dimensions of transition status, the highest was competence for nursing work, while the lowest was balance between work and life. The results showed that factors influencing the transition status of NGNs include being taught by senior nurses, night shift status, internship department, attributes of the working hospital, province of work and residence, educational background, reasons for choosing nursing, type of hospital, and monthly income. Conclusion: At present, the transition status score of new nurses in China is relatively high, and NGNs largely agree with their own transition status, especially on the dimension of competence for nursing work.
However, they transition poorly in terms of work-life balance. Nursing managers should reasonably arrange the working hours of NGNs, promote their work-life balance, improve the salary and reward mechanisms for NGNs, assign experienced nursing mentors to teach them, optimize management at the hospital level, provide suitable positions for NGNs with different educational backgrounds, and pay attention to the culture shock of NGNs from other provinces. Intervening in these factors that affect the transition of new nurses can optimize human resource management and promote a better transition.
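The independent-samples comparisons reported in the methods can be sketched with Welch's t statistic (the unequal-variance form of the independent-samples t-test); the two subgroups of transition-status scores below are invented for illustration.

```python
# Hedged sketch: Welch's (independent-samples, unequal-variance)
# t statistic, as used to compare transition-status scores between
# NGN subgroups. The subgroup scores below are invented examples.
import math

def welch_t(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

day_shift = [4.4, 4.1, 4.6, 4.3]    # hypothetical scale scores
night_shift = [3.6, 3.9, 3.4, 3.7]  # hypothetical scale scores
t = welch_t(day_shift, night_shift)
```

In practice the statistic would be compared against a t distribution (with Welch-Satterthwaite degrees of freedom) to obtain a p-value; SPSS performs this step automatically.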

Keywords: newly graduated nurse, transition, humanistic care, nursing management, nursing practice education

Procedia PDF Downloads 59
1032 Skull Extraction for Quantification of Brain Volume in Magnetic Resonance Imaging of Multiple Sclerosis Patients

Authors: Marcela De Oliveira, Marina P. Da Silva, Fernando C. G. Da Rocha, Jorge M. Santos, Jaime S. Cardoso, Paulo N. Lisboa-Filho

Abstract:

Multiple sclerosis (MS) is an immune-mediated disease of the central nervous system characterized by neurodegeneration, inflammation, demyelination, and axonal loss. Magnetic resonance imaging (MRI), due to the richness of detail it provides, is the gold-standard exam for diagnosis and follow-up of neurodegenerative diseases such as MS. Brain atrophy, the gradual loss of brain volume, is quite extensive in multiple sclerosis, at nearly 0.5-1.35% per year, far beyond the limits of normal aging. Thus, brain volume quantification becomes an essential task for subsequent analysis of the occurrence of atrophy. The analysis of MRI has become a tedious and complex task for clinicians, who have to manually extract important information. This manual analysis is prone to errors and is time-consuming due to intra- and inter-operator variability. Nowadays, computerized methods for MRI segmentation are extensively used to assist doctors in quantitative analyses for disease diagnosis and monitoring. The purpose of this work was therefore to evaluate brain volume in MRI scans of MS patients. We used MRI scans with 30 slices from five patients diagnosed with multiple sclerosis according to the McDonald criteria. The computational analysis of the images was carried out in two steps: segmentation of the brain and brain volume quantification. The first image processing step was brain extraction by skull stripping from the original image. In the skull stripper for brain MRI images, the algorithm registers a grayscale atlas image to the grayscale patient image. The associated brain mask is propagated using the registration transformation. This mask is then eroded and used for a refined brain extraction based on level sets (the edge of the brain-skull border, with dedicated expansion, curvature, and advection terms).
In the second step, brain volume was quantified by counting the voxels belonging to the segmentation mask and converting the result to cubic centimeters (cc). We observed an average brain volume of 1469.5 cc. We conclude that the automatic method applied in this work can be used for brain extraction and brain volume quantification in MRI. The development and use of computer programs can help health professionals in the diagnosis and monitoring of patients with neurodegenerative diseases. In future work, we expect to implement more automated methods for the assessment of cerebral atrophy and the quantification of brain lesions, including machine-learning approaches. Acknowledgements: This work was supported by a grant from the Brazilian agency Fundação de Amparo à Pesquisa do Estado de São Paulo (number 2019/16362-5).
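The voxel-counting step can be sketched as follows; the mask, the voxel spacing, and the helper name `brain_volume_cc` are illustrative assumptions, not the authors' implementation (a real pipeline would read the mask and spacing from the MRI header, e.g., DICOM or NIfTI).

```python
# Hedged sketch: brain-volume quantification by counting voxels in a
# binary brain mask and converting to cubic centimeters. The mask and
# voxel spacing are synthetic, not from the study's data.
import numpy as np

def brain_volume_cc(mask, voxel_size_mm):
    """mask: boolean 3-D array; voxel_size_mm: (dx, dy, dz) in mm."""
    voxel_mm3 = voxel_size_mm[0] * voxel_size_mm[1] * voxel_size_mm[2]
    return mask.sum() * voxel_mm3 / 1000.0  # 1 cc = 1000 mm^3

mask = np.zeros((30, 100, 100), dtype=bool)  # 30 slices, as in the study
mask[5:25, 20:80, 20:80] = True              # synthetic "brain" region
volume = brain_volume_cc(mask, (1.0, 1.0, 1.0))
```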

Keywords: brain volume, magnetic resonance imaging, multiple sclerosis, skull stripper

Procedia PDF Downloads 129