Search results for: market comparison
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8336

896 Culturally Relevant Pedagogy: A Cross-Cultural Comparison

Authors: Medha Talpade, Salil Talpade

Abstract:

The intent of this quantitative project was to compare the values and perceptions of students from a predominantly white institution (PWI) with those from a historically black college or university (HBCU) regarding culturally relevant teaching and learning practices in the academic realm. The reason for interrelating student culture with teaching practices is to enable a pedagogical response to the low retention rates of African American students and first-generation Caucasian students in high schools and colleges, and to their low rates of social mobility and educational achievement. Culturally relevant pedagogy, according to related research, is deemed rewarding to students, teachers, and the local and national community. Critical race theory (CRT) is the main framework used in this project to explain the ubiquity of a culturally relevant pedagogy. The purpose of this quantitative study was to test the critical race theory that relates the presence of the factors associated with culturally relevant teaching strategies to perceived relevance. The culturally relevant teaching strategies were identified based on the recommendations and findings of past research. Participants in this study included approximately 145 students from an HBCU and 55 students from the PWI. A survey consisting of 37 items related to culturally relevant pedagogy was administered. The themes used to construct the items were: use of culturally specific examples in class whenever possible, use of culturally specific presentational models, use of relational reinforcers, and active engagement. All the items had a Likert-type response scale. Participants reported their degree of agreement (5-point scale ranging from strongly disagree to strongly agree) and importance (3-point scale ranging from not at all important to very important) for each survey item. A new variable, Relevance, was formed as the multiplicative function of the importance and presence of a teaching and learning strategy. A set of six demographic questions was included in the survey. A consent form based on NIH and APA ethical standards was distributed to the volunteers prior to survey administration. Results of factor analyses on the data from the PWI and the HBCU, and an ANOVA, indicated significant differences in ‘Relevance’ related to specific themes. Results of this study are expected to inform educational practices and improve teaching and learning outcomes.
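
A minimal sketch of how the 'Relevance' variable described above can be formed and compared between the two groups is given below; the CSV file, the column names, and the long (one row per participant per item) layout are assumptions for illustration, not the authors' actual analysis scripts.

```python
# Hedged sketch (assumed file and column names): form Relevance as the product of the
# agreement (5-point) and importance (3-point) ratings, then compare groups with a
# one-way ANOVA, as described in the abstract.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")               # one row per participant per item (assumed)
df["relevance"] = df["agreement"] * df["importance"]    # multiplicative Relevance score

hbcu = df.loc[df["institution"] == "HBCU", "relevance"]
pwi = df.loc[df["institution"] == "PWI", "relevance"]

f_stat, p_value = stats.f_oneway(hbcu, pwi)             # ANOVA on Relevance between groups
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```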

Keywords: culturally relevant pedagogy, college students, cross-cultural, applied psychology

Procedia PDF Downloads 431
895 Locating the Role of Informal Urbanism in Building Sustainable Cities: Insights from Ghana

Authors: Gideon Abagna Azunre

Abstract:

Informal urbanism is perhaps the most ubiquitous urban phenomenon in sub-Saharan Africa (SSA) and in Ghana specifically. Estimates suggest that about two-fifths of urban dwellers (37.9%) in Ghana live in informal settlements, while two-thirds of the working labour force are within the informal economy. This makes Ghana, in effect, an ‘informal country.’ Informal urbanism involves economic and housing activities that are – in law or in practice – not covered (or insufficiently covered) by formal regulations. Many urban residents rely on informal urbanism as a survival strategy due to limited formal waged employment opportunities or rising home prices in the open market. In an era of globalizing neoliberalism, this struggle to survive in cities resonates with many people globally. For years now, there have been intense debates on the utility of informal urbanism – both its economic and housing dimensions – in developing sustainable cities. While some scholars believe that informal urbanism is beneficial to the sustainable city development agenda, others argue that it generates unbearable negative consequences and symbolizes lawlessness and squalor. Consequently, the main aim of this research was to dig below the surface of these narratives to locate the role of informal urbanism in the quest for sustainable cities. The research focused geographically on Ghana and its burgeoning informal sector. Both primary and secondary data were utilized for the analysis: secondary data entailed a synthesis of the fragmented literature on informal urbanism in Ghana, while primary data entailed interviews with informal stakeholders (such as informal settlement dwellers), city authorities, and planners. These two data sets were woven together to discover the nexus between informal urbanism and the tripartite dimensions of sustainable cities – economic, social, and environmental. The results showed a two-pronged relationship between informal urbanism and the three dimensions of sustainable city development. In other words, informal urbanism was identified to both positively and negatively affect the drive for sustainable cities. On the one hand, it provides employment (particularly to women), supplies households’ basic needs (shelter, health, water, and waste management), and enhances civic engagement. On the other hand, it perpetuates social and gender inequalities, insecurity, congestion, and pollution. The research revealed that a ‘black and white’ interpretation and policy approach is incapable of capturing the complexities of informal urbanism. Therefore, trying to eradicate or remove it from the urbanscape because it exhibits some negative consequences means cities will lose its positive contributions; the inverse also holds true. A careful balancing act is necessary to maximize the benefits and minimize the costs. Overall, the research presented a de-colonial theorization of informal urbanism and thus followed post-colonial scholars’ clarion call for African cities to embrace the paradox of informality and find ways to integrate it into the city-building process.

Keywords: informal urbanism, sustainable city development, economic sustainability, social sustainability, environmental sustainability, Ghana

Procedia PDF Downloads 106
894 The Impact of the Lexical Quality Hypothesis and the Self-Teaching Hypothesis on Reading Ability

Authors: Anastasios Ntousas

Abstract:

The purpose of the following paper is to analyze the relationship between the lexical quality hypothesis and the self-teaching hypothesis and their impact on reading ability. The following questions emerged: is there a correlation between the effective reading experience that the lexical quality hypothesis proposes and the self-teaching hypothesis; would the ability to read by analogy facilitate and create stable, synchronized representations of the four word features; and could word morphological knowledge be a possible extension of the self-teaching hypothesis? The lexical quality hypothesis speculates that words include four representational attributes: phonology, orthography, morpho-syntax, and meaning. These four word representations work together to make word reading an effective task. A possible lack of knowledge in one of the representations might disrupt reading comprehension. The degree to which the four word features connect with one another distinguishes high from low lexical quality word representations. When the four representational attributes connect together effectively, readers have a high lexical quality of words; when the connections are weak, readers have a low lexical quality of words. Furthermore, the self-teaching hypothesis proposes that phonological recoding enables printed word learning. Phonological knowledge and reading experience facilitate the acquisition and consolidation of word-specific orthographies. Reading experience is related to strong reading comprehension: the more contact readers have with texts, the better readers they become. Therefore, their phonological knowledge, as the self-teaching hypothesis suggests, might have a facilitative impact on the consolidation of the orthographic, morpho-syntactic, and meaning representations of unknown words. The phonology of known words might effectively activate the rest of the representational features of words. Readers use their existing phonological knowledge of similarly spelt words to pronounce unknown words; a possible transfer of this ability to read by analogy may appear with readers’ morphological knowledge. Morphemes might facilitate readers’ ability to pronounce and spell new unknown words to which they do not have lexical access. Readers will encounter unknown words with similar phonemes and morphemes but with different meanings. Knowledge of phonology and morphology might support and increase reading comprehension. There was a careful selection, discussion of theoretical material, and comparison of the two existing theories. Evidence shows that morphological knowledge improves reading ability and comprehension, so morphological knowledge might be a possible extension of the self-teaching hypothesis; the fundamental skill of reading by analogy can be applied to the consolidation of word-specific orthographies via readers’ morphological knowledge; and there is a positive correlation between effective reading experience and the self-teaching hypothesis.

Keywords: morphology, orthography, reading ability, reading comprehension

Procedia PDF Downloads 126
893 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices

Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese

Abstract:

Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumers’ acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are gaining a foothold in the meat market, promising more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular, and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted into grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed by applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, thresholding on the corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of total, intermuscular, and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular, and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of the coefficient of determination (R²), hypothesis testing, and the pattern of residuals. Good regression models were found, with R² values of 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure led to good fat segmentation, making this simple visual approach to the quantification of the different fat fractions in dry-cured ham slices sufficiently simple, accurate, and precise. The presented image analysis approach steers towards the development of instruments that can replace destructive, tedious, and time-consuming chemical determinations. As a future perspective, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. The system will thus be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
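
A minimal sketch of the image-processing chain described above (grey-scale conversion, Canny edge enhancement, and area-based fat quantification) follows; the file name, the Otsu threshold used to separate bright fat pixels from muscle, and the crude background mask are illustrative assumptions, not the authors' actual segmentation pipeline.

```python
# Illustrative sketch only: grey-scale conversion, Canny edge detection, and a simple
# fat-fraction estimate on a scanned ham-slice image. Thresholds are assumptions.
import numpy as np
from skimage import io, color, feature, filters

img = io.imread("ham_slice.png")                 # RGB image from a flatbed scanner (assumed file)
grey = color.rgb2gray(img)                        # convert to grey scale

edges = feature.canny(grey, sigma=2.0)            # noise reduction + gradient + hysteresis thresholding

# Simple global threshold separating bright fat pixels from darker muscle; the paper
# combines intensity histograms with the edge map, which is more involved.
fat_mask = grey > filters.threshold_otsu(grey)
slice_mask = grey > 0.05                          # crude slice-vs-background separation (assumed)

total_fat_pct = 100 * fat_mask[slice_mask].mean()
print(f"Estimated total fat fraction: {total_fat_pct:.1f}% of slice area "
      f"({edges.sum()} edge pixels found)")
```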

Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis

Procedia PDF Downloads 173
892 Study Protocol: Impact of a Sustained Health Promoting Workplace on Stock Price Performance and Beta - A Singapore Case

Authors: Wee Tong Liaw, Elaine Wong Yee Sing

Abstract:

Since 2001, many companies in Singapore have voluntarily participated in the bi-annual Singapore HEALTH Award initiated by the Health Promotion Board of Singapore (HPB). The Singapore HEALTH Award (SHA) is an industry-wide award and assessment process. SHA assesses and recognizes employers in Singapore for implementing a comprehensive and sustainable health promotion programme at their workplaces. The rationale for implementing a sustained health promoting workplace and participating in SHA is obvious when company management is convinced that healthier employees, business productivity, and profitability are positively correlated. However, research or empirical studies on the impact of a sustained health promoting workplace on stock returns are unlikely to attract interest in the absence of a systematic and independent assessment of the comprehensiveness and sustainability of a health promoting workplace, as is the case in most developed economies. The principles of diversification and the mean-variance efficient portfolio in Modern Portfolio Theory developed by Markowitz (1952) laid the foundation for the works of many financial economists and researchers, including, among others, the development of the Capital Asset Pricing Model from the work of Sharpe (1964), Lintner (1965) and Mossin (1966), and the Fama-French Three-Factor Model of Fama and French (1992). This research seeks to support the rationale by studying whether a sustained health promoting workplace has a significant relationship with, or impact on, the performance of companies listed on the SGX. The research shall form and test hypotheses pertaining to the impact of a sustained health promoting workplace on company performance, including stock returns, for companies that participated in the SHA and companies that did not. In doing so, the research will determine whether corporate and fund managers should consider the significance of a sustained health promoting workplace as a risk factor to explain the stock returns of companies listed on the SGX. With respect to Singapore’s stock market, this research will test the significance and relevance of a health promoting workplace using the Singapore Health Award as a proxy for a non-diversifiable risk factor to explain stock returns. This study will examine the significance of a health promoting workplace for a company’s performance, study its impact on stock price performance and beta, and examine whether it has higher explanatory power than the traditional single-factor Capital Asset Pricing Model (CAPM). Three key questions are pertinent to the research study. I) Given a choice, would an investor be better off investing in a listed company with a sustained health promoting workplace, i.e. a Singapore Health Award recipient? II) The Singapore Health Award has four levels of award, from Bronze and Silver to Gold and Platinum. Would an investor be indifferent to the level of award when investing in a listed company that is a Singapore Health Award recipient? III) Would an asset pricing model combining the Fama-French Three-Factor Model and a ‘Singapore Health Award’ factor be more accurate than the single-factor Capital Asset Pricing Model and the Three-Factor Model itself?
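
To make the third research question concrete, the sketch below shows one way to compare a single-factor CAPM regression against a Fama-French three-factor regression augmented with a hypothetical 'SHA' factor; the CSV file, the column names, and the construction of the SHA factor itself are assumptions, not the study's actual data or methodology.

```python
# Hedged sketch: CAPM vs. Fama-French three factors plus an assumed "SHA" factor.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("sgx_returns.csv")   # monthly excess returns and factor series (assumed layout)

capm = sm.OLS(df["excess_return"], sm.add_constant(df[["mkt_rf"]])).fit()

X = sm.add_constant(df[["mkt_rf", "smb", "hml", "sha"]])   # market, size, value, SHA factor
ff3_sha = sm.OLS(df["excess_return"], X).fit()

# Compare explanatory power of the two specifications
print("CAPM adj. R2:", round(capm.rsquared_adj, 3))
print("FF3 + SHA adj. R2:", round(ff3_sha.rsquared_adj, 3))
print(ff3_sha.params)                 # loading on "sha" indicates the award factor's pricing role
```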

Keywords: asset pricing model, company's performance, stock prices, sustained health promoting workplace

Procedia PDF Downloads 369
891 Emerging Issues for Global Impact of Foreign Institutional Investors (FII) on Indian Economy

Authors: Kamlesh Shashikant Dave

Abstract:

The global financial crisis is rooted in the sub-prime crisis in the USA. During the boom years, mortgage brokers, attracted by big commissions, encouraged buyers with poor credit to accept housing mortgages with little or no down payment and without credit checks. A combination of low interest rates and a large inflow of foreign funds during the boom years helped the banks create easy credit conditions for many years. Banks lent money on the assumption that housing prices would continue to rise. The real estate bubble also encouraged the demand for houses as financial assets. Banks and financial institutions later repackaged these debts with other high-risk debts and sold them to worldwide investors, creating financial instruments called collateralized debt obligations (CDOs). With the rise in interest rates, mortgage payments rose and defaults among the subprime category of borrowers increased accordingly. Through the securitization of mortgage payments, a recession developed in the housing sector and was consequently transmitted to the entire US economy and the rest of the world. The financial credit crisis moved the US and the global economy into recession. The Indian economy was also affected by the spillover effects of the global financial crisis. A strong saving habit among the population, strong fundamentals, and a conservative regulatory regime saved the Indian economy from going out of gear, though significant parts of the economy slowed down. Industrial activity, particularly in the manufacturing and infrastructure sectors, decelerated. The services sector also slowed in the construction, transport, trade, communication, and hotels and restaurants sub-sectors. The financial crisis had some adverse impact on the IT sector. Exports declined in absolute terms in October. Higher input costs and dampened demand dented corporate margins, while the uncertainty surrounding the crisis affected business confidence. To summarize, reckless subprime lending, the loose monetary policy of the US, the expansion of financial derivatives beyond acceptable norms, and the greed of Wall Street led to this exceptional global financial and economic crisis. Thus, the global credit crisis of 2008 highlights the need to redesign both the global and domestic financial regulatory systems, not only to properly address systemic risk but also to support their proper functioning (i.e. financial stability). Such a design requires: 1) well-managed financial institutions with effective corporate governance and risk management systems; 2) disclosure requirements sufficient to support market discipline; 3) proper mechanisms for resolving problem institutions; and 4) mechanisms to protect financial services consumers in the event of financial institution failure.

Keywords: FIIs, BSE, Sensex, global impact

Procedia PDF Downloads 441
890 Steps of the Pancreatic Differentiation in the Grass Snake (Natrix natrix) Embryos

Authors: Magdalena Kowalska, Weronika Rupik

Abstract:

The pancreas is an important organ present in all vertebrate species. It contains two different tissues, exocrine and endocrine, that act as two glands in one. The development and differentiation of the pancreas in reptiles are poorly known in comparison to other vertebrates. Therefore, the aim of this study was to investigate the particular steps in the differentiation of the pancreas in grass snake (Natrix natrix) embryos. For this, histological methods (including hematoxylin and eosin, and Heidenhain's AZAN staining), transmission electron microscopy, and three-dimensional (3D) reconstructions from serial paraffin sections were used. The results of this study indicated that the first step of pancreas development in Natrix was the connection of the two pancreatic buds, the dorsal and the ventral one. Then, the duct walls in both buds started to be remodeled from a multilayered to a single-layered epithelium. This remodeling started in the dorsal bud and occurred simultaneously with the differentiation of the duct lumens, which proceeded by cavitation. During this process, the cells that had no contact with the mesenchyme underwent a form of cell death named anoikis. These findings indicated that the walls of the ducts in the embryonic pancreas of the grass snake were initially formed by abundant principal cells and single endocrine cells. Later, the basal and goblet cells differentiated. Among the endocrine cells, the B and A cells differentiated first, then the D and PP cells. The next step of pancreatic development was the withdrawal of the endocrine cells from the duct walls to form the pancreatic islets. The endocrine cells and islets were found only in the dorsal part of the pancreas in Natrix embryos, which is different from other vertebrate species. The islets were formed mainly by the A cells. Simultaneously with the differentiation of the endocrine pancreas, the acinar tissue started to differentiate. The source of the acinar cells was the pancreatic ducts, as in other vertebrates. Acinus formation began at the proximal part of the pancreas and proceeded in the caudal direction. The differentiating pancreatic ducts developed into a branched system that can be divided into extralobular, intralobular, and intercalated ducts, as in other vertebrate species; however, the pattern of branching was different. In conclusion, particular steps of pancreas differentiation in the grass snake differed from those in other vertebrates. It can be supposed that these differences are related to the specific topography of the snake’s internal organs and its taxonomic position. All specimens used in the study were captured according to the Polish regulations concerning the protection of wild species. Permission was granted by the Local Ethics Commission in Katowice (41/2010; 87/2015) and the Regional Directorate for Environmental Protection in Katowice (WPN.6401.257.2015.DC).

Keywords: embryogenesis, organogenesis, pancreas, Squamata

Procedia PDF Downloads 169
889 Evaluating Multiple Diagnostic Tests: An Application to Cervical Intraepithelial Neoplasia

Authors: Areti Angeliki Veroniki, Sofia Tsokani, Evangelos Paraskevaidis, Dimitris Mavridis

Abstract:

The plethora of diagnostic test accuracy (DTA) studies has led to the increased use of systematic reviews and meta-analyses of DTA studies. Clinicians and healthcare professionals often consult DTA meta-analyses to make informed decisions regarding the optimum test to choose and use for a given setting. For example, human papillomavirus (HPV) DNA, HPV mRNA, and cytology tests can be used for the diagnosis of cervical intraepithelial neoplasia grade 2+ (CIN2+). But which test is the most accurate? Studies directly comparing test accuracy are not always available, and comparisons between multiple tests create a network of DTA studies that can be synthesized through a network meta-analysis of diagnostic tests (DTA-NMA). The aim is to summarize the DTA-NMA methods for at least three index tests presented in the methodological literature. We illustrate the application of the methods using a real data set on the comparative accuracy of HPV DNA, HPV mRNA, and cytology tests for cervical cancer. A search was conducted in PubMed, Web of Science, and Scopus from inception until the end of July 2019 to identify full-text research articles that describe a DTA-NMA method for three or more index tests. Since the joint classification of the results of one index test against the results of another index test, amongst those with the target condition and amongst those without it, is rarely reported in DTA studies, only methods requiring the 2x2 tables of the results of each index test against the reference standard were included. Studies of any design published in English were eligible for inclusion. Relevant unpublished material was also included. Ten relevant studies were finally included and their methodology evaluated. The DTA-NMA methods that have been presented in the literature are described together with their advantages and disadvantages. In addition, using 37 studies for cervical cancer obtained from a published Cochrane review as a case study, an application of the identified DTA-NMA methods to determine the most promising test (in terms of sensitivity and specificity) for use as the best screening test to detect CIN2+ is presented. In conclusion, different approaches to the comparative DTA meta-analysis of multiple tests may lead to different results and hence may influence decision-making. Acknowledgment: This research is co-financed by Greece and the European Union (European Social Fund - ESF) through the Operational Programme «Human Resources Development, Education and Lifelong Learning 2014-2020» in the context of the project “Extension of Network Meta-Analysis for the Comparison of Diagnostic Tests” (MIS 5047640).
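
As a toy illustration of the kind of 2x2-table input the included methods require, the sketch below pools sensitivity and specificity per index test by simply summing the tables; this naive pooling is for illustration only and is not one of the hierarchical DTA-NMA models reviewed in the study, and all counts are invented.

```python
# Toy sketch: naive pooling of 2x2 tables (TP, FP, FN, TN) per index test.
# Counts are invented placeholders, not data from the Cochrane review.
import numpy as np

studies = {
    "HPV DNA":  [(90, 20, 10, 80), (85, 25, 15, 75)],
    "HPV mRNA": [(80, 10, 20, 90)],
    "Cytology": [(70, 5, 30, 95), (65, 8, 35, 92)],
}

for test, tables in studies.items():
    tp, fp, fn, tn = np.sum(tables, axis=0)
    sens = tp / (tp + fn)     # proportion of CIN2+ cases correctly detected
    spec = tn / (tn + fp)     # proportion of non-cases correctly classified
    print(f"{test}: pooled sensitivity {sens:.2f}, specificity {spec:.2f}")
```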

Keywords: colposcopy, diagnostic test, HPV, network meta-analysis

Procedia PDF Downloads 138
888 Technology and the Need for Integration in Public Education

Authors: Eric Morettin

Abstract:

Cybersecurity and digital literacy are pressing issues among Canadian citizens, yet formal education does not provide today’s students with the knowledge and skills needed to adapt to these challenges within the physical and digital labor market. Canada’s current education systems do not highlight the importance of these fields, aside from using technology for learning management systems and alternative methods of assignment completion. Educators are not properly trained to integrate technology into the compulsory courses within public education so as to better prepare their learners for these topics and for Canada’s digital economy. ICTC addresses these gaps in education and training through cross-Canadian educational programming in digital literacy and competency, cybersecurity, and coding, which is bridged with Canada’s provincially regulated K-12 curriculum guidelines. An analysis of Canada’s provincial education systems makes apparent gaps in learning related to technology, as well as inconsistent educational outcomes that do not adequately represent the current Canadian and global economies. Presently, only New Brunswick, Nova Scotia, Ontario, and British Columbia offer curriculum guidelines for cybersecurity, computer programming, and digital literacy. The remaining provinces do not address these skills in their curriculum guidelines. Moreover, certain courses in some provinces have not been updated since the 1990s. The three territories take curriculum strands from other provinces and use them as their foundation in education: Yukon uses the British Columbia curriculum in its entirety, while the Northwest Territories and Nunavut use a hybrid of the Alberta and Saskatchewan curricula as their foundation of learning. Provincially regulated education does not allow for consistency across the country’s educational outcomes and what Canada’s students will achieve, especially when curriculum outcomes have not been updated to reflect present-day society. Through this, ICTC has aligned Canada’s provincially regulated curriculum and created opportunities for focused education in the realm of technology to better serve Canada’s present learners and teachers, while addressing inequalities and applicability within curriculum strands and outcomes across the country. As a result, lessons, units, and formal assessment strategies have been created to benefit students and teachers in this interdisciplinary, cross-curricular practice, as well as to meet compulsory education requirements and develop skills and literacy in cyber education. Teachers can access these lessons and units through ICTC’s website, as well as receive professional development regarding the assessment and implementation of these offerings from ICTC’s education coordinators, whose combined experience exceeds 50 years of teaching in public, private, international, and Indigenous schools. We encourage you to take this opportunity, which will benefit students and educators and will bridge the learning and curriculum gaps in Canadian education to better reflect the ever-changing public, social, and career landscape that all citizens are a part of. Students are the future, and we at ICTC strive to ensure their futures are bright and prosperous.

Keywords: cybersecurity, education, curriculum, teachers

Procedia PDF Downloads 82
887 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in digital investigation are not keeping up with criminal developments; criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence is invaluable in identifying crime. It has been observed that algorithms based on artificial intelligence (AI) are highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which will assist in developing a plausible theory that can be presented as evidence in court. Researchers and other authorities have used the available data as evidence in court to convict a person. This research paper aims to develop a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The MADIK framework is implemented using the Java Agent Development Framework within Eclipse, with a Postgres repository and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5 percent of the time was lost, as the File Path Agent prescribed deleting 1,510 while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
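
As a purely illustrative sketch of the kind of task the Hash Set Agent performs, the snippet below hashes files under an extracted evidence tree and flags matches against a known hash set; the paths and the placeholder hash value are assumptions, and this is not the MADIK implementation (which is Java/JADE-based).

```python
# Illustrative only: hash files in an evidence directory and flag known-hash matches.
import hashlib
from pathlib import Path

KNOWN_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}   # placeholder MD5 values of interest

def md5_of(path: Path, chunk: int = 1 << 20) -> str:
    h = hashlib.md5()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def scan(evidence_root: str):
    for p in Path(evidence_root).rglob("*"):
        if p.is_file() and md5_of(p) in KNOWN_HASHES:
            yield p                                   # match -> report to the investigator

for hit in scan("./lone_wolf_extracted"):             # assumed extraction directory
    print("Match:", hit)
```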

Keywords: artificial intelligence, computer science, criminal investigation, digital forensics

Procedia PDF Downloads 212
886 Epidemiology of Low Back Pain among Nurses Working in Public Hospitals of Addis Ababa, Ethiopia

Authors: Mengestie Mulugeta Belay, Serebe Abay Gebrie, Biruk Lambbiso Wamisho, Amare Worku

Abstract:

Background: Low back pain (LBP) related to the nursing profession is a very common public health problem throughout the world. Various risk factors have been implicated in its etiology, and LBP is assumed to be of multi-factorial origin, as individual, work-related, and psychosocial factors can contribute to its development. Objectives: To determine the prevalence and identify risk factors of LBP among nurses working in Addis Ababa City public hospitals, Ethiopia, in the year 2015. Settings: Addis Ababa University’s Black-Lion (‘Tikur Anbessa’) Hospital (BLH) is the country’s highest tertiary-level referral and teaching hospital. The three departments connected with this study, Radiology, Pathology, and Orthopedics, run undergraduate and residency programs and receive referred patients from all over the country. Methods: A cross-sectional study with internal comparison was conducted throughout the period October-December 2015. The sample was chosen by a simple random sampling technique, taking the lists of nurses from human resource departments as the sampling frame. A well-structured, pre-tested, and self-administered questionnaire was used to collect quantifiable information. The questionnaire included socio-demographic items, back pain features, consequences of back pain, and work-related and psychosocial factors. The collected data were entered into EpiInfo version 3.5.4 and analyzed with SPSS. A probability level of 0.05 or less and a 95% confidence level were used to indicate statistical significance. Ethical clearance was obtained from all relevant administrative bodies, hospitals, and study participants. Results: The study included 395 nurses, giving a response rate of 91.9%. The mean age was 30.6 (±8.4) years. The majority of the respondents were female (285, 72.2%). Nearly half of the participants (n=181, 45.8%; 95% CI: 40.8%-50.6%) complained of low back pain. There was a statistically significant association between low back pain and working shift, physical activity at work, sleep disturbance, and feeling little pleasure in doing things. Conclusion: A high prevalence of low back pain was found among nurses working in Addis Ababa public hospitals. Recognition and preventive measures, such as providing resting periods, should be adopted to reduce the risk of low back pain in nurses working in public hospitals.

Keywords: low back pain, risk factors, nurses, public hospitals

Procedia PDF Downloads 309
885 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks

Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo

Abstract:

In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located at several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first is the setting of the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of products to the workstations and flow racks, aimed at achieving maximal throughput of picked products over the entire system given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load in the flow racks and maximize overall efficiency. We have developed an operations research model for each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin. The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with a non-standard min-max criterion, in which the workload maximum is calculated across all workstations in the center and the exterior minimum is calculated across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The LC model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
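
A rough sketch of a standard heuristic for the first-echelon formulation (packing product workloads into the minimum number of capacity-constrained workstations) is given below; it is a generic first-fit-decreasing routine on assumed data, not the authors' algorithm, which handles additional constraints.

```python
# First-fit-decreasing heuristic for a simple bin-packing view of echelon one.
# Workloads and the picker capacity are illustrative numbers only.
def first_fit_decreasing(workloads, capacity):
    """Return a list of workstations, each a list of assigned product workloads."""
    stations = []                                   # each entry: [remaining_capacity, [workloads]]
    for w in sorted(workloads, reverse=True):
        for st in stations:
            if st[0] >= w:                          # fits into an existing workstation
                st[0] -= w
                st[1].append(w)
                break
        else:
            stations.append([capacity - w, [w]])    # open a new workstation
    return [st[1] for st in stations]

demo = first_fit_decreasing([40, 10, 60, 30, 50, 20, 25], capacity=100)
print(f"{len(demo)} workstations needed:", demo)
```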

Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm

Procedia PDF Downloads 226
884 A Reduced Ablation Model for Laser Cutting and Laser Drilling

Authors: Torsten Hermanns, Thoufik Al Khawli, Wolfgang Schulz

Abstract:

In laser cutting as well as in long-pulsed laser drilling of metals, it can be demonstrated that the ablation shape that is formed (the shape of the cut faces or the hole shape, respectively) approaches a so-called asymptotic shape, such that it changes only slightly or not at all with further irradiation. These findings are already known from the ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in laser cutting and long-pulse drilling of metals is identified, and its underlying mechanism is numerically implemented, tested, and clearly confirmed by comparison with experimental data. In detail, there is now a model that allows the simulation of the temporal (pulse-resolved) evolution of the hole shape in laser drilling as well as the final (asymptotic) shape of the cut faces in laser cutting. This simulation requires far fewer resources, such that it can even run on common desktop PCs or laptops. Individual parameters can be adjusted using sliders; the simulation result appears in an adjacent window and changes in real time. This is made possible by an application-specific reduction of the underlying ablation model. Because this reduction dramatically decreases the complexity of the calculation, it produces a result much more quickly. This means that the simulation can be carried out directly at the laser machine. Time-intensive experiments can be reduced and set-up processes can be completed much faster. The high speed of simulation also opens up a range of entirely different options, such as metamodeling. Suitable for complex applications with many parameters, metamodeling involves generating high-dimensional data sets with the parameters and several evaluation criteria for process and product quality. These sets can then be used to create individual process maps that show the dependency of individual parameter pairs. This advanced simulation makes it possible to find global and local extreme values through mathematical manipulation. Such simultaneous optimization of multiple parameters is scarcely possible by experimental means. However, the software’s potential does not stop there; time-intensive calculations exist in many areas of industry. In laser welding or laser additive manufacturing, for example, the simulation of thermally induced residual stresses still uses considerable computing capacity or is not even possible. Transferring the principle of reduced models promises substantial savings there, too.
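
The metamodeling idea mentioned above can be sketched as follows: sample the reduced model over a parameter range, fit a cheap surrogate, and query the surrogate to build a process map for a parameter pair; the stand-in 'reduced model' function and the parameter ranges are assumptions, not the actual ablation model.

```python
# Hedged metamodeling sketch: surrogate fit over samples of a (stand-in) reduced model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def reduced_model(power, speed):
    # placeholder for the reduced ablation model: returns a scalar quality criterion
    return np.sin(power / 500.0) * np.exp(-((speed - 3.0) ** 2))

rng = np.random.default_rng(0)
P = rng.uniform(500, 3000, 200)     # laser power samples [W] (assumed range)
V = rng.uniform(1, 6, 200)          # cutting speed samples [m/min] (assumed range)
X = np.column_stack([P, V])
y = reduced_model(P, V)

surrogate = GaussianProcessRegressor().fit(X, y)    # metamodel over the generated data set

# Query on a dense grid to build a process map and locate a predicted optimum
gp, gv = np.meshgrid(np.linspace(500, 3000, 50), np.linspace(1, 6, 50))
quality = surrogate.predict(np.column_stack([gp.ravel(), gv.ravel()]))
best = quality.argmax()
print(f"Predicted optimum near power = {gp.ravel()[best]:.0f} W, speed = {gv.ravel()[best]:.2f} m/min")
```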

Keywords: asymptotic ablation shape, interactive process simulation, laser drilling, laser cutting, metamodeling, reduced modeling

Procedia PDF Downloads 213
883 Measuring the Economic Impact of Cultural Heritage: Comparative Analysis of the Multiplier Approach and the Value Chain Approach

Authors: Nina Ponikvar, Katja Zajc Kejžar

Abstract:

While the positive impacts of heritage on a broad societal spectrum have long been recognized and measured, the economic effects of the heritage sector are often less visible and frequently underestimated. At the macro level, economic effects are usually studied based on one of two mainstream approaches, i.e. either the multiplier approach or the value chain approach. Consequently, there is limited comparability of the empirical results due to the use of different methodological approaches in the literature. Furthermore, it is often not clear on what criteria the chosen approach was selected. Our aim is to draw attention to the difference in the scope of effects encompassed by the two most frequent methodological approaches to the valuation of economic effects of cultural heritage at the macroeconomic level, i.e. the multiplier approach and the value chain approach. We show that while the multiplier approach provides a systematic, theory-based view of economic impacts, it requires more data and analysis, whereas the value chain approach has less solid theoretical foundations and depends on the availability of appropriate data to identify the contribution of cultural heritage to other sectors. We conclude that the multiplier approach underestimates the economic impact of cultural heritage, mainly due to the narrow definition of cultural heritage in the statistical classification and the inability to identify the part of the contribution of cultural heritage that is hidden in other sectors. Yet it is not possible to clearly determine whether the value chain method overestimates or underestimates the actual economic impact of cultural heritage, since there is a risk that the direct effects are overestimated and double counted, while not all indirect and induced effects are considered. Accordingly, these two approaches are not substitutes but rather complements. Consequently, a direct comparison of the estimated impacts is not possible and should not be done, due to the different scope. To illustrate the difference in the impact assessment of cultural heritage, we apply both approaches to the case of Slovenia in the 2015-2022 period and measure the economic impact of the cultural heritage sector in terms of turnover, gross value added, and employment. The empirical results clearly show that the estimation of the economic impact of a sector using the multiplier approach is more conservative, while the estimates based on the value chain approach capture a much broader range of impacts. According to the multiplier approach, each euro in the cultural heritage sector generates an additional 0.14 euros in indirect effects and an additional 0.44 euros in induced effects. Based on the value chain approach, the indirect economic effect of the “narrow” heritage sectors is amplified by the impact of cultural heritage activities on other sectors. Accordingly, every euro of sales and every euro of gross value added in the cultural heritage sector generates approximately 6 euros of sales and 4 to 5 euros of value added in other sectors. In addition, each employee in the cultural heritage sector is linked to 4 to 5 jobs in other sectors.
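
The multipliers quoted above translate into simple arithmetic; the sketch below applies them to an invented direct-output figure purely to show how the total impact is composed (the 0.14 and 0.44 multipliers come from the abstract, everything else is a placeholder).

```python
# Toy arithmetic with the multipliers reported above; the direct output figure is invented.
direct_output = 100.0          # EUR million of direct heritage-sector output (assumed)
indirect_multiplier = 0.14     # indirect effect per euro of direct output (from abstract)
induced_multiplier = 0.44      # induced effect per euro of direct output (from abstract)

indirect = direct_output * indirect_multiplier
induced = direct_output * induced_multiplier
total = direct_output + indirect + induced

print(f"Direct {direct_output:.1f} + indirect {indirect:.1f} + induced {induced:.1f} "
      f"= total {total:.1f} EUR million "
      f"(overall multiplier {total / direct_output:.2f} per euro of direct output)")
```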

Keywords: economic value of cultural heritage, multiplier approach, value chain approach, indirect effects, Slovenia

Procedia PDF Downloads 75
882 An Easy Approach for Fabrication of Macroporous Apatite-Based Bone Cement Used As Potential Trabecular Bone Substitute

Authors: Vimal Kumar Dewangan, T. S. Sampath Kumar, Mukesh Doble, Viju Daniel Varghese

Abstract:

The apatite-based, i.e., calcium-deficient hydroxyapatite (CDHAp), bone cement is a well-known potential bone graft/substitute in orthopaedics due to its chemical composition, which is similar to natural bone mineral. Therefore, an easy approach was attempted to fabricate an apatite-based (CDHAp) bone cement with improved injectability, bioresorbability, and macroporosity. In this study, the desired bone cement was developed by mixing the solid phase (consisting of wet-chemically synthesized nanocrystalline hydroxyapatite and commercially available synthetic tricalcium phosphate) and the liquid phase (consisting of a cement-binding accelerator with a few biopolymers in a dilute acidic solution), along with either a liquid porogen (polysorbate) or a solid porogen (mannitol, for comparison), in an optimized liquid-to-powder ratio. The fabricated cement sets within the clinically preferred setting time (≤20 minutes), is well injectable (>70%), and is stable at pH ~7.3-7.4 (physiological pH). The CDHAp-phase bone cement was obtained by immersing the fabricated, set cement in phosphate buffer solution and other similar artificial body fluids and incubating it under physiological conditions for seven days, as confirmed through X-ray diffraction and Fourier transform infrared spectroscopy analyses. The so-formed synthetic apatite-based bone cement has an acceptable compressive strength (within the range of trabecular bone), with an average interconnected pore size in the macropore range (~50-200 μm) inside the cement, as verified by scanning electron microscopy (SEM), mercury intrusion porosimetry, and micro-CT analysis. It is also biodegradable (degrading ~19-22% within 10-12 weeks) when incubated in artificial body fluids under physiological conditions. The biocompatibility study of the bone cement incubated with MG63 cells showed a significant increase in cell viability after the third day of incubation compared with the control, and the cells were well attached and spread completely on the surface of the bone cement, as confirmed through SEM and fluorescence microscopy analyses. Altogether, we can conclude that the developed synthetic macroporous apatite-based bone cement may have the potential to become a promising material for use as a trabecular bone substitute.

Keywords: calcium deficient hydroxyapatite, synthetic apatite-based bone cement, injectability, macroporosity, trabecular bone substitute

Procedia PDF Downloads 83
881 Designing a Thermal Management System for Lithium Ion Battery Packs in Electric Vehicles

Authors: Ekin Esen, Mohammad Alipour, Riza Kizilel

Abstract:

Rechargeable lithium-ion batteries have been replacing lead-acid batteries for the last decade due to their outstanding properties, such as high energy density, long shelf life, and almost no memory effect. In addition, being very light compared to lead-acid batteries has gained them their dominant place in the portable electronics market, and they are now the leading candidate for electric vehicles (EVs) and hybrid electric vehicles (HEVs). However, their performance strongly depends on temperature, and this causes some inconveniences for their utilization at extreme temperatures. Since weather conditions vary across the globe, this situation limits their utilization in EVs and HEVs and makes a thermal management system obligatory for the battery units. The objective of this study is to understand the thermal characteristics of Li-ion battery modules under various operating conditions and to design a thermal management system that enhances battery performance in EVs and HEVs. In the first part of our study, we investigated the thermal behavior of commercially available pouch-type 20 Ah LiFePO₄ (LFP) cells under various conditions. The main parameters were chosen as ambient temperature and discharge current rate. Each cell was charged and discharged at temperatures of 0°C, 10°C, 20°C, 30°C, 40°C, and 50°C. The current rate of the charging process was 1C, while discharge rates of 1C, 2C, 3C, 4C, and 5C were applied. Temperatures at 7 different points on the cells were measured throughout charging and discharging with N-type thermocouples, and a detailed temperature profile was obtained. In the second part of our study, we connected 4 cells in series by clinching and prepared 4S1P battery modules similar to those in EVs and HEVs. Three reference points were determined according to the findings of the first part of the study, and a thermocouple was placed on each reference point on the cells composing the 4S1P battery modules. In the end, temperatures at 6 points in the module and 3 points on the top surface were measured, and changes in the surface temperatures were recorded for different discharge rates (0.2C, 0.5C, 0.7C, and 1C) at various ambient temperatures (0°C-50°C). Afterwards, aluminum plates with channels were placed between the cells in the 4S1P battery modules, and temperatures were controlled with airflow. Airflow was provided with a regular compressor, and the effect of flow rate on cell temperature was analyzed. The diameters of the channels were in the millimeter range, and the shapes of the channels were chosen to make the cell temperatures uniform. Results showed that the designed thermal management system could help keep the cell temperatures in the modules uniform throughout charge and discharge processes. Beyond temperature uniformity, the system was also beneficial in keeping cell temperatures close to the optimum working temperature of Li-ion batteries. It is known that keeping the temperature at an optimum level and maintaining a uniform temperature throughout utilization can help obtain maximum power from the cells in battery modules for a longer time. Furthermore, it will increase safety by decreasing the risk of thermal runaway. Therefore, the current study is believed to be beneficial for the wider use of Li-ion batteries in battery modules of EVs and HEVs globally.

Keywords: lithium ion batteries, thermal management system, electric vehicles, hybrid electric vehicles

Procedia PDF Downloads 163
880 Prevalence of Rituximab Efficacy Over Immunosuppressants in Therapy of Systemic Sclerosis

Authors: Liudmila Garzanova, Lidia Ananyeva, Olga Koneva, Olga Ovsyannikova, Oxana Desinova, Mayya Starovoytova, Rushana Shayahmetova, Anna Khelkovskaya-Sergeeva

Abstract:

Objectives. Rituximab (RTX) has shown a positive effect in the treatment of systemic sclerosis (SSc), but there are still insufficient data comparing the effectiveness of RTX with immunosuppressants (IS). The aim of our study was to compare changes in lung function and skin score in SSc between two groups of patients (pts): those on RTX therapy (prescribed after ineffectiveness of previous therapy with IS) and those on therapy with IS only. Methods. This study included 103 pts who received RTX in addition to previous therapy (group 1) and 65 pts who received therapy with IS and prednisolone (group 2). The mean follow-up period was 12.6±10.7 months. In group 1, the mean age was 47±12.9 years, 88 pts (84%) were female, and 55 pts (53%) had the diffuse cutaneous subset of the disease. The mean disease duration was 6.2±5.5 years. 82% of pts had interstitial lung disease (ILD) and 92% were positive for ANA, 67% of whom were positive for antitopoisomerase-1. All pts received prednisolone at a dose of 11.3±4.5 mg/day; 47% of them received IS at inclusion. The cumulative mean dose of RTX was 1.7±0.6 g. In group 2, the mean age was 50.8±13.8 years, 53 pts (82%) were female, and 44 pts (68%) had the diffuse cutaneous subset of the disease. The mean disease duration was 8.8±7.7 years. 81% of pts had ILD and 88% were positive for ANA, 58% of whom were positive for antitopoisomerase-1. All pts received prednisolone at a dose of 8.69±4.28 mg/day; 57% of them received IS. Cyclophosphamide (CP) was received by 45% of pts, with a cumulative mean dose of 10.2±15.1 g. D-penicillamine was received by 30% of pts. Other pts were on mycophenolate mofetil or methotrexate therapy in single cases. The pts of the compared groups did not differ in the main demographic and clinical parameters. The results are presented as delta (Δ), the difference between the baseline parameter and the follow-up point. Results. In group 1 there was an improvement in all outcome parameters: an increase in forced vital capacity (% predicted), ΔFVC=4% (p=0.0004); stable diffusing capacity for carbon monoxide (% predicted), ΔDLCO=0.1%; improvement in the Rodnan skin score, ΔmRss=3.4 (p=0.001); and a decrease in the activity index (EScSG-AI), ΔActivity index=1.7 (p=0.001). In group 2 the changes were insignificant: ΔFVC=-2.3%, ΔmRss=0.87, ΔActivity index=0.3, but there was a significant decrease in DLCO: ΔDLCO=-5.1% (p=0.001). Conclusion. The results of our study confirm the positive effect of RTX in the complex therapy of pts with SSc (decrease in skin induration, increase in FVC, stabilization of DLCO). Meanwhile, pts on IS and prednisolone therapy showed worsening of lung function and insignificant changes in other clinical parameters. RTX could be considered a more effective option in the complex treatment of SSc in comparison with IS therapy.

Keywords: immunosuppressants, interstitial lung disease, systemic sclerosis, rituximab

Procedia PDF Downloads 81
879 Law, Resistance, and Development in Georgia: A Case of Namakhvani HPP

Authors: Konstantine Eristavi

Abstract:

The paper will contribute to the discussion on the pitfalls, limits, and possibilities of legal and rights discourse in opposing large infrastructural projects in the context of neoliberal globalisation. To this end, the paper will analyse the struggle against the Namakhvani HPP project in Georgia. The latter has been hailed by the government as one of the largest energy projects in the history of the country, with an enormous potential impact on energy security, energy independence, economic growth, and development. This takes place against the backdrop of decades of a market-led (or neoliberal) model of development in Georgia, characterised by structural adjustments, deregulation, privatisation, and a laissez-faire approach to foreign investment. In this context, the Georgian state vies with other low- and middle-income countries for foreign capital by offering potential investors, on the one hand, exemptions from social and environmental regulations and, on the other hand, huge legal concessions and safeguards, thereby participating in what is often called a “race to the bottom.” The Namakhvani project is a good example of this. At every stage, the project has been marred by violations of laws and regulations concerning transparency, participation, social and environmental regulations, and so on. Moreover, the leaked contract between the state and the developer reveals contractual safeguards which effectively insulate the investment, throughout the duration of the contract, from changes in national law that might adversely affect investors’ rights and returns. These clauses, aimed at preserving investors' economic position, place the contract above national law in many respects and even conflict with fundamental constitutional rights. In response to the perceived deficiencies of the project, one of the largest and most diverse social movements in the history of post-Soviet Georgia has been assembled, consisting of the local population, conservative and leftist groups, human rights and environmental NGOs, etc. Crucially, the resistance movement is actively using legal tools. In order to analyse both the limitations and possibilities of legal discourse, the paper will distinguish between internal and immanent critiques. Law as internal critique, in the context of the struggles around the Namakhvani project, while potentially fruitful in hindering the project, risks neglecting and reproducing those factors (e.g., the particular model of development) that made such contractual concessions and safeguards and the concomitant rights violations possible in the first place. On the other hand, the use of rights and law as part of an immanent critique articulates a certain incapacity on the part of the addressee government to uphold existing laws and rights due to structural factors, hence pointing to the need for a fundamental change. This 'ruptural' form of legal discourse that the movement employs makes it possible to go beyond the discussion of breaches of law and enables a critical deliberation on the development model within which these violations and extraordinary contractual safeguards become necessary. It will be argued that it is this form of immanent critique that expresses the emancipatory potential of legal discourse.

Keywords: law, resistance, development, rights

Procedia PDF Downloads 79
878 From Vegetarian to Cannibal: A Literary Analysis of a Journey of Innocence in ‘Life of Pi’

Authors: Visvaganthie Moodley

Abstract:

Language use and aesthetic appreciation are integral to meaning-making in prose, as they are in poetry. However, in comparison to poetic analysis, literary analysis of prose that focuses on linguistics and stylistics is somewhat scarce, as it generally requires the study of lengthy texts. Nevertheless, the effect of linguistic and stylistic features in prose, as conscious design by authors for creating specific effects and conveying preconceived messages, is drawing the increasing attention of linguists and literary experts. A close examination of language use in prose can, among a host of literary purposes, convey emotive and cognitive values and contribute to making interpretations about how fictional characters are represented to the imaginative reader. This paper provides a literary analysis of Yann Martel’s narrative of a 14-year-old Indian boy, Pi, who had survived the wreck of a Japanese cargo ship, by focusing on his 227-day journey of tribulations, along with a Bengal tiger, on a lifeboat. The study favours a pluralistic approach blending literary criticism, linguistic analysis, and stylistic description. It adopts Leech and Short’s (2007) broad framework of linguistic and stylistic categories (lexical categories, grammatical categories, figures of speech etc. [sic], and context and cohesion) as well as a range of other relevant linguistic phenomena to show how the narrator, Pi, and the author influence the reader’s interpretations of Pi’s character. Such interpretations are made using the lens of Freud’s psychoanalytical theory (which focuses on the interplay of the instinctual id, the ego, and the moralistic superego) and Blake’s philosophy of innocence and experience (the two contrary states of the human soul). The paper traces Pi’s transformation from animal-loving, God-fearing vegetarian to brutal animal slayer and cannibal in his journey of survival. Through a close examination of the linguistic and stylistic features of the narrative, it argues that, despite evidence of butchery and cannibalism, Pi’s gruesome behaviour is motivated by extreme physiological and psychological duress and not intentional malice. Finally, the paper concludes that the voice of the narrator, Pi, and that of the author, Martel, act as powerful persuasive agents in influencing the reader to respond with a sincere flow of sympathy for Pi and to judge him as having retained his innocence in his instinctual need for survival.

Keywords: foregrounding, innocence and experience, lexis, literary analysis, psychoanalytical lens, style

Procedia PDF Downloads 169
877 Safety Profile of Human Papillomavirus Vaccines: A Post-Licensure Analysis of the Vaccine Adverse Events Reporting System, 2007-2017

Authors: Giulia Bonaldo, Alberto Vaccheri, Ottavio D'Annibali, Domenico Motola

Abstract:

The human papillomavirus (HPV) has been shown to cause several types of carcinoma, above all cervical intraepithelial neoplasia. From the early 1980s to today, thanks first to preventive screening campaigns (Pap test) and subsequently to the introduction of HPV vaccines on the market, the number of new cases of cervical cancer has decreased significantly. Three HPV vaccines are currently approved: Cervarix® (HPV2; virus types 16 and 18), Gardasil® (HPV4; types 6, 11, 16, 18), and Gardasil 9® (HPV9; types 6, 11, 16, 18, 31, 33, 45, 52, 58), which all protect against the two high-risk types (16 and 18) that are mainly involved in cervical cancers. Although the remarkable effectiveness of these vaccines has been demonstrated, in recent years there have been many concerns about their risk-benefit profile due to Adverse Events Following Immunization (AEFI). The purpose of this study is to support the ongoing discussion on the safety profile of HPV vaccines with real-life data derived from spontaneous reports of suspected AEFIs collected in the Vaccine Adverse Events Reporting System (VAERS). VAERS is a freely available national vaccine safety surveillance database of AEFI, co-administered by the Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration (FDA). We collected all reports between January 2007 and December 2017 related to HPV vaccines with a brand name (HPV2, HPV4, HPV9) or without (HPVX). A disproportionality analysis using the Reporting Odds Ratio (ROR) with a 95% confidence interval and p-value ≤ 0.05 was performed. Over the 10-year period, 54,889 reports of AEFI related to HPV vaccines, corresponding to 224,863 vaccine-event pairs, were retrieved from VAERS. The highest number of reports was related to Gardasil (n = 42,244), followed by Gardasil 9 (7,212) and Cervarix (3,904). The brand name of the HPV vaccine was not reported in 1,529 cases. The two most frequently reported and statistically significant events for each vaccine were: for Gardasil, dizziness (n = 5,053), ROR = 1.28 (95% CI 1.24 – 1.31), and syncope (4,808), ROR = 1.21 (1.17 – 1.25); for Gardasil 9, injection site pain (305), ROR = 1.40 (1.25 – 1.57), and injection site erythema (297), ROR = 1.88 (1.67 – 2.10); and for Cervarix, headache (672), ROR = 1.14 (1.06 – 1.23), and loss of consciousness (528), ROR = 1.71 (1.57 – 1.87). In total, we collected 406 reports of death and 2,461 cases of permanent disability in the ten-year period. Events involving incorrect vaccine storage or administration were not considered. The AEFI analysis showed that the most frequently reported events are non-serious and listed in the corresponding SmPCs. In addition to these, potential safety signals arose regarding less frequent and more severe AEFIs that would deserve further investigation. This has already happened with the European Medicines Agency (EMA) referral concerning POTS (postural orthostatic tachycardia syndrome) and CRPS (complex regional pain syndrome) associated with anti-papillomavirus vaccines.
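
For readers unfamiliar with disproportionality analysis, the following sketch illustrates how an ROR and its 95% confidence interval can be computed from a 2x2 table of spontaneous reports (Woolf method on the log scale). The counts in the example are placeholders chosen for illustration, not VAERS data, and the abstract does not specify the software actually used.

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """ROR for a vaccine-event pair from a 2x2 table of spontaneous reports.

    a: reports of the event of interest with the vaccine of interest
    b: reports of all other events with the vaccine of interest
    c: reports of the event of interest with all other vaccines
    d: reports of all other events with all other vaccines
    """
    ror = (a / b) / (c / d)
    # 95% confidence interval on the log scale (Woolf method)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(ror) - 1.96 * se_log)
    upper = math.exp(math.log(ror) + 1.96 * se_log)
    return ror, lower, upper

# Placeholder counts only (not VAERS data): e.g. dizziness reports for one vaccine
print(reporting_odds_ratio(a=5053, b=120000, c=40000, d=1200000))
```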

Keywords: adverse drug reactions, pharmacovigilance, safety, vaccines

Procedia PDF Downloads 161
876 Estimation of Antiurolithiatic Activity of a Biochemical Medicine, Magnesia phosphorica, in Ethylene Glycol-Induced Nephrolithiasis in Wistar Rats by Urine Analysis, Biochemical, Histopathological, and Electron Microscopic Studies

Authors: Priti S. Tidke, Chandragouda R. Patil

Abstract:

The present study was designed to investigate the effect of Magnesia phosphorica, a biochemical medicine, on urine screening, biochemical, histopathological, and electron microscopic images in ethylene glycol-induced nephrolithiasis in rats. Male Wistar albino rats were divided into six groups and were orally administered saline once daily (IR-sham and IR-control) or Magnesia phosphorica 100 mg/kg twice daily for 24 days. The effect of various dilutions of biochemical Mag phos (3x, 6x, 30x) on urine output was determined by comparing the urine volume collected by keeping individual animals in metabolic cages. Calcium oxalate urolithiasis and hyperoxaluria in male Wistar rats were induced by oral administration of 0.75% ethylene glycol p.o. daily for 24 days. Simultaneous administration of biochemical 3x, 6x, and 30x Mag phos (100 mg/kg p.o. twice a day) along with ethylene glycol significantly decreased calcium oxalate, urea, creatinine, calcium, magnesium, chloride, phosphorus, albumin, and alkaline phosphatase content in urine compared with the vehicle-treated control group. After completion of the treatment period, animals were sacrificed, and kidneys were removed and subjected to microscopic examination for possible stone formation. Histological examination showed that kidneys treated with biochemical Mag phos (3x, 6x, 30x; 100 mg/kg, p.o.) along with ethylene glycol had inhibited calculi growth and a reduced number of stones compared with the control group. The 3x dilution of biochemical Mag phos and its crude equivalent also showed potent diuretic and antiurolithiatic activity in ethylene glycol-induced urolithiasis. A significant decrease in the weight of stones was observed after treatment in animals that received the 3x dilution of biochemical Mag phos and its crude equivalent in comparison with the control groups. From this study, it can be proposed that the 3x dilution of biochemical Mag phos exhibits a significant inhibitory effect on crystal growth and improves kidney function, substantiating claims on the biological activity of the twelve tissue remedies that can be tested scientifically through laboratory animal studies.

Keywords: Mag phos, Magnesia phosphorica, biochemic medicine, urolithiasis, kidney stone, ethylene glycol

Procedia PDF Downloads 424
875 Developing a Sustainable System to Deliver Early Intervention for Emotional Health through Australian Schools

Authors: Rebecca-Lee Kuhnert, Ron Rapee

Abstract:

Up to 15% of Australian youth will experience an emotional disorder, yet relatively few get the help they need. Schools provide an ideal environment through which we can identify young people who are struggling and provide them with appropriate help. Universal mental health screening is a method by which all young people in school can be quickly assessed for emotional disorders, after which identified youth can be linked to appropriate health services. Despite the obvious logic of this process, universal mental health screening has received little scientific evaluation and even less application in Australian schools. This study will develop methods for Australian education systems to help identify young people (aged 9-17 years) who are struggling with existing and emerging emotional disorders. Prior to testing, a series of focus groups will be run to get feedback and input from young people, parents, teachers, and mental health professionals. They will be asked about their thoughts on school-based screening methods and how best to help students at risk of emotional distress. Schools (n=91) across New South Wales, Australia, will be randomised to either immediate screening (in May 2021) or delayed screening (in February 2022). Students in immediate screening schools will complete a long online mental health screener consisting of standard emotional health questionnaires. Ultimately, this large set of items will be reduced to a small number of items to form the final brief screener. Students who score in the “at-risk” range on any measure of emotional health problems will be identified to schools and offered pathways to relevant help according to the most accepted and approved processes identified by the focus groups. Nine months later, the same process will occur among delayed screening schools. At this same time, students in the immediate screening schools will complete screening for a second time. This will allow a direct comparison of emotional health and help-seeking between youth whose schools had engaged in the screening and pathways-to-care process (immediate) and those whose schools had not engaged in the process (delayed). It is hypothesised that there will be a significant increase in students who receive help from mental health support services after screening, compared with baseline. It is also predicted that all students will show significantly less emotional distress after screening and access to pathways of care. This study will be an important contribution to Australian youth mental health prevention and early intervention by determining whether school screening leads to a greater number of young people with emotional disorders getting the help that they need and improving their mental health outcomes.

Keywords: children and young people, early intervention, mental health, mental health screening, prevention, school-based mental health

Procedia PDF Downloads 95
874 Comparative Study of Urban Structure between an Island-Type and a General-Type City

Authors: Tomoya Oshiro, Hiroko Ono

Abstract:

Japan's population is aging as the birthrate declines, which causes various problems such as a decrease in the country's gross domestic product. This is why local governments in Japan have recently been working toward sustainable cities, and it is essential to manage urban structure to make the compact city successful. There are many papers about the compact city; however, papers about compact cities of the island type are scarce. The purpose of this study is to clarify the differences in urban structure between an island-type and a general-type city. The method conducted in this research has two steps. First, using the evaluation indexes in the handbook, we evaluated the urban structures of cities in the same population class, from 50,000 to 100,000 people. Next, to clarify the differences in urban structure and features between island-type and general-type cities, we compared radar charts composed of the evaluation indexes of urban structure. Moreover, to clarify the relationship between the evaluation indexes and the place of residence, GIS software was used to display population density on a map. As a result of this research, the evaluation indexes for the management of local government and the local economy were found to be negative points when comparing island-type cities with general cities. However, the evaluation indexes for safety/security and low-carbon/energy proved to be positive points. The research into the distinctive features of the island-type urban structure shows that the management of local government and the local economy are weak points in these island-type cities. In addition, the public transportation coverage in Miyako Island, Sado Island, and Amakusa Island shows low values compared with other islands and with the average value. The relationship between the evaluation indexes of urban structure and the place of residence shows that the place of residence is related to public transportation coverage: if residences are spread out, public transportation coverage decreases. The results of this research reveal that the finances of island-type cities are a negative point compared to general cities, a problem caused by the declining population. In addition, because the place of residence is related to public transportation coverage, increasing that coverage requires considerable money, which may in turn cause further problems as the financial situation is affected. The conclusion of this research suggests that, to create compact cities in island-type cities, it is important to first address the problems concerning the management of local government and the local economy.
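
As a rough illustration of how normalized evaluation indexes can be compared visually between city types, the sketch below draws a radar chart with matplotlib. The index labels and values are invented for illustration only and are not taken from the handbook or from the study's results.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical evaluation indexes (normalized to 0-1); not the study's data
labels = ["local economy", "local government", "transport coverage",
          "safety/security", "low-carbon/energy"]
island = np.array([0.35, 0.40, 0.45, 0.75, 0.70])
general = np.array([0.60, 0.65, 0.55, 0.60, 0.55])

angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False)
# close the polygons so the radar outlines join up
angles_c = np.concatenate([angles, angles[:1]])
island_c = np.concatenate([island, island[:1]])
general_c = np.concatenate([general, general[:1]])

ax = plt.subplot(projection="polar")
ax.plot(angles_c, island_c, label="island-type city")
ax.plot(angles_c, general_c, label="general-type city")
ax.set_xticks(angles)
ax.set_xticklabels(labels)
ax.legend(loc="lower right")
plt.show()
```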

Keywords: sustainable city, comparative analysis, geographic information system, urban structure

Procedia PDF Downloads 150
873 The Characterization and Optimization of Bio-Graphene Derived from Oil Palm Shell through a Slow Pyrolysis Environment and Its Electrical Conductivity and Capacitance Performance as Electrode Materials in Fast-Charging Supercapacitor Applications

Authors: Nurhafizah Md. Disa, Nurhayati Binti Abdullah, Muhammad Rabie Bin Omar

Abstract:

This research intends to address the existing knowledge gap arising from the lack of substantial studies on fabricating and characterizing bio-graphene from oil palm shell (OPS) by means of pre-treatment and slow pyrolysis. By fabricating bio-graphene from OPS, a novel material can be procured and used for graphene-based research. The produced bio-graphene is expected to possess the characteristic hexagonal graphene pattern and graphene properties comparable to previously fabricated graphene. The OPS will be pre-treated with zinc chloride (ZnCl₂) and iron(III) chloride (FeCl₃) and then converted to bio-graphene thermally by slow pyrolysis. The pyrolyser's final temperature, heating rate, and residence time will be set at 550 °C, 5 °C/min, and 1 hour, respectively. Finally, the charred product will be washed with hydrochloric acid (HCl) to remove metal residue. The obtained bio-graphene will undergo different analyses to investigate the physicochemical properties of the two-dimensional layer of sp2-hybridized carbon atoms in a hexagonal lattice structure. The analyses to be carried out are Raman spectroscopy (RAMAN), UV-visible spectroscopy (UV-VIS), transmission electron microscopy (TEM), scanning electron microscopy (SEM), and X-ray diffraction (XRD). RAMAN is used to analyze the three key peaks found in graphene, namely the D, G, and 2D peaks, which will evaluate the quality of the bio-graphene structure and the number of layers generated. To corroborate the layer analysis, UV-VIS may be used to establish similar results on the number of graphene layers and also to characterize the types of graphene procured. A clear physical image of the graphene can be obtained by TEM analysis, to study the structural quality and layer condition, and by SEM, to study the surface quality and the repeating porosity pattern. Lastly, the crystallinity of the produced bio-graphene, together with oxygen contamination and thus the pristineness of the graphene, can be established by XRD. In conclusion, this study aims to obtain bio-graphene from OPS as a novel material via pre-treatment with ZnCl₂ and FeCl₃ and slow pyrolysis, and to provide a characterization analysis of the bio-graphene that will be beneficial for future graphene-related applications. The characterization should yield findings similar to previous papers so as to confirm the graphene quality.
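
As one illustrative step of the Raman analysis described above, the sketch below estimates the D/G and 2D/G intensity ratios, which are commonly used to assess defect density and the number of layers in graphene-like carbons. The band positions (~1350, ~1580, and ~2700 cm⁻¹) are generic literature values, the spectrum used here is a random placeholder, and the snippet is not part of the study's actual workflow.

```python
import numpy as np

def band_intensity(shift, intensity, center, half_width=50):
    """Maximum intensity within a window around a Raman band position."""
    mask = (shift > center - half_width) & (shift < center + half_width)
    return intensity[mask].max()

# shift (cm^-1) and intensity arrays would normally come from the spectrometer export
shift = np.linspace(1000, 3000, 2001)
intensity = np.random.default_rng(0).random(shift.size)  # placeholder spectrum only

i_d = band_intensity(shift, intensity, 1350)   # D band: disorder / defects
i_g = band_intensity(shift, intensity, 1580)   # G band: sp2 carbon network
i_2d = band_intensity(shift, intensity, 2700)  # 2D band: stacking / layer number

print("ID/IG =", i_d / i_g, " I2D/IG =", i_2d / i_g)
```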

Keywords: oil palm shell, bio-graphene, pre-treatment, slow pyrolysis

Procedia PDF Downloads 83
872 Optimized Deep Learning-Based Facial Emotion Recognition System

Authors: Erick C. Valverde, Wansu Lim

Abstract:

Facial emotion recognition (FER) systems have recently been developed for more advanced computer vision applications. The ability to identify human emotions would enable smart healthcare facilities to diagnose mental health illnesses (e.g., depression and stress) as well as better human social interactions with smart technologies. The FER system involves two steps: 1) a face detection task and 2) a facial emotion recognition task. It classifies the human expression into categories such as angry, disgust, fear, happy, sad, surprise, and neutral. This system requires intensive research to address issues with human diversity, various unique human expressions, and the variety of human facial features due to age differences. These issues generally affect the ability of the FER system to detect human emotions with high accuracy. Early FER systems used simple supervised classification algorithms like K-nearest neighbors (KNN) and artificial neural networks (ANN). These conventional FER systems suffer from low accuracy due to their inefficiency in extracting the significant features of several human emotions. To increase the accuracy of FER systems, deep learning (DL)-based methods, like convolutional neural networks (CNN), have been proposed. These methods can find more complex features in the human face by means of the deeper connections within their architectures. However, the inference speed and computational costs of a DL-based FER system are often disregarded in exchange for higher accuracy results. To cope with this drawback, an optimized DL-based FER system is proposed in this study. An extreme version of Inception V3, known as the Xception model, is leveraged by applying different network optimization methods. Specifically, network pruning and quantization are used to enable lower computational costs and reduce memory usage, respectively. To support low resource requirements, a 68-landmark face detector from Dlib is used in the early step of the FER system. Furthermore, a DL compiler is utilized to incorporate advanced optimization techniques into the Xception model to improve the inference speed of the FER system. In comparison to VGG-Net and ResNet50, the proposed optimized DL-based FER system experimentally demonstrates the intended effects of the network optimization methods used. As a result, the proposed approach can be used to create an efficient and real-time FER system.
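
The abstract does not disclose the exact toolchain, so the sketch below only illustrates the general recipe it describes: a Dlib 68-landmark detector for the face detection step, a Keras Xception backbone with a seven-class emotion head, and post-training (dynamic-range) quantization as one representative optimization. The landmark file path, the seven-class head, and the output file name are assumptions for illustration, not the authors' implementation.

```python
import dlib
import tensorflow as tf

# Step 1: face detection with Dlib (the 68-landmark predictor file is assumed
# to have been downloaded separately)
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# Step 2: Xception backbone with an illustrative 7-class emotion head
# (angry, disgust, fear, happy, sad, surprise, neutral); not the authors' exact model
base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                      input_shape=(299, 299, 3), pooling="avg")
outputs = tf.keras.layers.Dense(7, activation="softmax")(base.output)
model = tf.keras.Model(base.input, outputs)

# Step 3: post-training dynamic-range quantization to cut memory use and
# speed up inference (one of several possible optimization methods)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
open("fer_xception_quant.tflite", "wb").write(tflite_model)
```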

Keywords: deep learning, face detection, facial emotion recognition, network optimization methods

Procedia PDF Downloads 118
871 Using Arellano-Bover/Blundell-Bond Estimator in Dynamic Panel Data Analysis – Case of Finnish Housing Price Dynamics

Authors: Janne Engblom, Elias Oikarinen

Abstract:

A panel dataset is one that follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models, which form a wide range of linear models. A special class of panel data models is dynamic in nature. A complication regarding a dynamic panel data model that includes the lagged dependent variable is the endogeneity bias of the estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Arellano-Bover/Blundell-Bond generalized method of moments (GMM) estimator, which is an extension of the Arellano-Bond model in which past values, and different transformations of past values, of the potentially problematic independent variable are used as instruments together with other instrumental variables. The Arellano–Bover/Blundell–Bond estimator augments Arellano–Bond by making the additional assumption that first differences of the instrument variables are uncorrelated with the fixed effects. This allows the introduction of more instruments and can dramatically improve efficiency. It builds a system of two equations, the original equation and the transformed one, and is also known as system GMM. In this study, Finnish housing price dynamics were examined empirically by using the Arellano–Bover/Blundell–Bond estimation technique together with ordinary least squares (OLS). The aim of the analysis was to provide a comparison between conventional fixed-effects panel data models and dynamic panel data models. The Arellano–Bover/Blundell–Bond estimator is suitable for this analysis for a number of reasons: it is a general estimator designed for situations with 1) a linear functional relationship; 2) one left-hand-side variable that is dynamic, depending on its own past realizations; 3) independent variables that are not strictly exogenous, meaning they are correlated with past and possibly current realizations of the error; 4) fixed individual effects; and 5) heteroskedasticity and autocorrelation within individuals but not across them. Based on data for 14 Finnish cities over 1988-2012, estimates of short-run housing price dynamics differed considerably depending on the models and instruments used. In particular, the use of different instrumental variables caused variation in the model estimates and in their statistical significance. This was particularly clear when comparing OLS estimates with those of the different dynamic panel data models. Estimates provided by the dynamic panel data models were more in line with the theory of housing price dynamics.
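
The abstract does not state the estimation software used. As a purely illustrative sketch of why such estimators are needed, the snippet below simulates a dynamic panel with fixed effects and compares the within (fixed-effects) estimate of the persistence parameter, which suffers from Nickell bias when the time dimension is short, with a simple Anderson-Hsiao instrumental-variable estimate on first differences; Arellano-Bond and Arellano-Bover/Blundell-Bond system GMM extend this idea with many more moment conditions. All parameter values are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, rho = 200, 10, 0.8            # individuals, periods, true persistence

# Simulate y_it = rho * y_i,t-1 + eta_i + eps_it with individual fixed effects eta_i
eta = rng.normal(size=N)
y = np.zeros((N, T))
y[:, 0] = eta + rng.normal(size=N)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + eta + rng.normal(size=N)

# 1) Within (fixed-effects) estimator: biased downward for small T (Nickell bias)
y_t, y_lag = y[:, 1:], y[:, :-1]
y_t_dm = y_t - y_t.mean(axis=1, keepdims=True)
y_lag_dm = y_lag - y_lag.mean(axis=1, keepdims=True)
rho_fe = (y_t_dm * y_lag_dm).sum() / (y_lag_dm ** 2).sum()

# 2) Anderson-Hsiao IV on first differences: y_{i,t-2} instruments dy_{i,t-1}
dy_t = y[:, 2:] - y[:, 1:-1]        # dy_it for t = 2..T-1
dy_lag = y[:, 1:-1] - y[:, :-2]     # dy_i,t-1
z = y[:, :-2]                       # instrument: level lagged twice
rho_iv = (z * dy_t).sum() / (z * dy_lag).sum()

print(f"true rho = {rho}, within estimate = {rho_fe:.3f}, "
      f"Anderson-Hsiao IV estimate = {rho_iv:.3f}")
```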

Keywords: dynamic model, fixed effects, panel data, price dynamics

Procedia PDF Downloads 1504
870 Characterization of β-Lactamase Resistance amongst Acinetobacter baumannii Isolated from Clinical Samples, Egypt

Authors: Amal Saafan, Kareem Al Sofy, Sameh AbdelGhani, Magdy Amin

Abstract:

Background: Acinetobacter spp. resistance to β-lactam antibiotics is mediated mainly by the production of different classes of β-lactamases; the detection of some of the genes responsible for β-lactamase production is the objective of this study. Methods: One hundred fifty bacterial isolates were recovered from blood, sputum, and urine specimens from different hospitals in Egypt. Sixty-nine isolates were identified as Acinetobacter baumannii using traditional biochemical tests, CHROM agar, MicroScan, and PCR amplification of the blaoxa-51like gene. Acinetobacter baumannii isolates were grouped into a carbapenem-resistant group (GP1), a cefotaxime-, ceftazidime-, and cefoxitin-resistant group (GP2), and a carbapenem and cephalosporin non-resistant group (GP3). Carbapenemase activity was screened using the modified Hodge test (MHT) for GP1. Metallo-β-lactamase screening was performed for MHT-positive isolates using the double disk synergy test (DDST) and the combined disk test (CDT). AmpC activity was screened using the AmpC disk test with Tris-EDTA, DDST, and CDT for GP2. Finally, PCR amplification of the blaoxa-51like, blaoxa-23like, blaIMP-like, blaVIM-like, and blaADC-like genes was performed for isolates that showed at least two positive results out of three in both the AmpC and carbapenemase phenotypic screening tests (obvious activity), in addition to GP3 (for comparison). Detection of blaoxa-51like and blaADC-like genes preceded by ISAba1 was also performed. Results: The antibiogram of the 69 pure Acinetobacter baumannii isolates resulted in 57, 64, and 2 isolates being enrolled into GP1, GP2, and GP3, respectively. Carbapenemase activity was shown by 49 (85.9%) isolates using MHT. Metallo-β-lactamase screening revealed 32 (65.3%) and 35 (71.4%) positive isolates using DDST and CDT, respectively. AmpC activity was shown by 43 (67.2%) and 50 (78.1%) isolates using the AmpC disk test with Tris-EDTA and both DDST and CDT, respectively. Twenty-seven isolates showed obvious activity; all of them (100%) harbored blaoxa-51like and blaADC-like genes, while blaoxa-23like, blaIMP-like, and blaVIM-like genes were harbored by 23 (85.2%), 9 (33.3%), and no isolates, respectively. Only 12 (44.4%) isolates harbored blaoxa-51like and blaADC-like genes preceded by ISAba1. GP3 isolates were positive only for blaoxa-51like and blaADC-like genes. Conclusion: It is not possible to correlate resistance with the presence of blaoxa-51like and blaADC-like genes or with the presence of ISAba1 immediately upstream as a transcriptional promoter. The blaoxa-23like gene played an important role in carbapenem resistance when compared with the blaIMP-like and blaVIM-like genes.

Keywords: acinetobacter, beta-lactams, resistance, antimicrobial agents

Procedia PDF Downloads 343
869 Increased Efficiency during Oxygen Carrier Aided Combustion of Municipal Solid Waste in an Industrial Scaled Circulating Fluidized Bed-Boiler

Authors: Angelica Corcoran, Fredrik Lind, Pavleta Knutsson, Henrik Thunman

Abstract:

Solid waste volumes are currently predominantly deposited in landfills. Furthermore, impending climate change requires new solutions for a sustainable future energy mix. Currently, solid waste is globally utilized only to a small extent as fuel in combustion for heat and power production. Due to its variable composition and size, solid waste is considered difficult to combust and requires a technology with high fuel flexibility. One of the commercial technologies used for combustion of such difficult fuels is the circulating fluidized bed (CFB). In a CFB boiler, fine particles of a solid material are used as 'bed material', which is accelerated by the incoming combustion air that causes the bed material to fluidize. The chosen bed material has conventionally been silica sand, with the main purpose of being a heat carrier, as it transfers heat released by the combustion to the heat-transfer surfaces. However, the release of volatile compounds occurs rapidly in comparison with the lateral mixing in the combustion chamber. To ensure complete combustion, a surplus of air is introduced, which decreases the total efficiency of the boiler. In recent years, the concept of partly or entirely replacing the silica sand with an oxygen carrier as bed material has been developed. By introducing an oxygen carrier to the combustion chamber, combustion can be spread out both temporally and spatially in the boiler. Specifically, the oxygen carrier can take up oxygen from the combustion air where it is in abundance and release it to combustible gases where oxygen is in deficit. The concept is referred to as oxygen carrier aided combustion (OCAC), where the natural ore ilmenite (FeTiO3) has been the oxygen carrier used. The authors have validated the oxygen buffering ability of ilmenite during combustion of biomass in the Chalmers 12-MWth CFB boiler in previous publications. Furthermore, the concept has been demonstrated at full industrial scale during combustion of municipal solid waste (MSW) in E.ON’s 75 MWth CFB boiler. The experimental campaigns have shown increased mass transfer of oxygen inside the boiler when combusting both biomass and MSW. As a result, a higher degree of burnout is achieved inside the combustion chamber and the plant can be operated at a lower surplus of air. Moreover, the buffer of oxygen provided by the oxygen carrier makes the system less sensitive to disruptions in operation. In conclusion, combusting difficult fuels with OCAC results in higher operational stability and an increase in boiler efficiency.

Keywords: OCAC, ilmenite, combustion, CFB

Procedia PDF Downloads 236
868 Biomechanical Modeling, Simulation, and Comparison of Human Arm Motion to Mitigate Astronaut Task during Extra Vehicular Activity

Authors: B. Vadiraj, S. N. Omkar, B. Kapil Bharadwaj, Yash Vardhan Gupta

Abstract:

During manned exploration of space, missions will require astronaut crewmembers to perform Extra Vehicular Activities (EVAs) for a variety of tasks. These EVAs take place after long periods of operations in space, and in and around unique vehicles, space structures, and systems. Considering the remoteness and time spans in which these vehicles will operate, EVA system operations should utilize common worksites, tools, and procedures as much as possible to increase the efficiency of training and proficiency in operations. All of the preparations need to be carried out based on studies of astronaut motions. Until now, development and training activities associated with the planned EVAs in the Russian and U.S. space programs have relied almost exclusively on physical simulators. These experimental tests are expensive and time-consuming. During the past few years, a strong increase has been observed in the use of computer simulations due to the fast developments in computer hardware and simulation software. Based on this idea, an effort to develop a computational simulation system to model human dynamic motion for EVA has been initiated. This study focuses on the simulation of an astronaut moving orbital replaceable units into worksites or removing them from worksites. Our physics-based methodology helps fill the gap in the quantitative analysis of astronaut EVA by providing a multisegment human arm model. The simulation work described in the study improves on the realism of previous efforts, incorporating joint stops to account for the physiological limits of the range of motion. To demonstrate the utility of this approach, the human arm model is simulated virtually using ADAMS/LifeMOD® software. The kinematics of the astronaut's task are studied through joint angles and torques. The simulation results obtained are validated against a numerical simulation based on the principles of the Newton-Euler method. Torques determined using the mathematical model are compared among the subjects to assess the grace and consistency of the task performed. We conclude that, due to the uncertain nature of exploration-class EVA, a virtual model developed using a multibody dynamics approach offers significant advantages over traditional human modeling approaches.
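
To make the inverse-dynamics validation step concrete, the sketch below computes joint torques for a planar two-link arm from a prescribed joint state using the standard rigid-body equations of motion (equivalent, for this simple case, to a recursive Newton-Euler formulation). The link masses, lengths, inertias, and the example joint state are illustrative values, not the anthropometric parameters or trajectories used in the study.

```python
import numpy as np

# Illustrative link parameters (not the study's anthropometric data)
m1, m2 = 2.0, 1.5          # link masses [kg]
l1 = 0.30                  # upper-arm length [m]
lc1, lc2 = 0.15, 0.17      # distances to the link centres of mass [m]
I1, I2 = 0.02, 0.015       # link inertias about their centres of mass [kg m^2]
g = 9.81                   # set to 0.0 for an orbital (micro-gravity) scenario

def arm_torques(q, dq, ddq):
    """Inverse dynamics of a planar 2-link arm: tau = M(q) ddq + C(q, dq) dq + g(q)."""
    q1, q2 = q
    # inertia matrix M(q)
    m11 = m1*lc1**2 + I1 + I2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*np.cos(q2))
    m12 = I2 + m2*(lc2**2 + l1*lc2*np.cos(q2))
    m22 = I2 + m2*lc2**2
    M = np.array([[m11, m12], [m12, m22]])

    # Coriolis / centrifugal terms
    h = m2*l1*lc2*np.sin(q2)
    C = np.array([[-h*dq[1], -h*(dq[0] + dq[1])],
                  [ h*dq[0], 0.0]])

    # gravity terms
    grav = np.array([(m1*lc1 + m2*l1)*g*np.cos(q1) + m2*lc2*g*np.cos(q1 + q2),
                     m2*lc2*g*np.cos(q1 + q2)])

    return M @ ddq + C @ dq + grav

# Example: mid-reach posture with modest joint velocities and accelerations
tau = arm_torques(q=np.array([0.4, 0.8]),
                  dq=np.array([0.5, -0.3]),
                  ddq=np.array([1.0, 0.5]))
print("shoulder/elbow torques [N m]:", tau)
```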

Keywords: extra vehicular activity, biomechanics, inverse kinematics, human body modeling

Procedia PDF Downloads 341
867 The COVID Pandemic at a Level III Trauma Center: Challenges in the Management of Spine Trauma

Authors: Joana Pascoa Pinheiro, David Goncalves Ferreira, Filipe Ramos, Joaquim Soares Do Brito, Samuel Martins, Marco Sarmento

Abstract:

Introduction: The SARS-CoV-2 (COVID-19) pandemic was identified in January 2020 in China, in the city of Wuhan. The increase in the number of cases over the following months was responsible for the restructuring of hospitals and departments in order to accommodate admissions related to COVID-19. Essential services, such as trauma, had to readapt to maintain their functionality and thus guarantee quick and safe access in case of an emergency. Objectives: This study describes the impact of COVID-19 on a Level III Trauma Center and particularly on the clinical management of hospitalized patients with spine injuries. Study Design & Methods: This is a retrospective cohort study whose results were obtained through the medical records of patients with spine injuries who underwent surgical intervention in the years 2019 and 2020 (period from March 1st to December 31st). A comparison between the two groups was made. Patients with injuries in the context of trauma who underwent surgery in the periods previously described were included. Patients hospitalized with a spine injury in a non-traumatic context and/or who were not surgically treated were excluded. Results: In total, 137 patients underwent spine trauma surgery, of which 71 (51.8%) were in 2019, with no significant differences in intergroup comparisons. The most frequent injury mechanism in 2019 was a motor vehicle crash (47.9%), whereas in 2020 it was a fall from a height of 2-4 meters (37.9%). Cervical trauma was reported to be the most frequent spine injury in both years. There was a significant decrease in the need for intensive care in 2020 (51.4% vs 30.3%, p = .015), and the number of complications was also lower in 2020 (1.35% vs 0.98%), including the number of deaths, with the difference being marginally significant. There were no significant differences regarding time to surgery or total days of hospitalization. Conclusions: The restructuring of the trauma unit at a Level III Trauma Center in the context of the current COVID-19 pandemic was effective, with no significant differences between 2019 and 2020 in time to surgery or number of days of hospitalization. It was also found that the lockdown rules in 2020 were probably responsible for the decrease in the number of road traffic accidents, which explains the significant decrease in the need for intensive care as well as in the number of complications among patients hospitalized for spine trauma.
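
The abstract does not name the statistical tests used. As a purely illustrative sketch, a between-year comparison of a categorical outcome such as the need for intensive care could be run on a 2x2 table with a chi-square test, as below; the counts are rough back-calculations from the reported percentages and group sizes and must not be treated as the study's raw data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table (needed intensive care vs not, by year); counts are
# illustrative approximations of the reported proportions, not the study data
table = [[37, 71 - 37],   # 2019
         [20, 66 - 20]]   # 2020

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
```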

Keywords: trauma, spine, impact, covid-19

Procedia PDF Downloads 254