Search results for: implied adjusted volatility
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 986

326 Extent of Derivative Usage, Firm Value and Risk: An Empirical Study on Pakistan Non-Financial Firms

Authors: Atia Alam

Abstract:

Growing liberalisation and intense market competition increase firms' risk exposure and induce corporations to use derivatives extensively as a risk management instrument, which reduces firm risk and increases firm value. The present study contributes to the existing literature by providing an in-depth analysis of the effect of the extent of derivative usage on firm risk and value, using panel data models and the seemingly unrelated regression (SUR) technique. New evidence is added to the current literature by dividing the sample based on firms' Exchange Rate (ER) and Interest Rate (IR) exposure. The analysis examines the effect of the extent of derivative usage on firm risk and value and how it varies with ER and IR exposure. The sample consists of 166 Pakistani firms listed on the Pakistan Stock Exchange over the period 2004-2010. Results show that extensive usage of derivative instruments significantly increases firm value and reduces firm risk. Furthermore, the analysis shows that Pakistani corporations with higher exchange rate exposure (measured by foreign sales) and higher interest rate exposure (measured by industry-adjusted leverage) have higher firm value and lower risk. Findings from seemingly unrelated regression also provide robustness to the results obtained through panel data analysis. The study further highlights the role of derivative usage as a risk management instrument under high and low ER and IR risk, and helps practitioners understand how the value-increasing effect of derivative usage varies with the intensity of a firm's risk exposure.

Keywords: extent of derivative usage, firm value, risk, Pakistan, non-financial firms

Procedia PDF Downloads 334
325 Statistical Analysis and Optimization of a Process for CO2 Capture

Authors: Muftah H. El-Naas, Ameera F. Mohammad, Mabruk I. Suleiman, Mohamed Al Musharfy, Ali H. Al-Marzouqi

Abstract:

CO2 capture and storage technologies play a significant role in controlling climate change by reducing carbon dioxide emissions into the atmosphere. The present study evaluates and optimizes CO2 capture through a process in which carbon dioxide is passed into pH-adjusted high-salinity water and reacted with sodium chloride to form a precipitate of sodium bicarbonate. The process is based on a modified Solvay process with higher CO2 capture efficiency, higher sodium removal, and a higher pH level, without the use of ammonia. The process was tested in a semi-batch bubble column reactor and was optimized using response surface methodology (RSM). CO2 capture efficiency and sodium removal were optimized in terms of the major operating parameters based on four levels and variables in a Central Composite Design (CCD). The operating parameters were gas flow rate (0.5-1.5 L/min), reactor temperature (10-50 °C), buffer concentration (0.2-2.6%) and water salinity (25-197 g NaCl/L). The experimental data were fitted to a second-order polynomial using multiple regression and analyzed using analysis of variance (ANOVA). The optimum values of the selected variables were obtained using a response optimizer. The optimum conditions were tested experimentally using desalination reject brine with salinity ranging from 65,000 to 75,000 mg/L. The CO2 capture efficiency in 180 min was 99%, and the maximum sodium removal was 35%. The experimental and predicted values were within the 95% confidence interval, which demonstrates that the developed model can successfully predict capture efficiency and sodium removal using the modified Solvay method.
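The second-order (quadratic) response-surface fit described above can be sketched with ordinary least squares. The CCD design points, responses and the choice of only two of the four factors below are illustrative assumptions, not the study's data:

```python
import numpy as np

# Hypothetical CCD points in coded levels for two of the four factors:
# x1 = gas flow rate, x2 = reactor temperature. Responses (capture
# efficiency, %) are invented for illustration only.
X_raw = np.array([
    [-1, -1], [1, -1], [-1, 1], [1, 1],             # factorial points
    [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41],   # axial points
    [0, 0], [0, 0], [0, 0],                         # center replicates
])
y = np.array([72, 80, 75, 90, 70, 88, 74, 85, 95, 94, 96], dtype=float)

# Second-order model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
x1, x2 = X_raw[:, 0], X_raw[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)

def predict(p1, p2):
    """Evaluate the fitted quadratic surface at coded levels (p1, p2)."""
    return float(coef @ np.array([1.0, p1, p2, p1 * p2, p1**2, p2**2]))

print("fitted coefficients:", np.round(coef, 2))
print("predicted efficiency at center point:", round(predict(0, 0), 1))
```

In an RSM workflow the fitted coefficients would then feed an ANOVA and a response optimizer, as the abstract describes; here the intercept alone already recovers the center-point response.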

Keywords: CO2 capture, water desalination, Response Surface Methodology, bubble column reactor

Procedia PDF Downloads 263
324 Influence of Organic Supplements on Shoot Multiplication Efficiency of Phaius tankervilleae var. alba

Authors: T. Punjansing, M. Nakkuntod, S. Homchan, P. Inthima, A. Kongbangkerd

Abstract:

The influence of organic supplements on the growth and multiplication efficiency of Phaius tankervilleae var. alba seedlings was investigated. Twelve-week-old seedlings were cultured on half-strength semi-solid Murashige and Skoog (MS) medium supplemented with 30 g/L sucrose, 8 g/L agar and various concentrations of coconut water (0, 50, 100, 150 and 200 mL/L) combined with potato extract (0, 25 and 50 g/L); the pH was adjusted to 5.8 prior to autoclaving. The cultures were kept under a constant photoperiod (16 h light : 8 h dark) at 25 ± 2 °C for 12 weeks. The highest number of shoots (3.0 shoots/explant) was obtained on the medium with 50 mL/L coconut water and 50 g/L potato extract, whereas the highest numbers of leaves (5.9 leaves/explant) and roots (6.1 roots/explant) were obtained on the medium supplemented with 150 mL/L coconut water and 50 g/L potato extract. Additionally, plantlets of P. tankervilleae var. alba were transferred to seven different substrates, i.e. soil, sand, coconut husk chip, soil-sand mix (1:1), soil-coconut husk chip mix (1:1), sand-coconut husk chip mix (1:1) and soil-sand-coconut husk chip mix (1:1:1), for four weeks. Acclimatized plants showed 100% survival when sand, coconut husk chip or the sand-coconut husk chip mix was used as the substrate. The number of leaves induced by the sand-coconut husk chip mix was significantly higher than in the other substrates (P < 0.05), whereas no significant difference in new shoot formation among the substrates was observed (P > 0.05). This preliminary protocol is likely to be applicable to larger-scale plant production as well as germplasm conservation of this orchid species.

Keywords: organic supplements, acclimatization, Phaius tankervilleae var. alba, orchid

Procedia PDF Downloads 197
323 Women Empowerment, Joint Income Ownership and Planning for Building Household Resilience on Climate Change: The Case of Kilimanjaro Region, Tanzania

Authors: S. I. Mwasha, Z. Robinson, M. Musgrave

Abstract:

Communities, especially in the Global South, have been reported to have low adaptive capacity to cope with climate change impacts. In attempts to improve adaptive capacity, most studies have focused on understanding household access to resources that can contribute to resilience against changes. However, little attention has been paid to uncovering how household resources are actually used and the implications of this use for resilience against weather-related shocks. Using a qualitative case-study approach, this project analyzed trends in livelihood practices and their implications for social equity. The study was conducted in three villages within Kilimanjaro region, each in a different agro-ecological zone. Two focus group discussions were held in each of two agro-ecological zones, one for women and one for men, except in the third zone, where participants were combined (due to unforeseen circumstances). In the focus group discussions, several participatory rural appraisal tools were used to understand trends in crop and animal production and the uses to which they are put, as well as climate trends, soil fertility, trees and other livelihood resources. Data were analyzed using thematic network analysis. Using a combination of magnitude coding (to note whether comments were positive or negative) and descriptive coding (to note the topic), six basic themes were identified under social equity: individual ownership, family ownership, love and respect, women's lack of education, women's access to education, and women's access to loans. The results implied that although both mothers and fathers provided labor in the agro-pastoral activities, there were separations in who owned what, as well as in individual obligations within the family. Fathers mostly owned the income-generating crops, and mothers the food crops; therefore, men controlled the economy, which made some of them arrogant and led them to spend money on their own interests, sometimes without taking care of the family.
Separation in ownership was reported to contribute to conflicts in the household and to cause controversy over how income is spent. Men were reported to use income to reinforce the patriarchal system. However, as women were empowered through access to education and loans, they became closer to their husbands and gained the ability to own and plan income jointly in the interest of the family. Joint ownership and planning of household resources were reported to be important if families are to adapt better to climate change. The aim of this study is not to present women's empowerment and joint ownership and planning as the only remedy for low adaptive capacity; there is a need to understand other practices that directly or indirectly affect environmental integrity, food security and economic development for household resilience against a changing climate.

Keywords: adaptive capacity, climate change, resilience, women empowerment

Procedia PDF Downloads 145
322 Geosynthetic Containment Systems for Coastal Protection: An Indian Perspective

Authors: Tom Elias, Kiran G. Shirlal

Abstract:

Coastal erosion is one of the major issues faced by maritime countries globally. More than 1200 km of the Indian coastline is marked as eroding. There have been numerous attempts to impede the erosion rate and to attain equilibrium beach profiles. The high cost and unavailability of natural rock have forced coastal engineers to find alternatives to conventional hard options such as seawalls and groynes. Geosynthetic containment systems, which emerged in the mid-20th century, have proved promising for coastal protection in countries such as Australia, Germany and the United States. The present study reviews the Indian timeline of protection works that use geosynthetic containment systems. Indian exploration of geosynthetic containment systems dates back to the early 2000s. Generally, protection structures use geosynthetics in the form of geotubes, geocontainers and geobags, with geotubes being the most widely used, in the form of submerged reefs, seawalls, groynes and breakwaters. Sand and dredged waste are used to fill these containment systems at a calculated sand-fill ratio. Reviewing the prominent protection works constructed on the east and west coasts of India provides insight into the benefits of, and the difficulties faced during, practical installation. Initially, geosynthetic structures were considered a temporary protection method prior to the construction of some other hard structure. Later, experience at Dahanu, Hamala and Pentha helped establish geotubes as an alternative to conventional structures. Nearshore geotube reefs intended to attain equilibrium beaches served their purpose at Hamala and Dahanu, Maharashtra, while the reef constructed at Candolim, Goa underwent serious damage due to toe scour. In situ filling by pumping sand slurry, as in the case of the Shankarpur seawall, West Bengal, remains a major concern.
Supplementing geosynthetic systems with gabions and rock armour improves wave dissipation, stability and reflection characteristics, as demonstrated at Pentha Coast, Odisha; Hazira, Gujarat; and Uppada, Andhra Pradesh. Setting aside improper design and deliberate destruction by vandals, geosynthetic containment systems offer a cost-effective alternative to conventional coastal protection methods in India. Additionally, geosynthetics support marine growth on their surface, which enhances their appeal as an eco-friendly material and encourages their use.

Keywords: coastal protection, geotubes, geobags, geocontainers

Procedia PDF Downloads 131
321 The Influence of Dietary Components on Acne; A Case-Control Survey

Authors: Atiya Mahmood, Mubasharah Hanif, Ghazala Butt, Mehwish Zahoor Ahmed

Abstract:

Acne vulgaris affects millions of adults. Despite extensive research, its diet-related etiology remains elusive. Objective: To assess the correlation between dietary intake and acne through a case-control survey of 300 respondents aged 15-25 years living in Pakistan. 50 acne patients and 150 age- and ethnicity-matched controls completed a questionnaire. Cases and controls were separated using SPSS-22, and univariate analysis was performed using the chi-square test; a p-value < 0.05 was considered statistically significant. Adjusted odds ratios with 95% confidence intervals were used to assess the strength of associations. Most of the respondents were female (91.3%), and most acne patients (48.7%) were 20-25 years old. Acne severity was mild in 50%, moderate in 34%, severe in 14% and very severe in 2%. Frequent consumption of low-fat foods (p < 0.001, OR = 3.22), fat intake (p = 0.03, OR = 1.629), sweet snacks such as biscuits and candies (p = 0.013, OR = 1.9254), soft drinks (p = 0.045, OR = 1.9091), butter (p < 0.001, OR = 1.8185), dairy products (p = 0.043, OR = 0.624), salty foods (p = 0.011, OR = 1.961) and chocolate (p = 0.028, OR = 1.669) was associated with acne risk. No association was found with consumption of fried foods, desserts, fruit juices, raw fruit, fast food, vegetables, cheese, soy products, salt or corn. Increased butter and chocolate consumption was linked to more severe forms of acne (p = 0.049 and p = 0.005, respectively). Most respondents (n = 218) considered themselves to have healthy eating habits, indicating that they were not educated about the nutritional aspects of acne treatment. Intake of certain food items was significantly higher in acne patients, suggesting an association between the two. Further studies must be conducted to establish a causative relationship. Nutrition awareness is critical to reducing acne.
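The univariate measures used above, an odds ratio with a Woolf 95% confidence interval and a Pearson chi-square test on a 2x2 table, can be computed from first principles. The cell counts below are hypothetical, not taken from the study:

```python
import math

# Hypothetical 2x2 exposure table (counts are illustrative only):
#                      acne    no acne
# frequent chocolate   a=30    b=70
# rare chocolate       c=20    d=110
a, b, c, d = 30, 70, 20, 110

# Odds ratio with Woolf 95% confidence interval on the log scale
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

# Pearson chi-square statistic for the 2x2 table (1 degree of freedom);
# values above 3.84 correspond to p < 0.05.
n = a + b + c + d
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), chi2 = {chi2:.2f}")
```

A CI that excludes 1 and a chi-square exceeding the 3.84 critical value would, as in the abstract, flag the food item as significantly associated with acne.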

Keywords: correlation between dietary components and acne, dietary components, acne, nutrition

Procedia PDF Downloads 36
320 Structuring Highly Iterative Product Development Projects by Using Agile-Indicators

Authors: Guenther Schuh, Michael Riesener, Frederic Diels

Abstract:

Nowadays, manufacturing companies are faced with the challenge of meeting heterogeneous customer requirements in short product life cycles with a variety of product functions. Often, some of the functional requirements remain unknown until late stages of product development. One way to handle these uncertainties is the highly iterative product development (HIP) approach: by structuring the development project as a highly iterative process, this method delivers customer-oriented and marketable products. There are first approaches to combined, hybrid models comprising deterministic-normative methods, such as the Stage-Gate process, and empirical-adaptive development methods, such as Scrum, at the project management level. However, the question of which development scopes are preferably realized with empirical-adaptive versus deterministic-normative approaches has remained almost unconsidered. In this context, a development scope constitutes a self-contained section of the overall development objective. This paper therefore focuses on a methodology that deals with the uncertainty of requirements within the early development stages and the corresponding selection of the most appropriate development approach. For this purpose, internal influencing factors, such as a company's technology ability, prototype manufacturability and the potential solution space, as well as external factors, such as market accuracy, relevance and volatility, are analyzed and combined into an Agile-Indicator. The Agile-Indicator is derived in three steps. First, each internal and external factor is rated in terms of its importance for the overall development task. Second, each requirement is evaluated for every internal and external factor with respect to its suitability for empirical-adaptive development. Finally, the totals of the internal and external sides are combined into the Agile-Indicator.
Thus, the Agile-Indicator constitutes a company-specific and application-related criterion by which development scopes can be allocated to empirical-adaptive or deterministic-normative approaches. In a last step, this indicator is used for a specific clustering of development scopes by applying the fuzzy c-means (FCM) clustering algorithm. The FCM method determines sub-clusters within functional clusters based on the empirical-adaptive environmental impact of the Agile-Indicator. By means of the methodology presented in this paper, it is possible to classify requirements whose market reception is uncertain into empirical-adaptive or deterministic-normative development scopes.
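The three-step derivation of the Agile-Indicator can be sketched as a weighted sum over the internal and external factors named above. The weights and suitability ratings below are illustrative assumptions, not the authors' values:

```python
# Step 1: importance weights for each internal and external factor
# (hypothetical values; each side's weights sum to 1).
internal_weights = {"technology ability": 0.40,
                    "prototype manufacturability": 0.35,
                    "solution space": 0.25}
external_weights = {"market accuracy": 0.30,
                    "market relevance": 0.30,
                    "market volatility": 0.40}

# Step 2: suitability ratings of one requirement for empirical-adaptive
# development on each factor (scale 0..1; higher = more agile-suited).
internal_ratings = {"technology ability": 0.8,
                    "prototype manufacturability": 0.6,
                    "solution space": 0.9}
external_ratings = {"market accuracy": 0.3,
                    "market relevance": 0.7,
                    "market volatility": 0.9}

def weighted_sum(weights, ratings):
    """Importance-weighted total of one side's factor ratings."""
    return sum(weights[f] * ratings[f] for f in weights)

# Step 3: compose the internal and external sides into the Agile-Indicator.
agile_indicator = (weighted_sum(internal_weights, internal_ratings)
                   + weighted_sum(external_weights, external_ratings))
print(f"Agile-Indicator: {agile_indicator:.3f}")
```

A higher indicator value would place the requirement in an empirical-adaptive development scope; the subsequent FCM clustering step described in the abstract then operates on these per-requirement indicator values.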

Keywords: agile, highly iterative development, agile-indicator, product development

Procedia PDF Downloads 221
319 Impact of Helicobacter pylori Infection on Colorectal Adenoma-Colorectal Carcinoma Sequence

Authors: Jannis Kountouras, Nikolaos Kapetanakis, Stergios A. Polyzos, Apostolis Papaeftymiou, Panagiotis Katsinelos, Ioannis Venizelos, Christina Nikolaidou, Christos Zavos, Iordanis Romiopoulos, Elena Tsiaousi, Evangelos Kazakos, Michael Doulberis

Abstract:

Background & Aims: Helicobacter pylori infection (Hp-I) has been recognized as a substantial risk agent involved in gastrointestinal (GI) tract oncogenesis by stimulating cancer stem cells (CSCs) and oncogenes, affecting immune surveillance processes, and triggering GI microbiota dysbiosis. We aimed to investigate the possible involvement of active Hp-I in the sequence: chronic inflammation–adenoma–colorectal cancer (CRC). Methods: Four pillars were investigated: (i) endoscopic and conventional histological examination of patients with CRC or colorectal adenomas (CRA) versus controls to detect the presence of active Hp-I; (ii) immunohistochemical determination of the presence of Hp, of the expression of CD44, an indicator of CSCs and/or bone marrow-derived stem cells (BMDSCs), and of the expression of the proliferation marker Ki67 and the anti-apoptotic Bcl-2 protein; (iii) expression of CD45, an indicator of local immune surveillance (mainly assessing T and B lymphocytes locally); and (iv) correlation of the studied parameters with the presence or absence of Hp-I. Results: Among 50 patients with CRC, 25 with CRA and 10 controls, a significantly higher prevalence of Hp-I was found in the CRA (68%) and CRC (84%) groups compared with controls (30%). The presence of Hp-I with accompanying immunohistochemical expression of CD44 in biopsy specimens was revealed in a high proportion of CRA patients with moderate/severe dysplasia (88%) and CRC patients with a moderate/severe degree of malignancy (91%). Comparable results were obtained for Ki67, Bcl-2 and CD45 immunohistochemical expression.
Concluding Remarks: Hp-I appears to be involved in the sequence CRA–dysplasia–CRC, similarly to upper GI tract oncogenesis, through several pathways. Beyond Hp-I-associated insulin resistance, the major underlying mechanism of the metabolic syndrome (MetS) that increases the risk of colorectal neoplasms, as implied by other Hp-I-related MetS pathologies such as non-alcoholic fatty liver disease and upper GI cancer, the disturbance of the normal GI microbiota (i.e., dysbiosis) and the formation of an irritative biofilm could contribute to perpetual inflammatory damage of the upper GI tract and colon mucosa, stimulating CSCs or recruiting BMDSCs and affecting oncogenes and immune surveillance processes. Further large-scale studies with a pathophysiological perspective are necessary to demonstrate this relationship in depth.

Keywords: Helicobacter pylori, colorectal cancer, colorectal adenomas, gastrointestinal oncogenesis

Procedia PDF Downloads 118
318 Adjusting Electricity Demand Data to Account for the Impact of Loadshedding in Forecasting Models

Authors: Migael van Zyl, Stefanie Visser, Awelani Phaswana

Abstract:

The electricity landscape in South Africa is characterized by frequent loadshedding, a measure implemented by Eskom to manage electricity generation shortages by curtailing demand. Loadshedding, classified into stages ranging from 1 to 8 by severity, involves the systematic rotation of power cuts across municipalities according to predefined schedules. However, this practice distorts recorded electricity demand, posing challenges to the accurate forecasting essential for budgeting, network planning and generation scheduling. Addressing this challenge requires a methodology to quantify the impact of loadshedding and integrate it back into metered electricity demand data. Fortunately, comprehensive records of loadshedding impacts are maintained in a database, enabling the alignment of loadshedding effects with hourly demand data. This adjustment ensures that forecasts accurately reflect true demand patterns, independent of loadshedding's influence, thereby enhancing the reliability of electricity supply management in South Africa. This paper presents a methodology for determining the hourly impact of loadshedding and subsequently adjusting historical demand data to account for it. Furthermore, two forecasting models are developed: one using the original dataset and the other using the adjusted data. A comparative analysis evaluates the forecast accuracy improvements resulting from the adjustment. By implementing this methodology, stakeholders can make more informed decisions regarding electricity infrastructure investment, resource allocation and operational planning, contributing to the overall stability and efficiency of South Africa's electricity supply system.
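The core adjustment step, adding the recorded loadshedding impact back onto metered hourly demand, can be sketched as below. Timestamps, magnitudes and field names are hypothetical, not the paper's data:

```python
# Metered hourly system demand (MW) as recorded; values are invented.
metered_demand_mw = {
    "2023-06-01 17:00": 28_500,
    "2023-06-01 18:00": 27_900,
    "2023-06-01 19:00": 29_100,
}

# Curtailed load per hour (MW) from the loadshedding records database;
# hours absent from this dict had no loadshedding in effect.
loadshedding_mw = {
    "2023-06-01 18:00": 2_000,   # e.g. stage 2 in effect
    "2023-06-01 19:00": 1_000,   # e.g. stage 1 in effect
}

# Unconstrained ("true") demand = metered demand + curtailed load
true_demand_mw = {hour: mw + loadshedding_mw.get(hour, 0)
                  for hour, mw in metered_demand_mw.items()}

for hour, mw in sorted(true_demand_mw.items()):
    print(hour, mw)
```

The adjusted series, rather than the metered one, would then be fed to the forecasting model so that it learns true demand patterns rather than curtailment artifacts.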

Keywords: electricity demand forecasting, load shedding, demand side management, data science

Procedia PDF Downloads 33
317 Effect of Cutting Tools and Working Conditions on the Machinability of Ti-6Al-4V Using Vegetable Oil-Based Cutting Fluids

Authors: S. Gariani, I. Shyha

Abstract:

Cutting titanium alloys is usually accompanied by low productivity, poor surface quality, short tool life and high machining costs. This is due to the excessive heat generated at the cutting zone and the difficulty of dissipating it, given the relatively low thermal conductivity of the metal. Cooling is crucial in machining processes, as many operations cannot be performed efficiently without it. Improved machinability, increased productivity, and enhanced surface integrity and part accuracy are the main advantages of cutting fluids. Conventional fluids, such as mineral-oil-based, synthetic and semi-synthetic fluids, are the most common cutting fluids in the machining industry. Although these cutting fluids are beneficial to industry, they pose a great threat to human health and the ecosystem. Vegetable oils (VOs) are being investigated as a potential source of environmentally favourable lubricants, owing to a combination of biodegradability, good lubricating properties, low toxicity, high flash points, low volatility, high viscosity indices and thermal stability. The fatty acids of vegetable oils are known to provide thick, strong and durable lubricant films, which give the vegetable-oil base stock a greater capability to absorb pressure and a high load-carrying capacity. This paper details preliminary experimental results for turning Ti-6Al-4V. The impact of various VO-based cutting fluids, cutting tool materials and working conditions was investigated. A full factorial experimental design involving 24 tests was employed to evaluate the influence of the process variables on average surface roughness (Ra), tool wear and chip formation. In general, Ra varied between 0.5 and 1.56 µm; the Vasco1000 cutting fluid performed comparably with the other fluids in terms of surface roughness, while the uncoated coarse-grain WC carbide tool achieved lower flank wear at all cutting speeds.
On the other hand, all tool tips were subjected to uniform flank wear throughout the cutting trials. Additionally, the formed chip thickness ranged between 0.1 and 0.14 mm, with a noticeable decrease in chip size at higher cutting speeds.

Keywords: cutting fluids, turning, Ti-6Al-4V, vegetable oils, working conditions

Procedia PDF Downloads 250
316 Linguistic Competence Analysis and the Development of Speaking Instructional Material

Authors: Felipa M. Rico

Abstract:

Linguistic oral competence plays a vital role in attaining effective communication. Since English is a universally used language and a highly demanded skill in the workplace, mastery is the expected output from learners. To achieve this, learners should be given integrated, differentiated tasks that help them develop and strengthen the expected skills. This study aimed to develop supplementary speaking instructional material to enhance the English linguistic competence of Grade 9 students in the areas of pronunciation, intonation and stress, voice projection, diction and fluency. A descriptive analysis was utilized to analyze the students' level of speaking performance in order to employ appropriate strategies. There were two sets of respondents: 178 Grade 9 students selected through stratified random sampling, and the English teachers who evaluated the usefulness of the devised teaching materials. A teacher-conducted speaking test and activities were employed to analyze the speaking needs of the students, and observation and recordings were used to evaluate their performance. The findings revealed that the students' English pronunciation was slightly unclear at times but generally fair. There were lapses in intonation and stress, rated moderate overall, because of interference from other languages. In terms of voice projection, the students exhibited an erratically high-volume pitch. In diction, their ability to produce comprehensible language was limited, and in fluency, their choice of vocabulary and use of structures were severely limited. Based on the analysis of the students' speaking needs, the supplementary material was devised following Nunan's instructional material (IM) model, incorporating contexts of daily life and global work settings, on the principle that language is best learned in actual meaningful situations.
To widen skill mastery, a rich learning environment filled with a variety of instructional materials tends to foster faster acquisition of the requisite skills for sustained learning and development. The role of the IM is to help information stick in the learners' minds, as what is seen is understood better than what is merely heard. Teachers found the IM "very useful," implying that English teachers could adopt the materials to improve students' speaking skills. Further, teachers should provide varied opportunities for students to engage in real-life situations where they can take turns asking and answering questions and share information related to the activities. This would minimize anxiety among students in using the English language.

Keywords: diction, fluency, intonation, instructional materials, linguistic competence

Procedia PDF Downloads 215
315 Importance of Different Spatial Parameters in Water Quality Analysis within Intensive Agricultural Area

Authors: Marina Bubalo, Davor Romić, Stjepan Husnjak, Helena Bakić

Abstract:

Even though European Council Directive 91/676/EEC, known as the Nitrates Directive, was adopted in 1991, the issue of preserving water quality in areas of intensive agricultural production still persists all over Europe. High nitrate nitrogen concentrations in surface water and groundwater originating from diffuse sources are among the most important environmental problems in modern intensive agriculture. The fate of nitrogen in soil, surface water and groundwater in agricultural areas is mostly affected by anthropogenic activity (i.e., agricultural practice) and by hydrological and climatological conditions. The aim of this study was to identify the impact of land use, soil type, soil vulnerability to pollutant percolation, and natural aquifer vulnerability on nitrate occurrence in surface water and groundwater within an intensive agricultural area. The study was set in Varaždin County (northern Croatia), which is under the significant influence of the large rivers Drava and Mura; consequently, the entire area is dominated by alluvial soil with a shallow active profile, mainly on a gravel base. The negative agricultural impact on water quality in this area is evident, and half of the county is part of the delineated nitrate vulnerable zones (NVZ). Water quality data were collected from 7 surface water and 8 groundwater monitoring stations in the county. A recent study of the area also included a detailed inventory of agricultural production and fertilizer use, with the aim of producing a new agricultural land-use database as one of the dominant parameters. Analysis of this database using ArcGIS 10.1 showed that 52.7% of the total county area is agricultural land and that 59.2% of the agricultural land is used for intensive agricultural production. On the other hand, 56% of the soil within the county is classified as vulnerable to pollutant percolation. The situation is similar for natural aquifer vulnerability: the northern part of the county ranges from high to very high aquifer vulnerability.
Statistical analysis of the water quality data was done using SPSS 13.0. Cluster analysis grouped both the surface water and groundwater stations into two groups according to nitrate nitrogen concentrations. The mean nitrate nitrogen concentration in surface water ranged from 4.2 to 5.5 mg/L in group 1 and from 24 to 42 mg/L in group 2. The results were similar, but evidently higher, in the groundwater samples: mean nitrate nitrogen concentrations ranged from 3.9 to 17 mg/L in group 1 and from 36 to 96 mg/L in group 2. ANOVA confirmed the statistical significance of the grouping of stations. The listed parameters (land use, soil type, etc.) were then used in factorial correspondence analysis (FCA) to detect the importance of each parameter for local water quality. Since these parameters mostly cannot be altered, there is an obvious need for more precise and better-adapted land management under such conditions.
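The two-group clustering of stations by mean nitrate nitrogen concentration can be illustrated with a simple one-dimensional k-means (k = 2); the abstract does not name the exact clustering algorithm used, and the station labels and values below are hypothetical:

```python
# Hypothetical mean nitrate nitrogen concentrations (mg/L) per station.
concentrations = {"S1": 4.2, "S2": 5.5, "S3": 24.0, "S4": 42.0, "S5": 4.8}

def kmeans_1d_two_groups(data, iters=50):
    """Split a dict of station -> value into two groups by 1-D k-means."""
    values = list(data.values())
    c1, c2 = min(values), max(values)          # initial centroids
    for _ in range(iters):
        # Assign each station to its nearest centroid (ties go to group 1)
        g1 = {k: v for k, v in data.items() if abs(v - c1) <= abs(v - c2)}
        g2 = {k: v for k, v in data.items() if k not in g1}
        # Update centroids as group means
        c1 = sum(g1.values()) / len(g1)
        c2 = sum(g2.values()) / len(g2)
    return g1, g2

low, high = kmeans_1d_two_groups(concentrations)
print("low-nitrate group:", sorted(low))
print("high-nitrate group:", sorted(high))
```

The resulting group memberships would then be compared with ANOVA and carried into the factorial correspondence analysis described above.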

Keywords: agricultural area, nitrate, factorial correspondence analysis, water quality

Procedia PDF Downloads 241
314 A Retrospective Cross-Sectional Study on the Prevalence and Factors Associated with Virological Non-Suppression among HIV-Positive Adult Patients on Antiretroviral Therapy in Woliso Town, Oromia, Ethiopia

Authors: Teka Haile, Behailu Hawulte, Solomon Alemayehu

Abstract:

Background: HIV virological failure remains a problem in HIV/AIDS treatment and care. This study aimed to describe the prevalence of, and identify the factors associated with, viral non-suppression among HIV-positive adult patients on antiretroviral therapy in Woliso Town, Oromia, Ethiopia. Methods: A retrospective cross-sectional study was conducted among 424 HIV-positive patients attending antiretroviral therapy (ART) in Woliso Town during the period from August 25, 2020 to August 30, 2020. Data collected from patient medical records were entered into Epi Info version 2.3.2.1 and exported to SPSS version 21.0 for analysis. Logistic regression analysis was done to identify factors associated with viral load non-suppression, and the statistical significance of odds ratios was declared using 95% confidence intervals and a p-value < 0.05. Results: A total of 424 patients were included in this study. The mean age (± SD) of the study participants was 39.88 (± 9.995) years. The prevalence of HIV viral load non-suppression was 55 (13.0%), 95% CI (9.9-16.5). A second-line ART treatment regimen (adjusted odds ratio (AOR) = 8.98, 95% confidence interval (CI): 2.64, 30.58) and routine viral load testing (AOR = 0.01, 95% CI: 0.001, 0.02) were significantly associated with virological non-suppression. Conclusion: Virological non-suppression was high, which hinders achievement of the third global 95 target. The second-line regimen and routine viral load testing were significantly associated with virological non-suppression. This suggests the need to assess the effectiveness of antiretroviral drugs for epidemic control, and it clearly shows the need to decentralize third-line ART treatment for those patients in need.
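An adjusted odds ratio and its 95% CI follow directly from the logistic-regression coefficient and its standard error. The coefficient and standard error below are not from the study's model output; they are back-calculated from the reported AOR of 8.98 (2.64, 30.58) purely for illustration:

```python
import math

# Hypothetical logistic-regression output for the "second-line ART regimen"
# indicator: coefficient (log-odds) and its standard error, back-derived
# from the reported AOR so the numbers line up with the abstract.
beta = 2.195   # log-odds coefficient
se = 0.625     # standard error of beta

# AOR = exp(beta); 95% CI bounds = exp(beta +/- 1.96 * se)
aor = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)
print(f"AOR = {aor:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

Because the CI excludes 1, the association would be declared statistically significant at p < 0.05, matching the criterion stated in the methods.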

Keywords: virological non-suppression, HIV-positive, ART, Woliso town, Ethiopia

Procedia PDF Downloads 122
313 Association of Maternal Age, Ethnicity and BMI with Gestational Diabetes Prevalence in Multi-Racial Singapore

Authors: Nur Atiqah Adam, Mor Jack Ng, Bernard Chern, Kok Hian Tan

Abstract:

Introduction: Gestational diabetes (GDM) is a common pregnancy complication with short- and long-term health consequences for both mother and fetus. Factors such as a family history of diabetes mellitus, maternal obesity, maternal age, ethnicity, and parity have been reported to influence the risk of GDM. In a multi-racial country like Singapore, it is worthwhile to study GDM prevalence across different ethnicities. We aim to investigate the influence of ethnicity on GDM prevalence in Singapore. This is important as it may help us improve guidelines on GDM healthcare services according to significant risk factors unique to Singapore. Materials and Methods: Obstetric cohort data of 926 singleton deliveries in KK Women’s and Children’s Hospital (KKH) from 2011 to 2013 were obtained. Only patients aged 18 and above and without complicated pregnancies or chronic illnesses were included. Factors such as ethnicity, maternal age, parity, and maternal body mass index (BMI) at the booking visit were studied. A multivariable logistic regression model, adjusted for confounders, was used to determine which of these factors are significantly associated with an increased risk of GDM. Results: The overall GDM prevalence based on the WHO 1999 criteria and at-risk screening (race alone not a risk factor) was 8.86%. GDM rates were higher among women above 35 years old (15.96%), obese women (15.15%), and multiparous women (10.12%). Indians had a higher GDM rate (13.0%) compared to the Chinese (9.57%) and Malays (5.20%). However, in the multiple logistic regression model, the variables significantly related to GDM were maternal age (p < 0.001) and maternal BMI at the booking visit (p = 0.006). Conclusion: Maternal age (p < 0.001) and maternal booking BMI (p = 0.006) are the strongest risk factors for GDM. Ethnicity per se does not seem to have a significant influence on the prevalence of GDM in Singapore (p = 0.064). Hence, we should tailor guidelines on GDM healthcare services according to maternal age and booking BMI rather than ethnicity.

Keywords: ethnicity, gestational diabetes, healthcare, pregnancy

Procedia PDF Downloads 204
312 Interaction of Racial and Gender Disparities in Salivary Gland Cancer Survival in the United States: A Surveillance Epidemiology and End Results Study

Authors: Sarpong Boateng, Rohit Balasundaram, Akua Afrah Amoah

Abstract:

Introduction: Racial and gender disparities have each been found to be independently associated with salivary gland cancer (SGC) survival; however, to the best of our knowledge, there are no previous studies on the interplay of these social determinants in the prognosis of SGCs. The objective of this study was to examine the joint effect of race and gender on the survival of SGCs. Methods: We analyzed survival outcomes of 13,547 histologically confirmed cases of SGCs using the Surveillance, Epidemiology, and End Results (SEER) database (2004 to 2015). Multivariable Cox regression analysis and Kaplan-Meier curves were used to estimate hazard ratios (HR) after controlling for age, tumor characteristics, treatment type, and year of diagnosis. Results: 73.5% of the participants were White, 8.5% were Black, 10.1% were Hispanic, and 58.5% were male. Overall, males had poorer survival than females (HR = 1.16, p = 0.003). In the adjusted multivariable model, there were no significant differences in survival by race. However, the interaction of gender and race was statistically significant (p = 0.01) for Hispanic males. Thus, compared to White females (reference), Hispanic females had significantly better survival (HR = 0.53), while Hispanic males had worse survival outcomes (HR = 1.82) for SGCs. Conclusions: Our results show significant interactions between race and gender, with racial disparities varying across genders for SGC survival. This study indicates that racial and gender differences are crucial factors to be considered in the prognostic counseling and management of patients with SGCs. Biologic factors, tumor genetic characteristics, chemotherapy, lifestyle, environmental exposures, and socioeconomic and dietary factors are potential yet unproven reasons that could account for racial and gender differences in the survival of SGCs.
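A brief sketch of why the interaction term matters here: in a Cox model, the hazard ratio for a joint group (e.g., Hispanic male vs. the White-female reference) is the exponential of the sum of both main-effect coefficients plus the interaction coefficient, so it is not simply the product of the marginal HRs. The coefficients below are back-derived from the HRs reported in the abstract (0.53 for Hispanic females, 1.16 for males, 1.82 for Hispanic males) purely for illustration; they are not fitted values from the SEER data.

```python
import math

def joint_hr(b_race, b_sex, b_interaction=0.0):
    """Hazard ratio for a joint race-by-sex group vs. the reference group."""
    return math.exp(b_race + b_sex + b_interaction)

# Coefficients back-derived from the reported HRs (illustrative assumptions)
b_hispanic = math.log(0.53)  # Hispanic (female) vs. White female
b_male = math.log(1.16)      # male vs. female
b_inter = math.log(1.82) - b_hispanic - b_male  # implied interaction term

hr_no_interaction = joint_hr(b_hispanic, b_male)             # product of marginals, ~0.61
hr_with_interaction = joint_hr(b_hispanic, b_male, b_inter)  # matches the reported 1.82
```

The gap between the two values (~0.61 vs. 1.82) is exactly what a significant interaction term captures.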

Keywords: salivary, cancer, survival, disparity, race, gender, SEER

Procedia PDF Downloads 164
311 A Psychoanalytic Lens: Unmasked Layers of the Self among Post-Graduate Psychology Students in Surviving the COVID-19 Lockdown

Authors: Sharon Sibanda, Benny Motileng

Abstract:

The World Health Organisation (WHO) declared SARS-CoV-2 (COVID-19) a pandemic on the 11ᵗʰ of March 2020, with South Africa recording its first case on the 5ᵗʰ of March 2020. The rapidly spreading virus led the South African government to implement one of the strictest nationwide lockdowns globally, resulting in the closure of all institutions of higher learning effective March 18ᵗʰ 2020. This qualitative study therefore primarily aimed to explore whether post-graduate psychology students were in a state of a depleted or cohesive self following the psychological isolation of the COVID-19 risk-adjusted level 5 lockdown. Semi-structured interviews, conducted from a qualitative interpretive approach with N=6 psychology post-graduate students, facilitated a rich understanding of their intra-psychic experiences of the self. Thematic analysis of the data gathered from the interviews illuminated how students were forced into the self by the emotional isolation of hard lockdown, with the emergence of core psychic conflict often defended against through external self-object experiences. The findings also suggest that lockdown stripped this sample of psychology post-graduate students of their defensive escape from the inner self through external self-object distractions. The external self was stripped to the core of the internal self by the isolation of hard lockdown, thereby uncovering the psychic function of roles and defenses amalgamated throughout modern cultural consciousness that dictate self-functioning. The study suggests modelling reflexivity skills in the integration of internal and external self-experience dynamics as part of a training model for continued personal and professional development for psychology students.

Keywords: COVID-19, fragmentation, self-object experience, true/false self

Procedia PDF Downloads 30
310 Locus of Control, Metacognitive Knowledge, Metacognitive Regulation, and Student Performance in an Introductory Economics Course

Authors: Ahmad A. Kader

Abstract:

In the Principles of Microeconomics course taught during the Fall Semester 2019, 158 out of 179 students completed two questionnaires and a survey describing their demographic and academic profiles. The two questionnaires comprise the 29 items of the Rotter Locus of Control Scale and the 52 items of the Schraw and Dennison Metacognitive Awareness Scale. The 52 items consist of 17 items describing knowledge of cognition and 35 items describing the regulation of cognition. The paper is intended to show the combined influence of locus of control, metacognitive knowledge, and metacognitive regulation on student performance. The survey covers variables that have been tested and recognized in the economic education literature, including GPA, gender, age, course level, race, student classification, whether the course was required or elective, employment status, whether a high school economics course was taken, and attendance. Regression results show that of the economic education variables, GPA, classification, whether the course was required or elective, and attendance are the only significant influences on student grade. Of the educational psychology variables, the regression results show that the locus of control variable has a negative and significant effect, while the metacognitive knowledge variable has a positive and significant effect on student grade. Also, the adjusted R square value increased markedly with the addition of the locus of control, metacognitive knowledge, and metacognitive regulation variables to the regression equation. The t-test results also show that students who are internally oriented and high on the metacognitive knowledge scale significantly outperform students who are externally oriented and low on the metacognitive knowledge scale. The implication of these results for educators is discussed in the paper.
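Since the abstract leans on the rise in adjusted R-squared when the psychology variables enter the model, a quick sketch of that statistic may help. The R-squared values and predictor counts below are hypothetical, chosen only to show how adding genuinely informative regressors raises the adjusted figure despite the penalty for extra terms; only the sample size of 158 comes from the abstract.

```python
def adjusted_r2(r2, n, k):
    """Adjusted R-squared for a model with n observations and k predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

n = 158  # sample size reported in the abstract

# Hypothetical fits: baseline model vs. model adding the three psychology variables
base = adjusted_r2(r2=0.30, n=n, k=9)
extended = adjusted_r2(r2=0.42, n=n, k=12)
```

The penalty term (n - 1)/(n - k - 1) grows with k, so the adjusted value only increases when the added variables improve fit by more than chance alone would.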

Keywords: locus of control, metacognitive knowledge, metacognitive regulation, student performance, economic education

Procedia PDF Downloads 98
309 An Integrated Theoretical Framework on Mobile-Assisted Language Learning: User’s Acceptance Behavior

Authors: Gyoomi Kim, Jiyoung Bae

Abstract:

In the field of language education research, few attempts have been made to empirically examine learners’ acceptance behavior toward mobile-assisted language learning (MALL) and its related factors. This study is one of the few attempts to propose an integrated theoretical framework that explains MALL users’ acceptance behavior and its potential factors. Constructs from the technology acceptance model (TAM) and MALL research are tested in the integrated framework. Based on previous studies, a hypothetical model was developed. Four external variables related to MALL users’ acceptance behavior were selected: subjective norm, content reliability, interactivity, and self-regulation. The model also comprised four other constructs: two latent variables, perceived ease of use and perceived usefulness, were considered cognitive constructs; attitude toward MALL, an affective construct; and behavioral intention to use MALL, a behavioral construct. The participants were 438 undergraduate students enrolled in an intensive English program at one university in Korea. This particular program was held in January 2018, during the vacation period. The students were given eight hours of English classes each day from Monday to Friday for four weeks and were asked to complete MALL courses for practice outside the classroom; therefore, all participants experienced a blended MALL environment. The instrument was a self-response questionnaire, and each construct was measured by five questions. Once the questionnaire was developed, it was distributed to the participants at the closing ceremony of the intensive program in order to collect data from a large number of participants at one time. The data showed significant evidence to support the hypothetical model. The results, confirmed through structural equation modeling analysis, are as follows: First, the four external variables (subjective norm, content reliability, interactivity, and self-regulation) significantly affected perceived ease of use. Second, subjective norm, content reliability, self-regulation, and perceived ease of use significantly affected perceived usefulness. Third, perceived usefulness and perceived ease of use significantly affected attitude toward MALL. Fourth, attitude toward MALL and perceived usefulness significantly affected behavioral intention to use MALL. These results imply that the integrated framework of TAM and MALL could be useful when introducing a MALL environment to university students or adult English learners. All key constructs except interactivity showed significant relationships with one another and had direct and indirect impacts on MALL users’ acceptance behavior. Therefore, the constructs and validated metrics are valuable for language researchers and educators who are interested in MALL.

Keywords: blended MALL, learner factors/variables, mobile-assisted language learning, MALL, technology acceptance model, TAM, theoretical framework

Procedia PDF Downloads 205
308 An Analysis of Humanitarian Data Management of Polish Non-Governmental Organizations in Ukraine Since February 2022 and Its Relevance for Ukrainian Humanitarian Data Ecosystem

Authors: Renata Kurpiewska-Korbut

Abstract:

On the assumption that the use and sharing of data generated in humanitarian action constitute a core function of humanitarian organizations, the paper analyzes the position of the largest Polish humanitarian non-governmental organizations in the humanitarian data ecosystem in Ukraine and their approach to non-personal and personal data management since February 2022. Expert interviews and document analysis of non-profit organizations providing a direct response in the Ukrainian crisis context (the Polish Humanitarian Action, Caritas, the Polish Medical Mission, the Polish Red Cross, and the Polish Center for International Aid), combined with the theoretical perspective of contingency theory (whose central point is that the context, or a specific set of conditions, determines the way of behavior and the choice of methods of action), help to examine the significance of data complexity and of an adaptive approach to data management by relief organizations in the humanitarian supply chain network. The purpose of this study is to determine how well-established and accurate internal procedures and good practices of using and sharing data (including safeguards for sensitive data) by the surveyed organizations, which have comparable human and technological capabilities, are implemented and adjusted to Ukrainian humanitarian settings and data infrastructure. The study also poses the fundamental question of whether this crisis experience will have a determining effect on their future performance. The findings obtained indicate that Polish humanitarian organizations in Ukraine, which have their own unique codes of conduct and effective managerial data practices determined by contingencies, have limited influence on improving the situational awareness of other assistance providers in the data ecosystem, despite their attempts to undertake interagency work in the area of data sharing.

Keywords: humanitarian data ecosystem, humanitarian data management, Polish NGOs, Ukraine

Procedia PDF Downloads 69
307 Thinking Historiographically in the 21st Century: The Case of Spanish Musicology, a History of Music without History

Authors: Carmen Noheda

Abstract:

This text provides a reflection on the way of thinking about the study of the history of music by examining the production of historiography in Spain at the turn of the century. Based on concepts developed by the historical theorist Jörn Rüsen, the article focuses on the following aspects: the theoretical artifacts that structure the interpretation of the limits of writing the history of music, the narrative patterns used to give meaning to the discourse of history, and the orientation context that functions as a source of criteria of significance for both interpretation and representation. This analysis intends to show that historical music theory is not only a means to abstractly explore the complex questions connected to the production of historical knowledge, but also a tool for obtaining concrete images about the intellectual practice of professional musicologists. Writing about the historiography of contemporary Spanish music is a task that requires both a knowledge of the history that is being written and investigated, as well as a familiarity with current theoretical trends and methodologies that allow for the recognition and definition of the different tendencies that have arisen in recent decades. With the objective of carrying out these premises, this project takes as its point of departure the 'immediate historiography' in relation to Spanish music at the beginning of the 21st century. The hesitation that Spanish musicology has shown in opening itself to new anthropological and sociological approaches, along with its rigidity in the face of the multiple shifts in dynamic forms of thinking about history, have produced a standstill whose consequences can be seen in the delayed reception of the historiographical revolutions that have emerged in the last century. Methodologically, this essay is underpinned by Rüsen’s notion of the disciplinary matrix, which is an important contribution to the understanding of historiography. 
Combined with his parallel conception of differing paradigms of historiography, it is useful for analyzing the present-day forms of thinking about the history of music. Following these theories, the article will in the first place address the characteristics and identification of present historiographical currents in Spanish musicology to thereby carry out an analysis based on the theories of Rüsen. Finally, it will establish some considerations for the future of musical historiography, whose atrophy has not only fostered the maintenance of an ingrained positivist tradition, but has also implied, in the case of Spain, an absence of methodological schools and an insufficient participation in international theoretical debates. An update of fundamental concepts has become necessary in order to understand that thinking historically about music demands that we remember that subjects are always linked by reciprocal interdependencies that structure and define what it is possible to create. In this sense, the fundamental aim of this research departs from the recognition that the history of music is embedded in the conditions that make it conceivable, communicable and comprehensible within a society.

Keywords: historiography, Jörn Rüsen, Spanish musicology, theory of history of music

Procedia PDF Downloads 167
306 Layer-By-Layer Deposition of Poly(Ethylene Imine) Nanolayers on Polypropylene Nonwoven Fabric: Electrostatic and Thermal Properties

Authors: Dawid Stawski, Silviya Halacheva, Dorota Zielińska

Abstract:

The surface properties of many materials can be readily and predictably modified by the controlled deposition of thin layers containing appropriate functional groups, and this research area is now the subject of widespread interest. The layer-by-layer (lbl) method involves depositing oppositely charged layers of polyelectrolytes onto the substrate material, which are stabilized by strong electrostatic forces between adjacent layers. This type of modification affords products that combine the properties of the original material with the surface characteristics of the new external layers. Through an appropriate selection of the deposited layers, the surface properties can be precisely controlled and readily adjusted in order to meet the requirements of the intended application. In the presented paper, a variety of anionic (poly(acrylic acid)) and cationic (linear poly(ethylene imine)) polymers were successfully deposited onto the polypropylene nonwoven using the lbl technique. The chemical structure of the surface before and after modification was confirmed by reflectance FTIR spectroscopy, volumetric analysis, and selective dyeing tests. As a direct result of this work, new materials with greatly improved properties have been produced. For example, following the modification process, significant changes in the electrostatic activity of a range of novel nanocomposite materials were observed. The deposition of polyelectrolyte nanolayers was found to strongly accelerate the loss of electrostatically generated charges and to considerably increase the thermal resistance of the modified fabric (the difference in T50% is over 20°C). From our results, a clear relationship between the type of polyelectrolyte layer deposited onto the flat fabric surface and the properties of the modified fabric was identified.

Keywords: layer-by-layer technique, polypropylene nonwoven, surface modification, surface properties

Procedia PDF Downloads 412
305 Existence of Systemic Risk in Turkish Banking Sector: An Evidence from Return Distributions

Authors: İlhami Karahanoglu, Oguz Ceylan

Abstract:

As its well-known definition suggests, systemic risk refers to a downturn, or in very severe cases a collapse, of the whole economic system; in effect, it points to the contagion effects of defaults. Such a risk can be depicted by the famous Chinese game of falling domino stones. During and after the Bear Stearns and Lehman Brothers cases, it was well understood that systemic risk has a very strong effect in the financial services sector. In this study, we concentrate on the existence of systemic risk in the Turkish banking sector based upon the Halkbank case at the end of 2013, when there was a political turmoil in Turkey in which close relatives of senior politicians were involved in illegal trading activities. In that operation, the CEO of Halkbank was also arrested, and in the investigation Halkbank was considered part of such illegal actions. That operation had an impact on Halkbank's stock value, which decreased remarkably during that time interval; the distributional profile of the stock return changed and became more volatile as well as more skewed. In this study, the daily returns of 5 leading banks in the Turkish banking sector were used to obtain 48 return distributions (for each month, the stock value returns of the preceding 90 days are used) of the 5 banks for the period 12/2011-12/2013 (pre-operation period) and 12/2013-12/2015 (post-operation period). When those distributions are compared over time, interestingly, the distributions of the 5 other leading banks in Turkey, public or private, also had profiles different from the 2011-2013 period, just like Halkbank. Those 5 big banks, whose stock values are monitored with a sub-index on the Istanbul stock exchange (BIST) as BN10, had more skewed distributions just following the Halkbank stock return movement during the post-operation period, with lower mean values as well as higher volatility. In addition, the correlation between the stock value return distributions of the leading banks after the Halkbank case, where the returns are more skewed to the left, increased (measured on a monthly basis before and after the operation). The dependence between those banks was stronger when stock values were falling than under normal market conditions. Such a distributional effect of stock returns among the leading banks in Turkey, which holds in the down sub-market (financial/banking sector) condition, can be evaluated as evidence for the existence of a contagion effect and systemic risk.
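The left-skew the authors describe can be quantified with the standard Fisher-Pearson sample skewness of a return series. The sketch below uses only the Python standard library; the return series is invented for illustration (mostly small gains punctuated by a few large losses, the pattern the abstract attributes to the post-operation period).

```python
def skewness(returns):
    """Fisher-Pearson moment coefficient of skewness (g1) of a return series."""
    n = len(returns)
    mean = sum(returns) / n
    m2 = sum((r - mean) ** 2 for r in returns) / n  # second central moment
    m3 = sum((r - mean) ** 3 for r in returns) / n  # third central moment
    return m3 / m2 ** 1.5

# Invented daily returns: small gains with occasional large losses (left skew)
returns = [0.004, 0.006, 0.002, 0.005, -0.035, 0.003, 0.004, -0.028, 0.005, 0.006]
g1 = skewness(returns)  # negative: the distribution is skewed to the left
```

A negative g1 computed on a rolling 90-day window is one simple way to track the kind of distributional shift the study observes around the Halkbank case.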

Keywords: financial risk, systemic risk, banking sector, return distribution, dependency structure

Procedia PDF Downloads 268
304 Predictors of Post-marketing Regulatory Actions Concerning Hepatotoxicity

Authors: Salwa M. Almomen, Mona A. Almaghrabi, Saja M. Alhabardi, Adel A. Alrwisan

Abstract:

Background: Hepatotoxicity is a major reason for medication withdrawal from the market. Unfortunately, serious adverse hepatic effects can occur after marketing, with limited indicators during clinical development. Therefore, finding possible predictors of hepatotoxicity might guide the monitoring programs of various stakeholders. Methods: We examined the clinical review documents for drugs approved in the US from 2011 to 2016 to evaluate their hepatic safety profiles. Predictors: we assessed whether these medications met Hy’s Law with hepatotoxicity grade ≥ 3, had labeled hepatic adverse effects at approval, or had accelerated approval status. Outcome: post-marketing regulatory action related to hepatotoxicity, including product withdrawal or updates to the warning, precaution, or adverse effects sections. Statistical analysis: drugs were included in the analysis from the time of approval until the end of 2019 or the first post-marketing regulatory action related to hepatotoxicity, whichever occurred first. The hazard ratio (HR) was estimated using Cox regression analysis. Results: We included 192 medications in the study. We classified 48 drugs as having grade ≥ 3 hepatotoxicity; 43 had accelerated approval status, and 74 had labeled information about hepatotoxicity prior to marketing. The adjusted HR for post-marketing regulatory action was 0.61 (95% confidence interval [CI], 0.17-2.23) for products with grade ≥ 3 hepatotoxicity, 0.92 (95% CI, 0.29-2.93) for drugs approved via the accelerated approval program, and 0.91 (95% CI, 0.33-2.56) for drugs with labeled hepatotoxicity information at the time of approval. Conclusion: This study does not provide conclusive evidence of an association between post-marketing regulatory action and grade ≥ 3 hepatotoxicity, accelerated approval status, or the availability of labeled information at approval, owing to the limited sample size and potential channeling bias.

Keywords: accelerated approvals, hepatic adverse effects, drug-induced liver injury, hepatotoxicity predictors, post-marketing withdrawal

Procedia PDF Downloads 134
303 Diagnostic Performance of Mean Platelet Volume in the Diagnosis of Acute Myocardial Infarction: A Meta-Analysis

Authors: Kathrina Aseanne Acapulco-Gomez, Shayne Julieane Morales, Tzar Francis Verame

Abstract:

Mean platelet volume (MPV) is the most accurate measure of platelet size and is routinely measured by most automated hematology analyzers. Several studies have shown associations between MPV and cardiovascular risks and outcomes. Although its measurement may provide useful data, MPV remains a diagnostic tool that is yet to be included in routine clinical decision-making. The aim of this systematic review and meta-analysis is to determine summary estimates of the diagnostic accuracy of mean platelet volume for the diagnosis of myocardial infarction among adult patients with angina and/or its equivalents, in terms of sensitivity, specificity, diagnostic odds ratio, and likelihood ratios, and to determine the difference in mean MPV values between those with MI and non-MI controls. The primary search was done through the electronic databases PubMed, Cochrane Review CENTRAL, HERDIN (Health Research and Development Information Network), Google Scholar, the Philippine Journal of Pathology, and the Philippine College of Physicians Philippine Journal of Internal Medicine. The reference lists of original reports were also searched. Cross-sectional, cohort, and case-control articles studying the diagnostic performance of mean platelet volume in the diagnosis of acute myocardial infarction in adult patients were included in the study. Studies were included if: (1) CBC was taken upon presentation to the ER or upon admission (within 24 hours of symptom onset); (2) myocardial infarction was diagnosed with serum markers, ECG, or according to guidelines accepted by the cardiology societies (American Heart Association (AHA), American College of Cardiology (ACC), European Society of Cardiology (ESC)); and (3) outcomes were measured as a significant difference and/or sensitivity and specificity. The authors independently screened all the identified potential studies for inclusion. Eligible studies were appraised using well-defined criteria. Any disagreement between the reviewers was resolved through discussion and consensus. The overall mean MPV value of those with MI (9.702 fL; 95% CI 9.07 – 10.33) was higher than that of the non-MI control group (8.85 fL; 95% CI 8.23 – 9.46). Interpretation of the calculated t-value of 2.0827 showed that there was a significant difference in mean MPV values between those with MI and the non-MI controls. The summary sensitivity (Se) and specificity (Sp) for MPV were 0.66 (95% CI; 0.59 – 0.73) and 0.60 (95% CI; 0.43 – 0.75), respectively. The pooled diagnostic odds ratio (DOR) was 2.92 (95% CI; 1.90 – 4.50). The positive likelihood ratio of MPV in the diagnosis of myocardial infarction was 1.65 (95% CI; 1.20 – 22.27), and the negative likelihood ratio was 0.56 (95% CI; 0.50 – 0.64). The intended role for MPV in the diagnostic pathway of myocardial infarction would perhaps be best as a triage tool. With a DOR of 2.92, MPV values can discriminate between those who have MI and those without. For a patient with angina presenting with an elevated MPV value, MI is 1.65 times more likely. Thus, the decision to treat a patient with angina or its equivalents as a case of MI could be supported by an elevated MPV value.
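The likelihood ratios and diagnostic odds ratio above follow directly from the pooled sensitivity and specificity. The sketch below reproduces them from Se = 0.66 and Sp = 0.60; the tiny discrepancy against the reported DOR of 2.92 is expected, since the paper's pooled estimate comes from the underlying 2x2 data rather than from the rounded summary Se and Sp.

```python
def diagnostic_ratios(se, sp):
    """Likelihood ratios and diagnostic odds ratio from sensitivity and specificity."""
    lr_pos = se / (1 - sp)   # how much a positive test raises the odds of disease
    lr_neg = (1 - se) / sp   # how much a negative test lowers those odds
    dor = lr_pos / lr_neg    # overall discriminative ability of the test
    return lr_pos, lr_neg, dor

# Pooled summary estimates reported in the abstract
lr_pos, lr_neg, dor = diagnostic_ratios(se=0.66, sp=0.60)
# lr_pos ~ 1.65 and lr_neg ~ 0.57, close to the reported 1.65 and 0.56
```

This is why the abstract can move between the two framings interchangeably: Se and Sp fully determine the likelihood ratios and the DOR.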

Keywords: mean platelet volume, MPV, myocardial infarction, angina, chest pain

Procedia PDF Downloads 58
302 3D Dentofacial Surgery Full Planning Procedures

Authors: Oliveira M., Gonçalves L., Francisco I., Caramelo F., Vale F., Sanz D., Domingues M., Lopes M., Moreia D., Lopes T., Santos T., Cardoso H.

Abstract:

The ARTHUR project consists of a platform that allows maxillofacial surgeries to be performed virtually, offering, in a photorealistic way, the possibility for patients to get an idea of the surgical changes before they are performed on their face. For this, the system brings together several image formats, DICOM and OBJ files, which, after loading, are used to generate the bone volume, soft tissues, and hard tissues. The system also incorporates the patient's stereophotogrammetry, in addition to their data and clinical history. After loading and inserting the data, the clinician can virtually perform the surgical operation and present the final result to the patient, generating a new facial surface that reflects the changes made to the bone and tissues of the maxillary area. This tool applies to different situations that require facial reconstruction; however, this project focuses specifically on two types of use cases: congenital bone disfigurement and acquired disfigurement such as oral cancer with bone involvement. Developed as a cloud-based solution with mobile support, the tool aims to reduce the patient's decision time window. Because current simulations are either not realistic or, if realistic, require time to build plaster models, patients' decisions rely on a long time window (1-2 months), as patients do not identify themselves with the presented surgical outcome. Moreover, planning to date has been based on average estimated values of the position of the maxilla and mandible, i.e., on averages of the facial measurements of the population without accounting for racial variability, so previously proposed solutions were not adjusted to the individual's real physiognomic needs.

Keywords: 3D computing, image processing, image registry, image reconstruction

Procedia PDF Downloads 175
301 Numerical Study of Flapping-Wing Flight of Hummingbird Hawkmoth during Hovering: Longitudinal Dynamics

Authors: Yao Jie, Yeo Khoon Seng

Abstract:

In recent decades, flapping-wing aerodynamics has attracted great interest. Understanding the physics of biological flyers such as birds and insects can help improve the performance of micro air vehicles. The present research focuses on the aerodynamics of insect-like flapping-wing flight using numerical computation. An insect model of the hawkmoth is adopted in the numerical study, currently with a rigid-wing assumption. The numerical model integrates the computational fluid dynamics of the flow with active control of the wing kinematics to achieve stable flight. The computational grid is a hybrid consisting of background Cartesian nodes and clouds of mesh-free grids around immersed boundaries. The generalized finite difference method is used in conjunction with singular value decomposition (SVD-GFD) in the computational fluid dynamics solver to study the dynamics of a free-hovering hummingbird hawkmoth. The longitudinal dynamics of the hovering flight is governed by three control parameters, i.e., the wing plane angle, the mean positional angle, and the wing beating frequency. In the present work, a PID controller works out the appropriate control parameters with the insect motion as input. The controller is tuned to achieve the desired maneuvering of the insect flight. The numerical scheme in the present study is shown to be accurate and stable in simulating the flight of the hummingbird hawkmoth, which has a relatively high Reynolds number. The PID controller is responsive in providing feedback to the wing kinematics during the hovering flight. The simulated hovering flight agrees well with real insect flight. The present numerical study offers a promising route to investigating the free-flight aerodynamics of insects, which could overcome some of the limitations of experiments.
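To illustrate the control loop described above (not the authors' actual controller coupled to the SVD-GFD solver), here is a minimal discrete PID regulator driving a simple first-order plant toward a setpoint. The gains, the plant model, and the time step are all invented for the sketch; in the study, the analogous outputs would be the wing-kinematics parameters and the plant would be the simulated insect dynamics.

```python
class PID:
    """Minimal discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Invented plant: a pure integrator x' = u, regulated toward setpoint 1.0
pid = PID(kp=2.0, ki=0.5, kd=0.1)
setpoint, x, dt = 1.0, 0.0, 0.01
for _ in range(3000):              # simulate 30 s of closed-loop control
    u = pid.step(setpoint - x, dt)
    x += u * dt                    # forward-Euler update of the plant state
```

The integral term removes steady-state error while the derivative term damps the response, which is the same trade-off the study's controller negotiates when stabilizing hover.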

Keywords: aerodynamics, flight control, computational fluid dynamics (CFD), flapping-wing flight

Procedia PDF Downloads 325
300 Turin, from Factory City to Talents Power Player: The Role of Private Philanthropy Agents of Innovation in the Revolution of Human Capital Market in the Contemporary Socio-Urban Scenario

Authors: Renato Roda

Abstract:

With the emergence of the so-called 'Knowledge Society', the implementation of policies to attract, grow and retain talents has become critical, in the academic context as well, both from the perspective of didactics and research and as far as administration and institutional management are concerned. At the same time, contemporary philanthropic entities and organizations, which are evolving from traditional types of social support towards new styles of aid designed to go beyond mere monetary donations, face the challenge of brand-new forms of complexity in supporting the specific dynamics of the global human capital market. In this sense, it becomes unavoidable for philanthropic foundations, while carrying out their daily charitable tasks, to resort to innovative ways of facilitating the acquisition and promotion of talents by academic and research institutions. In order to deepen this specific perspective, this paper features the case of Turin, the former 'factory city' of Italy’s North West and the headquarters, as well as the main reference territory, of Italy’s largest and richest private, formerly bank-based philanthropic foundation, the Fondazione Compagnia di San Paolo. Although it was classified as 'medium' in the 2020 city Global Talent Competitiveness Index (GTCI), Turin has nevertheless acquired, over the past months, the status of an impact laboratory for a whole series of innovation strategies in the competition for the acquisition of excellent human capital. The leading actors of this new city vision are the foundations, with their specifically adjusted financial engagement and a consistent role of stimulus towards innovation for research and education institutions.

Keywords: human capital, post-Fordism, private foundation, war on talents

Procedia PDF Downloads 152
299 The Role of Emotions in Addressing Social and Environmental Issues in Ethical Decision Making

Authors: Kirsi Snellman, Johannes Gartner, Katja Upadaya

Abstract:

A transition towards a future in which the economy serves society, so that it evolves within the safe operating space of the planet, calls for fundamental changes in the way managers think, feel and act, and make decisions that relate to social and environmental issues. Sustainable decision making in organizations is often a challenging task characterized by trade-offs between environmental, social and financial aspects, and thus often brings forth ethical concerns. Although there have been significant developments in incorporating uncertainty into environmental decision making and in measuring the constructs and dimensions of ethical behavior in organizations, the majority of sustainable decision-making models are rationalist-based. Moreover, research in psychology indicates that one’s readiness to make a decision depends on the individual’s state of mind, the feasibility of the implied change, and the compatibility of the strategies and tactics of implementation. Although very informative, most of this extant research is limited in that it directs attention towards the rational rather than the emotional. Hence, little is known about the role of emotions in sustainable decision making, especially in situations where decision-makers evaluate a variety of options and use their feelings as a source of information in tackling the uncertainty. To fill this lacuna, and to embrace the uncertainty and perceived risk involved in decisions that touch upon social and environmental aspects, it is important to add emotion to the evaluation when aiming to reach a right and good ethical decision outcome. This analysis builds on recent findings in moral psychology that associate feelings and intuitions with ethical decisions, and suggests that emotions can sensitize the manager to evaluate the rightness or wrongness of alternatives when ethical concerns are present in sustainable decision making.
Capturing such sensitive evaluation as triggered by intuitions, we suggest that rational justification can be complemented by using emotions as a tool to tune in to what feels right in making sustainable decisions. This analysis integrates ethical decision-making theories with recent advancements in emotion theories. It determines the conditions under which emotions play a role in sustainability decisions by contributing to a personal equilibrium in which intuition and rationality are both activated and in accord. It complements the rationalist ethics view, according to which nothing fogs the mind in decision making so thoroughly as emotion, and the concept of the cheater’s high, which links unethical behavior with positive affect. This analysis contributes to theory with a novel theoretical model that specifies when and why managers who are more emotional are, in fact, more likely to make ethical decisions than managers who are more rational. It also proposes practical advice on how emotions can convert the manager’s preferences into choices that benefit both the common good and one’s own good throughout the transition towards a more sustainable future.

Keywords: emotion, ethical decision making, intuition, sustainability

Procedia PDF Downloads 109
298 The Psychology of Cross-Cultural Communication: A Socio-Linguistics Perspective

Authors: Tangyie Evani, Edmond Biloa, Emmanuel Nforbi, Lem Lilian Atanga, Kom Beatrice

Abstract:

The dynamics of languages in contact necessitates a close study of how their users negotiate meanings from shared values in the process of cross-cultural communication. A transverse analysis of the situation demonstrates the existence of complex efforts to connect cultural knowledge to cross-linguistic competencies within a widening range of communicative exchanges. This paper sets out to examine the psychology of cross-cultural communication in a multilingual setting like Cameroon, where many local and international languages are in close contact. The paper equally analyses the pertinence of existing macro-sociological concepts as fundamental knowledge traits in literal and idiomatic cross-semantic mapping. From this point, the article presents a path model connecting sociolinguistics to the increasing adoption of a widening range of communicative genres piloted by the ongoing globalisation trends with their high-speed information technology machinery. By applying a cross-cultural analysis frame, the paper contributes to a better understanding of the fundamental changes in the nature and goals of cross-cultural knowledge in the pragmatics of communication and cultural acceptability. It emphasises that, in an era of increasing global interchange, a comprehensive and inclusive global culture achieved through bridging gaps in cross-cultural communication would have significant potential to contribute to achieving global social development goals, provided that inadequacies in language constructs are adjusted to create avenues that intertwine with sociocultural beliefs, ensuring that meaningful and context-bound sociolinguistic values are observed within the global arena of communication.

Keywords: cross-cultural communication, customary language, literalisms, primary meaning, subclasses, transubstantiation

Procedia PDF Downloads 258
297 A Protein-Wave Alignment Tool for Frequency Related Homologies Identification in Polypeptide Sequences

Authors: Victor Prevost, Solene Landerneau, Michel Duhamel, Joel Sternheimer, Olivier Gallet, Pedro Ferrandiz, Marwa Mokni

Abstract:

The search for homologous proteins is one of the ongoing challenges in biology and bioinformatics. Traditionally, a pair of proteins is thought to be homologous when they originate from the same ancestral protein. In such a case, their sequences share similarities, and considerable scientific research effort is devoted to investigating this question. On this basis, we propose the Protein-Wave Alignment Tool ('P-WAT'), developed within the framework of the France Relance 2030 plan. Our work takes into consideration the mass-related wave aspect of protein biosynthesis by associating a specific frequency with each amino acid according to its mass. Amino acids are then regrouped within their mass category. This way, our algorithm produces specific alignments in addition to those obtained with a common amino acid coding system. For this purpose, we develop the original 'P-WAT' algorithm, able to address large protein databases with different attributes such as species, protein names, etc., which allow us to align users' requests with a set of specific protein sequences. The primary intent of this algorithm is to achieve efficient alignments, in this specific conceptual frame, by minimizing execution costs and information loss. Our algorithm identifies sequence similarities by searching for matches of sub-sequences of different sizes, referred to as primers. It relies on Boolean operations upon a dot-plot matrix to identify primer amino acids common to both proteins which are likely to be part of a significant alignment of peptides. From those primers, dynamic programming-like traceback operations generate alignments and alignment scores based on an adjusted PAM250 matrix.
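The primer search described above can be sketched as a scan over the diagonals of a dot-plot matrix. The coarse three-way mass grouping below is a hypothetical binning by approximate residue mass, not the frequency mapping actually used by P-WAT, and the minimum primer length is an arbitrary choice:

```python
# Illustrative sketch of dot-plot primer detection under a coarse,
# assumed mass-class coding (NOT the P-WAT frequency mapping).
MASS_CLASS = {
    **dict.fromkeys("GAS", 0),         # lighter residues
    **dict.fromkeys("PVTCLIN", 1),     # mid-mass residues
    **dict.fromkeys("DQKEMHFRYW", 2),  # heavier residues
}

def primers(seq_a, seq_b, min_len=3):
    """Return (i, j, length) for diagonal runs where the two sequences
    fall in the same mass class for at least min_len residues."""
    a = [MASS_CLASS[c] for c in seq_a]
    b = [MASS_CLASS[c] for c in seq_b]
    hits = []
    # Walk every diagonal of the Boolean dot-plot matrix.
    for d in range(-(len(a) - 1), len(b)):
        run = 0
        for i in range(len(a)):
            j = i + d
            if 0 <= j < len(b) and a[i] == b[j]:
                run += 1
            else:
                if run >= min_len:
                    hits.append((i - run, i - run + d, run))
                run = 0
        if run >= min_len:
            hits.append((len(a) - run, len(a) - run + d, run))
    return hits

# V and S differ as residues but 'GAV' and 'ASL' start shifted runs;
# here positions 0-2 of seq_a match positions 1-3 of seq_b by class.
print(primers("GAVLK", "GASLK"))
```

In the full algorithm, runs like these would seed the dynamic programming-like traceback that produces alignments scored against the adjusted PAM250 matrix; the sketch covers only the seeding step.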

Keywords: protein, alignment, homologous, Genodic

Procedia PDF Downloads 85