Search results for: weak convergence
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1464

174 Variation of Carbon Isotope Ratio (δ13C) and Leaf-Productivity Traits in Aquilaria Species (Thymelaeaceae)

Authors: Arlene López-Sampson, Tony Page, Betsy Jackes

Abstract:

The genus Aquilaria produces a highly valuable fragrant oleoresin known as agarwood. Agarwood forms in a few trees in the wild as a response to injury or pathogen attack, and the resin is used in the perfume and incense industries and in medicine. Cultivation of Aquilaria species as a sustainable source of the resin is now a common strategy. Physiological traits are frequently used as a proxy for crop and tree productivity. Aquilaria species growing in Queensland, Australia, were studied to investigate the relationship between leaf-productivity traits and tree growth. Specifically, 28 trees, representing 12 plus trees and 16 trees from yield plots, were selected for carbon isotope analysis (δ13C) and monitoring of six leaf attributes. Trees were grouped into four diameter classes (diameter at 150 mm above ground level) to ensure that the variability in growth of the whole population was sampled. A model averaging technique based on Akaike’s information criterion (AIC) was used to identify whether leaf traits could assist in diameter prediction. Carbon isotope values were correlated with height classes and leaf traits to determine any relationship. On average, four leaves per shoot were recorded, and approximately one new leaf per week was produced per shoot. The rate of leaf expansion was estimated at 1.45 mm day⁻¹. There were no statistically significant differences among diameter classes in leaf expansion rate or in the number of new leaves per week (p > 0.05). δ13C values in leaves of Aquilaria species ranged from -25.5 ‰ to -31 ‰, with an average of -28.4 ‰ (± 1.5 ‰). Only 39% of the variability in height could be explained by leaf δ13C. Leaf δ13C and nitrogen content were positively correlated. This relationship implies that leaves with higher photosynthetic capacities also had lower intercellular-to-ambient carbon dioxide concentration ratios (ci/ca) and less depleted 13C values. Most of the predictor variables had only a weak correlation with diameter (D).
However, analysis of the 95% confidence set of best-ranked regression models indicated that the predictors most likely to explain growth in Aquilaria species are petiole length (PeLen), values of δ13C (true13C) and δ15N (true15N), leaf area (LA), specific leaf area (SLA), and the number of new leaves produced per week (NL.week). The model constructed with PeLen, true13C, true15N, LA, SLA, and NL.week explained 45% (R² = 0.4573) of the variability in D. The leaf traits studied give a better understanding of the leaf attributes that could assist in the selection of high-productivity trees in Aquilaria.
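The AIC-based ranking of candidate regression models described above can be sketched in a few lines. The trait values and coefficients below are simulated placeholders (the study's 28-tree measurements are not reproduced here), so only the mechanics of fitting nested models and comparing them by AIC are illustrated.

```python
import numpy as np

def ols_aic(X, y):
    """Fit ordinary least squares; return (AIC, R^2).

    AIC = n*ln(RSS/n) + 2k, with k = number of fitted coefficients
    (including the intercept).
    """
    n = len(y)
    X1 = np.column_stack([np.ones(n), X])            # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    rss = float(resid @ resid)
    tss = float(((y - y.mean()) ** 2).sum())
    k = X1.shape[1]
    return n * np.log(rss / n) + 2 * k, 1 - rss / tss

# Illustrative data: 28 trees, hypothetical standardized leaf traits
rng = np.random.default_rng(0)
traits = {name: rng.normal(size=28) for name in
          ["PeLen", "true13C", "true15N", "LA", "SLA", "NL.week"]}
X_full = np.column_stack(list(traits.values()))
# Simulated diameter response with a weak signal, as in the abstract
D = 0.5 * traits["PeLen"] + 0.4 * traits["true13C"] + rng.normal(size=28)

aic_full, r2_full = ols_aic(X_full, D)
aic_two, r2_two = ols_aic(np.column_stack([traits["PeLen"],
                                           traits["true13C"]]), D)
# The candidate with the lower AIC is preferred; Akaike weights for model
# averaging follow from exp(-0.5 * delta_AIC) normalized over the set.
```

The AIC penalty term (2k) is what allows a six-predictor model to lose to a two-predictor model despite its necessarily higher R².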

Keywords: 13C, petiole length, specific leaf area, tree growth

Procedia PDF Downloads 484
173 Polymeric Composites with Synergetic Carbon and Layered Metallic Compounds for Supercapacitor Application

Authors: Anukul K. Thakur, Ram Bilash Choudhary, Mandira Majumder

Abstract:

In this technologically driven world, there is a need to develop better, faster, and smaller electronic devices for various applications to keep pace with fast-developing modern life. It is also necessary to develop sustainable and clean sources of energy in an era in which the environment is threatened by pollution and its severe consequences. The supercapacitor has gained tremendous attention in recent years because of its attractive properties: it is essentially maintenance-free, offers high specific power and high power density, exhibits excellent pulse charge/discharge characteristics and a long cycle life, requires only a very simple charging circuit, and operates safely. Binary and ternary composites of conducting polymers with carbon and layered transition metal dichalcogenides have shown tremendous progress in the last few decades. Conducting polymers in such composites have gained more attention than their bulk counterparts because of their high electrical conductivity, large surface area, short ion-transport lengths, and superior electrochemical activity. These properties make them very suitable for energy storage applications. Carbon materials have also been studied intensively, owing to their large specific surface area, very light weight, excellent chemical and mechanical properties, and wide operating temperature range. They have been extensively employed in the fabrication of carbon-based energy storage devices and as electrode materials in supercapacitors. Incorporating carbon materials into the polymers increases the electrical conductivity of the resulting polymeric composite, owing to the high electrical conductivity, high surface area, and interconnectivity of the carbon.
Further, polymeric composites based on layered transition metal dichalcogenides such as molybdenum disulfide (MoS2) are also considered important because these are thin, indirect-band-gap semiconductors with band gaps of around 1.2 to 1.9 eV. Among the various 2D materials, MoS2 has received much attention because of its unique structure: a graphene-like hexagonal arrangement of Mo and S atoms stacked layer by layer into S-Mo-S sandwiches held together by weak van der Waals forces. It shows higher intrinsic fast ionic conductivity than oxides and a higher theoretical capacitance than graphite.

Keywords: supercapacitor, layered transition-metal dichalcogenide, conducting polymer, ternary, carbon

Procedia PDF Downloads 232
172 Performance Estimation of Small Scale Wind Turbine Rotor for Very Low Wind Regime Condition

Authors: Vilas Warudkar, Dinkar Janghel, Siraj Ahmed

Abstract:

The rapid development experienced by India requires a huge amount of energy, yet actual supply capacity additions have been consistently lower than the targets set by the government. According to the World Bank, 40% of residences are without electricity. In the 12th five-year plan, 30 GW of grid-interactive renewable capacity is planned, of which 17 GW is wind, 10 GW solar, and 2.1 GW small hydro projects, with the rest covered by biogas. Renewable energy (RE) and energy efficiency (EE) not only meet environmental and energy security objectives but can also play a crucial role in reducing chronic power shortages. In remote areas or areas with a weak grid, wind energy can be used for charging batteries or can be combined with a diesel engine to save fuel whenever wind is available. According to IEC 61400-1, India belongs to class IV wind conditions, so it is not possible to set up large-scale wind turbines everywhere; the best choice is a small-scale wind turbine at lower hub height that still gives good annual energy production (AEP). Based on the wind characteristics available at MANIT Bhopal, a rotor for a small-scale wind turbine was designed. Various airfoil data were reviewed for selection of the blade profile airfoil. An airfoil suited to low-wind conditions, i.e., low Reynolds numbers, was selected based on its coefficients of lift and drag and its angle of attack. For the design of the rotor blade, standard Blade Element Momentum (BEM) theory was implemented. The performance of the blade was estimated using BEM theory, in which the axial and angular induction factors were optimized using an iterative technique. Rotor performance was then estimated for the designed blade specifically under low-wind conditions, and the power production of the rotor was determined at different wind speeds for a given blade pitch angle. A pitch of 15° and a velocity of 5 m/s give a good cut-in speed of 2 m/s and a power output of around 350 W.
A tip speed ratio of 6.5 was considered, for which the coefficient of performance of the rotor was calculated as 0.35, an acceptable value for a small-scale wind turbine. The Simple Load Model (SLM, IEC 61400-2) is also discussed to improve the structural strength of the rotor. In the SLM, the edgewise and flapwise moments, which cause bending stress at the root of the blade, are considered. The various load cases mentioned in IEC 61400-2 were calculated and checked against the partial safety factor of the wind turbine blade.
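The iterative optimization of the axial and angular induction factors mentioned above is the core of a BEM solver. A minimal sketch for a single blade element follows; the local tip speed ratio, solidity, and the fixed lift/drag coefficients are illustrative assumptions, not the rotor's actual design values (a full solver would also look Cl and Cd up from airfoil polars at each inflow angle and apply tip-loss corrections).

```python
import math

def bem_element(lam_r, sigma, cl, cd, tol=1e-8, max_iter=500):
    """Iterate axial (a) and angular (a') induction factors for one
    blade element, per standard Blade Element Momentum theory.

    lam_r : local speed ratio (omega * r / V)
    sigma : local solidity (B * c / (2 * pi * r))
    cl,cd : lift/drag coefficients, held fixed here for simplicity
    """
    a, ap = 0.3, 0.0
    for _ in range(max_iter):
        # inflow angle from current induction factors
        phi = math.atan2(1 - a, (1 + ap) * lam_r)
        cn = cl * math.cos(phi) + cd * math.sin(phi)   # normal force coeff.
        ct = cl * math.sin(phi) - cd * math.cos(phi)   # tangential coeff.
        # momentum/blade-element balance, solved as a fixed point
        a_new = 1.0 / (4 * math.sin(phi) ** 2 / (sigma * cn) + 1)
        ap_new = 1.0 / (4 * math.sin(phi) * math.cos(phi) / (sigma * ct) - 1)
        if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
            return a_new, ap_new
        a, ap = a_new, ap_new
    return a, ap

# Illustrative mid-span station of a small rotor running at tip speed ratio 6.5
a, ap = bem_element(lam_r=4.0, sigma=0.05, cl=1.0, cd=0.01)
```

For an unloaded-to-moderately-loaded element the iteration converges in a handful of steps, with a well below the Betz optimum of 1/3.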

Keywords: annual energy production, blade element momentum theory, low wind conditions, selection of airfoil

Procedia PDF Downloads 319
171 Serum Neurotrophins in Different Metabolic Types of Obesity

Authors: Irina M. Kolesnikova, Andrey M. Gaponov, Sergey A. Roumiantsev, Tatiana V. Grigoryeva, Alexander V. Laikov, Alexander V. Shestopalov

Abstract:

Background. Neuropathy is a common complication of obesity; in this regard, the content of neurotrophins in such patients is of particular interest. Neurotrophins are proteins that regulate neuron survival and neuroplasticity and include brain-derived neurotrophic factor (BDNF) and nerve growth factor (NGF). However, the risk of complications depends on the metabolic type of obesity: metabolically unhealthy obesity (MUHO) is associated with a high risk of complications, while this is not the case with metabolically healthy obesity (MHO). Therefore, the aim of our work was to study the effect of obesity metabolic type on serum neurotrophin levels. Patients, materials, methods. The study included 134 healthy donors and 104 obese patients. Depending on the metabolic type of obesity, the obese patients were divided into subgroups with MHO (n=40) and MUHO (n=55). Serum concentrations of BDNF and NGF were determined. In addition, the levels of adipokines (leptin, asprosin, resistin, adiponectin), myokines (irisin, myostatin, osteocrin), and indicators of carbohydrate and lipid metabolism were measured. Correlation analysis was used to reveal relationships between the studied parameters. Results. We found that serum BDNF concentration did not differ between obese patients and healthy donors, regardless of obesity metabolic type. At the same time, obese patients showed a decrease in serum NGF level versus controls; a similar trend was characteristic of both MHO and MUHO. However, MUHO patients had a higher NGF level than MHO patients. The literature indicates that obesity is associated with an increase in the plasma concentration of NGF; it can be assumed that in obesity there is a disruption of NGF storage in platelets, which accelerates neurotrophin degradation. We found that BDNF concentration correlated with irisin levels in MUHO patients. Healthy donors had a weak association between NGF and VEGF levels.
No such association was found in obese patients, but there was an association between NGF and leptin concentrations. In MHO, the concentration of NGF correlated with the levels of leptin, irisin, osteocrin, insulin, and the HOMA-IR index, whereas in MUHO patients we found only relationships between NGF and adipokines (leptin, asprosin). It can be assumed that in patients with MHO the replenishment of serum NGF occurs under the influence of both muscle and adipose tissue, while in MUHO patients only the effect of adipose tissue on NGF was observed. Conclusion. Obesity, regardless of metabolic type, is associated with a decrease in serum NGF concentration. We showed that muscle and adipose tissues make a significant contribution to the serum NGF pool in MHO patients; in MUHO there is no effect of muscle on the NGF level, but the effect of adipose tissue remains.

Keywords: neurotrophins, nerve growth factor, NGF, brain-derived neurotrophic factor, BDNF, obesity, metabolically healthy obesity, metabolically unhealthy obesity

Procedia PDF Downloads 80
170 A First-Principles Investigation of Magnesium-Hydrogen System: From Bulk to Nano

Authors: Paramita Banerjee, K. R. S. Chandrakumar, G. P. Das

Abstract:

Bulk MgH2 has drawn much attention for the purpose of hydrogen storage because of its high hydrogen storage capacity (~7.7 wt %) as well as its low cost and abundant availability. However, its practical usage has been hindered by its high hydrogen desorption enthalpy (~0.8 eV/H2 molecule), which results in an undesirable desorption temperature of 300 °C at 1 bar H2 pressure. To surmount the limitations of bulk MgH2 for hydrogen storage, a detailed first-principles density functional theory (DFT) based study on the structure and stability of neutral (Mgm) and positively charged (Mgm+) Mg nanoclusters of different sizes (m = 2, 4, 8 and 12), as well as their interaction with molecular hydrogen (H2), is reported here. It has been found that, due to the absence of d-electrons in the Mg atoms, hydrogen remains in molecular form even after its interaction with neutral and charged Mg nanoclusters. Interestingly, the H2 molecules do not enter into the interstitial positions of the nanoclusters; rather, they remain on the surface, ornamenting these nanoclusters and forming new structures with a gravimetric density higher than 15 wt %. Our observation is that the inclusion of Grimme’s DFT-D3 dispersion correction in this weakly interacting system has a significant effect on the binding of the H2 molecules to these nanoclusters. The dispersion-corrected interaction energy (IE) values (0.1-0.14 eV/H2 molecule) fall in the energy window that is ideal for hydrogen storage. These IE values are further verified using high-level coupled-cluster calculations with non-iterative triples corrections, i.e., CCSD(T), which is considered a highly accurate quantum chemical method, thereby confirming the accuracy of our dispersion-corrected DFT calculations. The significance of the polarization and dispersion energies in the binding of the H2 molecules is confirmed by energy decomposition analysis (EDA).
A total of 16, 24, 32 and 36 H2 molecules can be attached to the neutral and charged nanoclusters of sizes m = 2, 4, 8 and 12, respectively. Ab initio molecular dynamics (AIMD) simulation shows that the outermost H2 molecules are desorbed at a rather low temperature, viz. 150 K (−123 °C), which is expected. However, complete dehydrogenation of these nanoclusters occurs at around 100 °C. Most importantly, the host nanoclusters remain stable up to ~500 K (227 °C). All these results on the adsorption and desorption of molecular hydrogen with neutral and charged Mg nanocluster systems indicate the possibility of reducing the dehydrogenation temperature of bulk MgH2 by designing new Mg-based nanomaterials that adsorb molecular hydrogen via this weak Mg-H2 interaction rather than the strong Mg-H bonding. Notwithstanding the fact that in practical applications these interactions will be further complicated by the effect of substrates as well as interactions with other clusters, the present study has implications for our fundamental understanding of this problem.
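As a quick consistency check, the gravimetric densities implied by the reported H2 counts can be recomputed from standard atomic masses; all four cluster sizes come out above the ~15 wt % quoted above (the smaller clusters considerably so).

```python
# Gravimetric hydrogen density (wt %) implied by the reported H2 counts
# for each Mg_m nanocluster; molar masses from standard tables (g/mol).
M_MG, M_H2 = 24.305, 2.016

clusters = {2: 16, 4: 24, 8: 32, 12: 36}   # m -> number of adsorbed H2

wt_percent = {m: 100 * n * M_H2 / (n * M_H2 + m * M_MG)
              for m, n in clusters.items()}

for m, wt in wt_percent.items():
    print(f"Mg{m} + {clusters[m]} H2 -> {wt:.1f} wt %")
# The smallest cluster (Mg2) is near 40 wt %, the largest (Mg12) near 20 wt %.
```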

Keywords: density functional theory, DFT, hydrogen storage, molecular dynamics, molecular hydrogen adsorption, nanoclusters, physisorption

Procedia PDF Downloads 401
169 Critical Analysis of International Protections for Children from Sexual Abuse and Examination of Indian Legal Approach

Authors: Ankita Singh

Abstract:

Sex trafficking and child pornography are borderless crimes that cannot be effectively prevented through the laws and efforts of one country alone, because prevention requires smooth collaboration among countries. Eradication of international human trafficking syndicates, criminalisation of international cyber offenders, and an effective ban on child pornography are not possible without effective universal laws; hence, continuous collaboration of all countries is needed to adopt and routinely update such laws. Congregating countries on an international platform from time to time is necessary, so that they can adopt international agendas and create powerful universal laws to prevent sex trafficking and child pornography in this modern digital era. In the past, some international steps have been taken through the Convention on the Rights of the Child (CRC) and the Optional Protocol to the Convention on the Rights of the Child on the Sale of Children, Child Prostitution, and Child Pornography, but in reality these measures are quite weak and are not capable of effectively protecting children from sexual abuse in this highly advanced digital era. The uncontrolled growth of artificial intelligence (AI) and its misuse, the lack of legal jurisdiction over foreign child abusers and the difficulties in their extradition, and the weak control over the international trade of digital child-pornographic content are some prominent issues that can only be addressed through new, effective, and powerful universal laws. Due to the lack of effective international standards and of proper collaboration among countries, Indian laws are also not capable of taking effective action against child abusers. This research will be conducted through both doctrinal and empirical methods.
Various literary sources will be examined, and a questionnaire survey will be conducted to analyse the effectiveness of international standards and Indian laws against child pornography; the survey participants will be Indian university students. In this work, the existing international norms for protecting children from sexual abuse will be critically analysed, and the work will explore why effective and strong collaboration between countries is required in modern times. It will be analysed whether the existing international steps are enough to protect children from being trafficked or subjected to pornography; if these steps are found insufficient, suggestions will be given on how international standards and protections can be made more effective and powerful in this digital era. The approach of India towards the existing international standards, the Indian laws protecting children from being subjected to pornography, and the contributions and capabilities of India in strengthening the international standards will also be analysed.

Keywords: child pornography, Protection of Children from Sexual Offences Act, the Optional Protocol to the Convention on the Rights of the Child on the Sale of Children, Child Prostitution and Child Pornography, the Convention on the Rights of the Child

Procedia PDF Downloads 18
168 The Significance of Islamic Concept of Good Faith to Cure Flaws in Public International Law

Authors: M. A. H. Barry

Abstract:

The concepts of good faith (husn al-niyyah) and fair dealing (nadl) are the fundamental guiding elements in all contracts and other agreements under Islamic law. The teachings of the Quran and of Prophet Muhammad (Peace Be upon Him) firmly command people to act in good faith in all dealings, and several Quranic verses and sayings of the Prophet stress the significance of dealing honestly and fairly in all transactions. Under English law, good faith is not considered a fundamental requirement for the formation of a legal contract. However, the concept of good faith in private contracts is recognized by the civil law system and in Article 7(1) of the Convention on Contracts for the International Sale of Goods (CISG, Vienna Convention, 1980). It took several centuries for the international trading community to recognize the significance of the concept of good faith for international sale-of-goods transactions; nevertheless, the recognition of good faith in civil law is confined to commercial contracts. Subsequent to the CISG, the concept has made inroads into private international law, and there are submissions in favour of applying it to public international law based on its tacit recognition by international conventions and tribunals. However, under public international law the concept of good faith is not recognized as a source of rights or obligations. This weakens the spirit of the good faith concept, particularly when international disputes are determined, and creates a fundamental flaw: in the absence of good faith as an applicable principle, breaches tainted by bad faith are tolerated. The objective of this research is to evaluate, examine, and analyze the application of the concept of good faith in modern laws and identify its limitations, in comparison with the Islamic concept of good faith.
This paper also identifies the problems and issues connected with the non-application of this concept to public international law. The research consists of three key components: (1) the preliminary inquiry, (2) subject analysis and discovery of research results, and (3) examination of the challenging problems, concluding with proposals. The preliminary inquiry is based on both primary and secondary sources, and the same sources are used for the subject analysis; the research has both inductive and deductive features. The Islamic concept of good faith covers all situations and circumstances in which bad faith causes unfairness to the affected parties, especially weak parties. Under Islamic law, the concept of good faith is a source of rights and obligations, as Islam prohibits any person from committing wrongful or delinquent acts in any dealing, whether in private or public life. This rule applies not only to individuals but also to institutions, states, and international organizations. This paper explains how unfairness is caused by the non-recognition of the good faith concept as a source of rights or obligations under public international law and provides legal and non-legal reasons why the Islamic formulation is important.

Keywords: good faith, the civil law system, the Islamic concept, public international law

Procedia PDF Downloads 116
167 An Approach to Study the Biodegradation of Low Density Polyethylene Using Microbial Strains of Bacillus subtilis, Aspergillus niger, and Pseudomonas fluorescens in Different Media Forms and Salt Conditions

Authors: Monu Ojha, Rahul Rana, Satywati Sharma, Kavya Dashora

Abstract:

The global production rate of plastics has increased enormously, and global demand for polyethylene resins, namely high-density polyethylene (HDPE), linear low-density polyethylene (LLDPE), and low-density polyethylene (LDPE), is expected to rise drastically. These accumulate in the environment, posing a potential ecological threat, as they degrade at a very slow rate and remain in the environment indefinitely. The aim of the present study was to investigate the potential of commonly found soil microbes such as Bacillus subtilis, Aspergillus niger, and Pseudomonas fluorescens to biodegrade LDPE in the lab on solid and liquid media, as well as in soil in the presence of 1% salt. The study was conducted at the Indian Institute of Technology, Delhi, India, from July to September, when the average temperature and relative humidity (RH) were 33 degrees Celsius and 80%, respectively. It revealed that the weight loss of a market-sourced LDPE strip of approximately 4 x 6 cm is greater in liquid broth than in solid agar media. The percentage weight losses observed after 80 days of incubation with P. fluorescens, A. niger, and B. subtilis were 15.52, 9.24, and 8.99%, respectively, in broth media, and 6.93, 2.18, and 4.76% in agar media. LDPE strips from the same source were then buried in soil inoculated with the above microbes in the presence of 1% salt (NaCl, obtained from commercial table salt), at the same temperature and RH of 33 degrees Celsius and 80%. The rate of degradation was found to be higher in soil than under lab conditions: the weight losses of the LDPE strips under otherwise identical conditions were 32.98, 15.01, and 17.09% for P. fluorescens, A. niger, and B. subtilis, respectively. The breaking strengths were found to be 9.65 N, 29 N, and 23.85 N for P. fluorescens, A. niger, and B. subtilis, respectively.
SEM analysis conducted on a Zeiss EVO 50 confirmed that the surface of the LDPE becomes physically weak after biological treatment, with an increase in surface roughness indicating surface erosion of the LDPE film. FTIR (Fourier-transform infrared spectroscopy) analysis of the degraded LDPE films showed stretching of the aldehyde group at 3334.92 and 3228.84 cm⁻¹, C–C=C symmetric stretching of the aromatic ring at 1639.49 cm⁻¹, and C=O stretching of the aldehyde group at 1735.93 cm⁻¹. An N=O peak bend was also observed at 1365.60 cm⁻¹, along with C–O stretching of the ether group at 1217.08 and 1078.21 cm⁻¹.
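The percentage weight losses quoted above follow the standard gravimetric definition of biodegradation extent; a minimal sketch (the 100 mg initial strip mass is a hypothetical example for illustration, not a value from the study):

```python
def percent_weight_loss(w_initial, w_final):
    """Extent of biodegradation as percentage weight loss of the LDPE strip."""
    return 100.0 * (w_initial - w_final) / w_initial

# E.g., a hypothetical 100 mg strip reduced to 84.48 mg corresponds to the
# 15.52 % loss reported for P. fluorescens in broth media.
loss = percent_weight_loss(100.0, 84.48)
```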

Keywords: microbial degradation, LDPE, Aspergillus niger, Bacillus subtilis, Pseudomonas fluorescens, common salt

Procedia PDF Downloads 141
166 Application of the Material Point Method as a New Fast Simulation Technique for Textile Composites Forming and Material Handling

Authors: Amir Nazemi, Milad Ramezankhani, Marian Kӧrber, Abbas S. Milani

Abstract:

The excellent strength-to-weight ratio of woven fabric composites, along with their high formability, is one of the primary design parameters driving their increased use in modern manufacturing processes, including those in aerospace and automotive. However, for emerging automated preform processes under the smart manufacturing paradigm, the complex geometries of finished components continue to pose several challenges to designers, who must cope with manufacturing defects on site. Wrinkling, e.g., is a common defect occurring during the forming process and handling of semi-finished textile composites. One of the main reasons for this defect is the weak bending stiffness of fibers in the unconsolidated state, causing excessive relative motion between them. Further challenges are posed by the automated handling of large-area fiber blanks with specialized gripper systems. For fabric composites forming simulations, the finite element (FE) method is a longstanding tool used for the prediction and mitigation of manufacturing defects. Such simulations are predominantly meant not only to predict the onset, growth, and shape of wrinkles but also to determine the best processing condition that can yield optimized positioning of the fibers upon forming (or robot handling, in the case of automated processes). However, the need for small time steps in explicit FE codes, numerical instabilities, and large computational times are among the notable drawbacks of current FE tools, hindering their extensive use as fast yet efficient digital twins in industry. This paper presents a novel woven fabric simulation technique based on the material point method (MPM), which enables the use of much larger time steps with fewer numerical instabilities, and hence the ability to run significantly faster and more efficient simulations of fabric material handling and forming processes.
Therefore, this method can enhance the development of automated fiber handling and preform processes by calculating the physical interactions between the MPM fiber models and rigid tool components. It enables designers to virtually develop, test, and optimize their processes using either algorithmic or machine learning applications. As a preliminary case study, the forming of a hemispherical plain weave is shown, and the results are compared to FE simulations as well as experiments.
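To make the contrast with mesh-based FE concrete, the basic MPM update cycle (particle-to-grid transfer, grid momentum update, grid-to-particle gather) can be sketched in 1D. This toy version is not the paper's fabric model: it omits the stress/constitutive update a real simulation needs and uses illustrative particle positions; it only shows the grid-particle transfer machinery that characterizes the method.

```python
import numpy as np

def mpm_free_fall(n_steps=100, dt=1e-3, g=-9.81):
    """Minimal 1D material point method loop with linear shape functions:
    a few particles falling under gravity on a background grid (no stress).
    """
    dx = 0.1                                   # grid spacing (m)
    grid_n = 64
    xp = np.array([3.05, 3.12, 3.21])          # particle positions (m)
    vp = np.zeros_like(xp)                     # particle velocities (m/s)
    mp = np.ones_like(xp)                      # particle masses (kg)
    for _ in range(n_steps):
        m_i = np.zeros(grid_n)
        mv_i = np.zeros(grid_n)
        base = (xp / dx).astype(int)
        w1 = xp / dx - base                    # weight of the right-hand node
        # particle-to-grid (P2G): scatter mass and momentum
        for p in range(len(xp)):
            i = base[p]
            m_i[i] += (1 - w1[p]) * mp[p]
            m_i[i + 1] += w1[p] * mp[p]
            mv_i[i] += (1 - w1[p]) * mp[p] * vp[p]
            mv_i[i + 1] += w1[p] * mp[p] * vp[p]
        # grid update: nodal velocities plus gravity on active nodes
        act = m_i > 0
        v_i = np.zeros(grid_n)
        v_i[act] = mv_i[act] / m_i[act] + dt * g
        # grid-to-particle (G2P): gather velocities, advect particles
        for p in range(len(xp)):
            i = base[p]
            vp[p] = (1 - w1[p]) * v_i[i] + w1[p] * v_i[i + 1]
            xp[p] += dt * vp[p]
    return xp, vp
```

Because the grid is reset every step, particles can undergo arbitrarily large deformations without mesh distortion, which is the property that makes MPM attractive for fabric forming.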

Keywords: material point method, woven fabric composites, forming, material handling

Procedia PDF Downloads 162
165 Challenges to Safe and Effective Prescription Writing in an Environment Where Digital Prescribing Is Absent

Authors: Prashant Neupane, Asmi Pandey, Mumna Ehsan, Katie Davies, Richard Lowsby

Abstract:

Introduction/Background & aims: Safe and effective prescribing in hospitals directly and indirectly impacts the health of patients. Even though digital prescribing in the National Health Service (NHS), UK, has been adopted in many tertiary centres and district general hospitals, a significant number of NHS trusts still use paper prescribing. We came across numerous irregularities in our daily clinical practice with paper prescribing. The main aim of the study was to assess how safely and effectively we are prescribing at our hospital, where there is no access to digital prescribing. Method/Summary of work: We conducted a prospective audit in the critical care department at Mid Cheshire Hospitals NHS Foundation Trust, in which 20 prescription charts from different patients were randomly selected over a period of one month. We assessed 16 categories on each prescription chart and compared them against the standard trust guidelines on prescription. Results/Discussion: We collected data from 20 different prescription charts, evaluating 16 categories within each. The results showed an urgent need for improvement in 8 different sections. In 85% of prescription charts, not all prescribers who prescribed medications were identified: name, GMC number, and signature were absent from the required prescriber-identification section. In 70% of charts, either the indication or the review date of antimicrobials was absent. Units of medication were not documented correctly in 65% of charts, and the allergy status of the patient was absent in 30%. The start date of medications was missing, and alterations of medications were not made properly, in 35% of charts. The patient's name was not recorded in all required sections of the chart in 50% of cases, and cancellations of medications were not done properly in 45% of prescription charts.
Conclusion(s): From the audit and data analysis, we identified the areas in which prescription writing in the critical care department needs improvement. During meetings and conversations with experts from the pharmacy department, we realized that this audit represents just one specialized department of the hospital, where prescribing is limited to a certain number of prescribers; in bigger departments, where patient turnover is much higher, the results could be much worse. The findings were discussed in the critical care MDT meeting, where suggestions regarding digital/electronic prescribing were raised. A poster and presentation on safe and effective prescribing were prepared, and an awareness poster was attached beside every bed in critical care, where it is visible to prescribers. We consider this a temporary measure to improve the quality of prescribing; however, we strongly believe that digital prescribing would do far more to control the weak areas seen in paper prescribing.

Keywords: safe prescribing, NHS, digital prescribing, prescription chart

Procedia PDF Downloads 98
164 Health-Related Quality of Life of Caregivers of Institution-Reared Children in Metro Manila: Effects of Role Overload and Role Distress

Authors: Ian Christopher Rocha

Abstract:

This study aimed to determine the association of the quality of life (QOL) of caregivers of children in need of special protection (CNSP) in child-caring institutions in Metro Manila with their levels of role overload (RO) and role distress (RD). The CNSP in this study covered orphaned, abandoned, abused, neglected, exploited, and mentally challenged children. The domains of QOL included physical health (PH), psychological health, social health (SH), and living conditions (LC). The study also intended to ascertain the association of the caregivers' personal and work-related characteristics with their RO and RD levels. The respondents were 130 CNSP caregivers in 17 residential child-rearing institutions in Metro Manila, selected through purposive non-probability sampling. Using a quantitative methodological approach, the survey method was utilized to gather data with a self-administered structured questionnaire, and data were analyzed using both descriptive and inferential statistics. Results revealed that the level of RO, the level of RD, and the QOL of the CNSP caregivers were all moderate. The data also suggested significant positive relationships between the RO level and caregiver characteristics such as age, number of trainings, and years of service in the institution, as well as significant positive relationships between the RD level and characteristics such as age and hours of care rendered to care recipients. In addition, all domains of QOL were significantly related to the RO level: PH and LC obtained moderate negative correlations with RO, while the remaining domains obtained weak negative correlations.
For the correlations of their level of RD and the QOL domains, all domains, except SH, obtained strong negative correlations with the level of RD. The SH was found to have a moderate negative correlation with the RD level. In conclusion, caregivers who are older experience higher levels of RO and RD; caregivers who have more trainings and years of service experience higher levels of RO; and caregivers who render longer hours of care experience higher levels of RD. In addition, the study affirmed that if the levels of RO and RD are high, the QOL is low, and vice versa. Therefore, the RO and RD levels are reliable predictors of the caregivers’ QOL. Relatedly, the caregiving situation in the Philippines appears to be unique and distinct from that of other countries, because the levels of RO and RD and the QOL of Filipino CNSP caregivers were all moderate, in contrast with their foreign counterparts, who experience high caregiving RO and RD leading to low QOL.

Keywords: quality of life, caregivers, children in need of special protection, physical health, psychological health, social health, living conditions, role overload, role distress

Procedia PDF Downloads 189
163 Transcription Skills and Written Composition in Chinese

Authors: Pui-sze Yeung, Connie Suk-han Ho, David Wai-ock Chan, Kevin Kien-hoa Chung

Abstract:

Background: Recent findings have shown that transcription skills play a unique and significant role in Chinese word reading and spelling (i.e. word dictation), and written composition development. The interrelationships among component skills of transcription, word reading, word spelling, and written composition in Chinese have rarely been examined in the literature. Is the contribution of component skills of transcription to Chinese written composition mediated by word level skills (i.e., word reading and spelling)? Methods: The participants in the study were 249 Chinese children in Grade 1, Grade 3, and Grade 5 in Hong Kong. They were administered measures of general reasoning ability, orthographic knowledge, stroke sequence knowledge, word spelling, handwriting fluency, word reading, and Chinese narrative writing. Orthographic knowledge: orthographic knowledge was assessed by a task modeled after the lexical decision subtest of the Hong Kong Test of Specific Learning Difficulties in Reading and Writing (HKT-SpLD). Stroke sequence knowledge: the participants’ performance in producing legitimate stroke sequences was measured by a stroke sequence knowledge task. Handwriting fluency: handwriting fluency was assessed by a task modeled after the Chinese Handwriting Speed Test. Word spelling: the stimuli of the word spelling task consisted of fourteen two-character Chinese words. Word reading: the stimuli of the word reading task consisted of 120 two-character Chinese words. Written composition: a narrative writing task was used to assess the participants’ text writing skills. Results: Analysis of covariance results showed that there were significant between-grade differences in the performance of word reading, word spelling, handwriting fluency, and written composition. 
Preliminary hierarchical multiple regression analysis results showed that orthographic knowledge, word spelling, and handwriting fluency were unique predictors of Chinese written composition even after controlling for age, IQ, and word reading. The interaction effects between grade and each of these three skills (orthographic knowledge, word spelling, and handwriting fluency) were not significant. Path analysis results showed that orthographic knowledge contributed to written composition both directly and indirectly through word spelling, while handwriting fluency contributed to written composition directly and indirectly through both word reading and spelling. Stroke sequence knowledge only contributed to written composition indirectly through word spelling. Conclusions: Preliminary hierarchical regression results were consistent with previous findings about the significant role of transcription skills in Chinese word reading, spelling and written composition development. The fact that orthographic knowledge contributed both directly and indirectly to written composition through word reading and spelling may reflect the impact of the script-sound-meaning convergence of Chinese characters on the composing process. The significant contribution of word spelling and handwriting fluency to Chinese written composition across elementary grades highlighted the difficulty in attaining automaticity of transcription skills in Chinese, which limits the working memory resources available for other composing processes.
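The hierarchical-regression step described above can be sketched as follows. This is an illustrative reconstruction on synthetic data (all variable values and effect sizes are hypothetical, not the study's), showing how a predictor's unique contribution is read off as the increment in R² after the control block is entered:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 249  # sample size matching the study; the data themselves are synthetic

# Hypothetical variables standing in for the study's measures
age = rng.normal(9, 2, n)
iq = rng.normal(100, 15, n)
word_reading = 0.3 * iq + rng.normal(0, 10, n)
handwriting_fluency = rng.normal(50, 10, n)
composition = (0.2 * age + 0.1 * iq + 0.3 * word_reading
               + 0.5 * handwriting_fluency + rng.normal(0, 5, n))

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept column."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Step 1: control variables only (age, IQ proxy, word reading)
r2_controls = r_squared(np.column_stack([age, iq, word_reading]), composition)
# Step 2: add the transcription-skill predictor
r2_full = r_squared(np.column_stack([age, iq, word_reading, handwriting_fluency]),
                    composition)
delta_r2 = r2_full - r2_controls  # unique variance explained by the added predictor
```

A non-trivial `delta_r2` after the controls is what "unique predictor" means in the abstract; the mediation question is then whether that increment shrinks once word-level skills are in the model.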

Keywords: orthographic knowledge, transcription skills, word reading, writing

Procedia PDF Downloads 397
162 A Scoping Review of the Relationship Between Oral Health and Wellbeing: The Myth and Reality

Authors: Heba Salama, Barry Gibson, Jennifer Burr

Abstract:

Introduction: It is often argued that better oral health leads to better wellbeing, and that the goal of dental care is to improve wellbeing. Notwithstanding, to the best of our knowledge, there is a lack of evidence to support the relationship between oral health and wellbeing. Aim: The scoping review aims to examine current definitions of health and wellbeing as well as map the evidence to examine the relationship between oral health and wellbeing. Methods: The scoping review followed the Preferred Reporting Items for Systematic Reviews Extension for Scoping Reviews (PRISMA-ScR). A two-phase search strategy was followed because of the unmanageable number of hits returned. The first phase was to identify how wellbeing was conceptualised in the oral health literature, and the second phase was to search for the extracted keywords. The extracted keywords were searched in four databases: PubMed, CINAHL, PsycINFO, and Web of Science. To limit the number of studies to a manageable amount, the search was limited to open-access studies published in the last five years (from 2018 to 2022). Results: Only eight studies (0.1%) of the 5455 results met the review inclusion criteria. Most of the included studies defined wellbeing based on the hedonic theory, and the Satisfaction with Life Scale was the most commonly used measure. Although the research results are inconsistent, it has generally been shown that there is a weak or no association between oral health and wellbeing. Interpretation: The review revealed a very important point: the oral health literature uses loose definitions that have significant implications for empirical research and result in misleading evidence-based conclusions. According to the review results, improving oral health is not a key factor in improving wellbeing. It appears that investing in oral health care as a means of improving wellbeing is not a priority to recommend to policymakers. 
This does not imply that there should be no investment in oral health care to improve oral health, which could have an indirect link to wellbeing by eliminating the potential oral health-related barriers to quality of life that could represent the foundation of wellbeing. Limitation: Only the most recent five years (2018–2022), peer-reviewed English-language literature, and four electronic databases were included in the search. These restrictions were put in place to keep the volume of literature at a manageable level, which means that some significant studies might have been omitted. Furthermore, the study used a definition of wellbeing that is still evolving and that not everyone may agree with. Conclusion: Whilst it is a ubiquitous argument that oral health is related to wellbeing, and this seems logical, there is little empirical evidence to support this claim. This question, therefore, requires much more detailed consideration. Funding: This project was funded by the Ministry of Higher Education and Scientific Research in Libya and Tripoli University.

Keywords: oral health, wellbeing, satisfaction, emotion, quality of life, oral health related quality of life

Procedia PDF Downloads 88
161 A Comparison and Discussion of Modern Anaesthetic Techniques in Elective Lower Limb Arthroplasties

Authors: P. T. Collett, M. Kershaw

Abstract:

Introduction: The discussion regarding which method of anesthesia provides better results for lower limb arthroplasty is a continuing debate. Multiple meta-analyses have been performed with no clear consensus. The current recommendation is to use neuraxial anesthesia for lower limb arthroplasty; however, the evidence to support this decision is weak. The Enhanced Recovery After Surgery (ERAS) society has recommended that either technique can be used as part of a multimodal anesthetic regimen. A local study was performed to see if the current anesthetic practice correlates with the current recommendations and to evaluate the efficacy of the different techniques utilized. Method: 90 patients who underwent total hip or total knee replacements at Nevill Hall Hospital between February 2019 and July 2019 were reviewed. Data collected included the anesthetic technique, day one opiate use, pain score, and length of stay. The data were collected from anesthetic charts and the pain team follow-up forms. Analysis: The average age of patients undergoing lower limb arthroplasty was 70. Of those, 83% (n=75) received a spinal anaesthetic and 17% (n=15) received a general anaesthetic. For patients undergoing knee replacement under general anesthetic, the average day one pain score was 2.29, and 1.94 if a spinal anesthetic was performed. For hip replacements, the scores were 1.87 and 1.8, respectively. There was no statistically significant difference between these scores. Day 1 opiate usage was significantly higher in knee replacement patients who were given a general anesthetic (45.7mg IV morphine equivalent) vs. those who were operated on under spinal anesthetic (19.7mg). This difference was not noticeable in hip replacement patients. There was no significant difference in length of stay between the two anesthetic techniques. Discussion: There was no significant difference in the day one pain score between the patients who received a general or spinal anesthetic for either knee or hip replacements. 
The higher pain scores in the knee replacement group overall are consistent with this being a more painful procedure. This is a small patient population, which means any difference between the two groups is unlikely to be representative of a larger population. The pain scale has 4 points, which makes it difficult to identify a significant difference between pain scores. Conclusion: There is currently little standardization between the different anesthetic approaches utilized in Nevill Hall Hospital. This is likely due to the lack of adherence to a standardized anesthetic regimen, which ERAS recommendations identify as a core component. The results of this study and the guidance from the ERAS society will support the implementation of a new health-board-wide ERAS protocol.

Keywords: anaesthesia, orthopaedics, intensive care, patient centered decision making, treatment escalation

Procedia PDF Downloads 106
160 Preservation and Promotion of Lao Traditional Food as Luangprabang Province Unique Culture and Tradition in Accordance With One District One Product Policy

Authors: Lamphong Volady

Abstract:

The primary purpose of this study was to explore the traditional cuisine (local food) of Luangprabang Province in line with the Lao PDR’s One District One Product Policy. Another purpose of the study was to examine the channels used to present local food, the reasons to preserve and promote local food, as well as local food preservation and promotion strategies. It also aimed at testing correlation hypotheses on whether there is a statistically significant relationship between enjoyment of having local food and willingness to promote local cuisines becoming international cuisines, attractiveness to consume local food, preservation and promotion of local food problems, and local people’s occupations. A convergent parallel mixed-methods design was employed in this study. The results of the study showed that several local cuisines were found to be local food of Luangprabang Province, namely Jeow Bon (Chilli dipping sauce), Or Lam or aw lahm (stew buffalo skin, herbs, Mai sakaan), Kai Pan (River Weed Dry), Tam Mak Houng Luangprabang (Papaya Salad), Nang (Yam Buffalo Skin Dry), Sai Oor (Sausage), Laap Sin Koay Sai Mar-Keua Pao (Beef Salad with Roasted Eggplants), Orm Born (Taro Leaves Stew), Oor Nor Mai (Bamboo Shoot Sausage), Jeow Nam Poo (Pickled Crab Chillies), Mok Dok Kae (steaming or roasting a Dok Kae Wrap), Nor Sa Wan, Kao Noom Kee Noo, and Kao Noom Ba Bin. It also showed that YouTube, Facebook, and TikTok were the social media channels or platforms used to introduce traditional food, alongside television, smartphones, word of mouth, Lao food fairs and other provincial events. The study also found that local food should be preserved and promoted since traditional food is not only an ancestral, ancient, and local cuisine, but also a form of wisdom and a unique national cuisine. The study also found that people feel attracted to consuming local food because local food is delicious, unique, clean, nutritious, non-contaminated and natural. 
The study showed that a lack of funds to produce local food, inadequate raw materials, a lack of materials to store products, insufficient places to produce, and a lack of engagement from related organizations were found to be problems for preserving and promoting traditional food. Finally, the result of the study revealed that there is a statistically significant weak relationship between enjoyment of having local food and willingness to promote local cuisines becoming international cuisines (R² = 4.5%, p-value < 0.001). There is a statistically significant moderate relationship between enjoyment of having local food and attractiveness to consume local food (R² = 7.8%, p-value < 0.001). However, there is a statistically insignificant relationship between enjoyment of having local food and preservation and promotion of local food problems (R² = 1.8%, p-value = 0.086). It was also found that there is a statistically insignificant relationship between enjoyment of having local food and local people’s occupations (R² = 0.0%, p-value = 0.929).
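The correlation-based hypothesis tests reported above can be illustrated with a minimal sketch on synthetic data (the variable names and the strength of the simulated relationship are assumptions, not the study's data), showing how a Pearson r translates into the shared-variance percentages (R²) quoted in the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 5-point Likert responses standing in for the two survey items
enjoyment = rng.integers(1, 6, 200).astype(float)
# Construct a weakly related second item by adding large independent noise
willingness = enjoyment + rng.normal(0.0, 6.0, 200)

r = np.corrcoef(enjoyment, willingness)[0, 1]  # Pearson correlation coefficient
shared_variance_pct = 100.0 * r ** 2           # reported in the text as, e.g., "R² = 4.5%"
```

With noise this large the simulated correlation is weak, mirroring the pattern the abstract reports: a small but positive r whose square, expressed as a percentage, is the share of variance the two items have in common.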

Keywords: local food, preservation, promotion, traditional food, cuisines

Procedia PDF Downloads 48
159 Gender Gap in Returns to Social Entrepreneurship

Authors: Saul Estrin, Ute Stephan, Suncica Vujic

Abstract:

Background and research question: Gender differences in pay are present at all organisational levels, including at the very top. One possible way for women to circumvent organizational norms and discrimination is to engage in entrepreneurship because, as CEOs of their own organizations, entrepreneurs largely determine their own pay. While commercial entrepreneurship plays an important role in job creation and economic growth, social entrepreneurship has come to prominence because of its promise of addressing societal challenges such as poverty, social exclusion, or environmental degradation through market-based rather than state-sponsored activities. This opens the research question whether social entrepreneurship might be a form of entrepreneurship in which the pay of men and women is the same, or at least more similar; that is to say there is little or no gender pay gap. If the gender gap in pay persists also at the top of social enterprises, what are the factors, which might explain these differences? Methodology: The Oaxaca-Blinder Decomposition (OBD) is the standard approach of decomposing the gender pay gap based on the linear regression model. The OBD divides the gender pay gap into the ‘explained’ part due to differences in labour market characteristics (education, work experience, tenure, etc.), and the ‘unexplained’ part due to differences in the returns to those characteristics. The latter part is often interpreted as ‘discrimination’. There are two issues with this approach. (i) In many countries there is a notable convergence in labour market characteristics across genders; hence the OBD method is no longer revealing, since the largest portion of the gap remains ‘unexplained’. (ii) Adding covariates to a base model sequentially either to test a particular coefficient’s ‘robustness’ or to account for the ‘effects’ on this coefficient of adding covariates might be problematic, due to sequence-sensitivity when added covariates are correlated. 
Gelbach’s decomposition (GD) addresses the latter by using the omitted variables bias formula, which constructs a conditional decomposition, thus accounting for sequence-sensitivity when added covariates are correlated. We use GD to decompose the differences in gaps of pay (annual and hourly salary), size of the organisation (revenues), effort (weekly hours of work), and sources of finances (fees and sales, grants and donations, microfinance and loans, and investors’ capital) between men and women leading social enterprises. Database: Our empirical work is made possible by our collection of a unique dataset using respondent-driven sampling (RDS) methods to address the problem that there is as yet no information on the underlying population of social entrepreneurs. The countries that we focus on are the United Kingdom, Spain, Romania and Hungary. Findings and recommendations: We confirm the existence of a gender pay gap between men and women leading social enterprises. This gap can be explained by differences in the accumulation of human capital, psychological and social factors, as well as cross-country differences. The results of this study contribute to a more rounded perspective, highlighting that although social entrepreneurship may be a highly satisfying occupation, it also perpetuates gender pay inequalities.
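For reference, the two-part Oaxaca-Blinder decomposition described above can be written in its standard textbook form (generic male/female group subscripts; supplied here for clarity, not quoted from the paper):

```latex
\underbrace{\bar{y}_m - \bar{y}_f}_{\text{total pay gap}}
= \underbrace{(\bar{X}_m - \bar{X}_f)'\hat{\beta}_m}_{\text{explained: characteristics}}
+ \underbrace{\bar{X}_f'\,(\hat{\beta}_m - \hat{\beta}_f)}_{\text{unexplained: returns}}
```

Here $\bar{X}_g$ are group means of the labour market characteristics and $\hat{\beta}_g$ the group-specific OLS coefficients; the second term is the part often read as "discrimination", and it is the part whose covariate-order sensitivity Gelbach's conditional decomposition is designed to remove.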

Keywords: Gelbach’s decomposition, gender gap, returns to social entrepreneurship, values and preferences

Procedia PDF Downloads 221
158 Conflict Resolution in Fuzzy Rule Base Systems Using Temporal Modalities Inference

Authors: Nasser S. Shebka

Abstract:

Fuzzy logic is used in complex adaptive systems where classical tools of representing knowledge are unproductive. Nevertheless, the incorporation of fuzzy logic, as is the case with all artificial intelligence tools, has raised inconsistencies and limitations in dealing with systems and rules of increased complexity that apply to real-life situations; this hinders the inference process of such systems and also produces inconsistencies between the inferences generated by the fuzzy rules of complex or imprecise knowledge-based systems. The use of fuzzy logic enhanced the capability of knowledge representation in applications that require fuzzy representation of truth values or similar multi-valued constant parameters derived from multi-valued logic. This set the basis for the three basic t-norms and their derived connectives, which are continuous functions; any other continuous t-norm can be described as an ordinal sum of these three basic ones. Some of the attempts to solve this dilemma altered fuzzy logic by means of non-monotonic logic, which is used to deal with the defeasible inference of expert systems reasoning, for example, to allow for inference retraction upon additional data. However, even the introduction of non-monotonic fuzzy reasoning faces a major issue of conflict resolution, for which many principles were introduced, such as the specificity principle and the weakest link principle. The aim of our work is to improve the logical representation and functional modelling of AI systems by presenting a method of resolving existing and potential rule conflicts by representing temporal modalities within defeasible inference rule-based systems. 
Our paper investigates the possibility of resolving fuzzy rule conflicts in a non-monotonic fuzzy reasoning-based system by introducing temporal modalities and Kripke's general weak modal logic operators in order to expand its knowledge representation capabilities through flexibility in classifying newly generated rules and, hence, resolving potential conflicts between these fuzzy rules. We addressed this problem by restructuring the inference process of the fuzzy rule-based system. This is achieved by using time-branching temporal logic in combination with restricted first-order logic quantifiers, as well as propositional logic, to represent classical temporal modality operators. The resulting findings not only enhance the flexibility of the inference process of complex rule-based systems but also contribute to the fundamental methods of building rule bases in a manner that allows for a wider range of applicable real-life situations from a quantitative and qualitative knowledge representation perspective.

Keywords: fuzzy rule-based systems, fuzzy tense inference, intelligent systems, temporal modalities

Procedia PDF Downloads 66
157 Gradient Length Anomaly Analysis for Landslide Vulnerability Analysis of Upper Alaknanda River Basin, Uttarakhand Himalayas, India

Authors: Hasmithaa Neha, Atul Kumar Patidar, Girish Ch Kothyari

Abstract:

The northward convergence of the Indian plate has a dominating influence over the structural and geomorphic development of the Himalayan region. The highly deformed and complex stratigraphy in the area arises from a confluence of exogenic and endogenetic geological processes. This region frequently experiences natural hazards such as debris flows, flash floods, avalanches, landslides, and earthquakes due to its harsh and steep topography and fragile rock formations. Therefore, remote sensing-based examination and real-time monitoring of tectonically sensitive regions may provide crucial early warnings and invaluable data for effective hazard mitigation strategies. In order to identify unusual changes in the river gradients, the current study demonstrates a spatial quantitative geomorphic analysis of the upper Alaknanda River basin, Uttarakhand Himalaya, India, using gradient length anomaly analysis (GLAA). This basin is highly vulnerable to ground creeping and landslides due to the presence of active faults/thrusts, toe-cutting of slopes for road widening, development of heavy engineering projects on the highly sheared bedrock, and periodic earthquakes. The intersecting joint sets developed in the bedrock have formed wedges that have facilitated the recurrence of several landslides. The main objective of the current research is to identify abnormal gradient lengths, indicating potential landslide-prone zones. High-resolution digital elevation data and geospatial techniques are used to perform this analysis. The results of the GLAA are corroborated with historical landslide events and ultimately used for the generation of landslide susceptibility maps of the study area. The preliminary results indicate that approximately 3.97% of the basin is stable, while about 8.54% is classified as moderately stable and suitable for human habitation. 
However, roughly 19.89% fall within the zone of moderate vulnerability, 38.06% are classified as vulnerable, and 29% fall within the highly vulnerable zones, posing risks for geohazards, including landslides, glacial avalanches, and earthquakes. This research provides valuable insights into the spatial distribution of landslide-prone areas. It offers a basis for implementing proactive measures for landslide risk reduction, including land-use planning, early warning systems, and infrastructure development techniques.

Keywords: landslide vulnerability, geohazard, GLA, upper Alaknanda Basin, Uttarakhand Himalaya

Procedia PDF Downloads 44
156 Diagnostic Delays and Treatment Dilemmas: A Case of Drug-Resistant HIV and Tuberculosis

Authors: Christi Jackson, Chuka Onaga

Abstract:

Introduction: We report a case of delayed diagnosis of extra-pulmonary INH-mono-resistant Tuberculosis (TB) in a South African patient with drug-resistant HIV. Case Presentation: A 36-year-old male was initiated on 1st line (NNRTI-based) anti-retroviral therapy (ART) in September 2009 and switched to 2nd line (PI-based) ART in 2011, according to local guidelines. He was following up at the outpatient wellness unit of a public hospital, where he was diagnosed with Protease Inhibitor resistant HIV in March 2016. He had an HIV viral load (HIVVL) of 737000 copies/mL, a CD4 count of 10 cells/µL, and presented with complaints of productive cough, weight loss, chronic diarrhoea and a septic buttock wound. Several investigations were done on sputum, stool and pus samples, but all were negative for TB. The patient was treated with antibiotics, and the cough and the buttock wound improved. He was subsequently started on a 3rd-line ART regimen of Darunavir, Ritonavir, Etravirine, Raltegravir, Tenofovir and Emtricitabine in May 2016. He continued losing weight, became too weak to stand unsupported and started complaining of abdominal pain. Further investigations were done in September 2016, including a urine specimen for Line Probe Assay (LPA), which showed M. tuberculosis sensitive to Rifampicin but resistant to INH. A lymph node biopsy also showed histological confirmation of TB. Management and outcome: He was started on Rifabutin, Pyrazinamide and Ethambutol in September 2016, and Etravirine was discontinued. After 6 months on ART and 2 months on TB treatment, his HIVVL had dropped to 286 copies/mL, his CD4 count improved to 179 cells/µL, and he showed clinical improvement. Pharmacy supply of his individualised drugs was unreliable and presented some challenges to continuity of treatment. He successfully completed his treatment in June 2017 while still maintaining virological suppression. 
Discussion: Several laboratory-related factors delayed the diagnosis of TB, including the unavailability of urine-lipoarabinomannan (LAM) and urine-GeneXpert (GXP) tests at this facility. Once the diagnosis was made, it presented a treatment dilemma due to the expected drug-drug interactions between his 3rd-line ART regimen and his INH-resistant TB regimen, and specialist input was required. Conclusion: TB is more difficult to diagnose in patients with severe immunosuppression, therefore additional tests like urine-LAM and urine-GXP can be helpful in expediting the diagnosis in these cases. Patients with non-standard drug regimens should always be discussed with a specialist in order to avoid potentially harmful drug-drug interactions.

Keywords: drug-resistance, HIV, line probe assay, tuberculosis

Procedia PDF Downloads 140
155 The Sustained Utility of Japan's Human Security Policy

Authors: Maria Thaemar Tana

Abstract:

The paper examines the policy and practice of Japan’s human security. Specifically, it asks the question: How does Japan’s shift towards a more proactive defence posture affect the place of human security in its foreign policy agenda? Corollary to this, how is Japan sustaining its human security policy? The objective of this research is to understand how Japan, chiefly through the Ministry of Foreign Affairs (MOFA) and JICA (Japan International Cooperation Agency), sustains the concept of human security as a policy framework. In addition, the paper also aims to show how and why Japan continues to include the concept in its overall foreign policy agenda. In light of the recent developments in Japan’s security policy, which essentially result from the changing security environment, human security appears to be gradually losing relevance. The paper, however, argues that despite the strategic challenges Japan faced and is facing, as well as the apparent decline of its economic diplomacy, human security remains to be an area of critical importance for Japanese foreign policy. In fact, as Japan becomes more proactive in its international affairs, the strategic value of human security also increases. Human security was initially envisioned to help Japan compensate for its weaknesses in the areas of traditional security, but as Japan moves closer to a more activist foreign policy, the soft policy of human security complements its hard security policies. Using the framework of neoclassical realism (NCR), the paper recognizes that policy-making is essentially a convergence of incentives and constraints at the international and domestic levels. The theory posits that there is no perfect 'transmission belt' linking material power on the one hand, and actual foreign policy on the other. 
State behavior is influenced by both international- and domestic-level variables, but while systemic pressures and incentives determine the general direction of foreign policy, they are not strong enough to affect the exact details of state conduct. Internal factors such as leaders’ perceptions, domestic institutions, and domestic norms serve as intervening variables between the international system and foreign policy. Thus, applied to this study, Japan’s sustained utilization of human security as a foreign policy instrument (the dependent variable) is essentially a result, indirectly, of systemic pressures (independent variables) and, directly, of domestic processes (intervening variables). Two cases of Japan’s human security practice in two regions are examined in two time periods: Iraq in the Middle East (2001-2010) and South Sudan in Africa (2011-2017). The cases show that despite the different motives behind Japan’s decision to participate in these international peacekeeping and peace-building operations, human security continues to be incorporated in both rhetoric and practice, thus demonstrating that it was and remains an important diplomatic tool. Different variables at the international and domestic levels will be examined to understand how the interaction among them results in changes and continuities in Japan’s human security policy.

Keywords: human security, foreign policy, neoclassical realism, peace-building

Procedia PDF Downloads 111
154 Photoswitchable and Polar-Dependent Fluorescence of Diarylethenes

Authors: Sofia Lazareva, Artem Smolentsev

Abstract:

Fluorescent photochromic materials attract strong interest due to their possible applications in organic photonics, such as optical logic systems, optical memory, and visualizing sensors, as well as the characterization of polymers and biological systems. In photochromic fluorescence switching systems, the emission of the fluorophore is modulated between ‘on’ and ‘off’ via the photoisomerization of photochromic moieties, resulting in effective resonance energy transfer (FRET). In the current work, we have studied both the photochromic and fluorescent properties of several diarylethenes. It was found that the coloured forms of these compounds are not fluorescent because of the efficient intramolecular energy transfer. Spectral and photochromic parameters of the investigated substances were measured in five solvents of different polarity. Quantum yields of the photochromic transformation A↔B (ΦA→B and ΦB→A) as well as B isomer extinction coefficients were determined by a kinetic method. It was found that the photocyclization reaction quantum yield of all compounds decreases with increasing solvent polarity. In addition, the solvent polarity was revealed to affect fluorescence significantly. An increase in the solvent dielectric constant was found to result in a strong shift of the emission band position from 450 nm (n-hexane) to 550 nm (DMSO and ethanol) for all three compounds. Moreover, the emission, intense in polar solvents, becomes weak and hardly detectable in n-hexane. The only exception to this trend is the abnormally low fluorescence quantum yield in ethanol, presumably caused by the loss of electron-donating properties of the nitrogen atom due to protonation. An effect of protonation was also confirmed by the addition of concentrated HCl to the solution, resulting in the complete disappearance of the fluorescent band. Excited state dynamics were investigated by ultrafast optical spectroscopy methods. 
Kinetic curves of excited-state absorption and fluorescence decay were measured, and the lifetimes of the transient states were calculated from the measured data. The mechanism of the ring-opening reaction was found to be polarity-dependent. A comparative analysis of the kinetics measured in acetonitrile and hexane reveals differences in the relaxation dynamics after the laser pulse; most importantly, two decay processes are present in acetonitrile, whereas only one is present in hexane. This supports the assumption, made on the basis of preliminary steady-state experiments, that a TICT (twisted intramolecular charge transfer) state is stabilized in polar solvents. Thus, the results support the hypothesis of a two-channel mechanism of energy relaxation in the compounds studied.

Keywords: diarylethenes, fluorescence switching, FRET, photochromism, TICT state

Procedia PDF Downloads 651
153 Seismotectonics and Seismology of Northern Algeria

Authors: Djeddi Mabrouk

Abstract:

The slow convergence between the African and Eurasian plates appears to be the main cause of the active deformation across North Africa, which is expressed in Algeria as a broad deformation zone bounded to the south by the Saharan Atlas and to the north by the Tell Atlas. The Maghrebin and Atlasic chains along North Africa are a consequence of this convergence. In the junction zone, we observe a compressive NW-SE regime with fold-fault structures and overthrusts. From a geological point of view, the northern part of Algeria is younger than the Saharan platform; it is unstable and constantly in motion, characterized by overturned folds, overthrusts and reverse faults, and it perpetually undergoes complex vertical and horizontal movements. At the structural level, northern Algeria belongs to the peri-Mediterranean Alpine orogen, essentially of Tertiary age; it extends from east to west across Algeria over 1200 km, in a band roughly 100 km wide. The Alpine chain is shaped by three domains: the Tell Atlas in the north, the High Plateaus in the middle, and the Saharan Atlas in the south. In the extreme south lies the Saharan platform, made of Precambrian basement covered by practically undeformed Paleozoic rocks. Northern Algeria and the Saharan platform are separated by a major accident running some 2000 km from Agadir (Morocco) to Gabès (Tunisia). Seismic activity is localized essentially in a coastal band of northern Algeria formed by the Tell Atlas, the High Plateaus and the Saharan Atlas. Earthquakes are confined to the first 20 km of the Earth's crust; they are caused by movements along NE-SW-oriented reverse faults or by the sliding of tectonic plates. 
The central region is characterized by strong earthquake activity, located mainly in the Mitidja basin (Neogene in age); its southern periphery (the Blidean Atlas) constitutes the most important seismogenic source for the city of Algiers and, to the east, the Boumerdes region. The northeastern region is also part of the Tellian domain, but it is characterized by a deformation style different from other parts of northern Algeria: the deformation is slow, and seismic activity is low to moderate, related to strike-slip tectonics. The most pronounced event was that of 27 October 1985 (Constantine), of moment magnitude Mw = 5.9. The northwestern region is also quite active, with shallow hypocenters that do not exceed 20 km. Seismicity is concentrated mainly in a narrow strip along the edges of the Quaternary and Neogene intramontane basins along the coast. The most violent earthquakes in this region are the 1790 Oran earthquake and the Orléansville (El Asnam) earthquakes of 1954 and 1980.

Keywords: alpine chain, seismicity north Algeria, earthquakes in Algeria, geophysics, Earth

Procedia PDF Downloads 381
152 The Effects of Stokes' Drag, Electrostatic Force and Charge on Penetration of Nanoparticles through N95 Respirators

Authors: Jacob Schwartz, Maxim Durach, Aniruddha Mitra, Abbas Rashidi, Glen Sage, Atin Adhikari

Abstract:

NIOSH (National Institute for Occupational Safety and Health) approved N95 respirators are commonly used by workers at construction sites, where large amounts of dust, both electrostatically charged and not, are produced by sawing, grinding, blasting, welding, etc. A significant portion of airborne particles at construction sites can be nanoparticles generated alongside coarse particles. The penetration of particles through the masks may differ depending on the size and charge of the individual particle. In field experiments relevant to this study, we found that nanoparticles of medium size ranges penetrate more frequently than smaller and larger nanoparticles. For example, penetration percentages of 11.5-27.4 nm nanoparticles into a sealed N95 respirator on a manikin head ranged from 0.59 to 6.59%, whereas those of 36.5-86.6 nm nanoparticles ranged from 7.34 to 16.04%. The possible causes of this increased penetration of mid-size nanoparticles through mask filters have not yet been explored. The objective of this study is to identify the causes of this unusual behavior. We considered physical factors such as the Boltzmann distribution of the particles in thermal equilibrium with the air, the kinetic energy of the particles at impact on the mask, the Stokes drag force, and the electrostatic forces in the mask stopping the particles. When the particles collide with the mask, only those with enough kinetic energy to overcome the energy loss due to the electrostatic forces and the Stokes drag in the mask can pass through. 
To understand this process, the following assumptions were made: (1) the effect of Stokes drag depends on the particle's velocity at entry into the mask; (2) the electrostatic force is proportional to the charge on the particle, which in turn is proportional to the particle's surface area; (3) stronger electrostatic resistance in the mask and thicker fiber layers both reduce particle penetration, which is a sensible conclusion. In sampling situations where a mask was soaked in alcohol, eliminating the electrostatic interaction, penetration in the mid-size range was much larger than for the same mask with the electrostatic interaction intact. The smaller nanoparticles showed almost zero penetration, most likely because of their small kinetic energy, while the larger nanoparticles showed almost negligible penetration, most likely due to the drag acting on them. Without the electrostatic force, the penetrating fraction of larger particles grows; when the electrostatic force is added, that fraction goes down, so the diminished penetration of larger particles should be due to increased electrostatic repulsion, possibly because their larger surface area carries a larger charge on average. We also explored the effect of ambient temperature on nanoparticle penetration and determined that the dependence on temperature is weak over the measured range of 37-42°C, since the relevant factor changes only from 3.17×10⁻³ K⁻¹ to 3.22×10⁻³ K⁻¹.
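The competition described above, thermal kinetic energy versus Stokes drag, can be made concrete with a back-of-the-envelope sketch. This is our illustration, not the authors' model: the particle density, air viscosity, and temperature below are assumed values, and at these sizes a rigorous calculation would also need the Cunningham slip correction, which is omitted here.

```python
# Order-of-magnitude sketch: RMS thermal speed of an airborne nanoparticle
# (Boltzmann equilibrium, (1/2) m v^2 = (3/2) k_B T) and the Stokes drag
# force F = 6*pi*eta*r*v it feels at that speed.
# Assumed values: unit-density spheres, air at ~37-42 C, eta ~ 1.8e-5 Pa*s.
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K
AIR_VISCOSITY = 1.8e-5      # dynamic viscosity of air, Pa*s (approximate)

def thermal_speed(radius_m, density=1000.0, temp_k=310.0):
    """RMS speed of a spherical particle in thermal equilibrium."""
    mass = (4.0 / 3.0) * math.pi * radius_m ** 3 * density
    return math.sqrt(3.0 * K_B * temp_k / mass)

def stokes_drag(radius_m, speed, viscosity=AIR_VISCOSITY):
    """Stokes drag force on a sphere moving at the given speed."""
    return 6.0 * math.pi * viscosity * radius_m * speed

for r_nm in (10, 50):
    r = r_nm * 1e-9
    v = thermal_speed(r)
    print(f"r = {r_nm} nm: v_thermal ~ {v:.2f} m/s, "
          f"Stokes drag at that speed ~ {stokes_drag(r, v):.2e} N")
```

The numbers illustrate the abstract's assumptions: the kinetic energy at thermal equilibrium is size-independent, the thermal speed falls as r^(-3/2), and the drag force at that speed therefore scales as r^(-1/2), i.e., smaller particles move faster but larger ones feel less drag at their (slower) thermal speed.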

Keywords: respiratory protection, industrial hygiene, aerosol, electrostatic force

Procedia PDF Downloads 175
151 An Assessment of Involuntary Migration in India: Understanding Issues and Challenges

Authors: Rajni Singh, Rakesh Mishra, Mukunda Upadhyay

Abstract:

India is among the nations born out of a partition that led to one of the greatest forced migrations of the past century. The Indian subcontinent was partitioned into two nation-states, India and Pakistan, leading to an unparalleled mass displacement of about 20 million people in the subcontinent as a whole. This exemplifies the socio-political face of displacement, but there are other identified drivers of human displacement, viz. natural calamities, development projects, and people-trafficking and smuggling. Although forced migrations are rare in incidence, they are mostly region-specific, and a very small percentage of the population appears to be affected by them. However, when this percentage is translated into absolute numbers, the real impact of such migration can be appreciated. Forced migration is thus an issue affecting the lives of many people and needs to be addressed with proper interventions. Forced or involuntary migration decimates people's assets, deprives them of their most basic resources, and makes them migrate without planning or intention, which in most cases proves to be a burden on the resources of the destination. Questions therefore arise about the protection and safeguards owed to these migrants, who need help at the place of destination; this brings the human security dimension of forced migration into the picture. The present study analyses a sample of 1,501 persons surveyed by the National Sample Survey Organisation (NSSO) in India, which identifies three reasons for forced migration: natural disaster, social/political problems, and displacement by development projects. It was observed that, of the total forced migrants, about four-fifths were internally displaced persons. 
However, there was also a large inflow of such migrants from across the borders, the major contributing countries being Bangladesh, Pakistan, Sri Lanka, the Gulf countries, and Nepal. Among the three reasons for involuntary migration, social and political problems are the most prominent in displacing large masses of people; they are also the reason for which the share of international migrants relative to the internally displaced is higher than for the other two factors. Second to political and social problems, natural calamities displaced a large share of the involuntary migrants. The present paper examines the factors that increase people's vulnerability to forced migration. A review of the migrants' background characteristics shows that the economically weak and socially fragile are more susceptible to forced migration. Insight into this fragile group of society is therefore required so that government policies can reach them in the most efficient and targeted manner.

Keywords: involuntary migration, displacement, natural disaster, social and political problem

Procedia PDF Downloads 332
150 Mycotoxin Bioavailability in Sparus Aurata Muscle After Human Digestion and Intestinal Transport (Caco-2/HT-29 Cells) Simulation

Authors: Cheila Pereira, Sara C. Cunha, Miguel A. Faria, José O. Fernandes

Abstract:

The increasing world population brings several concerns, one of which is food security and sustainability. To meet this challenge, aquaculture, the farming of aquatic animals and plants, including fish, mollusks, bivalves, and algae, has experienced sustained growth and development in recent years. Recent advances in the industry have focused on reducing its economic and environmental costs, for example by substituting the protein sources in fish feed. Plant-based proteins are now a common approach, and while they are a greener alternative to animal-based proteins, they have some disadvantages, such as their putative content of contaminants like mycotoxins. These are naturally occurring plant contaminants, and exposure in fish can cause health problems, stunted growth or even death, resulting in economic losses for producers and health concerns for consumers. Several studies have demonstrated the presence of both AFB1 (aflatoxin B1) and ENNB1 (enniatin B1) in fish feed and their capacity to be absorbed and to bioaccumulate in the fish organism after digestion, further reaching humans through fish consumption. The aim of this work was to evaluate the bioaccessibility of both mycotoxins in samples of Sparus aurata muscle using a static digestion model based on the INFOGEST protocol. The samples were subjected to different cooking procedures (raw, grilled and fried) and different seasonings (none, thyme and ginger) in order to evaluate their potential for reducing mycotoxin bioaccessibility, followed by evaluation of the intestinal transport of both compounds with an in vitro cell model composed of Caco-2/HT-29 co-culture monolayers, simulating the human intestinal epithelium. The bioaccessible fractions obtained in the digestion studies were used in the transport studies for a more realistic approach to bioavailability evaluation. 
The results demonstrated the effect of the different cooking procedures and seasonings on the toxins' bioavailability. Sparus aurata was chosen for this study because of its large production in aquaculture and high consumption in Europe. With the continued evolution of fish farming practices and the more common use of novel plant-based feed ingredients, there is growing concern about less-studied contaminants in aquaculture and their consequences for human health. In step with greener advances in the industry, there is a convergence towards alternative research methods, such as in vitro approaches. For bioavailability studies, in vitro digestion protocols combined with intestinal transport assessment are excellent alternatives to in vivo studies: they provide fast, reliable and comparable results without ethical constraints.

Keywords: AFB1, aquaculture, bioaccessibility, ENNB1, intestinal transport

Procedia PDF Downloads 36
149 Optical and Near-UV Spectroscopic Properties of Low-Redshift Jetted Quasars in the Main Sequence Context

Authors: Shimeles Terefe Mengistue, Ascensión Del Olmo, Paola Marziani, Mirjana Pović, María Angeles Martínez-Carballo, Jaime Perea, Isabel M. Árquez

Abstract:

Quasars have historically been classified into two distinct classes, radio-loud (RL) and radio-quiet (RQ), according to the presence or absence of relativistic radio jets. The absence of spectra with a high S/N ratio led to the impression that all quasars (QSOs) are spectroscopically similar. Although different attempts have been made to unify these two classes, there is a long-standing open debate on the possibility of a real physical dichotomy between RL and RQ quasars. In this work, we present new high-S/N spectra of 11 extremely powerful jetted quasars with radio-to-optical flux density ratios > 1000 that concomitantly cover the low-ionization emission of MgIIλ2800 and Hβ as well as the FeII blends, in the redshift range 0.35 < z < 1, observed at Calar Alto Observatory (Spain). This work aims to quantify broad emission line differences between RL and RQ quasars by using the four-dimensional eigenvector 1 (4DE1) parameter space and its main sequence (MS), and to check the effect of powerful radio ejection on the low-ionization broad emission lines. The emission lines are analysed with two complementary approaches: a multicomponent non-linear fit that accounts for the individual components of the broad emission lines, and an analysis of the full line profiles through parameters such as total widths, centroid velocities at different fractional intensities, and asymmetry and kurtosis indices. We find that the broad emission lines show large redward asymmetry in both Hβ and MgIIλ2800. The location of our RL sources in the UV plane looks similar to the optical one, with weak UV FeII emission and broad MgIIλ2800. We supplement the 11 sources with large samples from previous work to gain some general inferences. 
The results show that, compared to RQ quasars, our extreme RL quasars have larger median Hβ full width at half maximum (FWHM), weaker FeII emission, larger M_BH, lower L_bol/L_Edd, and a restricted space occupation in the optical and UV MS planes. The differences are more elusive when the comparison is restricted to the RQ population in the region of the MS occupied by RL quasars, albeit an unbiased comparison matching M_BH and L_bol/L_Edd suggests that the most powerful RL quasars show the highest redward asymmetries in Hβ.
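The full-profile parameters mentioned above (total widths and centroid velocities at fractional intensities) can be illustrated with a small sketch. This is our own construction, not the authors' pipeline: the synthetic Gaussian profile, the velocity grid, and the values are assumptions, and real measurements would first fit and subtract the continuum and FeII blends.

```python
# Sketch: full width and centroid of an emission-line profile at a given
# fractional intensity, via linear interpolation between velocity samples.
import math

def crossings(x, y, frac):
    """Leftmost and rightmost x where the profile crosses frac * max(y)."""
    level = frac * max(y)
    left = right = None
    for i in range(len(y) - 1):
        lo, hi = y[i] - level, y[i + 1] - level
        if lo == 0.0:
            xi = x[i]                 # sample sits exactly on the level
        elif lo * hi < 0.0:
            # linear interpolation between the two bracketing samples
            xi = x[i] + (level - y[i]) * (x[i + 1] - x[i]) / (y[i + 1] - y[i])
        else:
            continue
        if left is None:
            left = xi
        right = xi
    return left, right

def width_and_centroid(x, y, frac):
    """Full width and centroid (midpoint of crossings) at fraction frac."""
    lft, rgt = crossings(x, y, frac)
    return rgt - lft, 0.5 * (lft + rgt)

# Demo on a symmetric Gaussian profile in velocity space (km/s)
vel = [i * 10.0 - 5000.0 for i in range(1001)]            # -5000 .. +5000 km/s
flux = [math.exp(-0.5 * (v / 1500.0) ** 2) for v in vel]  # sigma = 1500 km/s
fwhm, c50 = width_and_centroid(vel, flux, 0.5)
# For a Gaussian, FWHM = 2*sqrt(2 ln 2)*sigma ~ 3532 km/s here
print(f"FWHM = {fwhm:.0f} km/s, centroid at half maximum = {c50:.1f} km/s")
```

A nonzero centroid at low fractional intensity relative to the line peak is one simple way such redward asymmetries show up in practice.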

Keywords: galaxies: active, line: profiles, quasars: emission lines, supermassive black holes

Procedia PDF Downloads 37
148 Using Scilab® as New Introductory Method in Numerical Calculations and Programming for Computational Fluid Dynamics (CFD)

Authors: Nicoly Coelho, Eduardo Vieira Vilas Boas, Paulo Orestes Formigoni

Abstract:

Faced with the remarkable developments in the various segments of modern engineering brought about by increasing technological development, professionals of all educational areas need to help overcome the difficulties faced by those who are starting their academic journey. Aiming to overcome these difficulties, this article presents an introduction to the basic study of numerical methods applied to fluid mechanics and thermodynamics, demonstrating modeling and simulation together with a detailed explanation of the fundamental numerical solution by the finite difference method, using Scilab, free software that is easily accessible to any research center or university, anywhere, in developed and developing countries alike. Computational Fluid Dynamics (CFD) is a necessary tool for engineers and professionals who study fluid mechanics; however, the teaching of this area of knowledge in undergraduate programs faces difficulties due to software costs and the degree of difficulty of the mathematical problems involved, so the subject is usually treated only in postgraduate courses. This work aims to bring low-cost CFD into the undergraduate teaching of transport phenomena by analyzing a small classic case of fundamental thermodynamics with the Scilab® program. The study starts from the basic theory, which students must master: the partial differential equation governing the heat-transfer problem; the discretization process, based on the principles of Taylor series expansion, which generates a system of equations; a convergence check using the Sassenfeld criterion; and finally the solution of the system by the Gauss-Seidel method. 
In this work we demonstrate both simple problems solved manually and more complex problems that require computer implementation, for which we use a small Scilab® algorithm of fewer than 200 lines to study heat transfer in a rectangular plate with a different fixed temperature on each of its four sides, producing a two-dimensional simulation with colored graphics. With the spread of computing, numerous programs have emerged that demand strong programming skills from researchers. Considering that the ability to program CFD is the main obstacle to be overcome, both by students and by researchers, this article suggests the use of programs with simpler interfaces, making graphical modeling and simulation for CFD easier to produce and extending programming experience to undergraduates.
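The discretize-then-iterate procedure described above can be sketched in a few lines. The sketch below is in Python rather than Scilab and is not the authors' code: the grid size, boundary temperatures, and tolerance are illustrative assumptions.

```python
# Sketch: Gauss-Seidel solution of the steady 2D heat equation (Laplace)
# on a rectangular plate whose four sides are held at fixed temperatures,
# discretized with the 5-point central finite-difference stencil.

def solve_plate(n, top, bottom, left, right, tol=1e-6, max_iter=20000):
    """Solve on an n x n interior grid with Dirichlet boundaries;
    returns (grid, iterations used)."""
    # (n+2) x (n+2) grid including the boundary rows/columns
    T = [[0.0] * (n + 2) for _ in range(n + 2)]
    for j in range(n + 2):
        T[0][j] = top
        T[n + 1][j] = bottom
    for i in range(n + 2):
        T[i][0] = left
        T[i][n + 1] = right
    for it in range(max_iter):
        max_change = 0.0
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                # each interior node relaxes to the mean of its 4 neighbors
                new = 0.25 * (T[i - 1][j] + T[i + 1][j]
                              + T[i][j - 1] + T[i][j + 1])
                max_change = max(max_change, abs(new - T[i][j]))
                T[i][j] = new  # in-place update: this is what makes it Gauss-Seidel
        if max_change < tol:
            return T, it + 1
    return T, max_iter

grid, iters = solve_plate(5, top=100.0, bottom=0.0, left=0.0, right=0.0)
print(f"converged in {iters} iterations; center temperature = {grid[3][3]:.2f}")
```

A handy sanity check: by symmetry, the center node of a square plate with one side at 100 and the other three at 0 converges to 25, since superposing the four rotated problems gives a uniform 100 everywhere.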

Keywords: numerical methods, finite difference method, heat transfer, Scilab

Procedia PDF Downloads 354
147 Structure, Conduct and Performance of the Rice Milling Industry in Sri Lanka

Authors: W. A. Nalaka Wijesooriya

Abstract:

The increasing paddy production, the stabilization of domestic rice consumption, and the increasing dynamism of rice processing and domestic markets call for a rethinking of the general direction of the rice milling industry in Sri Lanka. The main purpose of the study was to explore levels of concentration in the rice milling industry in Polonnaruwa and Hambanthota, the country's major rice milling hubs. Concentration indices reveal that the rice milling industry operates as a weak oligopsony in Polonnaruwa and is highly competitive in Hambanthota. By actual quantity of paddy milled per day, 47% of mills process less than 8 Mt/day, 34% process 8-20 Mt/day, and the rest (19%) more than 20 Mt/day. In Hambanthota, nearly 50% of the mills fall in the 8-20 Mt/day range. Lack of experience in the milling industry, poor knowledge of milling technology, lack of capital, and difficulty finding an output market are the major entry barriers to the industry. The major problems faced by all rice millers are the lack of a uniform electricity supply and low-quality paddy. Many of the millers emphasized that the rice ceiling price is a constraint on producing quality rice. More than 80% of the millers in Polonnaruwa, the major parboiled rice producing area, have mechanical dryers. Nearly 22% of millers have modern machinery such as color sorters and water-jet polishers. The major paddy purchasing method of large-scale millers in Polonnaruwa is through brokers; in Hambanthota, the major channel is millers purchasing directly from paddy farmers. Millers in both districts sell rice mainly in Colombo and its suburbs. Huge variation can be observed in the amount of pledge loans (for paddy storage). There is a strong relationship among the storage ability, credit affordability and scale of operation of rice millers. The inter-annual price fluctuation ranged from 30% to 35%. 
Analysis of market margins using a series of secondary data shows that the farmers' share of the rice consumer price is stable or slightly increasing in both districts; in Hambanthota a greater share goes to the farmer. Only four mills have obtained Good Manufacturing Practices (GMP) certification from the Sri Lanka Standards Institution, and all of them are small-quantity rice exporters. Priority should be given to small and medium scale millers in the distribution of stored paddy by the PMB during the off season. The industry needs a proper rice grading system, and it is recommended to introduce a ceiling price based on rice graded according to the standards. Both husk and rice bran are underutilized; encouraging investment in a rice oil manufacturing plant in the Polonnaruwa area is highly recommended. The current taxation procedure needs to be restructured in order to ensure the sustainability of the industry.

Keywords: conduct, performance, structure (SCP), rice millers

Procedia PDF Downloads 307
146 The Awareness of Cardiovascular Diseases among General Population in Western Regions of Saudi Arabia

Authors: Ali Saeed Alghamdi, Basel Mazen Alsolami, Basel Saeed Alghamdi, Muhanad Saleh Alzahrani Alamri, Salman Anwar Thabet, Abdulhalim J. Kinsara

Abstract:

Objectives: This study measured knowledge of cardiovascular disease among the general population in the western regions of Saudi Arabia, and it aimed to increase awareness of cardiovascular diseases by providing an awareness lecture covering the risk factors, major symptoms, and prevention of cardiovascular diseases; the lecture was attached at the end of the questionnaire. Setting: The study was conducted through an online questionnaire, covering the aim and main objectives, that targeted the general population in the western regions of Saudi Arabia (Makkah and Madinah regions). Participants: 460 participants were recruited through the online questionnaire. Methods: All Saudi citizens and residents living in the western region of Saudi Arabia, aged 18 years and above, were invited to participate voluntarily. A pre-structured questionnaire was designed to collect data on age, gender, marital status, education level, occupation, lifestyle habits, and history of heart disease, with sections on cardiac symptoms and risk factors. Results: The majority of respondents were female (74.8%) and Saudi. Knowledge of cardiovascular disease risk factors was weak: only 18.5% achieved an excellent score on risk-factor awareness. Lack of exercise, stress, and obesity were the best-known risk factors. Among cardiovascular disease symptoms, chest pain was the most recognized (87.6%), ahead of symptoms such as dyspnea, syncope, and excessive sweating; overall, participants likewise showed poor awareness of cardiovascular disease symptoms (0.9%). However, preventive factors for cardiovascular disease were better known than the other categories in this study (60% fell into the excellent-knowledge range): smoking cessation, normal cholesterol level, and normal blood pressure were the most recognized preventive measures (92.2%, 88.6%, and 78.7%, respectively). 
Of the participants, 83.7% attended the awareness lecture, and 99 of the attendees reported that the lecture increased their knowledge of cardiovascular disease. Conclusion: This study assessed the level of community awareness of cardiovascular disease in terms of symptoms, risk factors, and protective factors. We found a large gap in participants' knowledge of the disease and how to prevent it. Moreover, we measured the prevalence of comorbidities among our participants (diabetes, hypertension, hypercholesterolemia/hypertriglyceridemia) and their adherence to medication. In conclusion, this study not only assesses awareness of cardiovascular disease risk factors, symptoms, and management, and the association between each domain, but also provides educational material. Further educational materials and campaigns are required to increase awareness and knowledge of cardiovascular diseases.

Keywords: awareness, cardiovascular diseases, education, prevention, risk factors

Procedia PDF Downloads 107
145 Deficient Multisensory Integration with Concomitant Resting-State Connectivity in Adult Attention Deficit/Hyperactivity Disorder (ADHD)

Authors: Marcel Schulze, Behrem Aslan, Silke Lux, Alexandra Philipsen

Abstract:

Objective: Patients with Attention Deficit/Hyperactivity Disorder (ADHD) often report being flooded by sensory impressions. Studies investigating sensory processing show hypersensitivity to sensory inputs across the senses in children and adults with ADHD. The auditory modality in particular is affected by deficient acoustic inhibition and modulation of signals. While studying unimodal signal processing is relevant and well suited to a controlled laboratory environment, everyday situations are multimodal: a complex interplay of the senses is necessary to form a unified percept. To achieve this, the unimodal sensory modalities are bound together in a process called multisensory integration (MI). In the current study, we investigate MI in an adult ADHD sample using the McGurk effect, a well-known illusion in which, given successful integration, incongruent speech-like phonemes lead to the perception of a new phoneme via late top-down attentional allocation. In ADHD, neuronal dysregulation at rest, e.g., aberrant within- or between-network functional connectivity, may also account for difficulties in integrating across the senses. Therefore, the current study includes resting-state functional connectivity to investigate a possible relation between deficient network connectivity and the ability to integrate stimuli. Method: Twenty-five ADHD patients (6 females; age: 30.08 (SD: 9.3) years) and twenty-four healthy controls (9 females; age: 26.88 (SD: 6.3) years) were recruited. MI was examined using the McGurk effect, and the Mann-Whitney U test was applied to assess statistical differences between groups. Echo-planar resting-state functional MRI was acquired on a 3.0 Tesla Siemens Magnetom MR scanner, and a seed-to-voxel analysis was carried out using the CONN toolbox. 
Results: Susceptibility to the McGurk effect was significantly lower in ADHD patients (ADHD Mdn: 5.83%, controls Mdn: 44.2%; U = 160.5, p = 0.022, r = -0.34). When ADHD patients did integrate phonemes, their reaction times were significantly longer (ADHD Mdn: 1260 ms, controls Mdn: 582 ms; U = 41.0, p < .001, r = -0.56). In the functional connectivity analysis, the middle temporal gyrus (seed) was negatively associated with the primary auditory cortex, inferior frontal gyrus, precentral gyrus, and fusiform gyrus. Conclusion: MI appears to be deficient in ADHD patients for stimuli that require top-down attentional allocation. This finding is supported by stronger functional connectivity from unimodal sensory areas to polymodal MI convergence zones for complex stimuli in ADHD patients.
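For reference, the Mann-Whitney U statistic used for the group comparisons above can be computed from rank sums. The minimal implementation below is our own illustration (the study presumably used a standard statistics package); it returns only the U statistic, omitting the p-value step, and the sample values are made up.

```python
# Sketch: Mann-Whitney U from rank sums, with tied values sharing
# their average rank. U = min(U_a, U_b), where
# U_a = R_a - n_a*(n_a+1)/2 and U_b = n_a*n_b - U_a.

def _average_ranks(values):
    """1-based ranks; tied values receive the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0   # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(sample_a, sample_b):
    """Return U = min(U_a, U_b) for two independent samples."""
    n_a, n_b = len(sample_a), len(sample_b)
    ranks = _average_ranks(list(sample_a) + list(sample_b))
    rank_sum_a = sum(ranks[:n_a])
    u_a = rank_sum_a - n_a * (n_a + 1) / 2.0
    return min(u_a, n_a * n_b - u_a)

# Tiny illustration with made-up susceptibility percentages per group
print(mann_whitney_u([2.0, 4.5, 5.8, 7.1], [30.0, 41.2, 44.2, 52.9]))
# prints 0.0 (complete separation: every value in the first group ranks lower)
```

Small U relative to n_a*n_b (here 0 of 16) indicates strongly separated groups, which is what drives the significant result reported in the abstract.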

Keywords: attention-deficit hyperactivity disorder, audiovisual integration, McGurk-effect, resting-state functional connectivity

Procedia PDF Downloads 105