Search results for: computer system
1132 Telepsychiatry for Asian Americans
Authors: Jami Wang, Brian Kao, Davin Agustines
Abstract:
COVID-19 highlighted the active discrimination against the Asian American population, easily seen through media, social tension, and increased crimes against this specific population. It is well known that long-term racism can also have a large impact on both emotional and psychological well-being. However, the healthcare disparity during this time also revealed how the Asian American community lacked the research data, political support, and medical infrastructure for this particular population. During a time when Asian Americans fear for their safety amid declining mental health, telepsychiatry is particularly promising. COVID-19 demonstrated how well psychiatry could integrate with telemedicine, with psychiatry accounting for the second most utilized type of telemedicine visit. However, the Asian American community did not utilize telepsychiatry resources as much as other groups. Because of this, we wanted to understand why the patient population most affected mentally by COVID-19 did not seek out care. To do this, we decided to study the top telepsychiatry platforms. The current top telepsychiatry companies in the United States include Teladoc and BetterHelp. In the Teladoc mental health sector, only 4 languages were available (English, Spanish, French, and Danish), none of them an Asian language. In a similar manner, Teladoc's top competitor in the telepsychiatry space, BetterHelp, listed a total of only 3 Asian languages: Mandarin, Japanese, and Malaysian. This is still a short list considering they have over 20 languages available. The shortage of available physicians who speak multiple languages is concerning, as it could be difficult for the Asian American community to relate to them. There are limited mental health resources that cater to their likely cultural needs, further exacerbating the structural racism and institutional barriers to appropriate care. It is important to note that these companies do provide interpreters to comply with federal nondiscrimination and language assistance law. However, interactions through an interpreter are not only more time-consuming but also less personal than talking directly with a physician. Psychiatry is a field that emphasizes interpersonal relationships. The trust between a physician and the patient is critical in developing rapport, which guides a better understanding of the clinical picture and appropriate treatment. The language barrier creates an additional barrier between the physician and patient. Because Asian Americans are one of the fastest growing patient population bases, these telehealth companies have much to gain by catering to the Asian American market. Without providing adequate access to bilingual and bicultural physicians, the current system will only further exacerbate the growing disparity. The healthcare community and telehealth companies need to recognize that the Asian American population is severely underserved in mental health and has much to gain from telepsychiatry. The lack of language access is one of many reasons why there is a disparity for Asian Americans in the mental health space.
Keywords: telemedicine, psychiatry, Asian American, disparity
Procedia PDF Downloads 106
1131 Development and Testing of Health Literacy Scales for Chinese Primary and Secondary School Students
Authors: Jiayue Guo, Lili You
Abstract:
Background: Children's and adolescents' health is crucial for both personal well-being and the nation's future health landscape. Health Literacy (HL) is important in enabling adolescents to self-manage their health, a fundamental step towards health empowerment. However, there are limited tools for assessing HL among elementary and junior high school students. This study aims to construct and validate a test-based HL scale for Chinese students, offering a scientific reference for cross-cultural HL tool development. Methods: We conducted a cross-sectional online survey. A total of 4,189 Chinese in-school primary and secondary students were recruited through stratified cluster random sampling. The development of the scale was completed by defining the concept of HL, establishing the item indicator system, screening items (7 health content dimensions), and evaluating reliability and validity. Delphi-method expert consultation was used to screen items, the Rasch model was applied for item quality analysis, and Cronbach's alpha coefficient was used to examine internal consistency. Results: We developed four versions of the HL scale, each with a total score of 100, encompassing seven key health areas: hygiene, nutrition, physical activity, mental health, disease prevention, safety awareness, and digital health literacy. Each version measures four dimensions of health competencies: knowledge, skills, motivation, and behavior. After the second round of expert consultation, the average importance score of each item was 4.5–5.0, with a coefficient of variation of 0.000–0.174. The knowledge and skills dimensions use judgment-based and multiple-choice questions, with the Rasch model confirming unidimensionality at 5.7% residual variance. The behavioral and motivational dimensions, measured with scale-type items, demonstrated internal consistency via Cronbach's alpha and strong inter-item correlation, with KMO values of 0.924 and 0.787, respectively. Bartlett's test of sphericity, with p-values <0.001, further substantiates the scale's reliability. Conclusions: The new test-based scale, designed to evaluate competencies within a multifaceted framework, aligns with current international adolescent literacy theories and China's health education policies, focusing not only on knowledge acquisition but also on the application of health-related thinking and behaviors. The scale can be used as a comprehensive tool for HL evaluation and a reference for other countries.
Keywords: adolescent health, Chinese, health literacy, Rasch model, scale development
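As a quick illustration of the internal-consistency statistic reported above, the following minimal Python sketch computes Cronbach's alpha from an item-response matrix. It runs on simulated responses rather than the study's data; the matrix shape and noise level are assumptions made purely for demonstration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated data: 500 respondents answering 10 items driven by one shared trait
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))
scores = latent + rng.normal(scale=0.8, size=(500, 10))
print(f"alpha = {cronbach_alpha(scores):.3f}")    # high alpha for consistent items
```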
Procedia PDF Downloads 30
1130 Orange Leaves and Rice Straw on Methane Emission and Milk Production in Murciano-Granadina Dairy Goat Diet
Authors: Tamara Romero, Manuel Romero-Huelva, Jose V. Segarra, Jose Castro, Carlos Fernandez
Abstract:
Many foods resulting from processing and manufacturing end up as waste, most of which is burned, dumped into landfills, or used as compost, leading to wasted resources and environmental problems due to unsuitable disposal. Using residues of the crop and food processing industries to feed livestock has the advantage of obviating the need for costly waste management programs. The main residues generated by citrus cultivation and rice cropping are pruning waste and rice straw, respectively. Within Spain, the Valencian Community is one of the world's oldest citrus and rice production areas. The objective of this experiment was to determine the effects of including orange leaves and rice straw as ingredients in the concentrate diets of goats on milk production and methane (CH₄) emissions. Ten Murciano-Granadina dairy goats (45 kg of body weight, on average) in mid-lactation were selected in a crossover design experiment, where each goat received two treatments in two periods. Both groups were fed 1.7 kg of pelleted mixed ration; one group (n = 5) was a control (C) and the other group (n = 5) received orange leaves and rice straw (OR). The forage was alfalfa hay, the same for both groups (1 kg of alfalfa was offered per goat per day). The diets were formulated to meet the requirements of caprine livestock during the lactation period. The goats were allocated to individual metabolism cages. After 14 days of adaptation, feed intake and milk yield were recorded daily over a 5-day period. Physico-chemical parameters and somatic cell count in milk samples were determined. Then, gas exchange measurements were recorded individually by an open-circuit indirect calorimetry system using a head box. The data were analyzed with a mixed model with diet and digestibility as fixed effects and goat as a random effect. No differences were found for dry matter intake (2.23 kg/d, on average). Higher milk yield was found for the C diet than for OR (2.3 vs. 2.1 kg per goat per day, respectively), and greater milk fat content was observed for OR than for C (6.5 vs. 5.5%, respectively). The cheese extract was also greater in OR than in C (10.7 vs. 9.6%). Goats fed the OR diet produced significantly fewer CH₄ emissions than those fed the C diet (27 vs. 30 g/d, respectively). These preliminary results (LIFE Project LOWCARBON FEED LIFE/CCM/ES/000088) suggested that the use of these waste by-products was effective in reducing CH₄ emission without a detrimental effect on milk yield.
Keywords: agricultural waste, goat, milk production, methane emission
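For readers unfamiliar with the analysis named above, here is a minimal sketch of a mixed model with diet as a fixed effect and goat as a random intercept, using Python's statsmodels on simulated crossover data. The numbers are illustrative, not the study's, and only the diet effect is included (the study also fitted digestibility as a fixed effect).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
goats = np.repeat(np.arange(10), 2)               # 10 goats, 2 periods each
diet = np.tile(["C", "OR"], 10)                   # crossover: each goat gets both diets
goat_effect = rng.normal(0, 2, 10)[goats]         # goat-to-goat variation
methane = 30 - 3 * (diet == "OR") + goat_effect + rng.normal(0, 1, 20)
data = pd.DataFrame({"goat": goats, "diet": diet, "methane": methane})

# Fixed effect: diet; random intercept: goat (as in the abstract's model)
model = smf.mixedlm("methane ~ diet", data, groups=data["goat"]).fit()
print(model.summary())
```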
Procedia PDF Downloads 149
1129 How to Talk about It without Talking about It: Cognitive Processing Therapy Offers Trauma Symptom Relief without Violating Cultural Norms
Authors: Anne Giles
Abstract:
Humans naturally wish they could forget traumatic experiences. To help prevent future harm, however, the human brain has evolved to retain data about experiences of threat, alarm, or violation. When given compassionate support and assistance with thinking helpfully and realistically about traumatic events, most people can adjust to experiencing hardships, albeit with residual sad, unfortunate memories. Persistent, recurrent, intrusive memories, difficulty sleeping, emotion dysregulation, and avoidance of reminders, however, may be symptoms of Post-traumatic Stress Disorder (PTSD). Brain scans show that PTSD affects brain functioning. We currently have no physical means of restoring the system of brain structures and functions involved with PTSD. Medications may ease some symptoms but not others. However, forms of "talk therapy" with cognitive components have been found by researchers to reduce, even resolve, a broad spectrum of trauma symptoms. Many cultures have taboos against talking about hardships. Individuals may present themselves to mental health care professionals with severe, disabling trauma symptoms but, because of cultural norms, be unable to speak about them. In China, for example, relationship expectations may include the belief, "Bad things happening in the family should stay in the family (jiāchǒu bùkě wàiyán 家丑不可外扬)." The concept of "family (jiā 家)" may include partnerships, close and extended families, communities, companies, and the nation itself. In contrast to many trauma therapies, Cognitive Processing Therapy (CPT) for Post-traumatic Stress Disorder asks its participants to focus not on "what" happened but on "why" they think the trauma(s) occurred. The question "why" activates and exercises cognitive functioning. Brain scans of individuals with PTSD reveal executive functioning portions of the brain inadequately active, with emotion centers overly active. CPT conceptualizes PTSD as a network of cognitive distortions that keep an individual "stuck" in this under-functioning and over-functioning dynamic. Through asking participants forms of the question "why," plus offering a protocol for examining answers and relinquishing unhelpful beliefs, CPT assists individuals in consciously reactivating the cognitive, executive functions of their brains, thus restoring normal functioning and reducing distressing trauma symptoms. The culturally sensitive components of CPT that allow people to "talk about it without talking about it" may offer the possibility for worldwide relief from symptoms of trauma.
Keywords: cognitive processing therapy (CPT), cultural norms, post-traumatic stress disorder (PTSD), trauma recovery
Procedia PDF Downloads 216
1128 From Battles to Balance and Back: Document Analysis of EU Copyright in the Digital Era
Authors: Anette Alén
Abstract:
Intellectual property (IP) regimes have traditionally been designed to integrate various conflicting elements stemming from private entitlement and the public good. In IP laws and regulations, this design takes the form of specific uses of protected subject-matter without the right-holder's consent, exhaustion of exclusive rights upon market release, and the like. More recently, the pursuit of 'balance' has gained ground in the conceptualization of these conflicting elements, both in IP law and in related policy. This can be seen, for example, in the European Union (EU) copyright regime, where 'balance' has become a key element in argumentation, backed up by fundamental rights reasoning. This development also entails an ever-expanding dialogue between the IP regime and the constitutional safeguards for property, free speech, and privacy, among others. This study analyses the concept of 'balance' in EU copyright law: the research task is to examine the contents of the concept of 'balance' and the way it is operationalized and pursued, thereby producing new knowledge on the role and manifestations of 'balance' in recent copyright case law and regulatory instruments in the EU. The study discusses two particular pieces of legislation, the EU Digital Single Market (DSM) Copyright Directive (EU) 2019/790 and the finalized EU Artificial Intelligence (AI) Act, including some of the key preparatory materials, as well as EU Court of Justice (CJEU) case law pertaining to copyright in the digital era. The material is examined by means of document analysis, mapping the ways 'balance' is approached and conceptualized in the documents. Similarly, the interaction of fundamental rights as part of the balancing act is also analyzed. Doctrinal study of law is also employed in the analysis of legal sources. This study suggests that the pursuit of balance is, for its part, conducive to new battles, largely due to the advancement of digitalization and more recent developments in artificial intelligence. Indeed, the 'balancing act' rather presents itself as a way to bypass or even solidify some of the conflicting interests in a complex global digital economy. Such a conceptualization, especially when accompanied by non-critical or strategically driven fundamental rights argumentation, runs counter to the genuine acknowledgment of new types of conflicting interests in the copyright regime. Therefore, a more radical approach, including critical analysis of the normative basis and fundamental rights implications of the concept of 'balance', is required to readjust copyright law and regulations for the digital era. Although the study is executed in the context of the EU copyright regime, the results bear wider significance for the digital economy, especially given the platform liability regime in the DSM Directive and the AI Act's objective of a 'level playing field' whereby compliance with EU copyright rules seems to be expected among system providers.
Keywords: balance, copyright, fundamental rights, platform liability, artificial intelligence
Procedia PDF Downloads 34
1127 Heat Vulnerability Index (HVI) Mapping in Extreme Heat Days Coupled with Air Pollution Using Principal Component Analysis (PCA) Technique: A Case Study of Amiens, France
Authors: Aiman Mazhar Qureshi, Ahmed Rachid
Abstract:
Extreme heat events are an emerging human environmental health concern in dense urban areas due to anthropogenic activities. High spatial and temporal resolution heat maps are important for urban heat adaptation and mitigation, helping to indicate hotspots that require the attention of city planners. The Heat Vulnerability Index (HVI) is an important approach used by decision-makers and urban planners to identify heat-vulnerable communities and areas that require heat stress mitigation strategies. Amiens is a medium-sized French city where the average temperature has increased by 1°C since the year 2000. Extreme heat events were recorded in the month of July in three consecutive years: 2018, 2019, and 2020. Poor air quality, especially ground-level ozone, has been observed mainly during the same hot period. In this study, we evaluated the HVI in Amiens during the extreme heat days recorded in those three years (2018, 2019, 2020). The Principal Component Analysis (PCA) technique was used for fine-scale vulnerability mapping. The main data we considered to develop the HVI model are (a) socio-economic and demographic data; (b) air pollution; (c) land use and cover; (d) heat-related illness among the elderly; (e) socially vulnerable groups; (f) remote sensing data (land surface temperature (LST), mean elevation, NDVI, and NDWI). The output maps identified the hot zones through comprehensive GIS analysis. The resultant map shows that high HVI exists in three typical areas: (1) where the population density is quite high and the vegetation cover is small, (2) artificial surfaces (built-up areas), and (3) industrial zones that release thermal energy and ground-level ozone, while areas with low HVI are located in natural landscapes such as rivers and grasslands. The study also illustrates the system theory with a causal diagram after data analysis, where anthropogenic activities and air pollution appear in correspondence with extreme heat events in the city. Our suggested index can be a useful tool to guide urban planners, municipalities, decision-makers, and public health professionals in targeting areas at high risk of extreme heat and air pollution for future adaptation and mitigation interventions.
Keywords: heat vulnerability index, heat mapping, heat health-illness, remote sensing, urban heat mitigation
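To make the PCA-based index construction concrete, here is a minimal Python sketch that standardizes a set of vulnerability indicators, extracts principal components, and combines them into a composite score weighted by explained variance. The indicator list and data are hypothetical stand-ins, not the Amiens dataset, and weighting by explained variance is one common convention rather than the authors' documented choice.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# Rows = spatial units (e.g., census blocks); columns = hypothetical indicators:
# population density, % elderly, ozone level, LST, NDVI, NDWI
X = rng.normal(size=(300, 6))
X_std = StandardScaler().fit_transform(X)          # PCA needs comparable scales

pca = PCA(n_components=3)
scores = pca.fit_transform(X_std)                  # component scores per spatial unit
weights = pca.explained_variance_ratio_            # weight components by variance explained
hvi = scores @ weights                             # composite vulnerability score
print("explained variance:", weights.round(3))
print("HVI range:", hvi.min().round(2), "to", hvi.max().round(2))
```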
Procedia PDF Downloads 151
1126 Bruch's Membrane Opening in High Myopia and Its Correlation with Axial Length
Authors: Sanjeeb Kumar Mishra, Aartee Jha, Madhu Thapa, Pragati Gautam
Abstract:
Introduction: High myopia has become a matter of global concern as it is a major risk factor for glaucoma. Various optic nerve head changes occur in high myopia over time. This can make it difficult to detect pathologies associated with high myopia through conventional funduscopy examination alone. Bruch's Membrane Opening (area and minimum rim width) is considered an anatomically more accurate and reliable landmark than the conventional clinical disc margin. Study Design: This was a hospital-based, cross-sectional, non-interventional study. Purpose: The purpose of our study was to measure Bruch's Membrane Opening (area and Minimum Rim Width) in highly myopic eyes and correlate it with axial length. Methods: A cross-sectional study was conducted at B.P. Koirala Lions Center for Ophthalmic Studies, a tertiary-level eye center in Nepal. 80 eyes of 40 subjects (40% male and 60% female) aged 18-35 years with high myopia (spherical equivalent (SE) ≥ -6D) were taken as cases. Among them, the right eyes (RE) of 39 and the left eyes (LE) of 34 myopic subjects were included in the study. Spectral Domain Optical Coherence Tomography of both eyes was performed using the Glaucoma Module Premiere Edition (GMPE) with the Anatomic Positioning System (APS) to measure Bruch's Membrane Opening (area and Minimum Rim Width). Axial length was measured using partial coherence interferometry (IOL Master). Results: Among the 40 myopic subjects, 16 (40%) were male and 24 (60%) were female. The mean age was 24.64 ± 5.10 years, with minimum and maximum ages of 18 and 35 years, respectively. The mean BMO area was 2.28 ± 0.48 mm² in the right eye and 2.15 ± 0.59 mm² in the left eye. BMO area in highly myopic patients was significantly correlated with axial length; the correlation in the RE and LE was statistically significant at (r=0.465, p<0.003) and (r=0.374, p<0.029), respectively. Likewise, the mean BMO-MRW was 325.69 ± 96 µm in the right eye and 339.20 ± 79.50 µm in the left eye. There was a significant correlation of BMO-MRW with axial length in both eyes. Moreover, a significant negative correlation of the inferior temporal, nasal, and inferior nasal quadrants (p<0.05) of the right-eye BMO-MRW was found with the axial length of the right eye, whereas all BMO-MRW quadrants of the left eye were negatively correlated (p<0.05) with the axial length of the left eye. No significant differences were found between the right and left eyes when comparing means of refractive error, axial length, BMO area, and BMO-MRW. Conclusion: From this study, it can be concluded that the BMO area enlarges in high myopia with increasing axial length. Additionally, BMO-MRW thinning occurs along with BMO enlargement and increases with axial length. There were no significant differences in refractive error, axial length, BMO area, or BMO-MRW between the right and left eyes.
Keywords: high myopia, Bruch's membrane opening, Bruch's membrane opening minimum rim width, spectral domain optical coherence tomography
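The correlation statistics quoted above are plain Pearson coefficients. The following short sketch shows how such an r and p-value are computed with SciPy; the axial-length and BMO-area values here are simulated for illustration, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
axial_length = rng.normal(26.5, 1.0, 39)                        # mm, illustrative values
bmo_area = 1.2 + 0.04 * axial_length + rng.normal(0, 0.4, 39)   # mm^2, weakly correlated

r, p = stats.pearsonr(axial_length, bmo_area)
print(f"r = {r:.3f}, p = {p:.4f}")   # compare with the reported r = 0.465, p < 0.003
```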
Procedia PDF Downloads 23
1125 Catalytic Pyrolysis of Sewage Sludge for Upgrading Bio-Oil Quality Using Sludge-Based Activated Char as an Alternative to HZSM5
Abstract:
Due to concerns about the depletion of fossil fuel sources and the deteriorating environment, research into the production of renewable energy will play a crucial role in alleviating dependency on mineral fuels. One particular area of interest is the generation of bio-oil through sewage sludge (SS) pyrolysis. SS can be a potential candidate in contrast to other types of biomass due to its availability and low cost. However, the presence of high molecular weight hydrocarbons and oxygenated compounds in SS bio-oil hinders some of its fuel applications. In this context, catalytic pyrolysis is another attainable route to upgrade bio-oil quality. Among the different catalysts (i.e., zeolites) studied for SS pyrolysis, activated chars (AC) are eco-friendly alternatives. The beneficial features of AC derived from SS comprise a comparatively large surface area, porosity, enriched surface functional groups, and the presence of a high amount of metal species that can improve catalytic activity. Hence, in this study a sludge-based AC catalyst was fabricated in a single-step pyrolysis reaction with NaOH as the activation agent and was compared with HZSM5 zeolite. The thermal decomposition and kinetics were investigated via thermogravimetric analysis (TGA) to guide and control the pyrolysis and catalytic pyrolysis and the design of the pyrolysis setup. The results indicated that pyrolysis and catalytic pyrolysis contain four distinct stages, with the main decomposition reaction occurring in the range of 200-600°C. The Coats-Redfern method was applied to the 2nd and 3rd devolatilization stages to estimate the reaction order and activation energy (E) from the mass loss data. The average activation energy (Em) values for the reaction orders n = 1, 2, and 3 were in the range of 6.67-20.37 kJ/mol for SS, 1.51-6.87 kJ/mol for HZSM5, and 2.29-9.17 kJ/mol for AC, respectively. According to the results, both AC and HZSM5 were able to improve the reaction rate of SS pyrolysis by lowering the Em value. Moreover, to generate bio-oil and examine the effect of the catalysts on its quality, a fixed-bed pyrolysis system was designed and implemented. The composition of the produced bio-oil was analyzed via gas chromatography/mass spectrometry (GC/MS). The selected SS to catalyst ratios were 1:1, 2:1, and 4:1. The optimum ratio in terms of cracking the long-chain hydrocarbons and removing oxygen-containing compounds was 1:1 for both catalysts. The upgraded bio-oils with AC and HZSM5 were in the total range of C4-C17, with around 72% in the range of C4-C9. The bio-oil from pyrolysis of SS alone contained 49.27% oxygenated compounds, while in the presence of AC and HZSM5 this dropped to 13.02% and 7.3%, respectively. Meanwhile, the generation of benzene, toluene, and xylene (BTX) compounds was significantly improved in the catalytic process. Furthermore, the fabricated AC catalyst was characterized by BET, SEM-EDX, FT-IR, and TGA techniques. Overall, this research demonstrated that AC is an efficient catalyst for the pyrolysis of SS and can be used as a cost-competitive catalyst in contrast to HZSM5.
Keywords: catalytic pyrolysis, sewage sludge, activated char, HZSM5, bio-oil
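As a worked illustration of the Coats-Redfern treatment mentioned above, the sketch below linearizes TGA conversion data for first-order kinetics (g(α) = -ln(1-α)) and reads the activation energy off the slope of ln[g(α)/T²] versus 1/T. The temperature-conversion series is fabricated purely to show the mechanics, and only the n = 1 form is implemented here.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def coats_redfern_E(T, alpha):
    """Estimate activation energy (kJ/mol) assuming first-order kinetics (n = 1).

    Fits ln[-ln(1 - alpha) / T^2] against 1/T; the slope equals -E/R.
    """
    y = np.log(-np.log(1.0 - alpha) / T**2)
    slope, _ = np.polyfit(1.0 / T, y, 1)
    return -slope * R / 1000.0

# Fabricated mass-loss data over the main decomposition stage (roughly 200-600 C)
T = np.linspace(473, 873, 30)            # K
alpha = np.linspace(0.05, 0.95, 30)      # conversion fraction from TGA mass loss
print(f"E ~ {coats_redfern_E(T, alpha):.2f} kJ/mol")
```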
Procedia PDF Downloads 180
1124 Five Years Analysis and Mitigation Plans on Adjustment Orders Impacts on Projects in Kuwait's Oil and Gas Sector
Authors: Rawan K. Al-Duaij, Salem A. Al-Salem
Abstract:
Projects, the unique and temporary processes of achieving a set of requirements, have always been challenging; planning the schedule and budget and managing the resources and risks are mostly driven by similar past experience or the technical consultation of experts in the matter. Given that complexity of projects in scope, time, and execution environment, Adjustment Orders are tools to reflect changes to the original project parameters after contract signature. Adjustment Orders are the official/legal amendments to the terms and conditions of a live contract. Reasons for issuing Adjustment Orders arise from changes in contract scope, technical requirements, and specifications, resulting in scope addition, deletion, or alteration. They can also combine several of these parameters, resulting in an increase or decrease in time and/or cost. Most business leaders (handling projects in the interest of the owner) refrain from using Adjustment Orders, considering their main objectives of staying within budget and on schedule. Success in managing changes results in uninterrupted execution and agreed project costs and schedule. Nevertheless, this is not always practically achievable. In this paper, a detailed study utilizing Industrial Engineering & Systems Management tools such as Six Sigma, data analysis, and quality control was implemented on the organization's five-year records of issued Adjustment Orders in order to investigate their prevalence and their time and cost impact. The analysis outcome revealed and helped to identify and categorize the predominant causes with the highest impacts, which were weighted most heavily in recommending the corrective measures to reach the objective of minimizing the impacts of Adjustment Orders. Data analysis demonstrated no specific trend in Adjustment Order frequency over the past five years; however, the time impact is greater than the cost impact. Although Adjustment Orders might never be avoidable, this analysis offers some insight into the procedural gaps and where they most impact the organization. Possible solutions are concluded, such as improving the project handling team's coordination and communication, utilizing a blanket service contract, and modifying the project gate system procedures to minimize the possibility of similar struggles in the future. Projects in the oil and gas sector are always evolving and demand a certain amount of flexibility to sustain the goals of the field. As demonstrated, the uncertainty of project parameters, inadequate project definition, operational constraints, and stringent procedures are the main factors resulting in the need for Adjustment Orders, and accordingly the recommendation is to address that challenge.
Keywords: adjustment orders, data analysis, oil and gas sector, systems management
Procedia PDF Downloads 167
1123 Blockchain Platform Configuration for MyData Operator in Digital and Connected Health
Authors: Minna Pikkarainen, Yueqiang Xu
Abstract:
The integration of digital technology with existing healthcare processes has been painfully slow; a huge gap exists between the strictly regulated field of official medical care and the quickly moving field of health and wellness technology. We claim that the promises of preventive healthcare can only be fulfilled when this gap is closed and health care and self-care become a seamless continuum: "correct information, in the correct hands, at the correct time, allowing individuals and professionals to make better decisions", which we call the connected health approach. Currently, issues related to security, privacy, consumer consent, and data sharing are hindering the implementation of this new paradigm of healthcare. This could be solved by following the MyData principles, which state that individuals should have the right and practical means to manage their data and privacy. MyData infrastructure enables decentralized management of personal data, improves interoperability, makes it easier for companies to comply with tightening data protection regulations, and allows individuals to change service providers without proprietary data lock-ins. This paper tackles today's unprecedented challenges of enabling and stimulating multiple healthcare data providers and stakeholders to participate more actively in the digital health ecosystem. First, the paper systematically proposes the MyData approach for the healthcare and preventive health data ecosystem. In this research, the work is targeted at health and wellness ecosystems. Each ecosystem consists of key actors such as 1) the individual (citizen or professional controlling/using the services), i.e., the data subject, 2) services providing personal data (e.g., startups providing data collection apps or devices), 3) health and wellness services utilizing the aforementioned data, and 4) services authorizing access to this data under the individual's explicit consent. Second, the research extends the existing four archetypes of orchestrator-driven healthcare data business models for the healthcare industry and proposes a fifth type of healthcare data model, the MyData Blockchain Platform. This new architecture is developed through the Action Design Research approach, a prominent research methodology in the information systems domain. The key novelty of the paper is to expand the health data value chain architecture and design from centralization and pseudo-decentralization to full decentralization, enabled by blockchain, hence the MyData blockchain platform. The study not only broadens the healthcare informatics literature but also contributes to the theoretical development of the digital healthcare and blockchain research domains with a systemic approach.
Keywords: blockchain, health data, platform, action design
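To ground the decentralization idea, here is a small, self-contained Python sketch of a hash-chained consent log: each record states who grants which data scope to which service, and each record's hash depends on its predecessor, so later tampering is detectable. This is a toy illustration of the general principle, not the authors' platform design; all identifiers and field names are invented.

```python
import hashlib, json, time
from dataclasses import dataclass, asdict

@dataclass
class ConsentRecord:
    """Hypothetical MyData-style consent: who grants which data to whom."""
    subject_id: str       # the individual (data subject)
    provider: str         # service providing the personal data
    consumer: str         # health/wellness service using the data
    scope: str            # e.g., "heart_rate:read"
    timestamp: float
    prev_hash: str        # links the record to the previous one

def record_hash(rec: ConsentRecord) -> str:
    """Deterministic SHA-256 over the serialized record."""
    return hashlib.sha256(json.dumps(asdict(rec), sort_keys=True).encode()).hexdigest()

# Append-only chain: altering any record breaks every subsequent hash
chain = []
prev = "0" * 64
for scope in ["steps:read", "heart_rate:read"]:
    rec = ConsentRecord("citizen-42", "wearable-app", "wellness-svc", scope, time.time(), prev)
    prev = record_hash(rec)
    chain.append(rec)
print("chain head:", prev[:16], "...")
```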
Procedia PDF Downloads 102
1122 Effect of Pulsed Electrical Field on the Mechanical Properties of Raw, Blanched and Fried Potato Strips
Authors: Maria Botero-Uribe, Melissa Fitzgerald, Robert Gilbert, Kim Bryceson, Jocelyn Midgley
Abstract:
French fry manufacturing involves a series of processes in which the structural properties of potatoes are modified to produce the crispy french fries that consumers enjoy. In addition to the traditional french fry manufacturing process, the industry is applying a relatively new process called pulsed electrical field (PEF) treatment to whole potatoes. There is a wealth of information on the technical treatment conditions of PEF; however, there is a lack of information about its effect on the structural properties that determine texture and about its synergistic interactions with the other manufacturing steps of french fry production. The effect of PEF on the starch gelatinisation properties of Russet Burbank potatoes was measured using a differential scanning calorimeter. Cation content (K⁺, Ca²⁺ and Mg²⁺) was determined by inductively coupled plasma optical emission spectrophotometry. Firmness and toughness of raw and blanched potatoes were determined in a uniaxial compression test. Moisture content was determined in a vacuum oven, and oil content was measured using the Soxhlet system with hexane. The final texture of the french fries, their crispness, was determined using a three-point bend test. Triangle tests were conducted to determine whether consumers could perceive sensory differences between french fries that were PEF-treated and those that were not. The concentrations of K⁺, Ca²⁺ and Mg²⁺ decreased significantly in the raw potatoes after PEF treatment. PEF treatment significantly increased the modulus of elasticity, compression strain, compression force, and toughness of the raw potato. The PEF-treated raw potatoes were firmer and stiffer: their structural integrity held together longer, resisted a higher force before fracture, and stretched further than that of untreated potatoes. The stress-strain relationship exhibited by the PEF-treated raw potato could be due to an increase in the permeability of the plasmalemma and tonoplast, allowing Ca²⁺ and Mg²⁺ cations to reach the cell wall and middle lamella and become available for cross-linking with pectin molecules. The PEF-treated raw potatoes exhibited slightly higher onset gelatinisation temperatures, similar peak temperatures, and lower gelatinisation ranges than untreated raw potatoes. The final moisture content of the french fries was not significantly affected by PEF treatment. Oil content in the PEF-treated potatoes was lower than in the untreated french fries, though not statistically significantly at 5%. PEF treatment did not have an overall significant effect on french fry crispness (modulus of elasticity), flexural stress, or strain. The triangle tests show that most consumers could not detect a difference between french fries that received PEF treatment and those that did not.
Keywords: french fries, mechanical properties, PEF, potatoes
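For reference, the crispness metrics named above (flexural stress, strain, and modulus of elasticity) follow from standard three-point bend relations for a rectangular specimen. The sketch below applies the usual ASTM D790-style formulas to a fabricated load-deflection curve; the specimen dimensions and forces are invented, and the study itself may have used a different reduction of the same test.

```python
import numpy as np

def flexural_properties(force_N, deflection_m, span_m, width_m, thick_m):
    """Three-point bend formulas for a rectangular specimen.

    stress  = 3FL / (2bd^2),  strain = 6Dd / L^2,
    modulus = L^3 m / (4bd^3), m = slope of the load-deflection curve.
    """
    m, _ = np.polyfit(deflection_m, force_N, 1)   # initial linear slope
    stress = 3 * force_N * span_m / (2 * width_m * thick_m**2)
    strain = 6 * deflection_m * thick_m / span_m**2
    modulus = span_m**3 * m / (4 * width_m * thick_m**3)
    return stress, strain, modulus

# Invented french-fry-strip numbers, not values from the study
F = np.linspace(0.1, 2.0, 20)      # N
D = F / 500.0                      # m, assumed linear response (slope 500 N/m)
_, _, E = flexural_properties(F, D, span_m=0.04, width_m=0.01, thick_m=0.01)
print(f"flexural modulus ~ {E / 1e3:.0f} kPa")
```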
Procedia PDF Downloads 236
1121 Extraction and Electrochemical Behaviors of Au(III) using Phosphonium-Based Ionic Liquids
Authors: Kyohei Yoshino, Masahiko Matsumiya, Yuji Sasaki
Abstract:
Recently, studies have been conducted on Au(III) extraction using ionic liquids (ILs) as extractants or diluents. ILs based on piperidinium, pyrrolidinium, and pyridinium cations have been studied as extractants for noble metal extraction. Furthermore, the polarity, hydrophobicity, and solvent miscibility of these ILs can be adjusted depending on their intended use. These unique properties make ILs functional extraction media. The extraction mechanism of Au(III) using phosphonium-based ILs and the relevant thermodynamic studies are yet to be reported. In the present work, we focused on the mechanism of Au(III) extraction and related thermodynamic analyses using phosphonium-based ILs. Triethyl-n-pentyl, triethyl-n-octyl, and triethyl-n-dodecyl phosphonium bis(trifluoromethylsulfonyl)amide, [P₂₂₂ₓ][NTf₂] (X = 5, 8, and 12), were investigated for Au(III) extraction. The IL-Au complex was identified as [P₂₂₂₅][AuCl₄] using UV-Vis-NIR and Raman spectroscopic analyses. The extraction behavior of Au(III) was investigated with the [P₂₂₂ₓ][NTf₂] IL concentration varied from 1.0 × 10⁻⁴ to 1.0 × 10⁻¹ mol dm⁻³. The results indicate that Au(III) can be easily extracted by an anion-exchange reaction in the [P₂₂₂ₓ][NTf₂] IL. The slope range of 0.96-1.01 in the plot of log D vs. log [P₂₂₂ₓ][NTf₂] indicates the association of one mole of IL with one mole of [AuCl₄⁻] during extraction. Consequently, [P₂₂₂ₓ][NTf₂] acts as an anion-exchange extractant for Au(III) in anionic form from chloride media; this type of phosphonium-based IL thus proceeds via an anion-exchange reaction with Au(III). In order to evaluate the thermodynamic parameters of the Au(III) extraction, the equilibrium constant (log Kₑₓ') was determined from the temperature dependence. A plot of the natural logarithm of Kₑₓ' vs. the inverse of the absolute temperature (T⁻¹) yields a slope proportional to the enthalpy (ΔH). By plotting lnKₑₓ' against T⁻¹, lines with slopes in the range 1.129-1.421 were obtained. The result indicated that the extraction reaction of Au(III) using the [P₂₂₂ₓ][NTf₂] ILs (X = 5, 8, and 12) was exothermic (ΔH = -9.39 to -11.81 kJ mol⁻¹). The negative value of TΔS (-4.20 to -5.27 kJ mol⁻¹) indicates that microscopic randomness is preferred in the [P₂₂₂₅][NTf₂] extraction system over [P₂₂₂₁₂][NTf₂]. The total negative change in Gibbs energy (-5.19 to -6.55 kJ mol⁻¹) for the extraction reaction would thus be relatively influenced by the TΔS value, which depends on the number of carbon atoms in the alkyl side chain, even though ΔH contributes most of the negative change in Gibbs energy. Electrochemical analysis revealed that the extracted Au(III) can be reduced in two steps: (i) Au(III)/Au(I) and (ii) Au(I)/Au(0). The diffusion coefficients of the extracted Au(III) species in [P₂₂₂ₓ][NTf₂] (X = 5, 8, and 12) were evaluated from 323 to 373 K using semi-integral and semi-differential analyses. Because of the viscosity of the IL medium, the diffusion coefficient of the extracted Au(III) increases with increasing alkyl chain length. The Au 4f₇/₂ spectrum from X-ray photoelectron spectroscopy revealed that the Au electrodeposits obtained after 10 cycles of continuous extraction and electrodeposition were in the metallic state.
Keywords: Au(III), electrodeposition, phosphonium-based ionic liquids, solvent extraction
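The van't Hoff analysis described above (slope of lnKₑₓ' vs. T⁻¹ giving ΔH, with ΔG = -RT lnK and TΔS = ΔH - ΔG) can be reproduced in a few lines. The equilibrium constants in this sketch are invented to mimic an exothermic system of roughly the reported magnitude; they are not the measured values.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Invented equilibrium constants at several temperatures (not measured values)
T = np.array([298.0, 308.0, 318.0, 328.0])        # K
lnK = np.array([2.60, 2.48, 2.37, 2.26])          # falls with T, i.e. exothermic

slope, intercept = np.polyfit(1.0 / T, lnK, 1)    # van't Hoff: lnK = -(dH/R)(1/T) + dS/R
dH = -slope * R / 1000.0                          # kJ/mol
dG = -R * T[0] * lnK[0] / 1000.0                  # kJ/mol at 298 K
TdS = dH - dG                                     # kJ/mol
print(f"dH = {dH:.2f}, dG(298 K) = {dG:.2f}, TdS = {TdS:.2f} kJ/mol")
```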
Procedia PDF Downloads 108
1120 Relationship between Structure of Some Nitroaromatic Pollutants and Their Degradation Kinetic Parameters in UV-VIS/TiO2 System
Authors: I. Nitoi, P. Oancea, M. Raileanu, M. Crisan, L. Constantin, I. Cristea
Abstract:
Hazardous organic compounds like nitroaromatics are frequently found in the effluents discharged by the chemical and petroleum industries. Due to their bio-refractory character and high chemical stability, they cannot be efficiently removed by classical biological or physical-chemical treatment processes. In the past decades, semiconductor photocatalysis has frequently been applied for the advanced degradation of toxic pollutants. Among various semiconductors, titania is a widely studied photocatalyst due to its chemical inertness, low cost, photostability, and nontoxicity. Many attempts have been made to improve the optical absorption and photocatalytic activity of TiO2; one feasible approach consists of doping the oxide semiconductor with metals. The degradation of dinitrobenzene (DNB) and dinitrotoluene (DNT) from aqueous solution under UVA-VIS irradiation using heavy-metal-doped titania (0.5% Fe, 1% Co, 1% Ni) was investigated. The photodegradation experiments were carried out using a Heraeus laboratory-scale UV-VIS reactor equipped with a medium-pressure mercury lamp which emits in the range 320-500 nm. Solutions with (0.34-3.14) × 10⁻⁴ M pollutant content were photo-oxidized under the following working conditions: pH = 5-9; photocatalyst dose = 200 mg/L; irradiation time = 30-240 minutes. Prior to irradiation, the photocatalyst powder was added to the samples, and the solutions were bubbled with air (50 L/hour) in the dark for 30 min. The influence of dopant type, pH, pollutant structure, and initial pollutant concentration on the degradation efficiency was evaluated in order to establish the optimal working conditions that ensure advanced substrate degradation. The kinetics of nitroaromatics degradation and organic nitrogen mineralization were assessed, and pseudo-first order rate constants were calculated. The Fe-doped photocatalyst with the lowest metal content (0.5 wt.%) behaved considerably better with respect to pollutant degradation than the Co- and Ni-doped (1 wt.%) titania catalysts. For the same working conditions, degradation efficiency was higher for DNT than for DNB, in accordance with their calculated adsorption constants (Kad), taking into account that the degradation process occurs on the catalyst surface following a Langmuir-Hinshelwood model. The presence of the methyl group in the structure of DNT allows its degradation by both oxidative and reductive pathways, while DNB is converted only by the reductive route, which also explains the higher DNT degradation efficiency. For the highest pollutant concentration tested (3 × 10⁻⁴ M), the optimal working conditions (0.5 wt.% Fe-doped TiO2 loading of 200 mg/L, pH = 7, and 240 min irradiation time) ensure advanced nitroaromatics degradation (ηDNB = 89%, ηDNT = 94%) and organic nitrogen mineralization (ηDNB = 44%, ηDNT = 47%).
Keywords: hazardous organic compounds, irradiation, nitroaromatics, photocatalysis
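As a reminder of how the pseudo-first order rate constants quoted above are obtained, the sketch below fits ln(C₀/C) against irradiation time; the slope is k. The concentration series is synthetic, chosen only to produce a clean exponential decay.

```python
import numpy as np

# Synthetic concentration decay under irradiation (not the measured data)
t = np.array([0, 30, 60, 120, 180, 240], dtype=float)   # min
C = 3.0e-4 * np.exp(-0.009 * t)                          # mol/L

# Pseudo-first order model: ln(C0/C) = k * t; the fitted slope gives k
k, _ = np.polyfit(t, np.log(C[0] / C), 1)
half_life = np.log(2) / k
print(f"k = {k:.4f} min^-1, t1/2 = {half_life:.0f} min")
```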
Procedia PDF Downloads 317
1119 Determine Causal Factors Affecting the Responsiveness and Productivity of Non-Governmental Universities
Authors: Davoud Maleki
Abstract:
Today, education and investment in human capital are long-term investments without which the economy will stagnate. Higher education represents a type of investment in human resources: by providing and improving knowledge, skills, and attitudes, it supports economic development. Universities bear the responsibility of supplying efficient human resources by increasing people's efficiency and productivity and, at the same time, of expanding the boundaries of knowledge and technology, promoting technology, and training human resources at highly specialized levels. The university therefore plays an infrastructural role in economic development and growth, because education creates skills and expertise in people and improves their abilities. In recent decades, Iran's higher education system has faced many problems; therefore, this study sought to identify and validate the causal factors affecting the responsiveness and productivity of non-governmental universities. The data in the qualitative part are the result of semi-structured interviews with 25 senior and middle managers working in the units of the Islamic Azad University of Tehran province, selected by the theoretical sampling method. In the data analysis, the stepwise method and the analytical techniques of Strauss and Corbin (1992) were used. After determining the central category (responsiveness for the sake of the beneficiaries) and using it to relate the categories, expressions, and ideas that express the relationships between the main categories, six main categories were identified as causal factors affecting the university's responsiveness and productivity: 1) scientism, 2) human resources, 3) creating motivation in the university, 4) development based on needs assessment, 5) the teaching and learning process, and 6) university quality evaluation. In order to validate the responsiveness model obtained from the qualitative stage, a questionnaire was prepared, and answers were received from 146 master's and doctoral students of the Islamic Azad University located in Tehran province. The quantitative data were analyzed using descriptive analysis and first- and second-order factor analysis with SPSS and Amos 23 software. The findings indicated a relationship between the central category and the causal factors affecting responsiveness, and the results of the model test in the quantitative stage confirmed the generality of the conceptual model.
Keywords: accountability, productivity, non-governmental universities, grounded theory
Procedia PDF Downloads 62
1118 Controlling the Release of Cyt C and L-Dopa from pNIPAM-AAc Nanogel-Based Systems
Authors: Sulalit Bandyopadhyay, Muhammad Awais Ashfaq Alvi, Anuvansh Sharma, Wilhelm R. Glomm
Abstract:
Release of drugs from nanogels and nanogel-based systems can occur under the influence of external stimuli like temperature, pH, magnetic fields, and so on. pNIPAm-AAc nanogels respond to the combined action of temperature and pH, the former being mostly determined by hydrophilic-to-hydrophobic transitions above the volume phase transition temperature (VPTT), while the latter is controlled by the degree of protonation of the carboxylic acid groups. These nanogel-based systems are promising candidates in the field of drug delivery. Combining nanogels with magneto-plasmonic nanoparticles (NPs) introduces imaging and targeting modalities along with stimuli-response in one hybrid system, thereby incorporating multifunctionality. Fe@Au core-shell NPs possess an optical signature in the visible spectrum owing to the localized surface plasmon resonance (LSPR) of the Au shell, and superparamagnetic properties stemming from the Fe core. Although several synthesis methods exist to control the size and physico-chemical properties of pNIPAm-AAc nanogels, there is no comprehensive study that highlights the effect of incorporating one or more layers of NPs into these nanogels. In addition, effective determination of the VPTT of the nanogels is a challenge that complicates their use in biological applications. Here, we have modified the swelling-collapse properties of pNIPAm-AAc nanogels by combining them with Fe@Au NPs using different solution-based methods. The hydrophilic-hydrophobic transition of the nanogels above the VPTT has been confirmed to be reversible. Further, an analytical method has been developed to deduce the average VPTT, which is found to be 37.3°C for the nanogels and 39.3°C for nanogel-coated Fe@Au NPs. An opposite swelling-collapse behaviour is observed for the latter, where the Fe@Au NPs act as bridge molecules pulling together the gelling units. Thereafter, Cyt C, a model protein drug, and L-Dopa, a drug used in the clinical treatment of Parkinson's disease, were loaded separately into the nanogels and nanogel-coated Fe@Au NPs using a modified breathing-in mechanism. This gave high loading and encapsulation efficiencies (L-Dopa: ~9% and 70 µg/mg of nanogels; Cyt C: ~30% and 10 µg/mg of nanogels, respectively). The release kinetics of L-Dopa, monitored using UV-Vis spectrophotometry, was observed to be rather slow (over several hours), with the highest release occurring under a combination of high temperature (above the VPTT) and acidic conditions. The release of L-Dopa from nanogel-coated Fe@Au NPs was the fastest, accounting for almost 87% of the initially loaded drug in ~30 hours. The chemical structure of the drug, the drug incorporation method, the location of the drug, and the presence of Fe@Au NPs largely alter the drug release mechanism and kinetics of these nanogels and nanogel-coated Fe@Au NPs.
Keywords: controlled release, nanogels, volume phase transition temperature, L-Dopa
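The abstract mentions an analytical method for deducing the average VPTT. One common way to do this, stated here as an assumption rather than necessarily the authors' method, is to fit the particle size vs. temperature curve with a sigmoidal (Boltzmann) function and take the inflection point as the VPTT, as in the sketch below with fabricated sizing data.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(T, d_swollen, d_collapsed, vptt, width):
    """Sigmoidal swelling curve; the inflection point is taken as the VPTT."""
    return d_collapsed + (d_swollen - d_collapsed) / (1 + np.exp((T - vptt) / width))

# Fabricated hydrodynamic diameters vs. temperature (not the measured data)
T = np.linspace(25, 50, 15)
d = boltzmann(T, 420.0, 180.0, 37.3, 1.5) + np.random.default_rng(5).normal(0, 5, 15)

popt, _ = curve_fit(boltzmann, T, d, p0=[400, 200, 37, 2])
print(f"estimated VPTT = {popt[2]:.1f} C")
```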
Procedia PDF Downloads 333
1117 Quantification of Lawsone and Adulterants in Commercial Henna Products
Authors: Ruchi B. Semwal, Deepak K. Semwal, Thobile A. N. Nkosi, Alvaro M. Viljoen
Abstract:
The use of Lawsonia inermis L. (Lythraceae), commonly known as henna, has many medicinal benefits, and the plant is used as a remedy for the treatment of diarrhoea, cancer, inflammation, headache, jaundice, and skin diseases in folk medicine. Although henna has long been used for hair dyeing and temporary tattooing, henna body art has grown in popularity over the last 15 years and has changed from a traditional bridal and festival adornment to an exotic fashion accessory. The naphthoquinone lawsone is one of the main constituents of the plant and is responsible for its dyeing property. Henna leaves typically contain 1.8–1.9% lawsone, which is used as a marker compound for the quality control of henna products. Adulteration of henna with various toxic chemicals such as p-phenylenediamine, p-methylaminophenol, p-aminobenzene, and p-toluenodiamine to produce a variety of colours is very common and has resulted in serious health problems, including allergic reactions. This study aims to assess the quality of henna products collected from different parts of the world by determining the lawsone content, as well as the concentrations of any adulterants present. Ultra high performance liquid chromatography-mass spectrometry (UPLC-MS) was used to determine the lawsone concentrations in 172 henna products. Separation of the chemical constituents was achieved on an Acquity UPLC BEH C18 column using gradient elution (0.1% formic acid and acetonitrile). The UPLC-MS results revealed that of the 172 henna products, 11 contained 1.0-1.8% lawsone and 110 contained 0.1-0.9% lawsone, whereas 51 samples did not contain detectable levels of lawsone. High performance thin layer chromatography was investigated as a cheaper, more rapid technique for the quality control of henna with respect to lawsone content. The samples were applied using an automatic TLC Sampler 4 (CAMAG) to pre-coated silica plates, which were subsequently developed with acetic acid, acetone, and toluene (0.5:1.0:8.5 v/v). A Reprostar 3 digital system allowed the images to be captured. The results obtained corresponded to those from the UPLC-MS analysis. Vibrational spectroscopy analysis (MIR or NIR) of the powdered henna, followed by chemometric modelling of the data, indicates that this technique shows promise as an alternative quality control method. Principal component analysis (PCA) was used to investigate the data by observing clustering and identifying outliers. Partial least squares (PLS) multivariate calibration models were constructed for the quantification of lawsone. In conclusion, only a few of the samples analysed contain lawsone in high concentrations, indicating that they are of poor quality. The presence of adulterants that may have been added to enhance the dyeing properties of the products is currently being investigated.
Keywords: Lawsonia inermis, paraphenylenediamine, temporary tattooing, lawsone
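To illustrate the PLS calibration step described above, the sketch below builds a PLSRegression model on synthetic "spectra" and evaluates it with cross-validated predictions. The spectral matrix, reference lawsone values, and the choice of 5 latent variables are all placeholders, not the study's data or settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(6)
# Synthetic NIR-like spectra: rows = henna samples, columns = wavelengths
lawsone = rng.uniform(0.0, 1.8, 120)                     # % w/w reference values
spectra = np.outer(lawsone, rng.normal(size=200)) + rng.normal(0, 0.05, (120, 200))

pls = PLSRegression(n_components=5)                      # 5 latent variables (assumed)
pred = cross_val_predict(pls, spectra, lawsone, cv=10).ravel()
rmsecv = np.sqrt(np.mean((pred - lawsone) ** 2))         # cross-validated error
print(f"RMSECV = {rmsecv:.3f} % lawsone")
```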
Procedia PDF Downloads 460
1116 Cross-Validation of the Data Obtained for ω-6 Linoleic and ω-3 α-Linolenic Acids Concentration of Hemp Oil Using Jackknife and Bootstrap Resampling
Authors: Vibha Devi, Shabina Khanam
Abstract:
Hemp (Cannabis sativa) possesses a rich content of ω-6 linoleic and ω-3 linolenic essential fatty acids in the ratio of 3:1, a rare and highly desired ratio that enhances the quality of hemp oil. These components are beneficial for cell and body growth and development, strengthen the immune system, possess anti-inflammatory action, lower the risk of heart problems owing to their anti-clotting property, and serve as a remedy for arthritis and various disorders. The present study employs a supercritical fluid extraction (SFE) approach on hemp seed at various parameter conditions: temperature (40-80)°C, pressure (200-350) bar, flow rate (5-15) g/min, particle size (0.430-1.015) mm, and amount of co-solvent (0-10)% of solvent flow rate, through central composite design (CCD). CCD suggested 32 sets of experiments, which were carried out. As the SFE process includes a large number of variables, the present study recommends the application of resampling techniques for cross-validation of the obtained data. Cross-validation refits the model on each resample to obtain information regarding the error, variability, deviation, etc. Bootstrap and jackknife are the most popular resampling techniques, which create a large number of datasets through resampling from the original dataset and analyze these data to check the validity of the obtained results. Jackknife resampling is based on eliminating one observation from the original sample of size N without replacement. For jackknife resampling, the sample size is 31 (eliminating one observation), and this is repeated 32 times. Bootstrap is the frequently used statistical approach for estimating the sampling distribution of an estimator by resampling with replacement from the original sample. For bootstrap resampling, the sample size is 32, repeated 100 times. The estimands for these resampling techniques are the mean, standard deviation, coefficient of variation, and standard error of the mean. For ω-6 linoleic acid concentration, the mean value was approximately 58.5 for both resampling methods, which is the average (central value) of the sample means across all data points. Similarly, for ω-3 linolenic acid concentration, the mean was observed to be 22.5 through both resampling methods. Variance reflects the spread of the data around the mean; a greater variance indicates a wider range of output data, which is 18 for ω-6 linoleic acid (ranging from 48.85 to 63.66%) and 6 for ω-3 linolenic acid (ranging from 16.71 to 26.2%). Further, the low standard deviation (approx. 1%), low standard error of the mean (<0.8), and low coefficient of variation (<0.2) reflect the accuracy of the sample for prediction. All the estimator values of the coefficients of variation, standard deviation, and standard error of the mean are found within the 95% confidence interval.
Keywords: resampling, supercritical fluid extraction, hemp oil, cross-validation
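The two resampling schemes described above are easy to reproduce. The sketch below runs a 32-point jackknife (leave-one-out, no replacement) and a 100-repeat bootstrap (size-32 draws with replacement) on a simulated stand-in for the linoleic-acid measurements and reports the mean, standard error, and coefficient of variation for each; the sample values are invented, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(7)
sample = rng.normal(58.5, 1.0, 32)     # stand-in for the 32 linoleic-acid values

# Jackknife: N leave-one-out resamples of size N-1, drawn without replacement
jack_means = np.array([np.delete(sample, i).mean() for i in range(len(sample))])

# Bootstrap: size-N resamples drawn with replacement (100 repeats, as in the text)
boot_means = np.array([rng.choice(sample, size=len(sample), replace=True).mean()
                       for _ in range(100)])

for name, m in [("jackknife", jack_means), ("bootstrap", boot_means)]:
    print(f"{name}: mean={m.mean():.2f}, SE={m.std(ddof=1):.3f}, "
          f"CV={m.std(ddof=1) / m.mean():.4f}")
```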
Procedia PDF Downloads 141
1115 A Report of 5-Months-Old Baby with Balanced Chromosomal Rearrangements along with Phenotypic Abnormalities
Authors: Mohit Kumar, Beklashwar Salona, Shiv Murti, Mukesh Singh
Abstract:
We report here the case of a five-month-old male baby, born as the second child of non-consanguineous parents with no considerable history of genetic abnormality, who was referred to our cytogenetic laboratory for chromosomal analysis. Dysmorphic facial features including a mongoloid face, cleft palate, and simian crease, together with developmental delay, were observed. We present this case with a unique balanced autosomal translocation, t(3;10)(p21;p13). The risk of phenotypic abnormalities based on de novo balanced translocation was estimated to be 7%. The association of a balanced chromosomal rearrangement with Down syndrome features such as multiple congenital anomalies, facial dysmorphism, and congenital heart anomalies is very rare in a 5-month-old male child. Trisomy 21 is not an uncommon chromosomal abnormality associated with birth defects in newborn babies, and balanced translocations are frequently observed in patients with secondary infertility or recurrent spontaneous abortion (RSA). Two ml of heparinized peripheral blood was cultured in RPMI-1640 for 72 hours, supplemented with 20% fetal bovine serum, phytohemagglutinin (PHA), and antibiotics, and used for chromosomal analysis. A total of 30 metaphase images were captured using an Olympus BX51 microscope and analyzed using Bio-view karyotyping software through GTG-banding (G bands by trypsin and Giemsa) according to the International System for Human Cytogenetic Nomenclature 2016. The results showed a balanced translocation between the short arm of chromosome 3 and the short arm of chromosome 10. The karyotype of the child was found to be 46,XY,t(3;10)(p21;p13). Chromosomal abnormalities are one of the major causes of birth defects in newborn babies, and balanced translocations are frequently observed in patients with secondary infertility or recurrent spontaneous abortion. The index case presented with dysmorphic facial features and had a balanced translocation 46,XY,t(3;10)(p21;p13). This translocation with breakpoints at (p21;p13) has not been reported in the literature in a child with facial dysmorphism. To the best of our knowledge, this is the first report of the novel balanced translocation t(3;10) with these breakpoints in a child with dysmorphic features. We found a balanced chromosomal translocation instead of any trisomy or unbalanced aberration, along with some phenotypic abnormalities. Therefore, we suggest that such novel balanced translocations with abnormal phenotypes be reported in order to enable pathologists, pediatricians, and gynecologists to have better insight into the intricacies of chromosomal abnormalities and their associated phenotypic features. We hypothesize that the dysmorphic features seen in this case may be the result of a change in the pattern of genes located at the breakpoint areas in the balanced translocation, or may be due to deletion or mutation of genes located on the p-arm of chromosome 3 and the p-arm of chromosome 10.
Keywords: balanced translocation, karyotyping, phenotypic abnormalities, facial dysmorphism
Procedia PDF Downloads 210
1114 A Crowdsourced Homeless Data Collection System and Its Econometric Analysis: Strengthening Inclusive Public Administration Policies
Authors: Praniil Nagaraj
Abstract:
This paper proposes a method to collect homeless data using crowdsourcing and presents an approach to analyze the data, demonstrating its potential to strengthen existing and future policies aimed at promoting socio-economic equilibrium. This paper's contributions fall into three main areas. First, a unique method for collecting homeless data is introduced, utilizing a user-friendly smartphone app (currently available for Android). The app enables the general public to quickly record information about homeless individuals, including the number of people and details about their living conditions. The collected data, including date, time, and location, is anonymized and securely transmitted to the cloud. It is anticipated that an increasing number of users motivated to contribute to society will adopt the app, thus expanding the data collection efforts. Duplicate data is addressed through simple classification methods, and historical data is utilized to fill in missing information. The second contribution of this paper is the description of the data analysis techniques applied to the collected data. By combining this new data with existing information, statistical regression analysis is employed to gain insights into various aspects, such as distinguishing between unsheltered and sheltered homeless populations, as well as examining their correlation with factors like unemployment rates, housing affordability, and labor demand. Initial data is collected in San Francisco, while pre-existing information is drawn from three cities: San Francisco, New York City, and Washington D.C., facilitating simulations. The third contribution focuses on demonstrating the practical implications of the data processing results. The challenges faced by key stakeholders, including charitable organizations and local city governments, are taken into consideration. Two case studies are presented as examples. The first case study explores improving the efficiency of the distribution of food and necessities, as well as medical assistance, driven by charitable organizations. The second case study examines the correlation between micro-geographic budget expenditure by local city governments and homeless information to justify budget allocation and expenditures. The ultimate objective of this endeavor is to enable the continuous enhancement of the quality of life of the underprivileged. It is hoped that through increased crowdsourcing of data from the public, the Generosity Curve and the Need Curve will intersect, leading to a better world for all.
Keywords: crowdsourcing, homelessness, socio-economic policies, statistical analysis
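As a toy version of the regression analysis sketched above, the snippet below fits an ordinary least squares model relating an unsheltered-population count to an unemployment rate and a rent-to-income affordability proxy. All variables and coefficients are simulated; the point is only the shape of the analysis, not the paper's actual estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 120  # hypothetical city-month observations
df = pd.DataFrame({
    "unemployment": rng.uniform(3, 12, n),          # %
    "rent_to_income": rng.uniform(0.2, 0.6, n),     # housing affordability proxy
})
df["unsheltered"] = (50 + 12 * df.unemployment + 300 * df.rent_to_income
                     + rng.normal(0, 20, n))        # simulated ground truth

model = smf.ols("unsheltered ~ unemployment + rent_to_income", df).fit()
print(model.params.round(2))   # coefficients recover the simulated relationships
```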
Procedia PDF Downloads 48
1113 Corpus Linguistics as a Tool for Translation Studies Analysis: A Bilingual Parallel Corpus of Students’ Translations
Authors: Juan-Pedro Rica-Peromingo
Abstract:
Nowadays, corpus linguistics has become a key research methodology for Translation Studies, broadening the scope of cross-linguistic research. The study presented here focuses on learners with little or no translation experience in order to study, at an early stage, general mistakes and errors and the correct or incorrect use of translation strategies, and to improve the students' translational competence. Led by Sylviane Granger and Marie-Aude Lefer of the Centre for English Corpus Linguistics of the University of Louvain, the MUST corpus (MUltilingual Student Translation Corpus) is an international project which brings together partners from European and worldwide universities and connects Learner Corpus Research (LCR) and Translation Studies (TS). It aims to build a corpus of translations carried out by students, including both direct (L2 > L1) and indirect (L1 > L2) translations, from a great variety of text types, genres, and registers in a wide variety of languages: audiovisual translation (including dubbing and subtitling for hearing and for deaf audiences), scientific, humanistic, literary, economic, and legal texts. This paper focuses on the work carried out by the Spanish team from the Complutense University (UCMA), which is part of the MUST project, and describes the specific features of the corpus built by its members. All the texts used by UCMA are either direct or indirect translations between English and Spanish. The students' profiles comprise translation trainees, foreign language students with a major in English, engineers studying EFL, and MA students, all with different English levels (from B1 to C1); for some of the students, this is their first experience with translation. The MUST corpus is searchable via Hypal4MUST, a web-based interface developed by Adam Obrusnik from Masaryk University (Czech Republic), which includes a translation-oriented annotation system (TAS). A distinctive feature of the interface is that it aligns source texts and target texts, so that we can observe and compare language structures in detail and study the translation strategies used by students. The initial data point out the kinds of difficulties encountered by the students and reveal the most frequent strategies implemented by the learners according to their level of English, their translation experience, and the text genres. We have also found common errors in the graduate and postgraduate university students' translations: transfer errors, lexical errors, grammatical errors, text-specific translation errors, and culture-related errors have been identified. Analyzing all these parameters will provide more material for better solutions to improve the quality of teaching and of the translations produced by the students.
Keywords: corpus studies, students’ corpus, the MUST corpus, translation studies
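To make the error-profiling step concrete, here is a minimal sketch of tallying annotated error categories by student level. The data layout and tag names are hypothetical simplifications loosely modeled on the error categories named in the abstract, not the Hypal4MUST/TAS export format.

```python
# Count error-tag frequencies per student proficiency level.
# Segment records and tag names are illustrative assumptions.
from collections import Counter

segments = [
    {"student_level": "B1", "errors": ["lexical", "grammatical"]},
    {"student_level": "C1", "errors": ["transfer"]},
    {"student_level": "B2", "errors": ["cultural", "lexical", "lexical"]},
]

by_level: dict[str, Counter] = {}
for seg in segments:
    by_level.setdefault(seg["student_level"], Counter()).update(seg["errors"])

for level, counts in sorted(by_level.items()):
    print(level, counts.most_common())
```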
Procedia PDF Downloads 149
1112 The Effects of Chamomile on Serum Levels of Inflammatory Indexes to a Bout of Eccentric Exercise in Young Women
Authors: K. Azadeh, M. Ghasemi, S. Fazelifar
Abstract:
Aim: Changes in stress hormones can modify the response of the immune system. Cortisol, the body's most important corticosteroid, is an anti-inflammatory and immunosuppressive hormone. Normal cortisol levels in humans fluctuate during the day; in other words, cortisol is released periodically and is regulated through the circadian rhythm of ACTH release. The aim of this study was therefore to determine the effects of chamomile on serum levels of inflammatory indexes after a bout of eccentric exercise in young women. Methodology: 32 women were randomly divided into 4 groups: high-dose chamomile, low-dose chamomile, ibuprofen, and placebo. The eccentric exercise comprised 5 sets with a 1-minute rest between sets. Subjects warmed up for 10 minutes and then performed the eccentric exercise. Each participant completed 15 repetitions with a 20 kg weight or continued until she could no longer move; when a subject was no longer able to continue, the weight was immediately decreased by 5 kg, and the protocol continued until exhaustion or completion of 15 repetitions. Subjects in the target groups received specified amounts of ibuprofen or chamomile capsules. Blood samples were obtained at 6 stages (before starting the capsules, before the exercise protocol, and 4, 24, 48, and 72 hours after eccentric exercise). Cortisol and adrenocorticotropic hormone (ACTH) levels were measured by ELISA. The Kolmogorov-Smirnov test was used to assess the normality of the data, and repeated-measures analysis of variance was used to analyze the data; significance was accepted at p < 0.05. Results: Individual characteristics, including height, weight, age, and body mass index, were not significantly different among the four groups. Data analysis showed that baseline cortisol and ACTH levels decreased significantly after supplementation but then increased significantly at all post-exercise stages. In the high-dose chamomile group, the post-exercise increase was somewhat smaller than in the other groups, though not significantly so. The between-group analysis indicated that the time effect had a significant impact across the stages of the groups. Conclusion: In this study, one session of eccentric exercise increased cortisol and ACTH. The results indicate the effect of a high dose of chamomile in preventing and reducing increased stress hormone levels. Since the use of medicinal plants, and of ibuprofen as a medication for pain and inflammation, has spread among athletes and non-athletes, the results of this research can provide information about the advantages and disadvantages of using medicinal plants and ibuprofen.
Keywords: chamomile, inflammatory indexes, eccentric exercise, young girls
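The repeated-measures analysis mentioned in this abstract can be reproduced in outline as follows. This is a minimal sketch with invented illustrative numbers, not the study's data.

```python
# Repeated-measures ANOVA: cortisol measured at several time points
# per subject. Values below are made up for illustration only.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.DataFrame({
    "subject":  [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "time":     ["pre", "4h", "24h"] * 3,
    "cortisol": [12.1, 18.4, 14.2, 11.3, 17.9, 13.8, 12.8, 19.1, 14.9],
})

# Test the within-subject effect of time on cortisol.
res = AnovaRM(df, depvar="cortisol", subject="subject", within=["time"]).fit()
print(res)
```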
Procedia PDF Downloads 418
1111 When the Rubber Hits the Road: The Enactment of Well-Intentioned Language Policy in Digital vs. In Situ Spaces on Washington, DC Public Transportation
Authors: Austin Vander Wel, Katherin Vargas Henao
Abstract:
Washington, DC, is a city in which Spanish, along with several other minority languages, is prevalent not only among tourists but also among those living within city limits. In response to this linguistic diversity and DC's adoption of the Language Access Act in 2004, the Washington Metropolitan Area Transit Authority (WMATA) committed to addressing the need for equal linguistic representation and established a five-step plan to provide the best multilingual information possible for public transportation users. The current study, however, strongly suggests that this de jure policy does not align with the reality of Spanish's representation on DC public transportation, although perhaps in an unexpected way. To investigate Spanish's de facto representation and how it contrasts with de jure policy, this study implements a linguistic landscapes methodology that takes critical language-policy as its theoretical framework (Tollefson, 2005). Specifically concerning de facto representation, it focuses on the discrepancies between digital spaces and the physical spaces through which users travel. These digital vs. in situ conditions are further analyzed by separately addressing aural and visual modalities. In digital spaces, data was collected from WMATA's website (visual) and their bilingual hotline (aural). For in situ spaces, both bus and metro areas of DC public transportation were explored, with signs comprising the visual modality, and recordings, driver announcements, and interactions with metro kiosk workers comprising the aural modality. While digital spaces were found to fulfill WMATA's commitment to representing Spanish as outlined in the de jure policy, physical spaces show a large discrepancy between what is said and what is done, particularly regarding the bus system and the aural modality overall. These discrepancies in in situ spaces place Spanish speakers at a clear disadvantage, demanding additional resources and knowledge on the part of residents with limited or no English proficiency in order to have equal access to this public good. Based on our critical language-policy analysis, while Spanish is represented as a right in the de jure policy, its implementation in situ clearly portrays Spanish as a problem, since those seeking bilingual information cannot expect it to be present when and where they need it most (Ruíz, 1984; Tollefson, 2005). This study concludes with practical, data-based steps to improve the current situation facing DC's public transportation context and serves as a model for responding to inadequate enactment of de jure policy in other language policy settings.
Keywords: urban landscape, language access, critical language-policy, Spanish, public transportation
Procedia PDF Downloads 73
1110 Controlled Drug Delivery System for Delivery of Poor Water Soluble Drugs
Authors: Raj Kumar, Prem Felix Siril
Abstract:
The poor aqueous solubility of many pharmaceutical drugs and potential drug candidates is a major challenge in drug development, and nanoformulation of such candidates is one of the principal solutions for delivering them. We initially developed the evaporation-assisted solvent-antisolvent interaction (EASAI) method, which is useful for preparing nanoparticles of poorly water-soluble drugs with spherical morphology and particle sizes below 100 nm. However, to further improve formulation efficacy and to reduce the number of doses and the side effects, it is important to control the delivery of drugs. Among the many nano-drug carrier systems available, solid lipid nanoparticles (SLNs) have many advantages over the others, such as high biocompatibility, stability, non-toxicity, and the ability to achieve controlled drug release and drug targeting. SLNs can be administered through all existing routes owing to the high biocompatibility of lipids. SLNs are usually composed of a lipid, a surfactant, and a drug encapsulated in the lipid matrix. A number of non-steroidal anti-inflammatory drugs (NSAIDs) have poor bioavailability resulting from their poor aqueous solubility. In the present work, SLNs loaded with NSAIDs such as nabumetone (NBT), ketoprofen (KP), and ibuprofen (IBP) were successfully prepared using different lipids and surfactants. We studied and optimized the experimental parameters using a number of lipids, surfactants, and NSAIDs. The effect of experimental parameters such as lipid-to-surfactant ratio, volume of water, temperature, drug concentration, and sonication time on the particle size of SLNs prepared by hot-melt sonication was studied. Particle size was found to be directly proportional to drug concentration and inversely proportional to surfactant concentration, volume of water added, and water temperature. SLNs prepared under optimized conditions were characterized thoroughly using techniques such as dynamic light scattering (DLS), field emission scanning electron microscopy (FESEM), transmission electron microscopy (TEM), atomic force microscopy (AFM), X-ray diffraction (XRD), differential scanning calorimetry (DSC), and Fourier-transform infrared spectroscopy (FTIR). We successfully prepared SLNs below 220 nm using different lipid and surfactant combinations. KP, NBT, and IBP showed entrapment efficiencies of 74%, 69%, and 53%, with drug loadings of 2%, 7%, and 6%, respectively, in SLNs of Campul GMS 50K and Gelucire 50/13. The in vitro release profile of the drug-loaded SLNs showed that nearly 100% of the drug was released within 6 h.
Keywords: nanoparticles, delivery, solid lipid nanoparticles, hot-melt sonication, poor water soluble drugs, solubility, bioavailability
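The entrapment efficiency (EE) and drug loading (DL) figures quoted above are consistent with the definitions below, which are the ones commonly used for lipid nanoparticles; the abstract does not state the authors' exact formulas, so these are assumptions.

```latex
% Common definitions for lipid nanoparticles; assumed, not quoted from the paper.
\[
\mathrm{EE}\,(\%) = \frac{W_{\text{total drug}} - W_{\text{free drug}}}{W_{\text{total drug}}} \times 100
\]
\[
\mathrm{DL}\,(\%) = \frac{W_{\text{entrapped drug}}}{W_{\text{lipid}} + W_{\text{entrapped drug}}} \times 100
\]
```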
Procedia PDF Downloads 313
1109 Developing and Shake Table Testing of Semi-Active Hydraulic Damper as Active Interaction Control Device
Authors: Ming-Hsiang Shih, Wen-Pei Sung, Shih-Heng Tung
Abstract:
Semi-active structural control systems for earthquake excitation are adaptable and require little energy. Our research team previously developed the Displacement Semi-Active Hydraulic Damper (DSHD). Shake table tests of a DSHD installed in a full-scale test structure demonstrated that the device brings its energy-dissipating performance into full play under earthquake excitation. The objective of this research is to develop a new Active Interaction Control device (AIC) and to verify its energy-dissipation capability by shake table testing. The proposed AIC converts an improved DSHD into an AIC through the addition of an accumulator. The main concept of this energy-dissipating AIC is to exploit the interaction between an affiliated structure (sub-structure) and the protected structure (main structure) to transfer the input seismic force into the sub-structure and thereby reduce the structural deformation of the main structure. This concept was tested using a full-scale multi-degree-of-freedom test structure fitted with the proposed AIC and subjected to external forces of various magnitudes, in order to examine the shock-absorption influence of predictive control, sub-structure stiffness, synchronous control, non-synchronous control, and insufficient control position. The test results confirm that: (1) the developed device effectively diminishes the structural displacement and acceleration responses; (2) even with low-precision semi-active control, the shock absorption achieved twice the seismic-proofing efficacy of the passive control method; (3) the active control method did not have the negative effect of amplifying the acceleration response of the structure; (4) the AIC exhibits a time-delay problem, the same as in ordinary active control, which the proposed predictive control method can overcome; and (5) condition switching is an important characteristic of the control type, and the tests show that synchronous control is easy to regulate and avoids exciting a high-frequency response. These laboratory results confirm that the developed device can apply the mutual interaction between the subordinate structure and the protected main structure to transfer the earthquake energy applied to the main structure into the subordinate structure, so that the objective of minimizing the deformation of the main structure is achieved.
Keywords: DSHD (Displacement Semi-Active Hydraulic Damper), AIC (Active Interaction Control Device), shake table test, full scale structure test, sub-structure, main-structure
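For readers unfamiliar with semi-active control, the sketch below illustrates a generic on/off switching rule (skyhook-style clipping). This is explicitly not the authors' DSHD/AIC control law, which the abstract does not specify; it only shows how a semi-active device switches states from measured response quantities.

```python
# Generic skyhook on/off rule: engage damping only when the damping
# force would oppose the absolute motion of the protected structure.
# This is a stand-in illustration, not the paper's control algorithm.
def damper_state(abs_velocity: float, rel_velocity: float) -> str:
    """Return 'on' when absolute and relative velocities share a sign."""
    return "on" if abs_velocity * rel_velocity > 0 else "off"

print(damper_state(0.12, 0.05))   # same sign  -> "on"
print(damper_state(0.12, -0.05))  # opposite   -> "off"
```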
Procedia PDF Downloads 520
1108 Traditional Medicine and Islamic Holistic Approach in Palliative Care Management of Terminally Ill Cancer Patients
Authors: Mohammed Khalil Ur Rahman, Mohammed Alsharon, Arshad Muktar, Zahid Shaik
Abstract:
Any ailment can reach a terminal stage; cancer is one such disease, often detected only at a late stage. Cancer is frequently characterized by agonizing constitutional symptoms that distress patients and their families alike. The treatment modality employed to relieve such intolerable symptoms is known as palliative care. The goal of palliative care is to enhance the patient's quality of life by relieving, or at least reducing, distressing symptoms such as pain, nausea/vomiting, anorexia/loss of appetite, excessive salivation, mouth ulcers, weight loss, constipation, oral thrush, and emaciation, which result from the disease itself or from treatments such as chemotherapy and radiation. Ayurveda, Unani, and other traditional medicines have received growing international attention in recent years, and from the holistic Ayurvedic and Unani perspective of disease, there appear to be many herbs and herbo-mineral preparations that can be employed in the treatment of malignancy and in palliative care. Though many of them have yet to be scientifically proven anti-cancerous, there are definite positive indications that some of these medications relieve agonizing symptoms, thereby making the patient's life easier. Health is viewed in Islam in a holistic way. One of the names of the Quran is al-shifa', meaning 'that which heals' or 'the restorer of health', referring to spiritual, intellectual, psychological, and physical health. The general aim of medical science, according to Islam, is to secure and adopt suitable measures which, with Allah's permission, help to preserve or restore the health of the human body. Islam motivates the physician to view the patient as one organism: the patient has physical, social, psychological, and spiritual dimensions that must be considered together in an integrated, holistic approach. Aims & Objectives: to suggest herbs mentioned in Ayurveda and Unani with potential palliative activity for cancer patients; to note that most of tibb nabawi (Prophetic Medicine) is preventive medicine and must have been divinely inspired; and to outline the spiritual aspects of healing, in which prayer, dua, recitation of the Quran, and remembrance of Allah play a central role. Materials & Method: a literature review of the herbs, supported by experiential evidence, will be discussed. Discussion: the subject will be discussed at length on the basis of the collected data. Conclusion: will be presented in the paper.
Keywords: palliative care, holistic, Ayurvedic and Unani traditional system of medicine, Quran, hadith
Procedia PDF Downloads 341
1107 Production of Ferroboron by SHS-Metallurgy from Iron-Containing Rolled Production Wastes for Alloying of Cast Iron
Authors: G. Zakharov, Z. Aslamazashvili, M. Chikhradze, D. Kvaskhvadze, N. Khidasheli, S. Gvazava
Abstract:
Traditional technologies for processing iron-containing industrial waste, including waste from steel-rolling production, involve significant energy costs, long process durations, and complex, expensive equipment. Waste generated by industrial processes harms the environment, but at the same time it is a valuable raw material that can be used to produce new marketable products. Studying the effectiveness of self-propagating high-temperature synthesis (SHS) methods, which are characterized by simple equipment, high product purity, and high processing speed, is therefore of wide scientific and practical interest for solving this problem. This work presents technological aspects of the production of ferroboron by SHS metallurgy from iron-containing rolling-mill wastes for the alloying of cast iron, together with results on the effect of the alloying element on the degree of boron assimilation by liquid cast iron. The combustion features of the Fe-B system have been investigated, and the main parameters controlling the phase composition of the synthesis products have been established experimentally. The effect of overloads on the formation of cast ligatures and on the structure-formation mechanisms of SHS products was studied. It has been shown that an increase in the hematite (Fe₂O₃) content of the iron-containing waste leads to an increase in the content of the FeB phase and, accordingly, in the amount of boron in the ligature. The boron content of the ligature lies within 3-14%, and the phase composition of the ligatures obtained consists of the Fe₂B and FeB phases. Depending on the initial composition of the wastes, the yield of the end product reaches 91-94%, and the extraction of boron is 70-88%. The combustion of highly exothermic mixtures makes it possible to obtain a wide range of boron-containing ligatures from industrial wastes. In view of the relatively low melting point of the SHS ligature obtained, positive dynamics of boron absorption by liquid iron were established. According to the data obtained, the degree of absorption of the ligature when alloying gray cast iron at 1450°C is 80-85%. When treatment of the liquid cast iron with magnesium is combined with subsequent alloying with the developed ligature, boron losses are reduced by 5-7%, and a uniform distribution of boron micro-additives throughout the volume of the treated liquid metal is ensured. Acknowledgment: This work was supported by Shota Rustaveli Georgian National Science Foundation of Georgia (SRGNSFG) under the GENIE project (grant number № CARYS-19-802).
Keywords: self-propagating high-temperature synthesis, cast iron, industrial waste, ductile iron, structure formation
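For clarity, the yield and boron-extraction percentages quoted above can be read against the following generic definitions; the authors' exact formulas are not given in the abstract, so these are assumptions.

```latex
% Generic yield and extraction definitions; assumed, not quoted from the paper.
\[
\text{Yield}\,(\%) = \frac{m_{\text{product}}}{m_{\text{theoretical}}} \times 100,
\qquad
\text{B extraction}\,(\%) = \frac{m_{\mathrm{B},\,\text{ligature}}}{m_{\mathrm{B},\,\text{charge}}} \times 100
\]
```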
Procedia PDF Downloads 124
1106 Generation of Roof Design Spectra Directly from Uniform Hazard Spectra
Authors: Amin Asgarian, Ghyslaine McClure
Abstract:
Proper seismic evaluation of non-structural components (NSCs) requires an accurate estimation of floor seismic demands (i.e., acceleration and displacement demands). Most current international codes incorporate empirical equations to calculate the equivalent static seismic force for which NSCs and their anchorage systems must be designed. These equations are, in general, functions of the component mass and of the peak seismic acceleration to which NSCs are subjected during the earthquake. However, recent studies have shown that these recommendations suffer from several shortcomings, such as neglecting higher-mode effects, tuning effects, and NSC damping effects, which cause underestimation of the component seismic acceleration demand. This work aims to circumvent these shortcomings of code provisions, and to improve on them, by proposing a simplified, practical, yet accurate approach to generate acceleration Floor Design Spectra (FDS) directly from the corresponding Uniform Hazard Spectra (UHS), i.e., the design spectra for structural components. A database of 27 reinforced concrete (RC) buildings in which ambient vibration measurements (AVM) had been conducted was used. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, all located in Montréal, Canada, and designated as post-disaster buildings or emergency shelters. The buildings were subjected to a set of 20 compatible seismic records, and floor response spectra (FRS) in terms of pseudo-acceleration were derived using the proposed approach for every floor of each building in both horizontal directions, considering four NSC damping ratios (2, 5, 10, and 20% viscous damping). Several parameters affecting NSC response were evaluated statistically: the NSC damping ratio, the tuning of the NSC natural period with one of the natural periods of the supporting structure, the higher modes of the supporting structure, and the location of the NSC. The entire spectral range is divided into three distinct segments, namely the short-period, fundamental-period, and long-period regions. The derived roof floor response spectra for NSCs with 5% damping were compared with the 5%-damped UHS, and procedures are proposed to generate roof FDS for NSCs with 5% damping directly from the 5%-damped UHS in each spectral region. The generated FDS is a powerful, practical, and accurate tool for the seismic design and assessment of acceleration-sensitive NSCs, particularly in existing post-disaster buildings, which must remain functional even after an earthquake and cannot tolerate damage to NSCs.
Keywords: earthquake engineering, operational and functional components (OFCs), operational modal analysis (OMA), seismic assessment and design
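To make the floor response spectrum derivation concrete, here is a minimal sketch of computing a pseudo-acceleration response spectrum from a single floor acceleration record, assuming linear SDOF oscillators integrated with the Newmark average-acceleration method. The synthetic input signal and parameter values are illustrative only, not the study's data.

```python
# Pseudo-acceleration spectrum of unit-mass linear SDOF oscillators
# under base acceleration `acc` (m/s^2) sampled at step `dt` (s).
import numpy as np

def pseudo_acceleration_spectrum(acc, dt, periods, zeta=0.05):
    Sa = np.empty_like(periods, dtype=float)
    beta, gamma = 0.25, 0.5                     # average acceleration
    for i, T in enumerate(periods):
        wn = 2.0 * np.pi / T
        k, c = wn**2, 2.0 * zeta * wn           # unit mass
        u = v = 0.0
        a = -acc[0] - c * v - k * u             # initial relative accel.
        umax = 0.0
        keff = k + gamma * c / (beta * dt) + 1.0 / (beta * dt**2)
        for ag in acc[1:]:
            p = (-ag
                 + (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
                 + c * (gamma * u / (beta * dt) - (1.0 - gamma / beta) * v
                        - dt * (1.0 - 0.5 * gamma / beta) * a))
            un = p / keff
            vn = (gamma * (un - u) / (beta * dt)
                  + (1.0 - gamma / beta) * v
                  + dt * (1.0 - 0.5 * gamma / beta) * a)
            an = (un - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
            u, v, a = un, vn, an
            umax = max(umax, abs(u))
        Sa[i] = wn**2 * umax                    # pseudo-acceleration
    return Sa

# Demo with a synthetic decaying sine in place of a measured floor record.
dt = 0.01
t = np.arange(0.0, 10.0, dt)
acc = np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.5 * t)
periods = np.linspace(0.05, 3.0, 60)
print(pseudo_acceleration_spectrum(acc, dt, periods).max())
```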
Procedia PDF Downloads 239
1105 A World Map of Seabed Sediment Based on 50 Years of Knowledge
Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès
Abstract:
Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches for aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This approach had already been pioneered a century earlier, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments along the French coasts and then sediment maps of the continental shelves of Europe and North America. The current ocean sediment map presented here was initiated from UNESCO's general map of the deep ocean floor. That map was adapted using a single sediment classification to present all types of sediments, from beaches to the deep seabed and from glacial deposits to tropical sediments. To allow good visualization and to suit different applications, only the grain size of sediments is represented. Published seabed maps are studied and, if they are of interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of multibeam echo sounder (MES) imagery from large deep-ocean hydrographic surveys, which allow very high-quality mapping of areas that until then had been represented as homogeneous. The third and principal source of data is the integration of regional maps produced specifically for this project. These regional maps are compiled using all the bathymetric and sedimentary data of a region; this step makes it possible to produce a regional synthesis map, with generalizations applied where data are overly precise. So far, 86 regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map. This work is ongoing and yields a new digital version every two years, with the integration of new maps. This article describes the choices made in terms of sediment classification, the scale of the source data, and the zonation of quality variability. The map is the final step in a system comprising the Shom sedimentary database, enriched by more than one million point and areal data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000, and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress in seabed characterization made over the last decades. Thus, the arrival of new seafloor classification systems has improved recent seabed maps, and compiling these new maps with those previously published gradually enriches the world sedimentary map. There is, however, still much work to do in some regions, which are still based on data acquired more than half a century ago.
Keywords: marine sedimentology, seabed map, sediment classification, world ocean
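Since the map represents grain size only, a classification step of the following kind underlies every integrated source. This is a minimal sketch using the widely known Wentworth scale as a stand-in; the abstract does not detail Shom's own classification, so the class boundaries here are an assumption.

```python
# Map a median grain size (mm) to a Wentworth size class.
# Stand-in illustration, not Shom's classification scheme.
def wentworth_class(grain_size_mm: float) -> str:
    bounds = [
        (256.0, "boulder"),
        (64.0, "cobble"),
        (4.0, "pebble"),
        (2.0, "granule"),
        (0.0625, "sand"),
        (0.0039, "silt"),
    ]
    for lower, name in bounds:
        if grain_size_mm >= lower:
            return name
    return "clay"

print(wentworth_class(0.3))    # -> "sand"
print(wentworth_class(0.001))  # -> "clay"
```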
Procedia PDF Downloads 232
1104 Artificial Intelligence and Robotics in the Eye of Private Law with Special Regards to Intellectual Property and Liability Issues
Authors: Barna Arnold Keserű
Abstract:
In the last few years (what many scholars call the big data era), artificial intelligence (hereinafter AI) has attracted more and more attention from the public and from the different branches of science as well. What was previously mere science fiction is now becoming reality. AI and robotics often walk hand in hand, which not only changes business and industrial life but also has a serious impact on the legal system. The author's main research focuses on these impacts in the field of private law, with special regard to liability and intellectual property issues. Many questions arise in these areas in connection with AI and robotics; the boundaries are not sufficiently clear, and different needs are articulated by different stakeholders. Recognizing the urgent need for reflection, the Committee on Legal Affairs of the European Parliament adopted a Motion for a European Parliament Resolution, A8-0005/2017 (of January 27th, 2017), making recommendations to the Commission on civil law rules on robotics and AI. This document identifies some crucial uses of AI and/or robotics, e.g., autonomous vehicles, the replacement of human jobs in industry, and smart applications and machines, and it aims to give recommendations for the safe and beneficial use of AI and robotics. However, as the document says, there are no legal provisions that specifically apply to robotics or AI in IP law; existing legal regimes and doctrines can be readily applied to robotics, although some aspects appear to call for specific consideration. The document therefore calls on the Commission to support a horizontal and technologically neutral approach to intellectual property applicable to the various sectors in which robotics could be employed. AI can generate content worthy of copyright protection, but the question arises: who is the author, and who owns the copyright? The AI itself cannot be deemed the author, because that would mean it is legally equal to human persons. But there is the programmer who created the basic code of the AI, the undertaking that sells the AI as a product, and the user who gives the inputs from which the AI creates something new. Alternatively, AI-generated content may be so far removed from humans that there is no human author, so that it belongs to the public domain. The same questions can be asked in connection with patents. This research aims to answer these questions within the current legal framework and tries to illuminate future possibilities for adapting these frameworks to socio-economic needs. Here, proper license agreements in the multilevel chain from the programmer to the end-user become very important, because AI is intellectual property in itself that creates further intellectual property; this can collide with data-protection and property rules as well. The problems are similar in the field of liability. Different existing forms of liability can be applied when AI or AI-led robotics cause damage, but it is not certain that the result complies with economic and developmental interests.
Keywords: artificial intelligence, intellectual property, liability, robotics
Procedia PDF Downloads 206
1103 Bioinformatic Strategies for the Production of Glycoproteins in Algae
Authors: Fadi Saleh, Çığdem Sezer Zhmurov
Abstract:
Biopharmaceuticals represent one of the fastest-developing fields within biotechnology, and the biological macromolecules produced inside cells have a variety of therapeutic applications. In the past, mammalian cells, especially CHO cells, have been employed in the production of biopharmaceuticals because they can achieve human-like post-translational modifications (PTMs). These systems, however, carry evident disadvantages, such as high production costs, vulnerability to contamination, and limitations in scalability. This research focuses on the use of microalgae as a bioreactor system for the synthesis of biopharmaceutical glycoproteins with respect to PTMs, particularly N-glycosylation, and points to growing interest in microalgae as a potential substitute for more conventional expression systems. Microalgae offer a number of advantages, including rapid growth rates, the absence of common human pathogens, controlled scalability in bioreactors, and the capacity for some PTMs. Thus, the potential of microalgae to produce recombinant proteins with favorable characteristics makes them a promising platform for producing biopharmaceuticals. The study focuses on examining the N-glycosylation pathways across different species of microalgae. This investigation is important because N-glycosylation, the process by which carbohydrate groups are linked to proteins, profoundly influences the stability, activity, and overall performance of glycoproteins. Additionally, bioinformatic methodologies are employed to elucidate the genetic pathways implicated in N-glycosylation within microalgae, with the intention of modifying these organisms to produce glycoproteins suitable for human use. In this way, the present comparative analysis of the N-glycosylation pathways of humans and microalgae can be used to bridge both systems in order to produce biopharmaceuticals with humanized glycosylation profiles within microalgal organisms. The results underline the potential of microalgae to overcome some of the limitations associated with traditional biopharmaceutical production systems. The study may help in creating a cost-effective and scalable means of producing high-quality biopharmaceuticals by genetically modifying microalgae to produce glycoproteins with human-compatible N-glycosylation. Such improvements would benefit biopharmaceutical production and the biopharmaceutical sector through a novel, green, and efficient expression platform. This thesis is therefore a thorough investigation of the viability of microalgae as an efficient platform for producing biopharmaceutical glycoproteins. Based on an in-depth bioinformatic analysis of microalgal N-glycosylation pathways, this work sets out a platform for engineering them to produce human-compatible glycoproteins. The findings will have significant implications for the biopharmaceutical industry by opening up a new way of developing safer, more efficient, and economically more feasible biopharmaceutical manufacturing platforms.
Keywords: microalgae, glycoproteins, post-translational modification, genome
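One concrete step in this kind of pathway analysis is locating candidate N-glycosylation sites in protein sequences: the well-established sequon N-X-S/T, where X is any residue except proline. The sketch below shows this; the example sequence is made up, whereas real inputs would come from algal genome annotations.

```python
# Find candidate N-glycosylation sequons (N-X-S/T, X != P) in a protein.
# A lookahead is used so overlapping candidate sites are also caught.
import re

SEQUON = re.compile(r"(?=(N[^P][ST]))")

def find_sequons(protein: str) -> list[tuple[int, str]]:
    """Return 1-based positions and matched triplets of N-X-S/T sequons."""
    return [(m.start() + 1, m.group(1)) for m in SEQUON.finditer(protein)]

print(find_sequons("MKNVTALLNPSGGNQSAA"))  # -> [(3, 'NVT'), (14, 'NQS')]
```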
Procedia PDF Downloads 29