Search results for: ventilation control system
954 Stability and Rheology of Sodium Diclofenac-Loaded and Unloaded Palm Kernel Oil Esters Nanoemulsion Systems
Authors: Malahat Rezaee, Mahiran Basri, Raja Noor Zaliha Raja Abdul Rahman, Abu Bakar Salleh
Abstract:
Sodium diclofenac is one of the most commonly used nonsteroidal anti-inflammatory drugs (NSAIDs). It is especially effective in controlling severe inflammation and pain, musculoskeletal disorders, arthritis, and dysmenorrhea. Formulation as nanoemulsions is one of the nanoscience approaches that has been progressively considered in pharmaceutical science for transdermal drug delivery. Nanoemulsions are a type of emulsion with particle sizes ranging from 20 nm to 200 nm. An emulsion is formed by the dispersion of one liquid, usually the oil phase, in another immiscible liquid, the water phase, stabilized using a surfactant. Palm kernel oil esters (PKOEs), in comparison to other oils, contain higher amounts of shorter-chain esters, which are suitable for application in micro- and nanoemulsion systems as a carrier for actives, with excellent wetting behavior and no oily feeling. This research aimed to study the effect of the O/S ratio on the stability and rheological behavior of sodium diclofenac-loaded and unloaded palm kernel oil esters nanoemulsion systems. The effect of O/S ratios of 0.25, 0.50, 0.75, 1.00 and 1.25 on the stability of the drug-loaded and unloaded nanoemulsion formulations was evaluated by centrifugation, freeze-thaw cycle and storage stability tests. Lecithin and Cremophor EL were used as surfactants. The stability of the prepared nanoemulsion formulations was assessed based on the change in zeta potential and droplet size as a function of time. Instability mechanisms, including coalescence and Ostwald ripening, are discussed for the nanoemulsion system. Comparing drug-loaded and unloaded nanoemulsion formulations, the drug-loaded formulations showed smaller particle size and higher stability. In addition, the O/S ratio of 0.5 was found to be the best ratio of oil to surfactant for producing a nanoemulsion with the highest stability. The effect of the O/S ratio on the rheological properties of drug-loaded and unloaded nanoemulsion systems was studied by plotting the flow curves of shear stress (τ) and viscosity (η) as a function of shear rate (γ̇). The data were fitted to the Power Law model. The results showed that all nanoemulsion formulations exhibited non-Newtonian flow behaviour, displaying shear thinning. Viscosity and yield stress were also evaluated. The nanoemulsion formulation with an O/S ratio of 0.5 showed higher viscosity and K values. In addition, the sodium diclofenac-loaded formulations had higher viscosity and higher yield stress than the drug-unloaded formulations.
Keywords: nanoemulsions, palm kernel oil esters, sodium diclofenac, rheology, stability
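For reference, the Power Law (Ostwald-de Waele) model used in the rheological fits takes the form below, where K is the consistency index and n the flow behaviour index; shear thinning corresponds to n < 1:

```latex
\tau = K\,\dot{\gamma}^{\,n}, \qquad
\eta = \frac{\tau}{\dot{\gamma}} = K\,\dot{\gamma}^{\,n-1}, \qquad
n < 1 \ \text{(shear thinning)}
```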
Procedia PDF Downloads 423
953 Smart BIM Documents - the Development of the Ontology-Based Tool for Employer Information Requirements (OntEIR), and its Transformation into SmartEIR
Authors: Shadan Dwairi
Abstract:
Defining proper requirements is one of the key factors for a successful construction project. Although many attempts have been put forward to assist in identifying requirements, this area remains underdeveloped in Building Information Modelling (BIM) projects. The Employer Information Requirements (EIR) is the fundamental requirements document and a necessary ingredient in achieving a successful BIM project. The provision of a full and clear EIR is essential to achieving BIM Level-2. As defined by PAS 1192-2, the EIR is a “pre-tender document that sets out the information to be delivered and the standards and processes to be adopted by the supplier as part of the project delivery process”. It also notes that the “EIR should be incorporated into tender documentation to enable suppliers to produce an initial BIM Execution Plan (BEP)”. The importance of an effective definition of the EIR lies in its contribution to better productivity during the construction process in terms of cost and time, in addition to improving the quality of the built asset. Proper and clear information is a key aspect of the EIR, in terms of the information it contains and, more importantly, the information the client receives at the end of the project that will enable the effective management and operation of the asset, where typically about 60%-80% of the cost is spent. This paper reports on the research done in developing the Ontology-based tool for Employer Information Requirements (OntEIR). OntEIR has proven able to produce a full and complete set of EIRs, ensuring that the client’s information needs for the final model delivered by BIM are clearly defined from the beginning of the process. It also reports on the work being done to transform OntEIR into a smart tool for defining Employer Information Requirements (smartEIR). smartEIR extends OntEIR, enabling it to develop custom EIRs tailored to the project type, project requirements, and client capabilities. The initial idea behind smartEIR is moving away from the notion that one EIR fits all. smartEIR utilizes the links made in OntEIR, creating a 3D matrix that transforms it into a smart tool. The OntEIR tool is based on the OntEIR framework, which utilizes both ontology and the decomposition of goals to elicit and extract the complete set of requirements needed for a full and comprehensive EIR. A new categorisation system for requirements is also introduced in the framework and tool, which facilitates the understanding and enhances the clarification of the requirements, especially for novice clients. Findings of the evaluation of the tool, conducted with experts in the industry, showed that the OntEIR tool contributes towards the effective and efficient development of EIRs that provide a better understanding of the information requirements as requested by BIM, and supports the production of a complete BIM Execution Plan (BEP) and a Master Information Delivery Plan (MIDP).
Keywords: building information modelling, employer information requirements, ontology, web-based, tool
Procedia PDF Downloads 127
952 Exploring Faculty Attitudes about Grades and Alternative Approaches to Grading: Pilot Study
Authors: Scott Snyder
Abstract:
Grading approaches in higher education have not changed meaningfully in over 100 years. While there is variation in the types of grades assigned across countries, most use approaches based on simple ordinal scales (e.g., letter grades). While grades are generally viewed as an indication of a student's performance, challenges arise regarding the clarity, validity, and reliability of letter grades. Research about grading in higher education has primarily focused on grade inflation, student attitudes toward grading, impacts of grades, and benefits of plus-minus letter grade systems. Little research is available about alternative approaches to grading, the varying approaches used by faculty within and across colleges, and faculty attitudes toward grades and alternative approaches to grading. To begin to address these gaps, a survey was conducted of faculty in a sample of departments at three diverse colleges in a southeastern state in the US. The survey focused on faculty experiences with and attitudes toward grading, the degree to which faculty innovate in teaching and grading practices, and faculty interest in alternatives to the point system approach to grading. Responses were received from 104 instructors (21% response rate). The majority reported that teaching accounted for 50% or more of their academic duties. Almost all (92%) of the respondents reported using point and percentage systems for their grading. While all respondents agreed that grades should reflect the degree to which objectives were mastered, half indicated that grades should also reflect effort or improvement. Over 60% felt that grades should be predictive of success in subsequent courses or real-life applications. Most respondents disagreed that grades should compare students to other students. About 42% worried about their own grade inflation and grade inflation in their college. Only 17% disagreed that grades mean different things based on the instructor, while 75% thought it would be good if there was agreement. Less than 50% of respondents felt that grades were directly useful for identifying students who should or should not continue, identifying strengths and weaknesses, predicting which students will be most successful, or contributing to program monitoring of student progress. Instructors were less willing to modify assessment than they were to modify instruction and curriculum. Most respondents (76%) were interested in learning about alternative approaches to grading (e.g., specifications grading). The factors most associated with willingness to adopt a new grading approach were its clarity to students and the simplicity of adopting it. Follow-up studies are underway to investigate implementations of alternative grading approaches, expand the study to universities and departments not involved in the initial study, examine student attitudes about alternative approaches, and refine the survey's measure of attitude toward the adoption of alternative grading practices. Workshops about the challenges of using percentage and point systems for determining grades, and workshops regarding alternative approaches to grading, are being offered.
Keywords: alternative approaches to grading, grades, higher education, letter grades
Procedia PDF Downloads 96
951 Critical Discourse Analysis Approach to the Post-Feminist Representations in Ecommerce Livestreamings of Lipsticks
Authors: Haiyan Huang, Jan Blommaert, Ellen Van Praet
Abstract:
The embrace of a neoliberal economic system in China has engendered the entry of global commodity capitalism into the domestic Chinese market and ushered in a post-feminism closely associated with consumerism from Western culture. Chinese women are instilled with, and thus hold, the belief of empowering themselves and expressing their individualism through consumption. To unravel the consumerist ideologies embedded in the discursive practices of Li, the livestreamer under study, we rely on critical discourse analysis (CDA) as our research framework. The data analyses suggest that cosmopolitanism and class are two recurring themes when Li engages in persuading consumerist behaviors from the female audience. Through hints and cues such as “going on business trips”, “traveling abroad”, and “international brands”, Li provides access to, and the possibility of imagining, a cosmopolitan and middle-class identity for his audience. Such yearning for Western culture and a global-citizen identity also implicates an aspiration to a well-off socioeconomic status, showing that post-feminism in China not only embodies Western consumerism but also implicates the struggle for class mobility. These defining elements of choice and freedom are well situated in contemporary Chinese society, where women enjoy more educational and economic independence than before. However, a closer examination reveals conflicts between the hegemonic discourse of post-feminism and the status quo. First, propagating women's power through consumption obscures the entrenched gender inequality in China. Public debate on employment discrimination, equal pay, education rights, and the other cornerstones of feminism has been largely absent in China, leaving historical gender issues unresolved. Second, the lengthy broadcasts (which normally last more than 2 hours), featuring big discounts on products, beg the question of who the real audience of ecommerce livestreaming is. While seemingly addressing young, well-off Chinese females, Li's discursive practice may be targeting young but not wealthy girls who aspire to mimic the lifestyle of middle-class women. By selling the idea of empowerment and identity construction through consuming beauty products (e.g., lipsticks), capitalists endeavor to create the post-feminist illusion and cause anxieties among Chinese females. Through in-depth analyses of hegemonic discourse in ecommerce livestreamings of lipsticks, the paper contributes to a better understanding of post-feminism in contemporary China and illustrates the problems Chinese women face in securing power and equality.
Keywords: Chinese women, critical discourse analysis, ecommerce livestreaming, post-feminism
Procedia PDF Downloads 126
950 Telepsychiatry for Asian Americans
Authors: Jami Wang, Brian Kao, Davin Agustines
Abstract:
COVID-19 highlighted the active discrimination against the Asian American population, easily seen through media coverage, social tension, and increased crimes against this specific population. It is well known that long-term racism can have a large impact on both emotional and psychological well-being. The healthcare disparity during this time also revealed how the Asian American community lacked research data, political support, and medical infrastructure. During a time when Asian Americans fear for their safety amid declining mental health, telepsychiatry is particularly promising. COVID-19 demonstrated how well psychiatry could integrate with telemedicine, with psychiatry being the second most utilized type of telemedicine visit. However, the Asian American community did not utilize telepsychiatry resources as much as other groups. Because of this, we wanted to understand why the patient population affected the most mentally by COVID-19 did not seek out care. To do this, we studied the top telepsychiatry platforms. The current top telepsychiatry companies in the United States include Teladoc and BetterHelp. In the Teladoc mental health sector, only 4 languages were available (English, Spanish, French, and Danish), none of them an Asian language. In a similar manner, Teladoc's top competitor in the telepsychiatry space, BetterHelp, listed a total of only 3 Asian languages: Mandarin, Japanese, and Malaysian. This is still a short list considering they have over 20 languages available. The shortage of available physicians who speak multiple languages is concerning, as it could be difficult for the Asian American community to relate to providers. There are limited mental health resources that cater to their likely cultural needs, further exacerbating the structural racism and institutional barriers to appropriate care. It is important to note that these companies do provide interpreters to comply with federal nondiscrimination and language assistance law. However, interactions with an interpreter are not only more time-consuming but also less personal than talking directly with a physician. Psychiatry is the field that emphasizes interpersonal relationships. The trust between a physician and the patient is critical in developing patient rapport, guiding a better understanding of the clinical picture and treating the patient appropriately. The language barrier creates an additional barrier between the physician and patient. Because Asian Americans are one of the fastest-growing patient populations, these telehealth companies have much to gain by catering to the Asian American market. Without providing adequate access to bilingual and bicultural physicians, the current system will only further exacerbate the growing disparity. The healthcare community and telehealth companies need to recognize that the Asian American population is a severely underserved population in mental health and has much to gain from telepsychiatry. The lack of language access is one of many reasons why there is a disparity for Asian Americans in the mental health space.
Keywords: telemedicine, psychiatry, Asian American, disparity
Procedia PDF Downloads 105
949 Development and Testing of Health Literacy Scales for Chinese Primary and Secondary School Students
Authors: Jiayue Guo, Lili You
Abstract:
Background: Children's and adolescents' health is crucial both for personal well-being and for the nation's future health landscape. Health literacy (HL) is important in enabling adolescents to self-manage their health, a fundamental step towards health empowerment. However, there are limited tools for assessing HL among elementary and junior high school students. This study aims to construct and validate a test-based HL scale for Chinese students, offering a scientific reference for cross-cultural HL tool development. Methods: We conducted a cross-sectional online survey. A total of 4,189 Chinese in-school primary and secondary students were recruited using a stratified cluster random sampling method. The development of the scale involved defining the concept of HL, establishing the item indicator system, screening items (7 health content dimensions), and evaluating reliability and validity. The Delphi expert-consultation method was used to screen items, the Rasch model was used for quality analysis, and Cronbach's alpha coefficient was used to examine internal consistency. Results: We developed four versions of the HL scale, each with a total score of 100, encompassing seven key health areas: hygiene, nutrition, physical activity, mental health, disease prevention, safety awareness, and digital health literacy. Each version measures four dimensions of health competencies: knowledge, skills, motivation, and behavior. After the second round of expert consultation, the average importance score of each item was 4.5-5.0, and the coefficient of variation was 0.000-0.174. The knowledge and skills dimensions consist of judgment-based and multiple-choice questions, with the Rasch model confirming unidimensionality at 5.7% residual variance. The behavioral and motivational dimensions, measured with scale-type items, demonstrated internal consistency via Cronbach's alpha and strong inter-item correlation, with KMO values of 0.924 and 0.787, respectively. Bartlett's test of sphericity, with p-values <0.001, further substantiates the scale's reliability. Conclusions: The new test-based scale, designed to evaluate competencies within a multifaceted framework, aligns with current international adolescent literacy theories and China's health education policies, focusing not only on knowledge acquisition but also on the application of health-related thinking and behaviors. The scale can be used as a comprehensive tool for HL evaluation and a reference for other countries.
Keywords: adolescent health, Chinese, health literacy, Rasch model, scale development
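As a brief illustration of the internal-consistency analysis reported above, a minimal sketch of Cronbach's alpha computed over a block of scale-type items; the response matrix and item count here are hypothetical, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 students x 4 motivation items on a 5-point scale
scores = np.array([[4, 5, 4, 4],
                   [3, 3, 2, 3],
                   [5, 5, 5, 4],
                   [2, 2, 3, 2],
                   [4, 4, 4, 5],
                   [3, 4, 3, 3]])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```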
Procedia PDF Downloads 30
948 How to Talk about It without Talking about It: Cognitive Processing Therapy Offers Trauma Symptom Relief without Violating Cultural Norms
Authors: Anne Giles
Abstract:
Humans naturally wish they could forget traumatic experiences. To help prevent future harm, however, the human brain has evolved to retain data about experiences of threat, alarm, or violation. When given compassionate support and assistance with thinking helpfully and realistically about traumatic events, most people can adjust to experiencing hardships, albeit with residual sad, unfortunate memories. Persistent, recurrent, intrusive memories, difficulty sleeping, emotion dysregulation, and avoidance of reminders, however, may be symptoms of Post-traumatic Stress Disorder (PTSD). Brain scans show that PTSD affects brain functioning. We currently have no physical means of restoring the system of brain structures and functions involved in PTSD. Medications may ease some symptoms but not others. However, forms of "talk therapy" with cognitive components have been found by researchers to reduce, even resolve, a broad spectrum of trauma symptoms. Many cultures have taboos against talking about hardships. Individuals may present themselves to mental health care professionals with severe, disabling trauma symptoms but, because of cultural norms, be unable to speak about them. In China, for example, relationship expectations may include the belief, "Bad things happening in the family should stay in the family (jiāchǒu bùkě wàiyán 家丑不可外扬)." The concept of "family (jiā 家)" may include partnerships, close and extended families, communities, companies, and the nation itself. In contrast to many trauma therapies, Cognitive Processing Therapy (CPT) for Post-traumatic Stress Disorder asks its participants to focus not on "what" happened but on "why" they think the trauma(s) occurred. The question "why" activates and exercises cognitive functioning. Brain scans of individuals with PTSD reveal that the executive-functioning portions of the brain are inadequately active while the emotion centers are overly active. CPT conceptualizes PTSD as a network of cognitive distortions that keep an individual "stuck" in this under-functioning and over-functioning dynamic. Through asking participants forms of the question "why," plus offering a protocol for examining answers and relinquishing unhelpful beliefs, CPT assists individuals in consciously reactivating the cognitive, executive functions of their brains, thus restoring normal functioning and reducing distressing trauma symptoms. The culturally sensitive components of CPT that allow people to "talk about it without talking about it" may offer the possibility of worldwide relief from symptoms of trauma.
Keywords: cognitive processing therapy (CPT), cultural norms, post-traumatic stress disorder (PTSD), trauma recovery
Procedia PDF Downloads 213
947 From Battles to Balance and Back: Document Analysis of EU Copyright in the Digital Era
Authors: Anette Alén
Abstract:
Intellectual property (IP) regimes have traditionally been designed to integrate various conflicting elements stemming from private entitlement and the public good. In IP laws and regulations, this design takes the form of specific uses of protected subject-matter without the right-holder’s consent, or exhaustion of exclusive rights upon market release, and the like. More recently, the pursuit of ‘balance’ has gained ground in the conceptualization of these conflicting elements both in terms of IP law and related policy. This can be seen, for example, in the European Union (EU) copyright regime, where ‘balance’ has become a key element in argumentation, backed up by fundamental rights reasoning. This development also entails an ever-expanding dialogue between the IP regime and the constitutional safeguards for property, free speech, and privacy, among others. This study analyses the concept of ‘balance’ in EU copyright law: the research task is to examine the contents of the concept of ‘balance’ and the way it is operationalized and pursued, thereby producing new knowledge on the role and manifestations of ‘balance’ in recent copyright case law and regulatory instruments in the EU. The study discusses two particular pieces of legislation, the EU Digital Single Market (DSM) Copyright Directive (EU) 2019/790 and the finalized EU Artificial Intelligence (AI) Act, including some of the key preparatory materials, as well as EU Court of Justice (CJEU) case law pertaining to copyright in the digital era. The material is examined by means of document analysis, mapping the ways ‘balance’ is approached and conceptualized in the documents. Similarly, the interaction of fundamental rights as part of the balancing act is also analyzed. Doctrinal study of law is also employed in the analysis of legal sources. This study suggests that the pursuit of balance is, for its part, conducive to new battles, largely due to the advancement of digitalization and more recent developments in artificial intelligence. Indeed, the ‘balancing act’ rather presents itself as a way to bypass or even solidify some of the conflicting interests in a complex global digital economy. Such a conceptualization, especially when accompanied by non-critical or strategically driven fundamental rights argumentation, runs counter to the genuine acknowledgment of new types of conflicting interests in the copyright regime. Therefore, a more radical approach, including critical analysis of the normative basis and fundamental rights implications of the concept of ‘balance’, is required to readjust copyright law and regulations for the digital era. Notwithstanding the focus on executing the study in the context of the EU copyright regime, the results bear wider significance for the digital economy, especially due to the platform liability regime in the DSM Directive and with the AI Act including objectives of a ‘level playing field’ whereby compliance with EU copyright rules seems to be expected among system providers.
Keywords: balance, copyright, fundamental rights, platform liability, artificial intelligence
Procedia PDF Downloads 31
946 Heat Vulnerability Index (HVI) Mapping in Extreme Heat Days Coupled with Air Pollution Using Principal Component Analysis (PCA) Technique: A Case Study of Amiens, France
Authors: Aiman Mazhar Qureshi, Ahmed Rachid
Abstract:
Extreme heat events are an emerging environmental health concern in dense urban areas, driven by anthropogenic activities. High spatial and temporal resolution heat maps are important for urban heat adaptation and mitigation, helping to indicate hotspots that require the attention of city planners. The Heat Vulnerability Index (HVI) is an important approach used by decision-makers and urban planners to identify heat-vulnerable communities and areas that require heat stress mitigation strategies. Amiens is a medium-sized French city where the average temperature has increased by 1°C since 2000. Extreme heat events were recorded in the month of July in each of the last three consecutive years, 2018, 2019 and 2020. Poor air quality, especially ground-level ozone, has been observed mainly during the same hot periods. In this study, we evaluated the HVI in Amiens during the extreme heat days recorded in those three years (2018, 2019, 2020). The Principal Component Analysis (PCA) technique is used for fine-scale vulnerability mapping. The main data we considered to develop the HVI model are (a) socio-economic and demographic data; (b) air pollution; (c) land use and cover; (d) elderly heat illness; (e) social vulnerability; (f) remote sensing data (land surface temperature (LST), mean elevation, NDVI and NDWI). The output maps identified hot zones through comprehensive GIS analysis. The resultant map shows that high HVI exists in three typical areas: (1) where the population density is quite high and the vegetation cover is small, (2) artificial surfaces (built-up areas), and (3) industrial zones that release thermal energy and ground-level ozone, while low-HVI areas are located in natural landscapes such as rivers and grasslands. The study also illustrates system theory with a causal diagram developed after the data analysis, in which anthropogenic activities and air pollution appear in correspondence with extreme heat events in the city. Our suggested index can be a useful tool to guide urban planners, municipalities, decision-makers and public health professionals in targeting areas at high risk of extreme heat and air pollution for future adaptation and mitigation interventions.
Keywords: heat vulnerability index, heat mapping, heat health-illness, remote sensing, urban heat mitigation
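A minimal sketch of the PCA step described above, assuming the indicator layers (population density, LST, NDVI, etc.) have already been flattened into one row per spatial unit; the synthetic data and the weighting of component scores by explained variance are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical table: one row per grid cell, one column per vulnerability indicator
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))  # e.g. pop. density, % elderly, LST, NDVI, NDWI, O3

Z = StandardScaler().fit_transform(X)   # z-score each indicator
pca = PCA(n_components=3).fit(Z)        # keep the leading components
scores = pca.transform(Z)

# One common convention: weight component scores by explained variance
hvi = scores @ pca.explained_variance_ratio_
print("explained variance:", pca.explained_variance_ratio_.round(2))
print("HVI range:", hvi.min().round(2), "to", hvi.max().round(2))
```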
Procedia PDF Downloads 148
945 Bruch’s Membrane Opening in High Myopia and Its Correlation with Axial Length
Authors: Sanjeeb Kumar Mishra, Aartee Jha, Madhu Thapa, Pragati Gautam
Abstract:
Introduction: High myopia has become a matter of global concern, as it is a major risk factor for glaucoma. Various optic nerve head changes occur in high myopia over time. This might make it difficult to detect pathologies associated with high myopia through conventional funduscopy examination alone. Bruch’s Membrane Opening (area and minimum rim width) is considered an anatomically more accurate and reliable landmark than the conventional clinical disc margin. Study Design: A hospital-based, cross-sectional, non-interventional study. Purpose: The purpose of our study was to measure Bruch’s Membrane Opening (area and minimum rim width) in highly myopic eyes and correlate it with axial length. Methods: A cross-sectional study was conducted at B.P. Koirala Lions Center for Ophthalmic Studies, a tertiary-level eye center in Nepal. 80 eyes of 40 subjects (40% male and 60% female) aged 18-35 years with high myopia (spherical equivalent (SE) ≥ -6D) were taken as cases. Among them, the right eyes (RE) of 39 and the left eyes (LE) of 34 myopic subjects were included in the study. Spectral Domain Optical Coherence Tomography of both eyes of the myopic patients was performed using the Glaucoma Module Premiere Edition (GMPE) with the Anatomic Positioning System (APS) to measure Bruch’s Membrane Opening (area and minimum rim width). Axial length was measured using partial coherence interferometry (IOL Master). Results: Among the 40 myopic subjects, 16 (40%) were males and 24 (60%) were females. The mean age of the myopic subjects was 24.64 ± 5.10 years, with minimum and maximum ages of 18 years and 35 years, respectively. The mean BMO area was 2.28 ± 0.48 mm² in the right eye and 2.15 ± 0.59 mm² in the left eye. BMO area in the highly myopic patients was significantly correlated with axial length. The correlation of BMO area with axial length in RE and LE was statistically significant at (r=0.465, p<0.003) and (r=0.374, p<0.029), respectively. Likewise, the mean BMO-MRW was 325.69 ± 96 µm in the right eye and 339.20 ± 79.50 µm in the left eye. There was a significant correlation of BMO-MRW with axial length in both eyes of the myopic subjects. Moreover, a significant negative correlation of the inferior temporal, nasal, and inferior nasal quadrants (p<0.05) of BMO-MRW of the right eye was found with the axial length of the right eye, whereas all BMO-MRW quadrants of the left eye were negatively correlated (p<0.05) with axial length in the left eye. No significant differences were found between the right eye and the left eye when comparing means of refractive error, axial length, BMO area, and BMO-MRW. Conclusion: From this study, it can be concluded that the BMO area enlarges in high myopia with an increase in axial length. Additionally, BMO-MRW thinning occurs along with BMO enlargement and increases with axial length. There were no significant differences in refractive error, axial length, BMO area, and BMO-MRW between the right eye and the left eye.
Keywords: high myopia, Bruch’s membrane opening, Bruch’s membrane opening minimum rim width, spectral domain optical coherence tomography
Procedia PDF Downloads 18
944 Blockchain Platform Configuration for MyData Operator in Digital and Connected Health
Authors: Minna Pikkarainen, Yueqiang Xu
Abstract:
The integration of digital technology with existing healthcare processes has been painfully slow; a huge gap exists between the field of strictly regulated official medical care and the quickly moving field of health and wellness technology. We claim that the promises of preventive healthcare can only be fulfilled when this gap is closed and health care and self-care become a seamless continuum: "correct information, in the correct hands, at the correct time, allowing individuals and professionals to make better decisions", what we call the connected health approach. Currently, issues related to security, privacy, consumer consent and data sharing are hindering the implementation of this new paradigm of healthcare. This could be solved by following the MyData principles, which state that individuals should have the right and practical means to manage their data and privacy. A MyData infrastructure enables decentralized management of personal data, improves interoperability, makes it easier for companies to comply with tightening data protection regulations, and allows individuals to change service providers without proprietary data lock-ins. This paper tackles today's unprecedented challenges of enabling and stimulating multiple healthcare data providers and stakeholders to participate more actively in the digital health ecosystem. First, the paper systematically proposes the MyData approach for the healthcare and preventive health data ecosystem. In this research, the work is targeted at health and wellness ecosystems. Each ecosystem consists of key actors, such as 1) the individual (citizen or professional controlling/using the services), i.e., the data subject; 2) services providing personal data (e.g., startups providing data collection apps or data collection devices); 3) health and wellness services utilizing the aforementioned data; and 4) services authorizing access to this data under the individual's explicit consent. Second, the research extends the existing four archetypes of orchestrator-driven healthcare data business models and proposes a fifth type, the MyData Blockchain Platform. This new architecture is developed using the Action Design Research approach, a prominent research methodology in the information systems domain. The key novelty of the paper is to expand the health data value chain architecture and design from centralization and pseudo-decentralization to full decentralization, enabled by blockchain, thus the MyData blockchain platform. The study not only broadens the healthcare informatics literature but also contributes to the theoretical development of the digital healthcare and blockchain research domains with a systemic approach.
Keywords: blockchain, health data, platform, action design
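Purely as an illustration of the consent-centred data flow described above, and not the authors' implementation, a minimal sketch of a hash-chained consent record of the kind such a MyData blockchain platform might store; all identifiers and field names are hypothetical:

```python
import hashlib, json, time
from dataclasses import dataclass, asdict

@dataclass
class ConsentRecord:
    subject_id: str   # pseudonymous citizen identifier (hypothetical)
    provider: str     # service supplying the personal data
    consumer: str     # health/wellness service requesting access
    scope: str        # data categories covered by this consent
    timestamp: float
    prev_hash: str    # link to the previous record in the chain

    def digest(self) -> str:
        # Deterministic hash over the record's fields
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

genesis = ConsentRecord("subj-001", "sleep-app", "clinic-x",
                        "sleep-data", time.time(), "0" * 64)
nxt = ConsentRecord("subj-001", "wearable-y", "clinic-x",
                    "heart-rate", time.time(), genesis.digest())
print(nxt.digest()[:16], "links back to", genesis.digest()[:16])
```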
Procedia PDF Downloads 100
943 Effect of Pulsed Electrical Field on the Mechanical Properties of Raw, Blanched and Fried Potato Strips
Authors: Maria Botero-Uribe, Melissa Fitzgerald, Robert Gilbert, Kim Bryceson, Jocelyn Midgley
Abstract:
French fry manufacturing involves a series of processes in which the structural properties of potatoes are modified to produce the crispy french fries consumers enjoy. In addition to the traditional french fry manufacturing process, the industry is applying a relatively new process, the pulsed electrical field (PEF), to whole potatoes. There is a wealth of information on the technical treatment conditions of PEF; however, there is a lack of information about its effect on the structural properties that affect texture, and about its synergistic interactions with the other manufacturing steps of french fry production. The effect of PEF on the starch gelatinisation properties of Russet Burbank potatoes was measured using a differential scanning calorimeter. Cation content (K+, Ca2+ and Mg2+) was determined by inductively coupled plasma optical emission spectrophotometry. Firmness and toughness of raw and blanched potatoes were determined in a uniaxial compression test. Moisture content was determined in a vacuum oven, and oil content was measured using the Soxhlet system with hexane. The final texture of the french fries, crispness, was determined using a three-point bend test. Triangle tests were conducted to determine whether consumers were able to perceive sensory differences between french fries that were PEF-treated and those that were not. The concentrations of K+, Ca2+ and Mg2+ decreased significantly in the raw potatoes after the PEF treatment. The PEF treatment significantly increased the modulus of elasticity, compression strain, compression force and toughness of the raw potato. The PEF-treated raw potatoes were firmer and stiffer; their structural integrity held together longer, resisted higher force before fracture and stretched further than the untreated ones. The stress-strain relationship exhibited by the PEF-treated raw potato could be due to an increase in the permeability of the plasmalemma and tonoplast, allowing Ca2+ and Mg2+ cations to reach the cell wall and middle lamella and become available for cross-linking with the pectin molecules. The PEF-treated raw potatoes exhibited slightly higher onset gelatinisation temperatures, similar peak temperatures and lower gelatinisation ranges than the untreated raw potatoes. The final moisture content of the french fries was not significantly affected by the PEF treatment. Oil content in the PEF-treated potatoes was lower than in the untreated french fries; however, the difference was not statistically significant at 5%. The PEF treatment did not have an overall significant effect on french fry crispness (modulus of elasticity), flexure stress or strain. The triangle tests show that most consumers could not distinguish french fries that received a PEF treatment from those that did not.
Keywords: french fries, mechanical properties, PEF, potatoes
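For reference, the standard three-point bending relations behind the crispness measurements reported above, where F is the peak load, L the support span, b and d the specimen width and thickness, D the mid-span deflection, and m the initial slope of the load-deflection curve; these are textbook formulas, not values taken from the study:

```latex
\sigma_f = \frac{3FL}{2bd^2}, \qquad
\varepsilon_f = \frac{6Dd}{L^2}, \qquad
E_B = \frac{L^3 m}{4bd^3}
```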
Procedia PDF Downloads 236
942 Extraction and Electrochemical Behaviors of Au(III) using Phosphonium-Based Ionic Liquids
Authors: Kyohei Yoshino, Masahiko Matsumiya, Yuji Sasaki
Abstract:
Recently, studies have been conducted on Au(III) extraction using ionic liquids (ILs) as extractants or diluents. ILs such as piperidinium, pyrrolidinium, and pyridinium have been studied as extractants for noble metal extraction. Furthermore, the polarity, hydrophobicity, and solvent miscibility of these ILs can be adjusted depending on their intended use; the unique properties of ILs therefore make them functional extraction media. The extraction mechanism of Au(III) using phosphonium-based ILs and the relevant thermodynamic studies are yet to be reported. In the present work, we focused on the mechanism of Au(III) extraction and related thermodynamic analyses using phosphonium-based ILs. Triethyl-n-pentyl, triethyl-n-octyl, and triethyl-n-dodecyl phosphonium bis(trifluoromethylsulfonyl)amide, [P₂₂₂ₓ][NTf₂] (X = 5, 8, and 12), were investigated for Au(III) extraction. The IL-Au complex was identified as [P₂₂₂₅][AuCl₄] using UV-Vis-NIR and Raman spectroscopic analyses. The extraction behavior of Au(III) was investigated with the [P₂₂₂ₓ][NTf₂] IL concentration varied from 1.0 × 10⁻⁴ to 1.0 × 10⁻¹ mol dm⁻³. The results indicate that Au(III) can be easily extracted by an anion-exchange reaction in the [P₂₂₂ₓ][NTf₂] IL. The slope range 0.96-1.01 in the plot of log D vs log [P₂₂₂ₓ][NTf₂] indicates the association of one mole of IL with one mole of [AuCl₄⁻] during extraction. Consequently, [P₂₂₂ₓ][NTf₂] is an anion-exchange extractant for the extraction of Au(III) in anionic form from chloride media, and this type of phosphonium-based IL proceeds via an anion-exchange reaction with Au(III). To evaluate the thermodynamic parameters of the Au(III) extraction, the equilibrium constant (log Kₑₓ') was determined from its temperature dependence. The plot of the natural logarithm of Kₑₓ' vs the inverse of the absolute temperature (T⁻¹) yields a slope proportional to the enthalpy (ΔH). By plotting T⁻¹ vs ln Kₑₓ', a line with a slope in the range 1.129-1.421 was obtained. The result indicated that the extraction reaction of Au(III) using the [P₂₂₂ₓ][NTf₂] IL (X = 5, 8, and 12) was exothermic (ΔH = -9.39 to -11.81 kJ mol⁻¹). The negative value of TΔS (-4.20 to -5.27 kJ mol⁻¹) indicates that microscopic randomness is preferred in the [P₂₂₂₅][NTf₂] extraction system over [P₂₂₂₁₂][NTf₂]. The total negative change in Gibbs energy (-5.19 to -6.55 kJ mol⁻¹) for the extraction reaction would thus be relatively influenced by the TΔS term through the number of carbon atoms in the alkyl side chain, even though ΔH contributes significantly to the total change in Gibbs energy. Electrochemical analysis revealed that extracted Au(III) can be reduced in two steps: (i) Au(III)/Au(I) and (ii) Au(I)/Au(0). The diffusion coefficients of the extracted Au(III) species in [P₂₂₂ₓ][NTf₂] (X = 5, 8, and 12) were evaluated from 323 to 373 K using semi-integral and semi-differential analyses. Because of the viscosity of the IL medium, the diffusion coefficient of the extracted Au(III) increases with increasing alkyl chain length. The Au 4f₇/₂ spectrum from X-ray photoelectron spectroscopy revealed that the Au electrodeposits obtained after 10 cycles of continuous extraction and electrodeposition were in the metallic state.
Keywords: Au(III), electrodeposition, phosphonium-based ionic liquids, solvent extraction
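The temperature-dependence analysis described above follows the van't Hoff relation; restated in its standard form, the slope of ln Kₑₓ' against 1/T gives -ΔH/R, and the Gibbs energy follows from:

```latex
\ln K_{\mathrm{ex}}' = -\frac{\Delta H}{R}\,\frac{1}{T} + \frac{\Delta S}{R},
\qquad
\Delta G = \Delta H - T\,\Delta S = -RT\,\ln K_{\mathrm{ex}}'
```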
Procedia PDF Downloads 107
941 High Performance Computing Enhancement of Agent-Based Economic Models
Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna
Abstract:
This research presents the details of the implementation of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to studying the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. To address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of computational load among MPI processes (i.e., CPU cores) of computer clusters, while taking all interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks) whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions, like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process, are adopted. Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro zone (i.e., 322 million agents).
Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process
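A minimal sketch of the communication-computation overlap described above, written in mpi4py rather than the authors' code; the ring topology, buffer sizes, and agent update rule are illustrative assumptions:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size  # ring of MPI processes

# Hypothetical agent state: one value per agent owned by this process
local = np.random.default_rng(rank).random(1_000_000)
send_buf = local[:1000].copy()           # "boundary" agents shared with a neighbour
recv_buf = np.empty_like(send_buf)

# Post non-blocking sends/receives, then compute while messages are in flight
reqs = [comm.Isend(send_buf, dest=right, tag=0),
        comm.Irecv(recv_buf, source=left, tag=0)]
local *= 1.01                             # local agent update, overlapped with comms
MPI.Request.Waitall(reqs)                 # complete the exchange before using recv_buf

if rank == 0:
    print("received boundary values:", recv_buf[:3])
```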
Procedia PDF Downloads 128
940 Relationship between Structure of Some Nitroaromatic Pollutants and Their Degradation Kinetic Parameters in UV-VIS/TiO2 System
Authors: I. Nitoi, P. Oancea, M. Raileanu, M. Crisan, L. Constantin, I. Cristea
Abstract:
Hazardous organic compounds like nitroaromatics are frequently found in effluents discharged by the chemical and petroleum industries. Due to their bio-refractory character and high chemical stability, they cannot be efficiently removed by classical biological or physical-chemical treatment processes. In the past decades, semiconductor photocatalysis has been frequently applied for the advanced degradation of toxic pollutants. Among various semiconductors, titania has been a widely studied photocatalyst due to its chemical inertness, low cost, photostability and nontoxicity. Many attempts have been made to improve the optical absorption and photocatalytic activity of TiO2; one feasible approach consists of doping the oxide semiconductor with a metal. The degradation of dinitrobenzene (DNB) and dinitrotoluene (DNT) from aqueous solution under UVA-VIS irradiation using heavy-metal-doped titania (0.5% Fe, 1% Co, 1% Ni) was investigated. The photodegradation experiments were carried out using a Heraeus laboratory-scale UV-VIS reactor equipped with a medium-pressure mercury lamp which emits in the range 320-500 nm. Solutions with (0.34-3.14) × 10⁻⁴ M pollutant content were photo-oxidized under the following working conditions: pH = 5-9; photocatalyst dose = 200 mg/L; irradiation time = 30-240 minutes. Prior to irradiation, the photocatalyst powder was added to the samples, and the solutions were bubbled with air (50 L/hour), in the dark, for 30 min. The influence of dopant type, pH, structure and initial pollutant concentration on the degradation efficiency was evaluated in order to establish the optimal working conditions that assure advanced substrate degradation. The kinetics of nitroaromatics degradation and organic nitrogen mineralization were assessed, and pseudo-first-order rate constants were calculated. The Fe-doped photocatalyst with the lowest metal content (0.5 wt.%) showed considerably better behaviour with respect to pollutant degradation than the Co- and Ni-doped (1 wt.%) titania catalysts. For the same working conditions, degradation efficiency was higher for DNT than for DNB, in accordance with their calculated adsorption constants (Kad), taking into account that the degradation process occurs on the catalyst surface following a Langmuir-Hinshelwood model. The presence of the methyl group in the structure of DNT allows its degradation by both oxidative and reductive pathways, while DNB is converted only by the reductive route, which also explains the higher DNT degradation efficiency. For the highest pollutant concentration tested (3 × 10⁻⁴ M), the optimal working conditions (0.5 wt.% Fe-doped TiO2 loading of 200 mg/L, pH = 7 and 240 min irradiation time) assure advanced nitroaromatics degradation (ηDNB = 89%, ηDNT = 94%) and organic nitrogen mineralization (ηDNB = 44%, ηDNT = 47%).
Keywords: hazardous organic compounds, irradiation, nitroaromatics, photocatalysis
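For clarity, the rate expressions behind the kinetic treatment above: the Langmuir-Hinshelwood form and its pseudo-first-order simplification, valid when the adsorption term is small (KadC << 1):

```latex
r = -\frac{dC}{dt} = \frac{k\,K_{\mathrm{ad}}\,C}{1 + K_{\mathrm{ad}}\,C}
\;\xrightarrow{\;K_{\mathrm{ad}}C \,\ll\, 1\;}\;
-\frac{dC}{dt} = k_{\mathrm{app}}\,C,
\qquad
\ln\frac{C_0}{C_t} = k_{\mathrm{app}}\,t
```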
Procedia PDF Downloads 317
939 Determining Causal Factors Affecting the Responsiveness and Productivity of Non-Governmental Universities
Authors: Davoud Maleki
Abstract:
Today, education and investment in human capital is a long-term investment without which the economy will remain stagnant. Higher education represents a type of investment in human resources: by providing and improving knowledge, skills and attitudes, it helps economic development. Universities are responsible for providing efficient human resources by increasing people's efficiency and productivity and, on the other hand, for expanding the boundaries of knowledge and technology, training human resources, and increasing productivity and efficiency at highly specialized levels. Therefore, the university plays an infrastructural role in economic development and growth, because education creates skills and expertise in people and improves their abilities. In recent decades, Iran's higher education system has faced many problems; therefore, scholars have sought to identify and validate the causal factors affecting the responsiveness and productivity of non-governmental universities. The data in the qualitative part are the result of semi-structured interviews with 25 senior and middle managers working in units of the Islamic Azad University of Tehran province, selected by theoretical sampling. In the data analysis, the stepwise method and the analytical techniques of Strauss and Corbin (1992) were used. After determining the central category (responsiveness for the sake of the beneficiaries) and using it to relate the categories, expressions and ideas that express the relationships between the main categories, six main categories were identified as causal factors affecting the university's responsiveness and productivity. They are: 1- scientism; 2- human resources; 3- creating motivation in the university; 4- development based on needs assessment; 5- the teaching and learning process; 6- university quality evaluation. In order to validate the response model obtained from the qualitative stage, a questionnaire was prepared, and the answers of 146 master's and doctoral students of the Islamic Azad University located in Tehran province were received. The quantitative data were analyzed through descriptive data analysis and first- and second-stage factor analysis using SPSS and Amos23 software. The findings of the research indicated a relationship between the central category and the causal factors affecting the response. The results of the model test in the quantitative stage confirmed the generality of the conceptual model.
Keywords: accountability, productivity, non-governmental universities, grounded theory
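A minimal sketch of the kind of first-stage factor analysis reported above, using scikit-learn in place of SPSS/Amos; the six-factor structure mirrors the categories named in the abstract, but the item counts and synthetic responses are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical questionnaire: 146 respondents x 18 Likert items (3 per causal factor)
rng = np.random.default_rng(1)
latent = rng.normal(size=(146, 6))               # six hypothesised causal factors
loadings = np.kron(np.eye(6), np.ones((1, 3)))   # each factor drives 3 items
X = latent @ loadings + 0.5 * rng.normal(size=(146, 18))

fa = FactorAnalysis(n_components=6, rotation="varimax").fit(X)
print(np.round(fa.components_[:2, :6], 2))       # loadings of the first two factors
```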
Procedia PDF Downloads 60
938 Cross-Validation of the Data Obtained for ω-6 Linoleic and ω-3 α-Linolenic Acids Concentration of Hemp Oil Using Jackknife and Bootstrap Resampling
Authors: Vibha Devi, Shabina Khanam
Abstract:
Hemp (Cannabis sativa) possesses a rich content of ω-6 linoleic and ω-3 α-linolenic essential fatty acids in the ratio of 3:1, a rare and highly desired ratio that enhances the quality of hemp oil. These components are beneficial for cell and body growth and development, strengthen the immune system, possess anti-inflammatory action, lower the risk of heart problems owing to their anti-clotting property, and are a remedy for arthritis and various disorders. The present study employs a supercritical fluid extraction (SFE) approach on hemp seed at various parameter conditions: temperature (40-80) °C, pressure (200-350) bar, flow rate (5-15) g/min, particle size (0.430-1.015) mm and amount of co-solvent (0-10) % of solvent flow rate, through central composite design (CCD). CCD suggested 32 sets of experiments, which were carried out. As the SFE process includes a large number of variables, the present study recommends the application of resampling techniques for cross-validation of the obtained data. Cross-validation refits the model on each resample to obtain information regarding the error, variability, deviation, etc. Bootstrap and jackknife are the most popular resampling techniques, which create a large number of datasets through resampling from the original dataset and analyze them to check the validity of the obtained data. Jackknife resampling is based on eliminating one observation from the original sample of size N without replacement. For jackknife resampling, the sample size is 31 (eliminating one observation), which is repeated 32 times. Bootstrap is a frequently used statistical approach for estimating the sampling distribution of an estimator by resampling with replacement from the original sample. For bootstrap resampling, the sample size is 32, repeated 100 times. Estimands for these resampling techniques are the mean, standard deviation, variation coefficient and standard error of the mean. For the ω-6 linoleic acid concentration, the mean value was approx. 58.5 for both resampling methods, which is the average (central value) of the sample means of all data points. Similarly, for the ω-3 α-linolenic acid concentration, the mean was observed as 22.5 through both resamplings. Variance exhibits the spread of the data from its mean. A greater value of variance indicates a larger range of output data, which is 18 for ω-6 linoleic acid (ranging from 48.85 to 63.66%) and 6 for ω-3 α-linolenic acid (ranging from 16.71 to 26.2%). Further, the low value of the standard deviation (approx. 1%), the low standard error of the mean (<0.8) and the low variation coefficient (<0.2) reflect the accuracy of the sample for prediction. All the estimator values of the variation coefficient, standard deviation and standard error of the mean are found within the 95% confidence interval.
Keywords: resampling, supercritical fluid extraction, hemp oil, cross-validation
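A minimal sketch of the two resampling schemes as described above: a leave-one-out jackknife over the 32 CCD runs (32 samples of size 31) and 100 bootstrap resamples of size 32 drawn with replacement; the concentration values are placeholders, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)
y = rng.normal(58.5, 1.0, size=32)   # placeholder omega-6 concentrations (%), n = 32

# Jackknife: delete one observation at a time -> 32 samples of size 31
jack = np.array([np.delete(y, i).mean() for i in range(len(y))])

# Bootstrap: resample with replacement, size 32, repeated 100 times
boot = np.array([rng.choice(y, size=len(y), replace=True).mean() for _ in range(100)])

for name, est in (("jackknife", jack), ("bootstrap", boot)):
    mean, sd = est.mean(), est.std(ddof=1)
    print(f"{name}: mean={mean:.2f}, sd={sd:.3f}, "
          f"CV={sd / mean:.4f}, SE={sd / np.sqrt(len(est)):.4f}")
```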
Procedia PDF Downloads 141
937 A Report of a 5-Month-Old Baby with Balanced Chromosomal Rearrangements along with Phenotypic Abnormalities
Authors: Mohit Kumar, Beklashwar Salona, Shiv Murti, Mukesh Singh
Abstract:
We report here the case of a five-month-old male baby, born as the second child of non-consanguineous parents with no notable history of genetic abnormality, who was referred to our cytogenetic laboratory for chromosomal analysis. Dysmorphic facial features, including a mongoloid face, cleft palate, simian crease, and developmental delay, were observed. We present this case with a unique balanced autosomal translocation t(3;10)(p21;p13). The risk of phenotypic abnormalities based on a de novo balanced translocation is estimated to be 7%. The association of a balanced chromosomal rearrangement with Down syndrome features, such as multiple congenital anomalies, facial dysmorphism and congenital heart anomalies, is very rare in a 5-month-old male child. Trisomy 21 is not uncommon among chromosomal abnormalities with birth defects, and balanced translocations are frequently observed in patients with secondary infertility or recurrent spontaneous abortion (RSA). Two ml of heparinized peripheral blood cells, cultured in RPMI-1640 for 72 hours and supplemented with 20% fetal bovine serum, phytohemagglutinin (PHA), and antibiotics, were used for chromosomal analysis. A total of 30 metaphase images were captured using an Olympus BX51 microscope and analyzed using Bio-view karyotyping software through GTG-banding (G bands by trypsin and Giemsa) according to the International System for Human Cytogenetic Nomenclature 2016. The results showed a balanced translocation between the short arm of chromosome 3 and the short arm of chromosome 10. The karyotype of the child was found to be 46,XY,t(3;10)(p21;p13). Chromosomal abnormalities are one of the major causes of birth defects in newborn babies. The index case presented with dysmorphic facial features and had a balanced translocation 46,XY,t(3;10)(p21;p13). This translocation with breakpoints at (p21;p13) has not been reported in the literature in a child with facial dysmorphism. To the best of our knowledge, this is the first report of a novel balanced translocation t(3;10) with these breakpoints in a child with dysmorphic features. We found a balanced chromosomal translocation, instead of any trisomy or unbalanced aberration, along with phenotypic abnormalities. Therefore, we suggest that such novel balanced translocations with abnormal phenotypes should be reported in order to enable the pathologist, pediatrician, and gynecologist to have better insight into the intricacies of chromosomal abnormalities and their associated phenotypic features. We hypothesize that the dysmorphic features seen in this case may be the result of a change in the pattern of genes located at the breakpoint areas of the balanced translocation, or may be due to deletion or mutation of genes located on the p-arm of chromosome 3 and the p-arm of chromosome 10.
Keywords: balanced translocation, karyotyping, phenotypic abnormalities, facial dysmorphism
Procedia PDF Downloads 209
936 A Crowdsourced Homeless Data Collection System and Its Econometric Analysis: Strengthening Inclusive Public Administration Policies
Authors: Praniil Nagaraj
Abstract:
This paper proposes a method to collect homeless data using crowdsourcing and presents an approach to analyzing the data, demonstrating its potential to strengthen existing and future policies aimed at promoting socio-economic equilibrium. This paper's contributions fall into three main areas. Firstly, a unique method for collecting homeless data is introduced, utilizing a user-friendly smartphone app (currently available for Android). The app enables the general public to quickly record information about homeless individuals, including the number of people and details about their living conditions. The collected data, including date, time, and location, are anonymized and securely transmitted to the cloud. It is anticipated that an increasing number of users motivated to contribute to society will adopt the app, thus expanding the data collection efforts. Duplicate data are addressed through simple classification methods, and historical data are utilized to fill in missing information. The second contribution of this paper is the description of the data analysis techniques applied to the collected data. By combining these new data with existing information, statistical regression analysis is employed to gain insights into various aspects, such as distinguishing between unsheltered and sheltered homeless populations and examining their correlation with factors like unemployment rates, housing affordability, and labor demand. Initial data are collected in San Francisco, while pre-existing information is drawn from three cities: San Francisco, New York City, and Washington D.C., facilitating the conduct of simulations. The third contribution focuses on demonstrating the practical implications of the data processing results. The challenges faced by key stakeholders, including charitable organizations and local city governments, are taken into consideration. Two case studies are presented as examples. The first case study explores improving the efficiency of the distribution of food and necessities, as well as medical assistance, driven by charitable organizations. The second case study examines the correlation between micro-geographic budget expenditure by local city governments and homeless information, to justify budget allocations and expenditures. The ultimate objective of this endeavor is to enable the continuous enhancement of the quality of life for the underprivileged. It is hoped that through increased crowdsourcing of data from the public, the Generosity Curve and the Need Curve will intersect, leading to a better world for all.
Keywords: crowdsourcing, homelessness, socio-economic policies, statistical analysis
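As a sketch of the regression step described above, a simple ordinary-least-squares fit relating district-level homeless counts to candidate covariates; the variable names, coefficients, and data are hypothetical:

```python
import numpy as np

# Hypothetical district-month observations
rng = np.random.default_rng(7)
n = 120
X = np.column_stack([np.ones(n),                 # intercept
                     rng.uniform(3, 12, n),      # unemployment rate (%)
                     rng.uniform(1200, 3500, n), # median rent ($)
                     rng.uniform(80, 120, n)])   # labor demand index
beta_true = np.array([50.0, 12.0, 0.05, -0.8])
y = X @ beta_true + rng.normal(0, 25, n)         # unsheltered count per district

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None) # ordinary least squares
print("estimated coefficients:", beta_hat.round(2))
```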
Procedia PDF Downloads 47935 Corpus Linguistics as a Tool for Translation Studies Analysis: A Bilingual Parallel Corpus of Students’ Translations
Authors: Juan-Pedro Rica-Peromingo
Abstract:
Nowadays, corpus linguistics has become a key research methodology for Translation Studies, broadening the scope of cross-linguistic studies. In the study presented here, the approach focuses on learners with little or no experience in order to study, at an early stage, general mistakes and errors and the correct or incorrect use of translation strategies, and to improve the translational competence of the students. Led by Sylviane Granger and Marie-Aude Lefer of the Centre for English Corpus Linguistics of the University of Louvain, the MUST corpus (MUltilingual Student Translation Corpus) is an international project which brings together partner universities from Europe and worldwide and connects Learner Corpus Research (LCR) and Translation Studies (TS). It aims to build a corpus of translations carried out by students, including both direct (L2 > L1) and indirect (L1 > L2) translations, from a great variety of text types, genres, and registers in a wide variety of languages: audiovisual translations (including dubbing and subtitling for hearing and for deaf audiences), scientific, humanistic, literary, economic, and legal texts. This paper focuses on the work carried out by the Spanish team from the Complutense University (UCMA), which is part of the MUST project, and describes the specific features of the corpus built by its members. All the texts used by UCMA are either direct or indirect translations between English and Spanish. The students' profiles comprise translation trainees, foreign language students with a major in English, engineers studying EFL, and MA students, all with different English levels (from B1 to C1); for some of the students, this is their first experience with translation. The MUST corpus is searchable via Hypal4MUST, a web-based interface developed by Adam Obrusnik from Masaryk University (Czech Republic), which includes a translation-oriented annotation system (TAS). A distinctive feature of the interface is that it aligns source texts and target texts, so that both language structures can be observed and compared in detail and the translation strategies used by students can be studied. The initial data point to the kinds of difficulties encountered by the students and reveal the most frequent strategies implemented by the learners according to their level of English, their translation experience, and the text genres. We have also found common errors in the graduate and postgraduate university students' translations: transfer errors, lexical errors, grammatical errors, text-specific translation errors, and culture-related errors have been identified. Analyzing all these parameters will provide more material for better solutions to improve the quality of teaching and the translations produced by the students.Keywords: corpus studies, students' corpus, the MUST corpus, translation studies
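As a rough illustration of how annotated errors of the kinds listed above could be tallied by proficiency level, the sketch below assumes a simple (level, category) record layout; it is not Hypal4MUST's actual TAS export format, and the records are invented.

```python
# Minimal sketch: counting translation-error categories per CEFR level
# from hypothetical annotated segments (assumed layout, invented data).
from collections import Counter

annotations = [
    ("B1", "transfer"), ("B1", "lexical"), ("B1", "grammatical"),
    ("B2", "lexical"), ("B2", "cultural"), ("C1", "text-specific"),
    ("B1", "transfer"), ("C1", "lexical"),
]

by_level = Counter(annotations)  # keys are (level, category) pairs
for (level, category), n in sorted(by_level.items()):
    print(f"{level}: {category:15s} {n}")
```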
Procedia PDF Downloads 147934 The Effects of Chamomile on Serum Levels of Inflammatory Indexes to a Bout of Eccentric Exercise in Young Women
Authors: K. Azadeh, M. Ghasemi, S. Fazelifar
Abstract:
Aim: Changes in stress hormones can modify the immune response. Cortisol, the body's principal corticosteroid, is an anti-inflammatory and immunosuppressive hormone. Normal cortisol levels in humans fluctuate during the day; in other words, cortisol is released periodically and is regulated through the circadian rhythm of ACTH release. The aim of this study was therefore to determine the effects of chamomile on serum levels of inflammatory indexes after a bout of eccentric exercise in young women. Methodology: 32 women were randomly divided into 4 groups: high-dose chamomile, low-dose chamomile, ibuprofen, and placebo. The eccentric exercise consisted of 5 sets with a 1-minute rest between sets. Subjects warmed up for 10 minutes and then performed the eccentric exercise. Each participant completed 15 repetitions with an initial 20 kg load or continued until she could no longer move the weight, at which point the load was immediately reduced by 5 kg and the protocol continued until exhaustion or the completion of 15 repetitions. Subjects in the target groups received specified amounts of ibuprofen or chamomile capsules. Blood samples were obtained at 6 stages (before supplementation, before the exercise protocol, and 4, 24, 48, and 72 hours after eccentric exercise). Levels of cortisol and adrenocorticotropic hormone (ACTH) were measured by ELISA. The Kolmogorov-Smirnov test was used to check the normality of the data, and repeated-measures analysis of variance was used to analyze the data; significance was accepted at p < 0.05. Results: Individual characteristics including height, weight, age, and body mass index were not significantly different among the four groups. Analysis of the data showed that baseline cortisol and ACTH levels decreased significantly after supplementation but then increased significantly across all post-exercise stages. In the high-dose chamomile group, the post-exercise increase was somewhat smaller than in the other groups, though not significantly so. The within-group analysis indicated that the time effect was significant across the stages in all groups. Conclusion: In this study, a single session of eccentric exercise increased cortisol and ACTH levels. The results indicate the potential of high-dose chamomile in preventing and reducing increases in stress hormone levels. As the use of medicinal plants and of ibuprofen as pain and inflammation medication has spread among athletes and non-athletes alike, the results of this research can provide information about the advantages and disadvantages of using medicinal plants and ibuprofen.Keywords: chamomile, inflammatory indexes, eccentric exercise, young women
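A minimal sketch of the stated analysis chain (Kolmogorov-Smirnov normality check followed by repeated-measures ANOVA for the time effect), using invented placeholder values rather than the study's measurements; group structure and sample sizes are simplified.

```python
# Illustrative analysis sketch: K-S normality check, then repeated-measures
# ANOVA for the time (stage) effect. All values below are invented.
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: 4 subjects x 6 sampling stages
stages = ["pre_pill", "pre_ex", "h4", "h24", "h48", "h72"]
df = pd.DataFrame({
    "subject": [s for s in range(4) for _ in stages],
    "stage":   stages * 4,
    "cortisol": [14, 11, 16, 17, 18, 18,  15, 12, 17, 18, 19, 19,
                 13, 10, 15, 16, 17, 18,  16, 12, 18, 19, 20, 20],  # ug/dL
})

# Kolmogorov-Smirnov check against a fitted normal distribution
print(stats.kstest(df["cortisol"], "norm",
                   args=(df["cortisol"].mean(), df["cortisol"].std())))

# Repeated-measures ANOVA for the within-subject time effect
print(AnovaRM(df, depvar="cortisol", subject="subject",
              within=["stage"]).fit())
```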
Procedia PDF Downloads 417933 When the Rubber Hits the Road: The Enactment of Well-Intentioned Language Policy in Digital vs. In Situ Spaces on Washington, DC Public Transportation
Authors: Austin Vander Wel, Katherin Vargas Henao
Abstract:
Washington, DC, is a city in which Spanish, along with several other minority languages, is prevalent not only among tourists but also among those living within city limits. In response to this linguistic diversity and DC's adoption of the Language Access Act in 2004, the Washington Metropolitan Area Transit Authority (WMATA) committed to addressing the need for equal linguistic representation and established a five-step plan to provide the best multilingual information possible for public transportation users. The current study, however, strongly suggests that this de jure policy does not align with the reality of Spanish's representation on DC public transportation, although perhaps in an unexpected way. In order to investigate Spanish's de facto representation and how it contrasts with de jure policy, this study implements a linguistic landscapes methodology that takes critical language-policy as its theoretical framework (Tollefson, 2005). Specifically concerning de facto representation, it focuses on the discrepancies between digital spaces and the physical spaces through which users travel. These digital vs. in situ conditions are further analyzed by addressing the aural and visual modalities separately. In digital spaces, data was collected from WMATA's website (visual) and its bilingual hotline (aural). For in situ spaces, both bus and metro areas of DC public transportation were explored, with signs comprising the visual modality, and recordings, driver announcements, and interactions with metro kiosk workers comprising the aural modality. While digital spaces were found to fulfill WMATA's commitment to representing Spanish as outlined in the de jure policy, physical spaces show a large discrepancy between what is said and what is done, particularly regarding the bus system and the aural modality overall. These discrepancies in in situ spaces place Spanish speakers at a clear disadvantage, demanding additional resources and knowledge on the part of residents with limited or no English proficiency in order to have equal access to this public good. Based on our critical language-policy analysis, while Spanish is represented as a right in the de jure policy, its implementation in situ clearly portrays Spanish as a problem, since those seeking bilingual information cannot expect it to be present when and where they need it most (Ruíz, 1984; Tollefson, 2005). This study concludes with practical, data-based steps to improve the current situation facing DC's public transportation context and serves as a model for responding to inadequate enactment of de jure policy in other language policy settings.Keywords: urban landscape, language access, critical language-policy, Spanish, public transportation
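One way the 2x2 design described above (digital vs. in situ spaces, aural vs. visual modalities) could be tabulated is sketched below; the observation records are invented examples, not the study's field data.

```python
# Hedged sketch: share of observation points where Spanish was available,
# per space x modality cell. Records below are invented illustrations.
import pandas as pd

obs = pd.DataFrame([
    # space,     modality,  spanish_present
    ("digital",  "visual",  True),
    ("digital",  "aural",   True),
    ("in_situ",  "visual",  False),
    ("in_situ",  "aural",   False),
    ("in_situ",  "visual",  True),
    ("in_situ",  "aural",   False),
], columns=["space", "modality", "spanish_present"])

table = obs.pivot_table(index="space", columns="modality",
                        values="spanish_present", aggfunc="mean")
print(table)  # gaps between rows mirror the de jure / de facto discrepancy
```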
Procedia PDF Downloads 72932 Traditional Medicine and Islamic Holistic Approach in Palliative Care Management of Terminal Illpatient of Cancer
Authors: Mohammed Khalil Ur Rahman, Mohammed Alsharon, Arshad Muktar, Zahid Shaik
Abstract:
Any ailment can progress to a terminal stage; cancer is one such disease, often detected only in its latent stages. Cancer is frequently characterized by agonizing constitutional symptoms that distress patients and their families. The treatment modality employed to relieve such intolerable symptoms is known as 'palliative care'. The goal of palliative care is to enhance the patient's quality of life by relieving, or at least reducing, distressing symptoms such as pain, nausea/vomiting, anorexia/loss of appetite, excessive salivation, mouth ulcers, weight loss, constipation, oral thrush, and emaciation, which arise from the disease itself or from treatments such as chemotherapy and radiation. Ayurveda and Unani, along with other traditional systems of medicine, have received growing international attention in recent years, and from their holistic perspective on disease it appears that many herbs and herbomineral preparations can be employed in the treatment of malignancy and also in palliative care. Though many of them have yet to be scientifically proven anti-cancerous, there is a definite positive lead that some of these medications relieve agonizing symptoms, thereby making the patient's life easier. Health is viewed in Islam in a holistic way. One of the names of the Quran is al-shifa', meaning 'that which heals' or 'the restorer of health', referring to spiritual, intellectual, psychological, and physical health. The general aim of medical science, according to Islam, is to secure and adopt suitable measures which, with Allah's permission, help to preserve or restore the health of the human body. Islam encourages the physician to view the patient as one organism: the patient has physical, social, psychological, and spiritual dimensions that must be considered in synthesis, with an integrated, holistic approach. Aims & Objectives: - To suggest herbs mentioned in Ayurveda and Unani with potential palliative activity in cancer patients. - To note that most of tibb nabawi [Prophetic Medicine] is preventive medicine and must have been divinely inspired. - To consider the spiritual aspects of healing, in which prayer, dua, recitation of the Quran, and remembrance of Allah play a central role. Materials & Method: A literary review of the herbs, supported with experiential evidence, will be discussed. Discussion: The subject will be discussed at length on the basis of the collected data. Conclusion: Will be presented in the paper.Keywords: palliative care, holistic, Ayurvedic and Unani traditional systems of medicine, Quran, hadith
Procedia PDF Downloads 339931 Generation of Roof Design Spectra Directly from Uniform Hazard Spectra
Authors: Amin Asgarian, Ghyslaine McClure
Abstract:
Proper seismic evaluation of non-structural components (NSCs) requires an accurate estimation of floor seismic demands (i.e., acceleration and displacement demands). Most current international codes incorporate empirical equations to calculate the equivalent static seismic force for which NSCs and their anchorage systems must be designed. These equations are, in general, functions of the component mass and of the peak seismic acceleration to which NSCs are subjected during the earthquake. However, recent studies have shown that these recommendations suffer from several shortcomings, such as neglecting higher-mode effects, tuning effects, and NSC damping effects, which cause underestimation of the component seismic acceleration demand. This work aims to circumvent these shortcomings of the code provisions, and to improve on them, by proposing a simplified, practical, and yet accurate approach to generate acceleration Floor Design Spectra (FDS) directly from the corresponding Uniform Hazard Spectra (UHS) (i.e., the design spectra for structural components). A database of 27 reinforced concrete (RC) buildings, in which ambient vibration measurements (AVM) had been conducted, was used. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, all located in Montréal, Canada, and designated as post-disaster buildings or emergency shelters. The buildings are subjected to a set of 20 compatible seismic records, and Floor Response Spectra (FRS) in terms of pseudo-acceleration are derived using the proposed approach for every floor of each building, in both horizontal directions, considering 4 different NSC damping ratios (2, 5, 10, and 20% viscous damping). Several parameters affecting NSC response are evaluated statistically: the NSC damping ratio, tuning of the NSC natural period with one of the natural periods of the supporting structure, higher modes of the supporting structure, and NSC location. The entire spectral region is divided into three distinct segments, namely the short-period, fundamental-period, and long-period regions. The derived roof floor response spectra for NSCs with 5% damping are compared with the 5%-damped UHS, and procedures are proposed to generate roof FDS for NSCs with 5% damping directly from the 5%-damped UHS in each spectral region. The generated FDS is a powerful, practical, and accurate tool for the seismic design and assessment of acceleration-sensitive NSCs, particularly in existing post-critical buildings which have to remain functional even after an earthquake and cannot tolerate any damage to NSCs.Keywords: earthquake engineering, operational and functional components (OFCs), operational modal analysis (OMA), seismic assessment and design
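For readers who want the mechanics, the sketch below computes a pseudo-acceleration floor response spectrum by driving an elastic unit-mass SDOF oscillator (representing the NSC) with a floor acceleration history and recording its peak response per period, using Newmark constant-average-acceleration integration. The input record here is a synthetic placeholder, not one of the study's 20 records, and this is a generic textbook procedure rather than the authors' proposed UHS-to-FDS method.

```python
# Minimal FRS sketch: elastic SDOF response spectrum via Newmark integration.
import numpy as np

def pseudo_accel_spectrum(acc, dt, periods, zeta=0.05):
    """Peak pseudo-acceleration of a unit-mass elastic SDOF per period."""
    spectrum = []
    for T in periods:
        wn = 2.0 * np.pi / T
        k, c = wn ** 2, 2.0 * zeta * wn           # unit mass: m = 1
        kh = k + 2.0 * c / dt + 4.0 / dt ** 2     # effective stiffness
        u = v = 0.0
        a = -acc[0]                                # initial acceleration
        u_max = 0.0
        for ag in acc[1:]:
            # Newmark constant-average-acceleration step (gamma=1/2, beta=1/4)
            ph = -ag + (4.0 * u / dt ** 2 + 4.0 * v / dt + a) \
                 + c * (2.0 * u / dt + v)
            u_new = ph / kh
            v_new = 2.0 * (u_new - u) / dt - v
            a = 4.0 * (u_new - u) / dt ** 2 - 4.0 * v / dt - a
            u, v = u_new, v_new
            u_max = max(u_max, abs(u))
        spectrum.append(wn ** 2 * u_max)           # Sa ~= wn^2 * Sd
    return np.array(spectrum)

dt = 0.01
t = np.arange(0.0, 20.0, dt)
floor_acc = 0.3 * 9.81 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.15 * t)  # synthetic
periods = np.linspace(0.05, 3.0, 60)
frs = pseudo_accel_spectrum(floor_acc, dt, periods, zeta=0.05)
print(f"peak FRS ordinate {frs.max() / 9.81:.2f} g "
      f"near T = {periods[frs.argmax()]:.2f} s")  # near the 1.5 Hz driving tone
```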
Procedia PDF Downloads 236930 A World Map of Seabed Sediment Based on 50 Years of Knowledge
Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès
Abstract:
Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches of aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This approach had in fact been pioneered a century earlier, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments along the French coasts and then sediment maps of the continental shelves of Europe and North America. The current map of ocean sediments presented here was initiated from UNESCO's general map of the deep ocean floor. This map was adapted using a unified sediment classification to present all types of sediments: from beaches to the deep seabed, and from glacial deposits to tropical sediments. To allow good visualization and to suit the different applications, only the granularity of sediments is represented. Published seabed maps are reviewed; where they are of interest, the nature of the seabed is extracted, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of multibeam echo sounder (MES) imagery from large deep-ocean hydrographic surveys, which allow very high-quality mapping of areas that had until then been represented as homogeneous. The third and principal source of data is the integration of regional maps produced specifically for this project. These regional maps are compiled using all the bathymetric and sedimentary data of a region. This step makes it possible to produce a regional synthesis map, with generalizations applied where the data are over-precise. 86 regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map. This work is ongoing and yields a new digital version every two years, with the integration of new maps. This article describes the choices made in terms of sediment classification, the scales of the source data, and the zoning of data-quality variability. The map is the final step in a system comprising the Shom sedimentary database, enriched by more than one million point and polygon (surface) data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000, and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress made in seabed characterization during recent decades. Thus, the arrival of new seafloor classification systems has improved the recent seabed maps, and compiling these new maps with those previously published allows gradual enrichment of the world sedimentary map. But much work remains to enhance some regions that are still based on data acquired more than half a century ago.Keywords: marine sedimentology, seabed map, sediment classification, world ocean
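The compilation rule implied above (where sources overlap, each area keeps the sediment class from the best source) can be sketched as follows; the source names, quality ranks, and cell values are invented for illustration and do not reflect Shom's actual data model.

```python
# Hedged sketch of map compilation by source quality: for each grid cell,
# the sediment class from the highest-ranked overlapping source wins.
sources = [
    # (name, quality_rank, {cell_id: sediment_class}); higher rank wins
    ("UNESCO_base",   1, {"A1": "mud", "A2": "mud", "B1": "mud"}),
    ("MES_survey_12", 3, {"A2": "sand"}),
    ("regional_atl",  2, {"A1": "gravelly_sand", "B1": "sandy_mud"}),
]

world, best_rank = {}, {}
for name, rank, cells in sources:
    for cell, sediment in cells.items():
        if rank >= best_rank.get(cell, 0):
            world[cell] = sediment
            best_rank[cell] = rank

print(world)  # {'A1': 'gravelly_sand', 'A2': 'sand', 'B1': 'sandy_mud'}
```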
Procedia PDF Downloads 232929 Artificial Intelligence and Robotics in the Eye of Private Law with Special Regards to Intellectual Property and Liability Issues
Authors: Barna Arnold Keserű
Abstract:
In the last few years (called by many scholars the big data era), artificial intelligence (hereinafter AI) has received more and more attention from the public and from the different branches of science. What was previously mere science fiction is now becoming reality. AI and robotics often walk hand in hand, changing not only business and industrial life but also having a serious impact on the legal system. The author's main research focuses on these impacts in the field of private law, with special regard to liability and intellectual property issues. Many questions arise in these areas in connection with AI and robotics, where the boundaries are not sufficiently clear and different needs are articulated by the different stakeholders. Recognizing the urgent need for reflection, the Committee on Legal Affairs of the European Parliament adopted a Motion for a European Parliament Resolution, A8-0005/2017 (of January 27th, 2017), to make recommendations to the Commission on civil law rules on robotics and AI. This document identifies some crucial uses of AI and/or robotics, e.g., autonomous vehicles, the replacement of human jobs in industry, and smart applications and machines, and it aims to give recommendations for the safe and beneficial use of AI and robotics. However, as the document says, there are no legal provisions that specifically apply to robotics or AI in IP law; existing legal regimes and doctrines can be readily applied to robotics, although some aspects appear to call for specific consideration, and the document calls on the Commission to support a horizontal and technologically neutral approach to intellectual property applicable to the various sectors in which robotics could be employed. AI can generate content worthy of copyright protection, but the question arises: who is the author, and who owns the copyright? The AI itself cannot be deemed the author, because that would make it legally equal to human persons. But there is the programmer who created the basic code of the AI, the undertaking that sells the AI as a product, and the user who gives the inputs to the AI in order to create something new. Or AI-generated content may be so far removed from human input that there is no human author, so that it belongs to the public domain. The same questions can be asked in connection with patents. The research aims to answer these questions within the current legal framework and tries to illuminate possibilities for adapting this framework to socio-economic needs. Here, proper license agreements in the multilevel chain from programmer to end-user become very important, because AI is intellectual property in itself that creates further intellectual property; this can collide with data-protection and property rules as well. The problems are similar in the field of liability. Different existing forms of liability can be applied when AI or AI-led robotics cause damage, but it is unclear whether the result complies with economic and developmental interests.Keywords: artificial intelligence, intellectual property, liability, robotics
Procedia PDF Downloads 203928 Bioinformatic Strategies for the Production of Glycoproteins in Algae
Authors: Fadi Saleh, Çığdem Sezer Zhmurov
Abstract:
Biopharmaceuticals represent one of the most rapidly developing fields within biotechnology, and the biological macromolecules produced inside cells have a variety of therapeutic applications. In the past, mammalian cells, especially CHO cells, have been employed in the production of biopharmaceuticals because these cells can achieve human-like post-translational modifications (PTMs). These systems, however, carry apparent disadvantages such as high production costs, vulnerability to contamination, and limitations in scalability. This research focuses on the utilization of microalgae as a bioreactor system for the synthesis of biopharmaceutical glycoproteins with respect to PTMs, particularly N-glycosylation. The research points to a growing interest in microalgae as a potential substitute for more conventional expression systems. The use of microalgae offers a number of advantages, including rapid growth rates, the absence of common human pathogens, controlled scalability in bioreactors, and the capacity for some PTMs. The potential of microalgae to produce recombinant proteins with favorable characteristics thus makes them a promising platform for producing biopharmaceuticals. The study examines the N-glycosylation pathways across different species of microalgae. This investigation is important because N-glycosylation, the process by which carbohydrate groups are linked to proteins, profoundly influences the stability, activity, and general performance of glycoproteins. Additionally, bioinformatics methodologies are employed to elucidate the genetic pathways implicated in N-glycosylation within microalgae, with the intention of modifying these organisms to produce glycoproteins suitable for human use. A comparative analysis of the N-glycosylation pathway in humans and microalgae can thus be used to bridge both systems in order to produce biopharmaceuticals with humanized glycosylation profiles in microalgal hosts. The results underline the potential of microalgae to help overcome some of the limitations associated with traditional biopharmaceutical production systems. The study may contribute to a cost-effective and scalable means of producing high-quality biopharmaceuticals by genetically modifying microalgae to produce glycoproteins with human-compatible N-glycosylation. Such improvements would benefit biopharmaceutical production and the biopharmaceutical sector through a novel, green, and efficient expression platform. This thesis is therefore a thorough investigation of the viability of microalgae as an efficient platform for producing biopharmaceutical glycoproteins. Based on in-depth bioinformatic analysis of microalgal N-glycosylation pathways, a platform for engineering them to produce human-compatible glycoproteins is set out in this work. The findings of this research have significant implications for the biopharmaceutical industry, opening up a new way of developing safer, more efficient, and more economically feasible biopharmaceutical manufacturing platforms.Keywords: microalgae, glycoproteins, post-translational modification, genome
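A common first-pass bioinformatic check in this kind of comparison is scanning candidate proteins for N-glycosylation sequons (Asn-X-Ser/Thr, where X is any residue except proline). The sketch below does this with a simple overlapping regex; the sequence is an invented placeholder, not a real microalgal protein, and sequon presence alone does not guarantee glycan occupancy.

```python
# Illustrative sequon scan: find candidate N-glycosylation sites (N-X-S/T,
# X != P) in a protein sequence. Positions reported are 1-based.
import re

def find_sequons(seq):
    """Return (position, sequon) pairs using an overlapping lookahead match."""
    return [(m.start() + 1, m.group(1))
            for m in re.finditer(r"(?=(N[^P][ST]))", seq)]

protein = "MKTAYNISGLNATWPNPSVNQSA"  # hypothetical sequence
for pos, motif in find_sequons(protein):
    print(f"sequon {motif} at residue {pos}")
# Note: NPS at residue 16 is correctly skipped because X = proline.
```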
Procedia PDF Downloads 24927 Architectural Design as Knowledge Production: A Comparative Science and Technology Study of Design Teaching and Research at Different Architecture Schools
Authors: Kim Norgaard Helmersen, Jan Silberberger
Abstract:
Questions of style and reproducibility in relation to architectural design are not only continuously debated; the very concepts can seem quite provocative to architects, who like to think of architectural design as depending on intuition, ideas, and individual personalities. This standpoint, dominant in architectural discourse, is challenged in the present paper, which presents early findings from a comparative STS-inspired research study of architectural design teaching and research at different architecture schools in varying national contexts. Within a philosophy-of-science framework, the paper reflects on empirical observations of design teaching at the Royal Academy of Fine Arts in Copenhagen and presents a tentative theoretical framework for the ongoing research project. The framework suggests that architecture, as a field of knowledge production, is dominated mainly by three epistemological positions, which are presented and discussed. Besides serving as a loosely structured framework for future data analysis, the proposed framework advances the argument that architecture can be roughly divided into different schools of thought, like the traditional science disciplines. Without reducing the complexity of the discipline, describing its main intellectual positions should prove fruitful for the future development of architecture as a theoretical discipline, moving architectural critique beyond discussions of taste preferences. Unlike the traditional science disciplines, architecture lacks a community-wide, shared pool of codified references; architects instead reference art projects, buildings, and famous architects when positioning their standpoints. While these inscriptions work as an architectural reference system, comparable to the codified theories cited in traditional academic writing, they are not used systematically in the same way. As a result, architectural critique is often reduced to discussions of taste and subjectivity rather than epistemological positioning. Architects are often criticized as judges of taste, accused of a rationality rooted in culturally relative aesthetic concepts of taste closely linked to questions of style; yet their supposedly subjective reasoning arguably forms part of larger systems of thought. Putting architectural 'styles' under the loupe, and tracing their philosophical roots, can potentially open up a black box in architectural theory. Besides ascertaining and recognizing the existence of specific 'styles', and thereby schools of thought, in current architectural discourse, the study could potentially also point to some mutations of the conventional (something actually 'new') of potentially high value for architectural design education.Keywords: architectural theory, design research, science and technology studies (STS), sociology of architecture
Procedia PDF Downloads 130926 Nonlinear Response of Tall Reinforced Concrete Shear Wall Buildings under Wind Loads
Authors: Mahtab Abdollahi Sarvi, Siamak Epackachi, Ali Imanpour
Abstract:
Reinforced concrete shear walls are commonly used as the lateral load-resisting system of mid- to high-rise office and residential buildings around the world. The design of such systems is often governed by wind rather than seismic effects, particularly in low-to-moderate seismic regions. The current design philosophy in the majority of building codes requires elastic response of lateral load-resisting systems, including reinforced concrete shear walls, when subjected to the rare design wind load, resulting in significantly large wall sections needed to meet strength requirements and drift limits. The stringent drift limits specified by building codes can strongly influence the design of the upper stories, leading to substantial added construction costs for the wall. However, such walls may offer limited to moderate over-strength and ductility thanks to their reserve capacity, provided that they are designed and detailed to develop this over-strength and ductility under extreme wind loads. Exploiting it would contribute significantly to reducing construction time and costs, while maintaining structural integrity under gravity loads and under both frequent and less frequent wind events. This paper investigates the over-strength and ductility capacity of several hypothetical office buildings located in Edmonton, Canada, with a glance at earthquake design philosophy. The selected models are 10- to 25-story buildings with three types of reinforced concrete shear wall configurations: rectangular, barbell, and flanged. The buildings are designed according to the National Building Code of Canada. Fiber-based numerical models of the walls are then developed in Perform-3D, and their nonlinear lateral behavior is evaluated by nonlinear static (pushover) analysis. Ductility and over-strength of the structures are obtained from the results of the pushover analyses. The results confirm a moderate nonlinear capacity of reinforced concrete shear walls under extreme wind loads, even though the lateral displacements of the walls exceed the serviceability limit states defined in the ASCE Prestandard for Performance-Based Wind Design. The results indicate that the limited nonlinear response observed in the reinforced concrete shear walls can be exploited to economize the design of such systems under wind loads.Keywords: concrete shear wall, high-rise buildings, nonlinear static analysis, response modification factor, wind load
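As a rough illustration of how over-strength and displacement ductility are read off a pushover curve, the sketch below idealizes an invented curve: over-strength as peak base shear over design base shear, yield drift from a secant through 75% of peak strength, and ultimate drift at 20% strength degradation. The numbers and the idealization rule are assumptions for illustration, not the paper's method.

```python
# Hedged sketch: over-strength and ductility from a pushover curve.
import numpy as np

drift = np.array([0.0, 0.1, 0.2, 0.35, 0.5, 0.7, 0.9, 1.1])      # roof drift, %
shear = np.array([0.0, 4.0, 7.5, 10.0, 11.5, 12.0, 11.8, 10.5])  # base shear, MN

V_design = 6.0                        # assumed factored design base shear, MN
V_max = shear.max()
i_peak = shear.argmax()

overstrength = V_max / V_design

# Equivalent yield drift from a secant through 75% of peak strength
d75 = np.interp(0.75 * V_max, shear[:i_peak + 1], drift[:i_peak + 1])
drift_yield = d75 / 0.75

# Ultimate drift: last point retaining at least 80% of peak strength
drift_ult = drift[shear >= 0.8 * V_max][-1]

ductility = drift_ult / drift_yield
print(f"over-strength = {overstrength:.2f}, ductility = {ductility:.2f}")
```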
Procedia PDF Downloads 107925 Material Use and Life Cycle GHG Emissions of Different Electrification Options for Long-Haul Trucks
Authors: Nafisa Mahbub, Hajo Ribberink
Abstract:
Electrification of long-haul trucks has been discussed as a potential decarbonization strategy. These trucks will require large batteries because of their weight and long daily driving distances. Around 245 million battery electric vehicles are predicted to be on the road by the year 2035. This huge increase in the number of electric vehicles (EVs) will require intensive mining operations for metals and other materials to manufacture millions of batteries. These operations will add significant environmental burdens, and there is a significant risk that the mining sector will not be able to meet the demand for battery materials, leading to higher prices. Since the battery is the most expensive component of an EV, technologies that enable electrification with smaller battery sizes have substantial potential to reduce material usage and the associated environmental and cost burdens. One of these technologies is the 'electrified road' (eroad), where vehicles receive power while they are driving, for instance through an overhead catenary (OC) wire (like trolleybuses and electric trains), through wireless (inductive) chargers embedded in the road, or by connecting to an electrified rail in or on the road surface. This study assessed the total material use and associated life cycle GHG emissions of two types of eroads (overhead catenary and in-road wireless charging) for long-haul trucks in Canada and compared them to electrification using stationary plug-in fast charging. Since different electrification technologies require different amounts of materials for the charging infrastructure and for the truck batteries, the study included the contributions of both in the total material use. The study developed a bottom-up model comparing the three charging scenarios: plug-in fast chargers, overhead catenary, and in-road wireless charging. The investigated materials for charging technology and batteries were copper (Cu), steel (Fe), aluminium (Al), and lithium (Li). For the plug-in fast charging technology, scenarios ranging from overnight charging (350 kW) to megawatt (MW) charging (2 MW) were investigated. A 500 km stretch of highway (one lane of in-road charging per direction) was considered to estimate the material use for the overhead catenary and inductive charging technologies. The study assumed trucks needing an 800 kWh battery under the plug-in charger scenario but only a 200 kWh battery for the OC and inductive charging scenarios. Results showed that, overall, the inductive charging scenario has the lowest material use, followed by the OC and plug-in charger scenarios. Material use for the OC and plug-in charger scenarios was 50-70% higher than for the inductive charging scenario for the overall system, including the charging infrastructure and the battery. The life cycle GHG emissions from the construction and installation of the charging technology materials were also investigated.Keywords: charging technology, eroad, GHG emissions, material use, overhead catenary, plug-in charger
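A hedged sketch of the bottom-up accounting described above: per-scenario material demand computed as fleet battery material plus charging-infrastructure material. Every intensity and quantity in the sketch is an illustrative assumption, not the study's data.

```python
# Hedged bottom-up sketch: total material per scenario =
# (trucks x battery kWh x kg/kWh) + infrastructure material. Invented figures.
trucks = 10_000                                   # assumed fleet size
battery_kwh = {"plug_in": 800, "catenary": 200, "inductive": 200}

kg_per_kwh = {"Cu": 1.1, "Al": 1.3, "Fe": 2.0, "Li": 0.1}  # assumed battery intensities

infra_kg = {   # assumed total infrastructure material per scenario, kg
    "plug_in":   {"Cu": 2.0e6, "Al": 0.5e6, "Fe": 8.0e6, "Li": 0.0},
    "catenary":  {"Cu": 10e6,  "Al": 2.0e6, "Fe": 30e6,  "Li": 0.0},
    "inductive": {"Cu": 6.0e6, "Al": 1.0e6, "Fe": 15e6,  "Li": 0.0},
}

for scenario, kwh in battery_kwh.items():
    total = {m: trucks * kwh * kg_per_kwh[m] + infra_kg[scenario][m]
             for m in kg_per_kwh}
    print(scenario, {m: f"{v / 1e6:.1f} kt" for m, v in total.items()})
```

Under these placeholder numbers, the 600 kWh battery-size difference dominates the copper and lithium totals, which mirrors the study's finding that eroad scenarios with small batteries use less material overall despite their roadside infrastructure.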
Procedia PDF Downloads 51