Search results for: pregnancy related illnesses
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9958

658 The Influence of Gender and Sexual Orientation on Police Decisions in Intimate Partner Violence Cases

Authors: Brenda Russell

Abstract:

Police officers spend a great deal of time responding to domestic violence calls. Recent research has found that men and women in heterosexual and same-sex relationships are equally likely to initiate intimate partner violence (IPV) and are likewise susceptible to victimization, yet police training tends to focus primarily on male perpetration and female victimization. Criminal justice studies have found that male perpetrators of IPV are blamed more than female perpetrators who commit the same offense. While previous research has examined officers’ responses in IPV cases with male and female heterosexual offenders, research has yet to investigate police responses in same-sex relationships. This study examined officers’ decisions to arrest, perceptions of blame, perceived danger to others, disrespect, and beliefs about prosecution, guilt, and sentencing. Officers in the U.S. (N = 248) were recruited by word of mouth and through police association websites, where a link to an online study was made available. Officers were provided with one of four experimentally manipulated scenarios depicting a male or female perpetrator (heterosexual or same-sex) in a clear domestic assault situation. Officer age, experience with IPV, and IPV training were examined as possible covariates. Training in IPV was not correlated with any dependent variable of interest. Age was correlated with perpetrator arrest and blame (.14 and .16, respectively), and years of experience was correlated with arrest, offering informal advice, and mediating the incident (.14 to -.17). A 2 (perpetrator gender) × 2 (victim gender) factorial analysis was conducted. Results revealed that officers were more likely to provide informal advice and mediate in gay male relationships, and were less likely to arrest perpetrators in same-sex relationships. When officer age and years of experience with domestic violence were statistically controlled, effects for perpetrator arrest and providing informal advice were no longer significant.
Officers perceived heterosexual male perpetrators as more dangerous, blameworthy, and disrespectful, and believed they would receive significantly longer sentences than perpetrators in all other conditions. When officer age and experience were included as covariates in the analyses, perpetrator blame was no longer statistically significant. Age, experience, and training in IPV were not related to perceptions of victims. Police perceived victims as more truthful and believable when the perpetrator was male. Police also believed victims of female perpetrators were more responsible for their own victimization, and victims were more likely to be perceived as a danger to their family when the perpetrator was female. Female perpetrators in same-sex relationships and heterosexual male perpetrators were considered to experience more mental illness than heterosexual female or gay male perpetrators. These results replicate previous research suggesting that male perpetrators are seen as more blameworthy and that victims of female perpetrators are held more responsible for their own victimization, and they expand upon previous research by identifying potential biases in police responses to IPV in same-sex relationships. This study brings to the forefront the importance of evidence-based officer training in IPV and provides insight into the need for a gender-inclusive approach, as well as the practical applications for police.
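The 2 (perpetrator gender) × 2 (victim gender) design lends itself to simple cell-mean contrasts. The sketch below uses invented ratings (not the study's data) to show how the main effect of perpetrator gender and the interaction contrast are computed:

```python
# Hypothetical arrest-likelihood ratings for a 2 (perpetrator gender)
# x 2 (victim gender) between-subjects design; all values are invented
# for illustration only.
cells = {
    ("male", "female"):   [6.0, 5.5, 6.5],  # heterosexual, male perpetrator
    ("female", "male"):   [4.0, 4.5, 5.0],  # heterosexual, female perpetrator
    ("male", "male"):     [4.5, 4.0, 5.0],  # gay male couple
    ("female", "female"): [4.0, 3.5, 4.5],  # lesbian couple
}

def mean(xs):
    return sum(xs) / len(xs)

cell_means = {k: mean(v) for k, v in cells.items()}

# Main effect of perpetrator gender: male vs. female perpetrators,
# averaged over victim gender.
male_perp = mean([cell_means[("male", "female")], cell_means[("male", "male")]])
female_perp = mean([cell_means[("female", "male")], cell_means[("female", "female")]])
perp_effect = male_perp - female_perp

# Interaction contrast: does the perpetrator-gender effect differ by
# victim gender?
interaction = ((cell_means[("male", "female")] - cell_means[("female", "female")])
               - (cell_means[("male", "male")] - cell_means[("female", "male")]))
```

In the study itself, officer age and years of experience additionally entered as covariates (an ANCOVA), which a statistics package would handle; the contrast logic stays the same.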

Keywords: domestic violence, heterosexual, intimate partner violence, officer response, police officer, same-sex

Procedia PDF Downloads 329
657 Time Travel Testing: A Mechanism for Improving Renewal Experience

Authors: Aritra Majumdar

Abstract:

While organizations strive to expand their new customer base, retaining existing relationships is key to improving overall profitability and also showcases how successful an organization is at holding on to its customers. It is well documented that the lion’s share of profit comes from existing customers, so seamless management of renewal journeys across different channels goes a long way toward building trust in the brand. From a quality assurance standpoint, time travel testing gives both business and technology teams an approach to enhancing the customer experience when customers look to extend their partnership with the organization for a defined period of time. This whitepaper will focus on the key pillars of time travel testing: time travel planning, time travel data preparation, and enterprise automation. It will also call out best practices and common accelerator implementation ideas that are generic across verticals like healthcare, insurance, etc. This abstract provides a high-level snapshot of these pillars. Time Travel Planning: The first step in setting up a time travel testing roadmap is appropriate planning. Planning includes identifying the impacted systems that need to be time traveled backward or forward depending on the business requirement, aligning time travel with other releases, deciding the frequency of time travel testing, preparing to handle renewal issues in production after time travel testing is done, and, most importantly, planning for test automation during time travel testing. Time Travel Data Preparation: One of the most complex areas in time travel testing is test data coverage. Aligning test data to cover the required customer segments, and narrowing it down to multiple offer sequences based on defined parameters, is key to successful time travel testing.
Another aspect is the availability of sufficient data for similar combinations to support activities like defect retesting, regression testing, post-production testing (if required), etc. This section will describe the steps necessary for suitable data coverage and sufficient data availability from a time travel testing perspective. Enterprise Automation: Time travel testing is never restricted to a single application. The workflow needs to be validated in the downstream applications to ensure consistency across the board, and the correctness of offers across different digital channels needs to be checked to ensure a smooth customer experience. This section will cover the focus areas of enterprise automation and how automation testing can be leveraged to improve overall quality without compromising the project schedule. Along with the above-mentioned items, the white paper will elaborate on the best practices to be followed during time travel testing and some ideas pertaining to accelerator implementation. To sum up, this paper is based on the author’s real-world experience with time travel testing. While actual customer names and program-related details will not be disclosed, the paper will highlight key learnings that will help other teams implement time travel testing successfully.
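One common way to realize time travel testing in code is to make the clock an injectable dependency, so a test can move a renewal journey backward or forward without touching the real system clock. A minimal sketch, in which the `renewal_window_open` function and the 30-day window are hypothetical examples rather than anything from the paper:

```python
from datetime import date, timedelta

def renewal_window_open(contract_end: date, today: date,
                        window_days: int = 30) -> bool:
    """A renewal offer is shown only inside the window before contract end.
    `today` is injected so a test can 'time travel' the system."""
    window_start = contract_end - timedelta(days=window_days)
    return window_start <= today <= contract_end

contract_end = date(2024, 6, 30)

# Time travel forward: jump the clock into the renewal window.
in_window = renewal_window_open(contract_end, date(2024, 6, 15))

# Time travel backward: well before the window opens.
before_window = renewal_window_open(contract_end, date(2024, 1, 15))

# After expiry: no offer should appear.
after_expiry = renewal_window_open(contract_end, date(2024, 7, 15))
```

In a real enterprise setup the same idea is applied system-wide (database dates, batch schedulers, downstream applications) so that the whole renewal workflow is shifted consistently, which is exactly what the planning pillar above coordinates.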

Keywords: time travel planning, time travel data preparation, enterprise automation, best practices, accelerator implementation ideas

Procedia PDF Downloads 134
656 Greek Tragedy on the American Stage until the First Half of the 20th Century: Identities and Intersections between Greek, Italian and Jewish Community Theatre

Authors: Papazafeiropoulou Olga

Abstract:

The purpose of this paper is to explore the emergence of Greek tragedy on the American stage until the first half of the 20th century through the intellectual processes and contributions of Greek, Italian, and Jewish community theatre. Drawing on a wide range of sources, we trace Greek tragedy on the American stage, exploring the intricate processes by which communities formed their theatrical identities. The paper analyzes the distinct yet related efforts of early Americans to engage with Greek tragedy, while simultaneously searching for the identities of immigrants. Ultimately, ancient drama became a vehicle for great developments in the American theatre. In 1903, the Greek actor Dionysios Taboularis arrived in America; the immigrant stream from Greece brought its artistic heritage with it, presenting the play Return at Hull House in Chicago. In 1906, in New York, an amateur group presented the play The Alosi of Messolonghi, and the next year in Chicago an attempt was noted with a dramatic romance. In the decade 1907-1917, Nikolaos Matsoukas founded and directed the “Arbe theater”, while Petros Kotopoulis formed a troupe. In 1930, one of the greatest Greek theatrical events was the arrival of Marika Kotopouli. Also, members of Vrysoula Pantopoulos’s company formed the “Athenian Operetta”, which had a positive influence on Greek American theatre. The Italian immigrant community was located in tenement “Little Italies” throughout the city, and amateur theatrical clubs soon evolved. The earliest was the “Circolo Filodrammatico Italo-Americano” in 1880. Fausto Malzone’s artistic direction paved the way for the professional Italian immigrant theatre. Immigrant audiences heard the plays of their homeland, representing a major transition for this ethnic theatre. By 1900, the community had produced the major forces that created the professional theatre, and by 1905 the Italian American theatre had become firmly rooted in its professional phase.
Yiddish theater was both an import and a home-grown phenomenon. In 1878, The Sorceress was brought to America by Boris Thomashefsky. Between 1890 and 1940, many Yiddish theater companies appeared in America, presenting adaptations of classical plays. Americans’ first encounter with ancient texts was mostly academic. Tracing tragedy as a form and concept that follows the evolutionary course of domestic social, aesthetic, and political ferment, in accordance with international trends and currents, the paper draws conclusions about early Greek, Italian, and Jewish immigrant theatre in relation to the American scene until the first half of the 20th century. Presumably, community theater acquired identity by intersecting with the spiritual reception of tragedy in America.

Keywords: American, community, Greek, Italian, identities, intersection, Jewish, theatre, tragedy

Procedia PDF Downloads 51
655 Evaluation of Anti-inflammatory Activities of Extracts Obtained from Capparis erythrocarpos In Vivo

Authors: Benedict Ofori, Kwabena Sarpong, Stephen Antwi

Abstract:

Background: Medicinal plants are utilized all around the world and are becoming increasingly important economically. The WHO notes that ‘inappropriate use of traditional medicines or practices can have negative or dangerous effects’ and that future research is needed to ascertain the efficacy and safety of such practices and of the medicinal plants used by traditional medicine systems. The poor around the world have limited access to palliative care or pain relief, and pharmacologists have long focused on developing safe and effective anti-inflammatory drugs. Many of the issues related to the use of traditional and herbal treatments stem from the fact that they are classified in different nations as foods or dietary supplements; as a result, no evidence of the quality, efficacy, or safety of these herbal formulations is required before they are marketed. Because access to drugs meant for pain relief is limited in low-income countries, advanced studies should be done on herbal remedies for inflammation to close the gap. Methods: The ethanolic extracts of the plant were screened for the presence of 10 phytochemicals. The Pierce BCA Protein Assay Kit was used to determine the protein concentration of the egg white. The rats were randomly assigned to six groups. Egg white was injected sub-plantar into the right hind paws of the rats to induce inflammation. The animals were treated with the three plant extracts obtained from the root bark, stem, and leaves of the plant. The control group was treated with normal saline, while the standard groups were treated with the standard drugs indomethacin and celecoxib. A plethysmometer was used to measure the change in paw volume of the animals over the course of the experiment. Results: The phytochemical screening revealed the presence of reducing sugars and saponins. Alkaloids were present only in R.L.S (1:1:1), and phytosterols were found in R.L (1:1) and R.L.S (1:1:1).
The estimated protein concentration was found to be 103.75 mg/ml. The control group showed an observable increase in paw volume, indicating that inflammation was induced over the 5 hours; the increase peaked at the first hour and then decreased gradually throughout the experiment, with minimal changes in paw volume thereafter. The second and third groups were treated with 20 mg/kg of indomethacin and celecoxib, respectively. The anti-inflammatory activities of indomethacin and celecoxib were calculated to be 21.4% and 4.28%, respectively. The remaining three groups were treated with the plant extracts at a dose of 200 mg/kg. R.L.S, R.L, and S.R.L had anti-inflammatory activities of 22.3%, 8.2%, and 12.07%, respectively. Conclusions: The egg albumin-induced paw edema model in rats can be used to evaluate herbs with potential anti-inflammatory activity. Herbal medications have potential anti-inflammatory activities and can be used to manage various inflammatory conditions if their efficacy and side effects are well studied. All three extracts possessed anti-inflammatory activity, with R.L.S having the highest.
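Anti-inflammatory percentages of this kind are conventionally computed as percent inhibition of edema relative to the control group. A sketch with invented paw-volume increases (not the study's raw data):

```python
def percent_inhibition(control_increase: float, treated_increase: float) -> float:
    """Conventional edema-inhibition formula:
    100 * (control increase - treated increase) / control increase."""
    return 100.0 * (control_increase - treated_increase) / control_increase

# Hypothetical mean paw-volume increases (mL) over the observation period.
control = 0.70
treated = 0.55
activity = round(percent_inhibition(control, treated), 1)  # -> 21.4
```

A treated group whose paw volume rises as much as the control's scores 0% inhibition; one whose paw volume does not rise at all scores 100%.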

Keywords: inflammation, capparis erythrocarpos, anti-inflammatory activity, herbal medicine, paw volume, egg albumin

Procedia PDF Downloads 70
654 Nanoparticle Exposure Levels in Indoor and Outdoor Demolition Sites

Authors: Aniruddha Mitra, Abbas Rashidi, Shane Lewis, Jefferson Doehling, Alexis Pawlak, Jacob Schwartz, Imaobong Ekpo, Atin Adhikari

Abstract:

Working or living close to demolition sites can increase the risk of dust-related health problems. Demolition of concrete buildings may produce crystalline silica dust, which is associated with a broad range of respiratory diseases, including silicosis and lung cancers. Previous studies demonstrated significant associations between demolition dust exposure and an increased incidence of mesothelioma, or asbestos cancer. Dust is a generic term for minute solid particles, typically <500 µm in diameter. Dust particles in demolition sites vary over a wide range of sizes. Larger particles tend to settle out of the air, while smaller and lighter solid particles remain dispersed in the air for long periods and pose sustained exposure risks. Submicron ultrafine particles and nanoparticles are respirable deep into the alveoli, beyond the body’s natural respiratory cleaning mechanisms such as cilia and mucous membranes, and are likely to be retained in the lower airways. To our knowledge, how various demolition tasks release nanoparticles is largely unknown; previous studies mostly focused on coarse dust, PM2.5, and PM10. The general belief is that the dust generated during demolition tasks consists mostly of large particles formed through crushing, grinding, or sawing of concrete and wooden structures, so little consideration has been given to the submicron ultrafine and nanoparticles generated and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated cytotoxicity of nanoparticles on lung epithelial cells. This study addressed these knowledge gaps using a newly developed nanoparticle monitor at two adjacent indoor and outdoor building demolition sites in southern Georgia.
Nanoparticle levels were measured (n = 10) with a TSI NanoScan SMPS Model 3910 at four distances (5, 10, 15, and 30 m) from the work location, as well as at control sites. Temperature and relative humidity levels were recorded. Indoor demolition work included acetylene torch cutting, masonry drilling, ceiling panel removal, and other miscellaneous tasks, whereas outdoor demolition work included acetylene torch cutting and the use of a skid-steer loader to remove an HVAC system. Concentration ranges of nanoparticles of 13 particle sizes at the indoor demolition site were: 11.5 nm: 63 – 1,054/cm³; 15.4 nm: 170 – 1,690/cm³; 20.5 nm: 321 – 730/cm³; 27.4 nm: 740 – 3,255/cm³; 36.5 nm: 1,220 – 17,828/cm³; 48.7 nm: 1,993 – 40,465/cm³; 64.9 nm: 2,848 – 58,910/cm³; 86.6 nm: 3,722 – 62,040/cm³; 115.5 nm: 3,732 – 46,786/cm³; 154 nm: 3,022 – 21,506/cm³; 205.4 nm: 12 – 15,482/cm³; 273.8 nm:

Keywords: demolition dust, industrial hygiene, aerosol, occupational exposure
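Channel-wise concentrations like those reported above are often condensed into a total number concentration and a count-weighted geometric mean diameter. A sketch, using the quoted indoor upper-bound values only as a hypothetical sample:

```python
import math

# Channel midpoint diameter (nm) -> number concentration (particles/cm^3).
# The values are the indoor upper bounds quoted in the abstract, reused
# here purely to illustrate the summary statistics.
channels = {
    11.5: 1054, 15.4: 1690, 20.5: 730, 27.4: 3255, 36.5: 17828,
    48.7: 40465, 64.9: 58910, 86.6: 62040, 115.5: 46786,
    154.0: 21506, 205.4: 15482,
}

# Total number concentration (particles/cm^3).
total = sum(channels.values())

# Count-weighted geometric mean diameter (nm).
gmd = math.exp(sum(n * math.log(d) for d, n in channels.items()) / total)
```

Instruments like the NanoScan SMPS report per-channel data in exactly this midpoint/concentration form, so the same summary can be applied per distance and per task.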

Procedia PDF Downloads 407
653 An Approach to Addressing Homelessness in Hong Kong: Life Story Approach

Authors: Tak Mau Simon Chan, Ying Chuen Lance Chan

Abstract:

Homelessness has been a popular and controversial subject of debate in Hong Kong, a densely populated city well known for very expensive housing. The constitution of the homeless as threats to the community and to environmental hygiene is ambiguous and debatable in the Hong Kong context. The lack of an intervention model, beyond the tangible services delivered, is the critical research gap thus far. The life story approach (LSA), with its unique humanistic orientation, has been well applied in recent decades to depict the needs of various target groups, but not the homeless. It is argued that the LSA, which has been employed by health professionals in the landscape of dementia care and health and social care settings, can serve as a reference in the local Chinese context through indigenization. This study therefore captures the viewpoints of service providers and users by constructing an indigenous intervention model that refers to the LSA in serving the chronically homeless. Drawing on 8 focus groups with 13 social workers and 27 homeless individuals, and on individual in-depth interviews with 12 homeless individuals, a framework of the LSA for homeless people is proposed. Through thematic analysis, three main themes of their life stories were generated: the family, negative experiences, and identity transformation. These three domains solidify a framework that can be applied not only to the homeless but also to other disadvantaged groups in the Chinese context. Based on the three domains, the model is applied in the daily practice of social workers who help the homeless. The domain of family encompasses familial relationships from the past to the present to the speculated future, with ten sub-themes. The domain of negative experiences includes seven sub-themes, with reference to deviant behavior committed.
The last domain, identity transformation, incorporates the awareness and redefinition of one’s identity and comprises seven sub-themes. The first two domains are important components of personal histories, while the third is a more unknown, exploratory, yet-to-be-redefined territory with a more positive and constructive orientation towards developing one’s identity and life meaning. The longitudinal temporal dimension of moving from past to present to future enriches the meaning-making process, facilitates the integration of life experiences, and maintains a more hopeful dialogue. The model is tested, and its effectiveness is measured using qualitative and quantitative methods, to affirm the extent to which it is relevant to the local context. First, it provides a clear guideline for social workers, who can use the approach as a reference source. Second, the framework acts as a new intervention means to address problem-saturated stories and the intangible needs of the homeless. Third, the model extends the application of the LSA beyond health-related issues. Finally, the model is highly relevant to the local indigenous context.

Keywords: homeless, indigenous intervention, life story approach, social work practice

Procedia PDF Downloads 278
652 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals

Authors: Christine F. Boos, Fernando M. Azevedo

Abstract:

The electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma, and brain death; locating damaged areas of the brain after head injury, stroke, and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science, and its diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of an epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptogenic zone, assist in the planning of drug treatment, and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings, at least 24 hours long and acquired by a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that an EEG screen usually displays 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex, and exhaustive task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists’ task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for pattern classification.
One of the differences between these methodologies is the type of input stimulus presented to the network, i.e., how the EEG signal is introduced to it. Five types of input stimuli are commonly found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal’s morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms, and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks implemented with each of them. Using the raw signal, efficiency varied between 43 and 84%. The results for the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73 and 77%, respectively. The efficiency of Wavelet Transform features varied between 57 and 81%, while the morphological descriptors yielded efficiencies between 62 and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
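For a synthetic one-second epoch, the first of these input types can be sketched directly: the raw samples, the FFT magnitude spectrum, and a bare-bones framed STFT. The code below is an illustration only; the sampling rate, window sizes, and the synthetic spike are invented, and numpy is assumed available:

```python
import numpy as np

fs = 256                      # sampling rate, Hz (illustrative)
t = np.arange(fs) / fs        # one-second epoch
rng = np.random.default_rng(0)

# Synthetic "EEG": background rhythm plus a brief spike-like transient.
raw = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(fs)
raw[100:110] += np.hanning(10) * 3.0     # ~40 ms transient

# Input type: FFT magnitude spectrum of the whole epoch.
fft_spectrum = np.abs(np.fft.rfft(raw))  # length fs // 2 + 1

# Input type: short-time spectra from overlapping windowed frames (STFT).
win, hop = 64, 32
frames = np.stack([raw[i:i + win] * np.hanning(win)
                   for i in range(0, fs - win + 1, hop)])
stft = np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, win // 2 + 1)
```

Morphological descriptors (e.g., peak amplitude and transient duration) and Wavelet Transform features would be extracted from the same epoch analogously and fed to the network as alternative input vectors.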

Keywords: Artificial neural network, electroencephalogram signal, pattern recognition, signal processing

Procedia PDF Downloads 508
651 Synthesis of Carbon Nanotubes from Coconut Oil and Fabrication of a Non Enzymatic Cholesterol Biosensor

Authors: Mitali Saha, Soma Das

Abstract:

The fabrication of nanoscale materials for use in chemical sensing, biosensing, and biological analyses has proven a promising avenue in the last few years. Cholesterol has aroused considerable interest in recent years on account of its being an important parameter in clinical diagnosis: there is a strong positive correlation between high serum cholesterol levels and arteriosclerosis, hypertension, and myocardial infarction. Enzyme-based electrochemical biosensors have shown high selectivity and excellent sensitivity, but the enzyme is easily denatured during immobilization, and its activity is affected by temperature, pH, and toxic chemicals. Besides, the reproducibility of enzyme-based sensors is not very good, which further restricts the application of cholesterol biosensors. It has been demonstrated that carbon nanotubes can promote electron transfer with various redox-active proteins, ranging from cytochrome c to glucose oxidase with its deeply embedded redox center. In continuation of our earlier work on the synthesis and applications of carbon- and metal-based nanoparticles, we report here the synthesis of carbon nanotubes (CCNT) by burning coconut oil under an insufficient flow of air using an oil lamp. The soot was collected from the top portion of the flame, where the temperature was around 650 °C, and was purified, functionalized, and then characterized by SEM, p-XRD, and Raman spectroscopy. The SEM micrographs showed the formation of tubular structures of CCNT with diameters below 100 nm. The XRD pattern showed two predominant peaks at 25.2° and 43.8° (2θ), corresponding to the (002) and (100) planes of CCNT, respectively. The Raman spectrum (514 nm excitation) showed the G-band at 1600 cm⁻¹, related to the vibration of sp²-bonded carbon, and the D-band at 1350 cm⁻¹, arising from the vibrations of sp³-bonded carbon.
A nonenzymatic cholesterol biosensor was then fabricated on an insulating Teflon substrate containing three silver wires at the surface, covered by the CCNT obtained from coconut oil. Here, the CCNT served as both the working and counter electrodes, whereas the reference electrode and electric contacts were made of silver. The dimensions of the electrode were 3.5 cm × 1.0 cm × 0.5 cm (length × width × height), making it suitable for working with 50 µL volumes, like standard screen-printed electrodes. The voltammetric behavior of cholesterol at the CCNT electrode was investigated by cyclic voltammetry and differential pulse voltammetry using 0.001 M H2SO4 as the electrolyte. The influence of experimental parameters on the peak currents of cholesterol, such as pH, accumulation time, and scan rate, was optimized. Under optimum conditions, the peak current was linear in the cholesterol concentration range from 1 µM to 50 µM, with a sensitivity of ~15.31 μA μM⁻¹ cm⁻², a lower detection limit of 0.017 µM, and a response time of about 6 s. The long-term storage stability of the sensor was tested for 30 days, and the current response was found to be ~85% of its initial value after 30 days.
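Sensitivity and detection-limit figures of this kind follow from a linear calibration of peak current against concentration. A sketch with synthetic, perfectly linear data; the slope, electrode area, and blank noise below are invented for illustration and are not the study's measurements:

```python
# Synthetic calibration data: peak current i = slope * c + intercept.
concentrations = [1, 5, 10, 20, 30, 40, 50]          # µM
true_slope, intercept = 0.536, 0.10                  # µA per µM (hypothetical)
currents = [true_slope * c + intercept for c in concentrations]

# Ordinary least-squares slope of current vs. concentration.
n = len(concentrations)
mean_c = sum(concentrations) / n
mean_i = sum(currents) / n
slope = (sum((c - mean_c) * (i - mean_i)
             for c, i in zip(concentrations, currents))
         / sum((c - mean_c) ** 2 for c in concentrations))

area_cm2 = 0.035                 # hypothetical electrode area, cm^2
sensitivity = slope / area_cm2   # µA µM^-1 cm^-2

sd_blank = 0.003                 # hypothetical blank standard deviation, µA
lod = 3 * sd_blank / slope       # limit of detection (3-sigma criterion), µM
```

With these invented numbers the sketch reproduces figures of the same magnitude as those reported (~15.31 μA μM⁻¹ cm⁻² and ~0.017 µM), showing how the quoted quantities relate to the calibration slope.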

Keywords: coconut oil, CCNT, cholesterol, biosensor

Procedia PDF Downloads 269
650 Food Sovereignty as Local Resistance to Unequal Access to Food and Natural Resources in Latin America: A Gender Perspective

Authors: Ana Alvarenga De Castro

Abstract:

Food sovereignty has been advanced by the international peasants’ movement, La Via Campesina, as a precondition for food security: the right of each nation to maintain its own food supply while respecting cultural practices, sustainability, and productive diversity. The political conceptualization today goes further, holding that the term is about achieving the right of farmers to control food systems according to local specificities, and about equality in access to natural resources and quality food. The current feminization of agroecosystems and of food insecurity, identified by researchers and recognized by international agencies like the UN and FAO, has strengthened the feminist discourse within the food sovereignty movement, considering the historical inequalities that place women farmers in subaltern positions within families and rural communities. The current tendency in many rural areas, in which more women take responsibility for food production while still lacking access to natural resources, meets particular conditions in Latin America due to the global economic logic that places the Global South in the position of raw-material supplier for the industrialized North, combined with regional characteristics. In this context, Latin American countries play the role of commodity exporters in the international division of labor, exporting items including grains, soybean paste, and ores at the expense of local food chains, which provide domestic quality food under more sustainable practices. The connections between gender inequalities and global territorial inequalities in the access to and control of food and natural resources are pointed out by feminist political ecology (FPE) authors, and are linked in this article to the potentialities and limitations of women farmers in reproducing diversified agroecosystems in tropical environments.
The work highlights the importance of local practices held by women farmers, which are crucial to maintaining sustainable agricultural systems, and their results for seed, soil, biodiversity, and water conservation. It presents an analysis of documents, releases, videos, and other publicized experiences launched by peasants’ organizations in Latin America, which evidence the different technical and political answers toward food sovereignty that peasants’ groups attribute to women farmers. These are combined with articles presenting empirical analyses of women farmers’ practices in Latin America. The combination leads to a discussion of the benefits of peasants’ conceptions of food systems, their connections with local realities, and the gender issues linked to the conceptualization of food sovereignty. The conclusion is that reality in the field cannot meet the ideal of food sovereignty homogeneously, and that sustainable agricultural practices depend on the achievement of rights and the eradication of social inequalities.

Keywords: food sovereignty, gender, diversified agricultural systems, access to natural resources

Procedia PDF Downloads 228
649 The Effect of Teachers' Personal Values on the Perceptions of the Effective Principal and Student in School

Authors: Alexander Zibenberg, Rima’a Da’As

Abstract:

Individuals are naturally inclined to classify people as leaders and followers, utilizing cognitive structures or prototypes that specify the traits and abilities characterizing the effective leader (implicit leadership theories) and the effective follower in an organization (implicit followership theories). The present study offers insights into how teachers' personal values (self-enhancement and self-transcendence) explain preferences regarding the effective leader (i.e., principal) and assumptions about the traits and behaviors that characterize effective followers (i.e., students). Beyond the direct effect on perceptions of effective types of leader and follower, the present study argues that values may also interact with organizational and personal contexts in influencing perceptions; thus, the authors suggest that teachers' managerial position may moderate the relationships between personal values and perceptions of the effective leader and follower. Specifically, two key questions are addressed: (1) Is there a relationship between personal values and perceptions of the effective leader and effective follower? (2) Are these relationships stable, or could they change across different contexts? Two hundred fifty-five Israeli teachers participated in this study, completing questionnaires about the effective student and the effective principal. Results of structural equation modeling (SEM) with maximum likelihood estimation showed, first, that the model fit the data well. Second, there was a positive relationship between self-enhancement and the anti-prototypes of the effective principal and of the effective student. The relationships between the self-transcendence value and both perceptions were significant as well: self-transcendence was positively related to the way the teacher perceives the prototypes of the effective principal and effective student.
In addition, the authors found that teachers' managerial position moderates these relationships. The article contributes to the literature on both perceptions and personal values. Although several earlier studies have explored implicit leadership and implicit followership theories, personality characteristics (values) have received less attention in this context. This study shows that personal values, the deeply rooted, abstract motivations that guide, justify, or explain attitudes, norms, opinions, and actions, explain differences in perceptions of the effective leader and follower. The results advance theoretical understanding of the relationship between personal values and individuals' perceptions in organizations. An additional contribution is the use of the teacher's managerial position to identify a potential boundary condition on the translation of personal values into outcomes. The findings suggest that, through the management process, teachers acquire knowledge and skills that augment their ability (beyond their personal values) to predict perceptions of ideal types of principal and student. The study elucidates the unique role of personal values in understanding organizational thinking. Personal values may thus explain differences in individual preferences for organizational paradigms (mechanistic vs. organic).
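The moderation idea described above, that managerial position may change how a value relates to a perception, can be illustrated with a simple interaction model. This is a hedged sketch, not the study's actual SEM: all variable names and the simulated data are invented for illustration, and an OLS interaction term stands in for the reported structural model.

```python
# Hypothetical sketch of the moderation described in the abstract: a personal
# value predicting perception of the effective-principal prototype, with
# managerial position as a moderator. All names and data are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 255  # sample size reported in the abstract
df = pd.DataFrame({
    "self_transcendence": rng.normal(size=n),
    "managerial_position": rng.integers(0, 2, size=n),  # 0 = no, 1 = yes
})
# Simulated outcome: perception of the effective-principal prototype
df["prototype_principal"] = (
    0.4 * df["self_transcendence"]
    + 0.3 * df["self_transcendence"] * df["managerial_position"]
    + rng.normal(scale=0.5, size=n)
)

# The interaction term tests whether managerial position moderates the
# value-perception relationship.
model = smf.ols(
    "prototype_principal ~ self_transcendence * managerial_position", data=df
).fit()
print(model.params)
```

A significant interaction coefficient would correspond to the moderation effect the authors report.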

Keywords: implicit leadership theories, implicit followership theories, organizational paradigms, personal values

Procedia PDF Downloads 141
648 Change of Education Business in the Age of 5G

Authors: Heikki Ruohomaa, Vesa Salminen

Abstract:

Regions face intense competition to attract companies, businesses, inhabitants, and students, and must therefore improve living and business environments that are changing rapidly through digitalization. From industry's point of view, the availability of a skilled labor force and an innovative environment are crucial factors: qualified staff are needed to exploit the opportunities of digitalization and respond to future skill needs. The World Manufacturing Forum stated in its 2019 report that, within the next five years, 40% of workers will have to change their core competencies. Through digital transformation, new technologies such as cloud, mobile, big data, 5G infrastructure, platform technology, data analysis, and social networks, with increasing intelligence and automation, let enterprises capitalize on new opportunities and optimize existing operations to achieve significant business improvement. Digitalization will be an important part of citizens' everyday life and of the average employee's working day. For that reason, the education system and education programs at all levels, from diaper age to doctorate, have been directed to fulfill this ecosystem strategy. Goal: The Fourth Industrial Revolution will bring unprecedented change to societies, education organizations, and business environments. This article aims to identify how education, its content, the way it is delivered, and the education business as a whole are changing, and, most importantly, how we should respond to this inevitable co-evolution. Methodology: The study examines how the learning process is boosted by new digital content, new learning software and tools, and customer-oriented learning environments. The change of education programs and individual education modules can be supported by applied research projects. 
Such projects can be used to build proofs of concept of new technology and of new ways to teach and train, and the experience gathered can then change education content, teaching methods, and ultimately the education business as a whole. Major findings: Applied research projects can run proof-of-concept phases in real-environment field labs to test technology opportunities and new training tools. Customer-oriented applied research projects are also excellent settings for students to complete assignments using new knowledge and content, and for teachers to test new tools and create new ways to educate. New content and problem-based learning are used in future education modules. This article introduces case-study experiences from customer-oriented digital transformation projects and shows how the knowledge gathered on new digital content and new ways to educate has influenced education. The case study draws on research projects, customer-oriented field labs/learning environments, and education programs of Häme University of Applied Sciences.

Keywords: education process, digitalization content, digital tools for education, learning environments, transdisciplinary co-operation

Procedia PDF Downloads 161
647 Improvement of Electric Aircraft Endurance through an Optimal Propeller Design Using Combined BEM, Vortex and CFD Methods

Authors: Jose Daniel Hoyos Giraldo, Jesus Hernan Jimenez Giraldo, Juan Pablo Alvarado Perilla

Abstract:

Range and endurance are the main limitations of electric aircraft owing to the nature of their power source, so improving the efficiency of these systems is essential to enable aircraft operation with less environmental impact. Propeller efficiency strongly affects the overall efficiency of the propulsion system; its optimization can therefore have an outstanding effect on aircraft performance. An optimization method is applied to an aircraft propeller to maximize range and endurance by estimating the best combination of geometric parameters, such as diameter, airfoil, and chord and pitch distributions, for a specific aircraft design at a given cruise speed; the rotational speed at which the propeller operates at minimum current consumption is then estimated. The optimization is based on the Blade Element Momentum (BEM) method, corrected to account for tip and hub losses and for Mach-number and rotational effects. Furthermore, airfoil lift and drag coefficients are approximated from Computational Fluid Dynamics (CFD) simulations, supported by preliminary studies of grid independence and the suitability of different turbulence models, to feed the BEM method and obtain more reliable results. Additionally, vortex theory is employed to find the optimum pitch and chord distributions for a minimum-induced-loss propeller design. The optimization also takes into account the well-known brushless motor model, thrust constraints for take-off runway limitations, the maximum allowable propeller diameter given aircraft height, and maximum motor power. The BEM-CFD method is validated by comparing its predictions for a known APC propeller with both available experimental tests and APC's reported performance curves, which are based on vortex theory fed with the NASA transonic airfoil code; the method fits the experimental data adequately, indeed better than the reported APC data. 
Optimal propeller predictions are validated by wind tunnel tests, CFD propeller simulations, and a study of how the propeller would perform if it replaced that of a known aircraft. Tendency charts relating a wide range of parameters, such as diameter, voltage, pitch, rotational speed, current, and propeller and electric efficiencies, are obtained and discussed. The implementation of CFD tools improves the accuracy of the BEM predictions. The results also show that a propeller reaches higher efficiency peaks when operating at high rotational speed, owing to the higher Reynolds numbers at which airfoils exhibit lower drag. On the other hand, the behavior of current consumption relative to propulsive efficiency is counterintuitive: the best range and endurance are not necessarily achieved at an efficiency peak.
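The blade-element idea underlying the method can be sketched in a few lines. This is a deliberately simplified illustration: induced velocities, tip/hub-loss and Mach corrections are all neglected, and the geometry, operating point, and aerodynamic polars are invented assumptions, not the paper's corrected BEM-CFD method.

```python
# Simplified blade-element sketch (no induction, no loss corrections).
# All geometry and aerodynamic models below are illustrative assumptions.
import numpy as np

def element_loads(V, omega, r, chord, beta, B=2, rho=1.225):
    """Thrust and torque per unit radius at one blade station."""
    phi = np.arctan2(V, omega * r)        # inflow angle
    alpha = beta - phi                    # local angle of attack
    W2 = V**2 + (omega * r)**2            # resultant speed squared
    cl = 2 * np.pi * alpha                # thin-airfoil lift slope (assumption)
    cd = 0.01 + 0.05 * alpha**2           # simple drag polar (assumption)
    q = 0.5 * rho * W2 * chord * B
    dT = q * (cl * np.cos(phi) - cd * np.sin(phi))      # thrust per unit radius
    dQ = q * (cl * np.sin(phi) + cd * np.cos(phi)) * r  # torque per unit radius
    return dT, dQ

V = 20.0                                  # cruise speed [m/s] (assumption)
omega = 5000 * 2 * np.pi / 60             # 5000 rpm in rad/s
radii = np.linspace(0.05, 0.30, 50)       # hypothetical 0.30 m radius blade
chords = 0.03 * (1 - 0.5 * radii / 0.30)  # linearly tapered chord (assumption)
betas = np.arctan2(V, omega * radii) + np.radians(4)  # constant 4 deg alpha

dT, dQ = element_loads(V, omega, radii, chords, betas)
dr = radii[1] - radii[0]
thrust = float(np.sum(dT) * dr)           # rectangle-rule integration
torque = float(np.sum(dQ) * dr)
print(f"thrust ~ {thrust:.1f} N, torque ~ {torque:.2f} N.m")
```

The paper's full method adds momentum balance (induction factors), tip/hub and Mach corrections, and CFD-derived polars in place of the thin-airfoil and parabolic-drag assumptions used here.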

Keywords: BEM, blade design, CFD, electric aircraft, endurance, optimization, range

Procedia PDF Downloads 92
646 Production of Functional Crackers Enriched with Olive (Olea europaea L.) Leaf Extract

Authors: Rosa Palmeri, Julieta I. Monteleone, Antonio C. Barbera, Carmelo Maucieri, Aldo Todaro, Virgilio Giannone, Giovanni Spagna

Abstract:

In recent years, considerable interest has been shown in the functional properties of foods, in which phenolic compounds, able to scavenge free radicals, play a relevant role. A more sustainable agriculture must also emerge to guarantee food supply over the coming years. Wheat, corn, and rice are the most commonly cultivated cereals, but other cereal species, such as barley, are appreciated for their peculiarities. Barley (Hordeum vulgare L.) is a C3 winter cereal with high resistance to drought and salt stress. There is growing interest in barley as an ingredient for functional foods because of its high content of phenolic compounds and beta-glucans. In this respect, the possibility of separating specific functional fractions from food-industry by-products looks very promising. Olive leaves are a quantitatively significant by-product of olive grove farming and an interesting source of phenolic compounds. In particular, oleuropein, which provides important nutritional benefits, is the main phenolic compound in olive leaves, ranging from 17% to 23% depending on the cultivar and growing season. Together with oleuropein and its derivatives (e.g., dimethyloleuropein and oleuropein diglucoside), olive leaves contain tyrosol, hydroxytyrosol, and a series of structurally related secondary metabolites: verbascoside, ligstroside, hydroxytyrosol glucoside, tyrosol glucoside, oleuroside, oleoside-11-methyl ester, and nuzhenide. Several flavonoids, flavonoid glycosides, and phenolic acids have also been described in olive leaves. The aim of this work was the production of a functional food with a higher content of polyphenols and the evaluation of its shelf life. Organic durum wheat and barley grains, which contain high levels of phenolic compounds, were used for the production of crackers. Olive leaf extract (OLE) was obtained from cv. ‘Biancolilla’ by an aqueous extraction method. 
Two baking trials were performed with both organic durum wheat and barley flours, adding olive leaf extract. Control crackers, made for comparison, were produced with the same formulation, replacing OLE with water. Total phenolic compounds, moisture content, water activity, and textural properties at different storage times were determined to evaluate the shelf life of the products. Preliminary results showed that the enriched crackers had higher phenolic content and antioxidant activity than the controls. Olive leaf extracts could thus be good candidates as functional ingredients for cracker production, since bakery items are consumed daily and have a long shelf life.

Keywords: barley, functional foods, olive leaf, polyphenols, shelf life

Procedia PDF Downloads 284
645 Auto Surgical-Emissive Hand

Authors: Abhit Kumar

Abstract:

The world is full of master-slave telemanipulators in which the doctor masters a console and a surgical arm performs the operation; these are passive robots. Because such passive robots still require doctors to operate the console, the potential of robotics is not fully utilized, and the focus should shift to active robots. The Auto Surgical-Emissive Hand (AS-EH) applies this concept of active robotics: an anthropomorphic hand designed for autonomous surgical, emissive, and scanning operation, enabled with a three-way emission system, namely a laser beam, icy steam between -5°C and 5°C, and a thermal imaging camera (TIC), embedded in the palm and structured as a three-way disc. The fingers of the AS-EH carry tactile, force, and pressure sensors so that force, pressure, and physical contact with the external subject can be controlled. The main focus, however, is the concept of "emission". How can three unrelated methods work together in a single programmed hand? Each is used according to the needs of the external subject. The laser is emitted through a pin-sized outlet fed by a thin internal channel connected to the palm; it delivers just enough radiation to cut open the skin for the removal of metal scrap or other foreign material while the patient is under anesthesia, keeping the complexity of the operation low. At the same time, the TIC, fitted with an accurate temperature compensator (ATC), provides a real-time feed of the surgery as a heat image, allowing the procedure to be analyzed, while the ATC helps detect elevated body temperature during the operation. The thermal imaging camera is rooted internally in the AS-EH and connected externally to real-time software to provide live feedback. The icy steam provides a cooling effect before and after the operation. The principle is simple: if a finger remains in icy water for a long time, blood flow slows, the area becomes numb and isolated, and no sense of touch is observed, because the nerve impulses no longer coordinate with the brain and the sensory receptors are not activated. Using the same principle, icy steam below 273 K can be emitted through a pin-sized hole onto the area of concern to frost it before the operation and to desensitize pain while the operation is in progress. The mathematical calculations, algorithms, and programming for the working and movement of the hand are installed in the system prior to the procedure. Since the AS-EH is a programmable hand, it comes with limitations; it will therefore perform only surgical procedures of low complexity.

Keywords: active robots, algorithm, emission, icy steam, TIC, laser

Procedia PDF Downloads 340
644 Elevated Celiac Antibodies and Abnormal Duodenal Biopsies Associated with IBD Markers: Possible Role of Altered Gut Permeability and Inflammation in Gluten Related Disorders

Authors: Manav Sabharwal, Ruda Rai Md, Candace Parker, James Ridley

Abstract:

Wheat, one of the most commonly consumed grains worldwide, contains gluten. Gluten intake is now considered a trigger for gluten-related disorders (GRDs), including celiac disease (CD), a common genetic disease affecting 1% of the US population, non-celiac gluten sensitivity (NCGS), and wheat allergy. NCGS is increasingly recognized as an acquired gluten-sensitive enteropathy that is prevalent across age, ethnic, and geographic groups. The cause of this entity is not fully understood; recent studies suggest that it is more common in participants with irritable bowel syndrome (IBS), iron-deficiency anemia, or symptoms of fatigue, and that it overlaps considerably in symptoms with IBS and Crohn’s disease. However, those studies lacked complete serologies, imaging tests, and/or pan-endoscopy. We performed a prospective study of 745 adult patients who presented to an outpatient clinic for evaluation of chronic upper gastrointestinal symptoms and subsequently underwent upper endoscopy (EGD) as standard of care. Evaluation comprised a comprehensive celiac antibody panel, inflammatory bowel disease (IBD) serologic markers, duodenal biopsies, and small bowel video capsule endoscopy (VCE) when available. At least six biopsy specimens were obtained from the duodenum and proximal jejunum during EGD; CD3+ intraepithelial lymphocytes (IELs) and villous architecture were evaluated by a single experienced pathologist, and VCE was performed by a single experienced gastroenterologist. Of the 745 patients undergoing EGD, 12% (93/745) showed elevated CD3+ IELs in the duodenal biopsies. 52% (387/745) completed a comprehensive CD panel, and 7.2% (28/387) were positive for at least one CD antibody, tissue transglutaminase (tTG) being the most common (65%, 18/28). Of these patients, 18% (5/28) showed increased duodenal CD3+ IELs, but none (0%) showed the villous blunting or distortion required to meet criteria for CD. 
Surprisingly, 43% (12/28) were positive for at least one IBD serology (ASCA, ANCA, or the expanded IBD panel (LabCorp)). Of these 28 patients, 29% (8/28) underwent SB VCE, of which 100% (8/8) showed significant jejuno-ileal mucosal lesions diagnostic of IBD. Abnormal CD antibodies (7.2%, 28/387) and increased CD3+ IELs on duodenal biopsy (12%, 93/745) were thus observed frequently in patients with upper-GI symptoms undergoing EGD in an outpatient clinic. None met criteria for CD, and a high proportion (43%, 12/28) showed evidence of overlap with IBD. This suggests a potential causal link between acquired GRDs and underlying inflammation with gut mucosal barrier disruption. Further studies investigating a role for abnormal antigen presentation of dietary gluten to gut-associated lymphoid tissue are justified; such a mechanism could explain the high prevalence of GRDs in the population and their correlation with IBS, IBD, and other gut inflammatory disorders.

Keywords: celiac, gluten sensitive enteropathy, lymphocytic enteritis, IBS, IBD

Procedia PDF Downloads 141
643 Theoretical-Methodological Model to Study Vulnerability of Death in the Past from a Bioarchaeological Approach

Authors: Geraldine G. Granados Vazquez

Abstract:

Every human being is exposed to the risk of dying, and some are more susceptible than others depending on the cause. The cause can be understood as the hazard of death faced by a group or individual, and this irreversible damage defines the condition of vulnerability. Risk is a dynamic concept: it depends on environmental, social, economic, and political conditions, so vulnerability can only be evaluated in terms of relative parameters. This research focuses on building a model that evaluates the risk, or propensity, of death in past urban societies in connection with the everyday life of individuals, considering that death can be a consequence of two coexisting issues: hazard and the deterioration of resistance to destruction. One of the most important discussions in bioarchaeology concerns health and life conditions in ancient groups, and researchers are looking for more flexible models to evaluate these topics. Accordingly, this research proposes a theoretical-methodological model that assesses the vulnerability of death in past urban groups. The model aims to evaluate the risk of death in light of both the sociohistorical context and intrinsic biological features. It proposes four areas of assessment: the first three use statistical or quantitative methods, while the fourth, corresponding to embodiment, is based on qualitative analysis. The four areas and their techniques are: a) Demographic dynamics. From the distribution of age at death, mortality is analyzed using life tables, from which four aspects may be inferred: population structure, fertility, mortality-survival, and productivity-migration. b) Frailty. Selective mortality and heterogeneity in frailty can be assessed through the relationship between individual characteristics and age at death. 
Two indicators used in contemporary populations to evaluate stress are height and linear enamel hypoplasias: height estimates reflect an individual’s nutrition and health history within specific groups, while enamel hypoplasias record the individual’s first years of life. c) Inequality. Space reflects the various sectors of a society, including in ancient cities; in general terms, spatial analysis uses measures of association to show the relationship between frailty variables and space. d) Embodiment. Everyone’s life story leaves evidence on the body, even in the bones. This leads us to consider the individual’s dynamic relations in terms of time and space; consequently, micro-level analysis of persons assesses vulnerability from everyday life, where symbolic meaning also plays a major role. In sum, using Mesoamerican examples as case studies, this research demonstrates that vulnerability arises not only from intrinsic characteristics related to age and sex but also from the social and historical context that determines an individual’s state of frailty before death. A limiting factor for past groups is that some basic aspects, such as the roles individuals played in everyday life, escape our comprehension and remain under discussion.
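The life-table computation named in the first area (demographic dynamics) can be sketched from an age-at-death distribution. This is a minimal illustration: the death counts and ten-year intervals are invented, and only the basic columns (dx, lx, qx) are computed, without the fertility or migration inferences the model describes.

```python
# Minimal life-table sketch from an age-at-death distribution.
# The counts below are invented for illustration.
import numpy as np

# Deaths (Dx) in 10-year age intervals: 0-9, 10-19, ..., 60-69
deaths = np.array([30, 10, 15, 20, 25, 18, 12])
N = deaths.sum()

dx = deaths / N                                        # proportion dying per interval
lx = 1 - np.concatenate(([0.0], np.cumsum(dx)[:-1]))   # survivors entering interval
qx = dx / lx                                           # probability of death in interval

for age, d, l, q in zip(range(0, 70, 10), dx, lx, qx):
    print(f"age {age:2d}+: dx={d:.3f}  lx={l:.3f}  qx={q:.3f}")
```

By construction lx starts at 1.0 and qx reaches 1.0 in the last interval, since everyone in a skeletal sample has died; mortality profiles and survivorship curves follow directly from these columns.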

Keywords: bioarchaeology, frailty, Mesoamerica, vulnerability

Procedia PDF Downloads 206
642 Diagnostic Yield of CT PA and Value of Pre Test Assessments in Predicting the Probability of Pulmonary Embolism

Authors: Shanza Akram, Sameen Toor, Heba Harb Abu Alkass, Zainab Abdulsalam Altaha, Sara Taha Abdulla, Saleem Imran

Abstract:

Acute pulmonary embolism (PE) is a common and potentially fatal disease. Its clinical presentation is variable and nonspecific, making accurate diagnosis difficult. Testing of patients with suspected acute PE has increased dramatically; however, the overuse of some tests, particularly CT and D-dimer measurement, may not improve care while potentially leading to patient harm and unnecessary expense. CTPA is the investigation of choice for PE. Its easy availability, accuracy, and ability to provide an alternative diagnosis have lowered the threshold for performing it, resulting in overuse. Guidelines recommend the use of clinical pretest probability tools such as the Wells score to assess the risk of suspected PE. Unfortunately, implementation of these guidelines in clinical practice is inconsistent, and low-risk patients are subjected to unnecessary imaging, radiation exposure, and possible contrast-related complications. Aim: To study the diagnostic yield of CTPA and the clinical pretest probability of patients according to the Wells score, and to determine whether CTPA is overused in our service. Methods: CT scans performed on patients with suspected PE in our hospital from 1 January 2014 to 31 December 2014 were retrospectively reviewed. Medical records were reviewed for demographics, clinical presentation, and final diagnosis, and to establish whether the Wells score and D-dimer were used correctly in predicting the probability of PE and the need for subsequent CTPA. Results: 100 patients (51 male) underwent CTPA in this period. Mean age was 57 years (24-91 years). The majority presented with shortness of breath (52%); other presenting symptoms included chest pain (34%), palpitations (6%), collapse (5%), and haemoptysis (5%). A D-dimer test was done in 69%. The overall Wells score was low (<2) in 28%, moderate (>2 to <6) in 47%, and high (>6) in 15% of patients, and was documented in the medical notes of only 20% of patients. 
PE was confirmed in 12% of patients (8 male); 4 had bilateral PEs. In the high-risk group (Wells > 6) (n=15), there were 5 diagnosed PEs; in the moderate-risk group (Wells >2 to <6) (n=47), there were 6; and in the low-risk group (Wells <2) (n=28), one case of PE was confirmed. CT scans negative for PE showed pleural effusion in 30 patients, consolidation in 20, atelectasis in 15, and a pulmonary nodule in 4; 31 scans were completely normal. Conclusion: The yield of CT for pulmonary embolism was low in our cohort at 12%. A significant number of patients who underwent CTPA had a low Wells score, suggesting that CTPA is overutilized in our institution. The Wells score was poorly documented in medical notes. CTPA was able to detect alternative pulmonary abnormalities explaining patients' clinical presentations. CTPA requires concomitant pretest clinical probability assessment to be an effective diagnostic tool for confirming or excluding PE. Clinicians should use validated clinical prediction rules to estimate pretest probability in patients in whom acute PE is being considered. Combining Wells scores with clinical and laboratory assessment may reduce the need for CTPA.
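The Wells pretest-probability rule used throughout the study is simple enough to sketch directly. The criteria and weights below follow the commonly published Wells rule for PE, with the three-tier cut-offs used in the abstract (low <2, moderate 2-6, high >6); the patient dictionary is invented for illustration.

```python
# Sketch of three-tier Wells pretest-probability scoring for PE.
# Weights follow the commonly published rule; the patient is hypothetical.
WELLS_CRITERIA = {
    "clinical_signs_of_dvt": 3.0,
    "pe_most_likely_diagnosis": 3.0,
    "heart_rate_over_100": 1.5,
    "immobilization_or_recent_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "hemoptysis": 1.0,
    "malignancy": 1.0,
}

def wells_score(findings):
    """Sum the weights of all criteria present in the findings dict."""
    return sum(w for k, w in WELLS_CRITERIA.items() if findings.get(k))

def risk_category(score):
    """Three-tier interpretation: <2 low, 2-6 moderate, >6 high."""
    if score < 2:
        return "low"
    if score <= 6:
        return "moderate"
    return "high"

patient = {"heart_rate_over_100": True, "hemoptysis": True}  # hypothetical
s = wells_score(patient)
print(s, risk_category(s))
```

Embedding such a calculator in order-entry workflows is one way to improve the score documentation rate the study found lacking.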

Keywords: CT PA, D dimer, pulmonary embolism, wells score

Procedia PDF Downloads 210
641 Multilevel Regression Model to Evaluate the Relationship Between Early Years’ Activities of Daily Living and Alzheimer’s Disease Onset, Accounting for the Influence of Key Sociodemographic Factors, Using Longitudinal Household Survey Data

Authors: Linyi Fan, C.J. Schumaker

Abstract:

Background: Biomedical efforts to treat Alzheimer’s disease (AD) have typically produced mixed to poor results, while lifestyle-focused treatments such as exercise may fare better than existing biomedical treatments. A few promising studies have indicated that activities of daily living (ADL) may be a useful way of predicting AD. However, existing cross-sectional studies fail to show how functional issues such as early-years ADL predict AD and how social factors influence health, either in addition to or in interaction with individual risk factors. This study would help improve screening and early treatment for the elderly population and healthcare practice; the findings have academic and practical significance in terms of creating positive social change. Methodology: The purpose of this quantitative historical, correlational study was to examine the relationship between early years’ ADL and the development of AD in later years. The study included 4,526 participants derived from the RAND HRS dataset. The Health and Retirement Study (HRS) is a longitudinal household survey data set available for research on retirement and health among the elderly in the United States. The sample was selected by completion of the survey questionnaire about AD and dementia. The variable indicating whether the participant had been diagnosed with AD was the dependent variable; the ADL indices and changes in ADL were the independent variables. A four-step multilevel regression model approach was utilized to address the research questions. Results: Among the 4,526 patients who completed the AD and dementia questionnaire, 144 (3.1%) were diagnosed with AD. Of the 4,526 participants, 3,465 (76.6%) had a high school or higher education degree, and 4,074 (90.0%) were above the poverty threshold. The model evaluated the effect of ADL and changes in ADL on the onset of AD in later years while allowing the intercept of the model to vary by level of education. 
The results suggested that the only significant predictor of the onset of AD was change in early years’ ADL (b = 20.253, z = 2.761, p < .05). However, the results of the sensitivity analysis (b = 7.562, z = 1.900, p = .058), which included more control variables and extended the observation period of ADL, do not support this finding. The model also estimated whether the variances of the random effects varied by Level-2 variables; the variances associated with random slopes were approximately zero, suggesting that the relationship between early years’ ADL and AD onset was not influenced by sociodemographic factors. Conclusion: The findings indicated that an increase in ADL change leads to an increase in the probability of AD onset in the future, although this finding was not supported in the broader observation-period model. The study also failed to reject the hypothesis that sociodemographic factors explained significant amounts of variance in the random effects. Recommendations were then made for future research and practice based on these limitations and the significance of the findings.
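The random-intercept structure described above, where the intercept varies by education level, can be sketched with a linear mixed model. This is a hedged illustration on synthetic data: the variable names, a continuous "risk" outcome in place of the binary AD diagnosis, and the effect sizes are all assumptions, and the study's actual four-step model may differ.

```python
# Sketch of a random-intercept model (intercept varying by education level)
# on synthetic data. Names, outcome, and effects are invented assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 600
df = pd.DataFrame({
    "adl_change": rng.normal(size=n),         # early-years change in ADL
    "education": rng.integers(0, 4, size=n),  # four education levels
})
level_effect = np.array([-0.2, -0.1, 0.1, 0.2])  # invented level intercepts
df["ad_risk"] = (
    0.5 * df["adl_change"]
    + level_effect[df["education"]]
    + rng.normal(scale=1.0, size=n)
)

# Random intercept grouped by education level; a random slope for adl_change
# could be added with re_formula="~adl_change" to test slope variance.
m = smf.mixedlm("ad_risk ~ adl_change", df, groups=df["education"]).fit()
print(m.params)
```

A near-zero estimated random-slope variance in such a model is what the abstract reports when it says the value-outcome relationship did not vary by sociodemographic group.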

Keywords: alzheimer’s disease, epidemiology, moderation, multilevel modeling

Procedia PDF Downloads 118
640 Kansei Engineering Applied to the Design of Rural Primary Education Classrooms: Design-Based Learning Case

Authors: Jimena Alarcon, Andrea Llorens, Gabriel Hernandez, Maritza Palma, Lucia Navarrete

Abstract:

The research is funded by the Government of Chile and focuses on defining a rural primary classroom design that stimulates creativity. The relevance of the study lies in its capacity to define educational spaces adequate for implementing the design-based learning (DBL) methodology. This methodology promotes creativity and teamwork, generating a meaningful learning experience for students based on an appreciation of their environment and the generation of projects that contribute positively to their communities; it is also an inquiry-based form of learning that integrates design thinking and the design process into the classroom. The main goal of the study is to define the design characteristics of rural primary school classrooms associated with the implementation of the DBL methodology: along with the change in learning strategies, the educational spaces in which they develop must also change. The hypothesis is that a change in classroom space and equipment based on students' emotions will support better learning results under the new methodology. In this case, the pedagogical dynamics require substantial interaction among the participants, as well as an environment favorable to creativity. Methodologies from Kansei engineering are used to identify the associated emotional variables. The study involves 50 students between 6 and 10 years old (average age seven years), 48% boys and 52% girls. Virtual three-dimensional scale models and semantic differential tables are used. To define the semantic differential, self-administered surveys were carried out. Each survey consists of eight separate questions in two groups: question A to identify desirable emotions and question B related to emotions experienced; both questions have a maximum of three answer alternatives. Data were tabulated with IBM SPSS Statistics version 19. 
Terms referring to emotions are grouped into the twenty concepts with the highest presence in the surveys. To select the values obtained from the semantic differential, the expected frequency from a chi-square (χ²) test calculated for the classroom space is taken as the lower limit: all terms above the expected-N cut-off point are included in the tables used to relate emotion and space. The chi-square contrast was statistically significant, indicating that the observed frequencies did not arise at random. The most representative terms depend on the variable under study: a) the definition of textures and the color of vertical surfaces is associated with emotions such as tranquility, attention, concentration, and creativity; and b) the distribution of room equipment is associated with happiness, distraction, creativity, and freedom. The main findings are linked to the generation of classrooms according to diverse DBL team dynamics. Kansei engineering is an appropriate methodology for identifying the emotions that students want to feel in the classroom space.
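The frequency cut-off step described above can be sketched with a chi-square goodness-of-fit test. This is an illustrative assumption of the procedure: the term names and counts are invented, and a uniform expected distribution is assumed, which may differ from the expectation the study actually used.

```python
# Sketch of the term-selection step: keep emotion terms whose observed
# frequency exceeds the expected frequency under a chi-square test.
# Term names and counts are invented for illustration.
from scipy.stats import chisquare

terms = ["tranquility", "attention", "creativity", "freedom", "routine"]
observed = [18, 15, 22, 12, 3]
expected_each = sum(observed) / len(observed)  # uniform expectation (assumption)

stat, p = chisquare(observed)  # uniform expected frequencies by default
kept = [t for t, o in zip(terms, observed) if o > expected_each]
print(f"chi2={stat:.2f}, p={p:.4f}, kept={kept}")
```

A significant statistic indicates the term frequencies are not random, and the terms above the expected-N cut-off are the ones carried into the emotion-space tables.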

Keywords: creativity, design-based learning, education spaces, emotions

Procedia PDF Downloads 133
639 Technology and the Need for Integration in Public Education

Authors: Eric Morettin

Abstract:

Cybersecurity and digital literacy are pressing issues among Canadian citizens, yet formal education does not provide today’s students with the knowledge and skills needed to adapt to these challenges in the physical and digital labor market. Canada’s current education systems do not highlight the importance of these fields, aside from using technology for learning management systems and alternative methods of assignment completion. Educators are not properly trained to integrate technology into the compulsory courses of public education so as to better prepare learners in these topics and for Canada’s digital economy. ICTC addresses these gaps in education and training through cross-Canadian educational programming in digital literacy and competency, cybersecurity, and coding, bridged with Canada’s provincially regulated K-12 curriculum guidelines. An analysis of Canada’s provincial education reveals gaps in technology-related learning, as well as inconsistent educational outcomes that do not adequately reflect the current Canadian and global economies. At present, only New Brunswick, Nova Scotia, Ontario, and British Columbia offer curriculum guidelines for cybersecurity, computer programming, and digital literacy; the remaining provinces do not address these skills in their curriculum guidelines, and certain courses in some provinces have not been updated since the 1990s. The three territories take curriculum strands from other provinces and use them as their foundation in education: Yukon uses British Columbia’s curriculum in full, while the Northwest Territories and Nunavut each use a hybrid of Alberta and Saskatchewan curricula as their foundation of learning. 
Provincially regulated education does not allow for consistency in educational outcomes across the country, especially when curriculum outcomes have not been updated to reflect present-day society. ICTC has therefore aligned Canada's provincially regulated curricula and created opportunities for focused education in technology to better serve Canada's present learners and teachers, while addressing inequalities and gaps in applicability within curriculum strands and outcomes across the country. As a result, lessons, units, and formal assessment strategies have been created to benefit students and teachers in this interdisciplinary, cross-curricular practice, while meeting compulsory education requirements and developing skills and literacy in cyber education. Teachers can access these lessons and units through ICTC's website, and can receive professional development on their assessment and implementation from ICTC's education coordinators, whose combined experience exceeds 50 years of teaching in public, private, international, and Indigenous schools. We encourage readers to take this opportunity, which will benefit students and educators and bridge the learning and curriculum gaps in Canadian education to better reflect the ever-changing public, social, and career landscape that all citizens are a part of. Students are the future, and we at ICTC strive to ensure their futures are bright and prosperous.

Keywords: cybersecurity, education, curriculum, teachers

Procedia PDF Downloads 61
638 [Keynote Talk]: New Generations and Employment: An Exploratory Study of Tensions between the Psycho-Social Characteristics of Generation Z and the Expectations and Actions of Organizational Structures Related to Employment (CABA, 2016)

Authors: Esteban Maioli

Abstract:

Generational studies have an important research tradition in the social and human sciences. On the one hand, the speed of social change in the context of globalization imposes the need to research the transformations identified both in the subjectivity of the agents involved and in their inclusion in the institutional matrix, specifically employment. Generation Z (generally considered the population group born after 1995) has unique psycho-social characteristics. On the other hand, managers often have to deal with generational differences in the workplace. Organizations have members who belong to different generations, and they have never before faced the challenge of having such a diverse group of members. The members of each historical generation are characterized by a different set of values, beliefs, attitudes, and ambitions that are manifest in their concrete action in organizational structures. Gen Z is the only generation that can fully be considered 'global,' since its members were born into the consolidated context of globalization. Some salient features of Generation Z can be summarized as follows. They are the first generation fully born into a digital world; social networks and technology are integrated into their lives. They are concerned about the challenges of the modern world (poverty, inequality, climate change, among others). They are self-expressive, more liberal, and open to change. They are easily bored, have short attention spans, and dislike routine tasks. They want to achieve a good work-life balance and are interested in a flexible work environment, as opposed to a traditional work schedule. They are critical thinkers who come up with innovative and creative ideas. The research design considered methodological triangulation.
Data were collected with two techniques: a self-administered survey with multiple-choice questions and attitudinal scales, applied to a non-probabilistic sample selected by reasoned decision. In line with the multi-method strategy, in-depth interviews were also conducted. Organizations constantly face new challenges, and one of the biggest is learning to manage a multi-generational workforce. While Gen Z has not yet been fully incorporated (it is expected to be within five years or so), many organizations have already begun to implement a series of changes in recruitment and development. The main obstacle to retaining young talent is the gap between the expectations of iGen applicants and what companies offer. Members of the iGen expect not only a good salary and job stability but also a clear career plan, and Generation Z needs immediate feedback on its tasks. However, many organizations have yet to improve both their motivation and monitoring practices. It is essential for companies to review organizational practices anchored in the culture of the organization.

Keywords: employment, expectations, generation Z, organizational culture, organizations, psycho-social characteristics

Procedia PDF Downloads 190
637 A Cross Cultural Study of Jewish and Arab Listeners: Perception of Harmonic Sequences

Authors: Roni Granot

Abstract:

Musical intervals are the building blocks of melody and harmony. Intervals differ in size, direction, and quality as consonants or dissonants. In Western music, perceptual dissonance is mostly associated with the sensation of beats or periodicity, whereas cognitive dissonance is associated with rules of harmony and voice leading. These two perceptions can be studied separately in musical cultures whose music is melodic with little or no harmonic structure. The Arab musical system includes a number of different quarter-tone intervals, creating various combinations of consonant and dissonant intervals. While traditional Arab music is purely melodic, today's Arab pop music includes harmonization of songs, often using typical Western harmonic sequences. The Arab population in Israel therefore presents an interesting case that enables us to examine the distinction between perceptual and cognitive dissonance. In the current study, we compared the responses of 34 Jewish Western listeners and 56 Arab listeners to two types of stimuli and their relationships: harmonic sequences and isolated harmonic intervals (dyads). Harmonic sequences were presented in synthesized piano tones, represented five levels of harmonic prototypicality (tonic ending; tonic ending with half-flattened third; deceptive cadence; half cadence; and dissonant unrelated ending), and were rated on 5-point scales of closure and surprise. Here we report only findings related to the harmonic sequences.
A one-way repeated measures ANOVA with one within-subjects factor with five levels (type of sequence) and one between-subjects factor (musical background) indicates a main effect of type of sequence for surprise ratings, F(4, 85) = 51, p < .001, and for closure ratings, F(4, 78) = 9.54, p < .001; no main effect of background on either surprise or closure ratings; and a type × background interaction that was marginally significant for surprise ratings, F(4, 352) = 6.05, p = .069, and significant for closure ratings, F(4, 324) = 3.89, p < .01. Planned comparisons show that the type of sequence × background interaction centers on the surprise and closure ratings of the regular versus the half-flattened-third tonic and of the deceptive versus the half cadence. The half-flattened-third tonic is rated as less surprising and as demanding less continuation than the regular tonic by the Arab listeners as compared to the Western listeners. In addition, the half cadence is rated as more surprising but as demanding less continuation than the deceptive cadence by the Arab listeners as compared to the Western listeners. Together, our results suggest that despite the vast exposure of Arab listeners to Western harmony, their sensitivity to harmonic rules seems to be partial, with a preference for Oriental sonorities such as the half-flattened third. In addition, the percept of directionality, which demands sensitivity to the level on which closure is obtained and which is strongly entrenched in Western harmony, may not be fully integrated into the Arab listeners' mental harmonic scheme. Results will be discussed in terms of broad differences between Western and Eastern aesthetic ideals.
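The F statistics reported above come from a repeated-measures design; the underlying between-groups logic can be sketched with an ordinary one-way ANOVA computed by hand. This is a deliberate simplification of the study's analysis, and the 1-5 surprise ratings below are hypothetical:

```python
# Hypothetical surprise ratings (1-5) for three of the five sequence types
groups = {
    "tonic_ending":     [1, 1, 2, 1, 2, 1],
    "half_cadence":     [3, 4, 3, 4, 3, 4],
    "dissonant_ending": [5, 4, 5, 5, 4, 5],
}

all_ratings = [r for g in groups.values() for r in g]
grand_mean = sum(all_ratings) / len(all_ratings)

# Between-groups and within-groups sums of squares
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                 for g in groups.values())
ss_within = sum((r - sum(g) / len(g)) ** 2
                for g in groups.values() for r in g)

df_between = len(groups) - 1
df_within = len(all_ratings) - len(groups)

# F is the ratio of the between-groups to within-groups mean squares
F = (ss_between / df_between) / (ss_within / df_within)
print(f"F({df_between}, {df_within}) = {F:.2f}")
```

An F value exceeding the critical value for the given degrees of freedom indicates a main effect of sequence type, mirroring the pattern the abstract reports.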

Keywords: harmony, cross cultural, Arab music, closure

Procedia PDF Downloads 263
636 Unifying RSV Evolutionary Dynamics and Epidemiology Through Phylodynamic Analyses

Authors: Lydia Tan, Philippe Lemey, Lieselot Houspie, Marco Viveen, Darren Martin, Frank Coenjaerts

Abstract:

Introduction: Human respiratory syncytial virus (hRSV) is the leading cause of severe respiratory tract infections in infants under the age of two. Genomic substitutions and the related evolutionary dynamics of hRSV strongly influence virus transmission behavior. The evolutionary patterns formed are due to a precarious interplay between the host immune response and RSV, which selects the most viable and least immunogenic strains. Studying genomic profiles can teach us which genes, and consequently which proteins, play an important role in RSV survival and transmission dynamics. Study design: In this study, genetic diversity and evolutionary rate analyses were conducted on 36 RSV subgroup B and 37 subgroup A whole-genome sequences. Clinical RSV isolates were obtained from nasopharyngeal aspirates and swabs of children between 2 weeks and 5 years of age. These strains were collected during epidemic seasons from 2001 to 2011 in the Netherlands and Belgium and sequenced by either conventional or 454 sequencing. Sequences were analyzed for genetic diversity, recombination events, synonymous/non-synonymous substitution ratios, and epistasis, and the translational consequences of mutations were mapped onto known 3D protein structures. We used Bayesian statistical inference to estimate the rate of RSV genome evolution and the rate of variability across the genome. Results: The A and B profiles were described in detail and compared to each other. Overall, the majority of the RSV genome is highly conserved among all strains. The attachment protein G was the most variable protein, and its gene had, like the non-coding regions in RSV, substitution rates elevated two-fold above those of other genes. In addition, the G gene has been identified as the major target of diversifying selection. Overall, less gene and protein variability was found within RSV-B than within RSV-A, and most protein variation between the subgroups was found in the F, G, SH, and M2-2 proteins.
For the F protein, mutations and correlated amino acid changes are largely located in the F2 ligand-binding domain. The small hydrophobic protein, phosphoprotein, and nucleoprotein are the most conserved proteins. The evolutionary rates were similar in both subgroups (A: 6.47E-04, B: 7.76E-04 substitutions/site/year), but estimates of the time to the most recent common ancestor were much lower for RSV-B (B: 19 yrs, A: 46.8 yrs), indicating that there is more turnover in this subgroup. Conclusion: This study provides a detailed description of whole-genome RSV mutations, their effect on translation products, and the first estimate of the tempo of RSV genome evolution. The immunogenic G protein seems to require high substitution rates in order to select less immunogenic strains, whereas the other, conserved proteins are most likely essential to preserving RSV viability. The resulting variability of the G gene makes its protein a less interesting target for RSV intervention methods. The more conserved RSV F protein, with less antigenic epitope shedding, is therefore more suitable for developing therapeutic strategies or vaccines.
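The two headline quantities above, an evolutionary rate in substitutions/site/year and a time to the most recent common ancestor (tMRCA), were estimated with Bayesian inference, but the intuition can be sketched with a simple root-to-tip regression: regress each sequence's genetic distance from the root on its sampling year; the slope approximates the rate and the x-intercept approximates the tMRCA. The distances below are hypothetical, chosen only to land in the same ballpark as the reported estimates:

```python
# Hypothetical root-to-tip distances (substitutions/site) vs sampling years
years = [2001, 2003, 2005, 2007, 2009, 2011]
distances = [0.0100, 0.0115, 0.0128, 0.0142, 0.0155, 0.0170]

n = len(years)
mean_x = sum(years) / n
mean_y = sum(distances) / n

# Ordinary least squares: slope = cov(x, y) / var(x)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, distances))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

rate = slope                # substitutions/site/year
tmrca = -intercept / slope  # year at which the regression line hits zero

print(f"rate ≈ {rate:.2e} subst/site/yr, tMRCA ≈ {tmrca:.0f}")
```

Real phylodynamic estimates (e.g., with BEAST) integrate over tree and clock uncertainty rather than fitting a single line, but the regression conveys why denser recent sampling and a steeper slope imply a younger common ancestor.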

Keywords: drug target selection, epidemiology, respiratory syncytial virus, RSV

Procedia PDF Downloads 394
635 Medication Reconciliation at the Start of Hospitalization in the Cardiovascular Department of the Imam Reza (AS) Hospital, Mashhad

Authors: Maryamsadat Habibi

Abstract:

Objective: Pharmaceutical errors are avoidable occurrences that can result in inappropriate medication use, patient harm, treatment failure, increased hospital costs and length of stay, and other outcomes that affect both the person receiving treatment and the healthcare provider. This study aimed to perform medication reconciliation in the cardiovascular ward of Imam Reza Hospital in Mashhad, Iran, and to evaluate the prevalence of discrepancies between the best possible medication list created for the patient by the pharmacist and the medication order of the treating physician. Materials & Methods: A cross-sectional study of 97 patients in the cardiovascular ward of Imam Reza Hospital in Mashhad was conducted from June to September 2021. After giving informed consent and being admitted to the ward, all patients with at least one underlying condition and at least two medications taken at home were included in the study. A medication reconciliation form was used to record patient demographics and medication histories during the first 24 hours of admission, and this information was compared with the physicians' orders. Medication inconsistencies between the two lists were then identified and double-checked to separate intentional from unintentional discrepancies. Finally, SPSS software version 22 was used to determine the prevalence of medication discrepancies and the relation of different types of discrepancy to various variables. Results: The average age of the participants was 57.69 ± 15.84 years; 57.7% were men and 42.3% women. Among these patients, 95.9% encountered at least one medication discrepancy, and 58.9% suffered at least one unintentional drug cessation.
Of the 659 medications registered in the study, 399 (60.54%) showed discrepancies: 161 cases (40.35%) involved the intentional stopping of a medication, 123 cases (30.82%) the unintentional stopping of a medication, and 115 cases (28.82%) the continued use of a medication with an adjusted dose. The categories of cardiovascular and gastrointestinal medications showed the highest numbers of discrepancies. Furthermore, there was no correlation between the frequency of discrepancies and age (P=0.13), ward (P=0.61), date of visit (P=0.72), or type (P=0.82) and number (P=0.44) of underlying diseases. On the other hand, there was a statistically significant correlation between the prevalence of discrepancies and both the number of medications taken at home (P=0.037) and gender (P=0.029). The results revealed that 96% of patients admitted to the cardiovascular unit at Imam Reza Hospital had at least one medication discrepancy, most commonly an intentional drug discontinuation. According to the study's findings, the medication reconciliation method offers great potential for identifying and correcting medication discrepancies and for avoiding prescription errors among patients admitted to the cardiovascular ward. It is therefore essential to carry out a precise assessment to achieve the best treatment outcomes and to avoid unintended medication discontinuation, unwanted drug-related events, and interactions between the patient's home medications and those prescribed in the hospital.
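The discrepancy percentages reported above follow directly from the stated counts; a minimal arithmetic check (counts are from the abstract, the breakdown names are our labels, and the second decimal of some shares differs from the abstract by rounding):

```python
# Counts stated in the abstract: 659 medications registered,
# 399 with discrepancies, split into three categories.
total_medications = 659
discrepancies = {
    "intentional_stop": 161,
    "unintentional_stop": 123,
    "dose_adjusted_continuation": 115,
}

total_discrepant = sum(discrepancies.values())            # 399
overall_pct = 100 * total_discrepant / total_medications  # share of all meds

# Share of each category within the discrepant medications
shares = {k: 100 * v / total_discrepant for k, v in discrepancies.items()}

print(f"{total_discrepant} discrepant ({overall_pct:.2f}% of all medications)")
for name, pct in shares.items():
    print(f"  {name}: {pct:.2f}%")
```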

Keywords: drug combination, drug side effects, drug incompatibility, cardiovascular department

Procedia PDF Downloads 62
634 Killing for the Great Peace: An Internal Perspective on the Anti-Manchu Theme in the Taiping Movement

Authors: Zihao He

Abstract:

The majority of existing studies on the Taiping Movement (1851-1864) view its anti-Manchu attitudes as a nationalist agenda: the Taiping aimed to revolt against the Manchu government and establish a new political regime. To explain these aggressive and violent attitudes towards the Manchu, these studies mainly point to socio-economic factors and stress the status of 'being deprived.' Even the 'demon-slaying' narrative the Taiping used to dehumanize the Manchu tends to be viewed as a 'religious tool' for achieving their political, nationalist aim. This paper argues that such studies of the Taiping's anti-Manchu attitudes and behaviors analyze from an external angle and have two major problems. First, they distinguish 'religion' from the 'nationalist' or 'political,' focusing on the 'political' nature of the movement; 'religion' and the religious experience within the Taiping are largely ignored. This paper argues that there was no separable, independent 'religion' in the Taiping Movement standing in opposition to secular, nationalist politics. Second, these analyses hold an external perspective on the Taiping's anti-Manchu agenda, in which demonizing and killing the Manchu are viewed as purely political actions. On the contrary, this paper focuses on the internal perspective of anti-Manchu narratives in the Taiping Movement. The method is mainly textual analysis, focusing on the official documents, edicts, and proclamations of the Taiping Movement. It views the writing of the Taiping as a coherent narrative and rhetoric that was attractive and convincing for its followers. In terms of the main findings, first, the internal and external perspectives on anti-Manchu violence differ. Externally, violence was viewed as a tool and a necessary process for achieving the political goal. Internally, however, in the Taiping's writing, violence was a result of Godlessness, which would be resolved once faith in God was restored in China.
Within a framework of universal love among human beings as sons and daughters of the Heavenly Father, in which killing was forbidden, the Taiping excluded the Manchus from the family of human beings and demonized them. 'Demon-slaying' was not violence; it was constructed as a necessary process for achieving the Great Peace. Moreover, the Taiping's anti-Manchu violence was not merely 'political.' Rather, the category 'religion' and its binary opposite, 'the secular,' are not suitable for the Taiping. A key point related to this argument is the revolutionary violence against the Manchu government, which inherited the traditional 'Heavenly Mandate' model. From an internal, theological perspective, anti-Manchuism was ordained and commanded by the Heavenly Father: the Manchu regime stood as a hindrance on the path toward God. Besides, the Manchus were seen not only as a regime but as 'demons.' The paper therefore examines how the Manchus were dehumanized in the Taiping's writings and placed outside the scope of nonviolence and love. The Manchu as a regime and the Manchu as demons stand in a dynamic relationship: as a regime, the Manchu government prevented the Chinese people from worshipping the Heavenly Father, so they were demonized; and as demons, killing them during the revolt was justified and not viewed as contradicting the universal love among human beings.

Keywords: anti-Manchu, demon-slaying, heavenly mandate, religion and violence, the Taiping movement

Procedia PDF Downloads 57
633 Monitoring of Disease Vector Mosquitoes in Areas Influenced by Energy Enterprises in the Amazon (Amapá State), Brazil

Authors: Ribeiro Tiago Magalhães

Abstract:

Objective: The objective of this study was to evaluate the influence of a hydroelectric power plant in the state of Amapá and to present the results obtained by dimensioning the diversity of the main mosquito vectors involved in the transmission of pathogens that cause diseases such as malaria, dengue, and leishmaniasis. Methodology: The study was conducted on the banks of the Araguari River, in the municipalities of Porto Grande and Ferreira Gomes, in the southern region of Amapá State. Nine monitoring campaigns were conducted, the first in April 2014 and the last in March 2016. Catch sites were selected so as to prioritize areas with possible occurrence of the species considered of greatest importance to public health and areas of contact between the wild environment and humans. The sampling effort aimed to identify the local vector fauna and to relate it to the transmission of diseases. Three collection phases were established, covering the periods of greatest hematophagous activity. Sampling was carried out using Shannon shack and CDC light traps and by means of specimen collection with the hold method. This procedure was carried out during the morning (between 08:00 and 11:00), the afternoon-twilight (between 15:30 and 18:30), and the night (between 18:30 and 22:00). In the specific capture methodology using the CDC equipment, the delimited times were from 18:00 until 06:00 the following day. Results: A total of 32 mosquito species was identified, and 2,962 specimens were taxonomically distributed across three families (Culicidae, Psychodidae, and Simuliidae), with genera including Psorophora, Sabethes, Simulium, Uranotaenia, and Wyeomyia, in addition to specimens of the family Psychodidae that, owing to morphological complexities, allow safe identification (without the method of diaphanization and mounting of slides for microscopy) only at the taxonomic level of subfamily (Phlebotominae).
Conclusion: The nine monitoring campaigns provided the basis for outlining the possible epidemiological structure in the areas of influence of the Cachoeira Caldeirão HPP, in order to indicate, among the points established for sampling, which would present the greatest possibilities of disease acquisition according to the group of mosquitoes identified. What should mainly be considered, however, are the future events arising from reservoir filling. This argument is based on the fact that the reproductive success of Culicidae is intrinsically related to the aquatic environment in which its larvae develop to adulthood. From the moment the water surface expands into new environments to form the reservoir, the development and hatching of the eggs deposited in the substrate can change, causing a sudden explosion in the abundance of some genera, especially Anopheles, which prefers denser forest environments close to the water.

Keywords: Amazon, hydroelectric power plants

Procedia PDF Downloads 173
632 Patterns of Libido, Sexual Activity and Sexual Performance in Female Migraineurs

Authors: John Farr Rothrock

Abstract:

Although migraine traditionally has been assumed to convey a relative decrease in libido, sexual activity, and sexual performance, recent data suggest that the female migraine population is far from homogeneous in this regard. We sought to determine the levels of libido, sexual activity, and sexual performance in the female migraine patient population, both generally and according to clinical phenotype. In this single-blind study, a consecutive series of sexually active new female patients aged 25-55, initially presenting to a university-based headache clinic with a >1-year history of migraine, were asked to complete anonymously a survey assessing their sexual histories generally and as related to their headache disorder, along with the 19-item Female Sexual Function Index (FSFI). To serve as two separate control groups, 100 sexually active females with no history of migraine and 100 female migraineurs from the general (non-clinic) population, matched for age, marital status, educational background, and socioeconomic status, completed a similar survey. Over a period of 3 months, 188 consecutive migraine patients were invited to participate. Twenty declined, and 28 of the remaining 160 potential subjects failed to meet the inclusion criterion for 'sexually active' (i.e., heterosexual intercourse more than once per month in each of the preceding 6 months). In all groups, younger age (p<.005), higher educational level (p<.05), and higher socioeconomic status (p<.025) correlated with a higher monthly frequency of intercourse and a higher likelihood of intercourse resulting in orgasm. Relative to the 100 control subjects with no history of migraine, the two migraine groups (total n=232) reported a lower monthly frequency of intercourse and recorded a lower FSFI score (both p<.025), but the contribution to this difference came primarily from the chronic migraine (CM) subgroup (n=92).
Patients with low-frequency episodic migraine (LFEM) and mid-frequency episodic migraine (MFEM) reported a higher FSFI score, a higher monthly frequency of intercourse, a higher likelihood of intercourse resulting in orgasm, and a higher likelihood of multiple active sex partners than controls. All migraine subgroups reported a decreased likelihood of engaging in intercourse during an active migraine attack, but relative to the CM subgroup (8/92 = 9%), a higher proportion of patients in the LFEM (12/49 = 24%), MFEM (14/67 = 21%), and high-frequency episodic migraine (HFEM: 6/14 = 43%) subgroups reported utilizing intercourse, and orgasm specifically, as a means of potentially terminating a migraine attack. Between the clinic and non-clinic groups there were no significant differences in the dependent variables assessed. Research subjects with LFEM and MFEM may report a level of libido, frequency of intercourse, and likelihood of orgasm-associated intercourse that exceeds what is reported by age-matched controls free of migraine. Many patients with LFEM, MFEM, and HFEM appear to utilize intercourse/orgasm as a means of potentially terminating an acute migraine attack.
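The contrast above between the CM subgroup (8/92) and the LFEM subgroup (12/49) in use of intercourse/orgasm to abort attacks can be checked with a standard two-proportion z-test. The counts are from the abstract; the test itself is our addition, not an analysis the authors report:

```python
import math

# Patients reporting use of intercourse/orgasm to terminate an attack
cm_yes, cm_n = 8, 92      # chronic migraine subgroup
lfem_yes, lfem_n = 12, 49  # low-frequency episodic migraine subgroup

p1, p2 = cm_yes / cm_n, lfem_yes / lfem_n
p_pool = (cm_yes + lfem_yes) / (cm_n + lfem_n)  # pooled proportion under H0

# Standard error of the difference under the null hypothesis
se = math.sqrt(p_pool * (1 - p_pool) * (1 / cm_n + 1 / lfem_n))
z = (p2 - p1) / se

# Two-sided p-value from the standard normal CDF (via the error function)
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.3f}")
```

With these counts the difference between the two subgroups clears the conventional .05 threshold, consistent with the contrast the abstract draws.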

Keywords: migraine, female, libido, sexual activity, phenotype

Procedia PDF Downloads 64
631 Impact of Displacement Durations and Monetary Costs on the Labour Market within a City Consisting of Four Areas: A Theoretical Approach

Authors: Aboulkacem El Mehdi

Abstract:

We develop a theoretical model at the crossroads of labour and urban economics to explain the mechanism through which the duration and monetary cost of home-workplace trips affect labour demand and supply in a spatially scattered labour market, and how these are in turn affected by a change in passenger transport infrastructures and services. The spatial disconnection between homes and job opportunities is referred to as the spatial mismatch hypothesis (SMH). Its harmful impact on employment has been the subject of numerous theoretical propositions. However, the theoretical models proposed so far are patterned on the American context, which is particular in that it is marked by racial discrimination against Black people in the housing and labour markets. It is therefore natural that most of these models are built to reproduce a steady state in which agents carry out their economic activities in a mono-centric city: most unskilled jobs are created in the suburbs, far from the Black population dwelling in the city centre, generating high unemployment rates among Black workers, while the White population resides in the suburbs and has a low unemployment rate. Our model relies on no racial discrimination and does not aim at reproducing a steady state in which these stylized facts are replicated; it takes the main principle of the SMH, the spatial disconnection between homes and workplaces, as its starting point. One of the innovative aspects of the model is that it deals with an SMH-related issue at an aggregate level: we link the parameters of the passenger transport system to employment across the whole area of a city. We consider a city consisting of four areas: two are residential areas with unemployed workers, and the other two host firms looking for labour.
Workers compare the indirect utility of working in each area with the utility of unemployment and choose between submitting an application for the job that generates the highest indirect utility or not applying at all. This arbitration takes account of the monetary and time expenditures generated by the trips between the residential areas and the working areas. Each of these expenditures is explicitly formulated so that its impact can be studied separately from that of the other. The first findings show that unemployed workers living in an area with good transport infrastructures and services have a better chance of preferring activity to unemployment and are more likely to supply a higher 'quantity' of labour than those living in an area where transport infrastructures and services are poorer. We also show that firms located in the most accessible area receive many more applications and are more likely to hire the workers who provide the highest quantity of labour than firms located in the less accessible area. We are currently working on the matching process between firms and job seekers and on how equilibrium between labour demand and supply occurs.
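The arbitration described above, a worker weighing each job's indirect utility net of commuting money and time costs against the utility of unemployment, can be sketched with a deliberately simple linear specification. The functional form and all numbers below are our hypothetical illustration, not the paper's model:

```python
# Hypothetical parameters for one residential area facing two working areas
VALUE_OF_TIME = 10.0         # money value a worker assigns to an hour of travel
UNEMPLOYMENT_UTILITY = 50.0  # reservation utility of staying unemployed

jobs = {
    "accessible_area": {"wage": 80.0, "fare": 4.0, "commute_hours": 0.5},
    "remote_area":     {"wage": 80.0, "fare": 9.0, "commute_hours": 2.5},
}

def indirect_utility(job):
    """Wage minus the monetary fare minus the money value of commute time."""
    return job["wage"] - job["fare"] - VALUE_OF_TIME * job["commute_hours"]

utilities = {area: indirect_utility(j) for area, j in jobs.items()}
best_area, best_u = max(utilities.items(), key=lambda kv: kv[1])

# The worker applies only if the best job beats unemployment
choice = best_area if best_u > UNEMPLOYMENT_UTILITY else "stay_unemployed"
print(utilities, "->", choice)
```

With identical wages, the poorly connected area's job falls below the reservation utility while the accessible one attracts the application, reproducing in miniature both findings reported above: better-served residents supply more labour, and firms in accessible areas receive more applications.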

Keywords: labour market, passenger transport infrastructure, spatial mismatch hypothesis, urban economics

Procedia PDF Downloads 270
630 The Inverse Problem in the Process of Heat and Moisture Transfer in Multilayer Walling

Authors: Bolatbek Rysbaiuly, Nazerke Rysbayeva, Aigerim Rysbayeva

Abstract:

Relevance: Energy saving has been elevated to public policy in almost all developed countries, and one avenue toward energy efficiency is improving and tightening design standards. State standards accordingly place high demands on the thermal protection of buildings. The constructive arrangement of layers should ensure normal operation, in which the moisture content of the construction materials does not exceed a certain level. Elevated moisture levels in walls can be considered a defective condition, as moisture significantly reduces the physical, mechanical, and thermal properties of materials. The absence, at the design stage, of modeling of the processes occurring in the construction and of prediction of the behavior of structures in real-world service leads to increased heat loss and premature aging of structures. Method: A widely used way to address this problem is mathematical modeling of heat and mass transfer in materials. The mathematical model of heat and mass transfer takes into account the interconnected equations of each layer [1]. In winter, the thermal conductivity and moisture permeability characteristics of the materials are nonlinear and depend on the temperature and moisture content of the material. In this case, experimental determination of these coefficients for a freezing or thawing material becomes much more difficult. Therefore, in this paper we propose an approximate method for calculating the thermal conductivity and moisture permeability characteristics of freezing or thawing material. Questions:
The developed methods for solving the inverse problem of mathematical modeling allow us to answer questions closely related to the rational design of enclosures: Where is the condensation zone in the body of the multilayer enclosure? How and where should insulation rationally be placed? What constructive measures are necessary to provide for the removal of moisture from the structure? What temperature and humidity conditions are required for normal operation of the premises' enclosing structure? What is the longevity of the structure in terms of the frost resistance of its component materials? Tasks: The proposed mathematical model makes it possible to: assess the thermophysical condition of the designed structures under different operating conditions and select appropriate material layers; calculate the temperature field in structurally complex multilayer structures; determine, from temperature and moisture measurements at characteristic points, the thermal characteristics of the materials constituting the surveyed construction; significantly reduce laboratory testing time, eliminating climatic-chamber and expensive instrumentation experiments; and simulate real-life situations that arise in multilayer enclosing structures in connection with the freezing, thawing, drying, and cooling of any layer of building material.
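As a toy illustration of the forward problem that the inverse method is built around, here is a steady-state conduction calculation through a two-layer wall using series thermal resistances. The materials and values are hypothetical, moisture coupling and nonlinearity are omitted, and in the inverse problem the interior temperatures computed here would instead be measured and the conductivities fitted to match them:

```python
# Steady-state conduction through a two-layer wall (per 1 m^2 of wall).
# Layer thicknesses L (m) and conductivities k (W/m.K) are hypothetical.
layers = [
    {"name": "brick",      "L": 0.25, "k": 0.60},
    {"name": "insulation", "L": 0.10, "k": 0.04},
]
T_inside, T_outside = 20.0, -15.0  # boundary temperatures, degrees C

# Series thermal resistance R = L/k of each layer
R_total = sum(layer["L"] / layer["k"] for layer in layers)
q = (T_inside - T_outside) / R_total  # heat flux through the wall, W/m^2

# Temperature after each layer, walking from the warm side outwards
T = T_inside
interface_temps = []
for layer in layers:
    T -= q * layer["L"] / layer["k"]  # temperature drop across this layer
    interface_temps.append((layer["name"], round(T, 2)))

print(f"q = {q:.1f} W/m^2, interfaces: {interface_temps}")
```

Comparing an interface temperature with the local dew point is the simplest way to locate the condensation zone the questions above ask about; the full model in the paper additionally couples moisture transfer and freezing/thawing.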

Keywords: energy saving, inverse problem, heat transfer, multilayer walling

Procedia PDF Downloads 379
629 Theoretical Study on the Visible-Light-Induced Radical Coupling Reactions Mediated by Charge Transfer Complex

Authors: Lishuang Ma

Abstract:

Charge-transfer (CT) complexes, also known as electron donor-acceptor (EDA) complexes, have received increasing attention in the synthetic chemistry community, because they can absorb visible light through intermolecular charge-transfer excited states, enabling various catalyst-free photochemical transformations under mild visible-light conditions. However, a number of fundamental questions remain ambiguous, such as the origin of the visible-light absorption, the photochemical and photophysical properties of the CT complex, and the detailed mechanism of the radical coupling pathways mediated by the CT complex. These are critical factors for the target-specific design and synthesis of new types of CT complexes. To this end, our group performed theoretical investigations to answer these questions based on multiconfigurational perturbation theory. The photo-induced fluoroalkylation reactions mediated by CT complexes, formed by the association of a perfluoroalkyl halide acceptor RF−X (X = Br, I) with a suitable donor such as the β-naphtholate anion, were chosen as a paradigm example in this work. First, spectra were simulated with both the CASPT2//CASSCF/PCM and TD-DFT/PCM methods. The computational results showed that the broad absorption of the CT complexes in the visible range (360-550 nm) originates from the 1(σπ*) excitation, accompanied by intermolecular electron transfer, which was also found to be closely related to the aggregation state of the donor and acceptor. Moreover, charge-translocation analysis showed that a CT complex with larger charge transfer in the ground state exhibits smaller charge transfer in the 1(σπ*) excited state, causing a relative blue shift. Then, the excited-state potential energy surface (PES) was calculated at the CASPT2//CASSCF(12,10)/PCM level of theory to explore the photophysical properties of the CT complexes.
The photo-induced C-X (X = I, Br) bond cleavage was found to occur on the triplet state, which is accessible through a fast intersystem crossing (ISC) process controlled by the strong spin-orbit coupling arising from the heavy iodine and bromine atoms. Importantly, this rapid fragmentation can compete with and suppress the backward electron transfer (BET) event, facilitating the subsequent photochemical transformations. Finally, the radical coupling pathways were also inspected, showing that radical chain propagation proceeds readily over a small energy barrier of no more than 3.0 kcal/mol, which is the key factor promoting the efficiency of the photochemical reactions induced by CT complexes. In conclusion, theoretical investigations were performed to explore the photophysical and photochemical properties of the CT complexes, as well as the mechanism of the radical coupling reactions they mediate. The computational results and findings in this work provide critical insights into the mechanism-based design of new types of EDA complexes.
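As a rough plausibility check on why a barrier of about 3.0 kcal/mol sustains efficient chain propagation, the Eyring equation converts that activation free energy into a room-temperature rate constant. A transmission coefficient of 1 and a simple first-order treatment are assumptions of this back-of-the-envelope sketch, not part of the CASPT2 study itself.

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J s
R = 8.314462618      # gas constant, J/(mol K)

def eyring_rate(dG_kcal, T=298.15):
    """First-order rate constant (1/s) from an activation free energy in kcal/mol."""
    dG = dG_kcal * 4184.0  # kcal/mol -> J/mol
    return (KB * T / H) * math.exp(-dG / (R * T))

k = eyring_rate(3.0)
print(f"k ~ {k:.2e} s^-1")  # on the order of 1e10 s^-1 at 298 K
```

A rate constant of this magnitude corresponds to a sub-nanosecond step, consistent with the abstract's conclusion that propagation is fast enough to outcompete back electron transfer.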

Keywords: charge transfer complex, electron transfer, multiconfigurational perturbation theory, radical coupling

Procedia PDF Downloads 123