Search results for: communication technologies
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6898

388 Applying Push Notifications with Behavioral Change Strategies in Fitness Applications: A Survey of User's Perception Based on Consumer Engagement

Authors: Yali Liu, Maria Avello Iturriagagoitia

Abstract:

Background: Fitness applications (apps) are among the most popular mobile health (mHealth) apps. These apps can help prevent or control health issues such as obesity, which has become one of the most serious public health challenges in the developed world in recent decades. Compared with traditional interventions such as face-to-face treatment, fitness apps are a cheaper and more convenient way to intervene in physical activity and healthy behavior. Nevertheless, fitness apps tend to have high abandonment rates and low levels of user engagement, so sustaining users' long-term usage is challenging. Previous research describes a variety of strategies (goal-setting, self-monitoring, coaching, etc.) for promoting fitness and health behavior change. These strategies can influence users' perseverance and self-monitoring of the program as well as favor their adherence to routines that involve long-term behavioral change. However, commercial fitness apps rarely incorporate these strategies into their design, which leads to a lack of engagement with the apps. Most of today's mobile services and brands engage their users proactively via push notifications: visual or auditory alerts that inform mobile users about a wide range of topics and constitute an effective and personal means of communication between the app and the user. One purpose of this article is therefore to examine the application of behavior change strategies through push notifications. Purpose: This study aims to better understand the influence that effective use of push notifications, combined with behavioral change strategies, has on users' engagement with fitness apps. The secondary objectives are 1) to discuss sociodemographic differences in the utilization of push notifications in fitness apps and 2) to determine the impact of each strategy on customer engagement. Methods: The study uses a model combining Consumer Engagement Theory and UTAUT2 to conduct an online survey among current users of fitness apps. The questionnaire assessed attitudes to each behavioral change strategy and sociodemographic variables. Findings: Results show the positive effect of push notifications on the generation of consumer engagement and the different impacts of each strategy on customer engagement across population groups. Conclusions: Fitness apps with behavior change strategies have a positive impact on increasing users' usage time and customer engagement. Theoretical experts can participate in designing fitness applications alongside technical designers.

Keywords: behavioral change, customer engagement, fitness app, push notification, UTAUT2

Procedia PDF Downloads 105
387 Association of Nuclear – Mitochondrial Epistasis with BMI in Type 1 Diabetes Mellitus Patients

Authors: Agnieszka H. Ludwig-Slomczynska, Michal T. Seweryn, Przemyslaw Kapusta, Ewelina Pitera, Katarzyna Cyganek, Urszula Mantaj, Lucja Dobrucka, Ewa Wender-Ozegowska, Maciej T. Malecki, Pawel Wolkow

Abstract:

Obesity results from an imbalance between energy intake and expenditure. Genome-wide association study (GWAS) analyses have led to the discovery of only about 100 variants influencing body mass index (BMI), which explain only a small portion of its genetic variability. Analysis of gene epistasis offers a chance to explain another part. Since interaction and communication between the nuclear and mitochondrial genomes have been shown to be indispensable for normal cell function, we looked for epistatic interactions between the two genomes and their correlation with BMI. Methods: The analysis was performed on 366 T1DM patients genotyped with the Illumina Infinium OmniExpressExome-8 chip, followed by imputation on the Michigan Imputation Server. Only genes that influence mitochondrial functioning (listed in Human MitoCarta 2.0) were included in the analysis: variants of nuclear origin (MAF > 5%) in 1140 genes and 42 mitochondrial variants (MAF > 1%). Gene expression analysis was performed on GTEx data. Association analysis between genetic variants and BMI was performed with linear mixed models as implemented in the R package 'GENESIS'. Analysis of the association between mRNA expression and BMI was performed with linear models and standard significance tests in R. Results: Among the variants involved in nuclear-mitochondrial epistasis, we identified one in the mitochondrial transcription factor gene TFB2M (rs6701836). It interacted with mitochondrial variants localized to MT-RNR1 (p=0.0004, MAF=15%), MT-ND2 (p=0.07, MAF=5%) and MT-ND4 (p=0.01, MAF=1.1%). Analysis of the interaction between the nuclear variant rs6701836 (nuc) and rs3021088, localized to the mitochondrial gene MT-ND2 (mito), showed that the combination of the two led to a BMI decrease (p=0.024), whereas neither variant on its own correlated with higher BMI [p(nuc)=0.856, p(mito)=0.116]. Although rs6701836 is intronic, it influences gene expression in the thyroid (p=0.000037). rs3021088 is a missense variant that leads to an alanine-to-threonine substitution in MT-ND2, which belongs to complex I of the electron transport chain. The analysis of the influence of genetic variants on gene expression confirmed the trend described above: the interaction of the two genes leads to a BMI decrease (p=0.0308), while each of the mRNAs on its own is associated with higher BMI (p(mito)=0.0244 and p(nuc)=0.0269). Conclusions: Our results show that nuclear-mitochondrial epistasis can influence BMI in T1DM patients. The correlation between transcription factor expression and mitochondrial genetic variants will be the subject of further analysis.
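
The core association test described above (a nuclear variant, a mitochondrial variant, and their interaction against BMI) can be illustrated with a simple linear model. The sketch below is a hypothetical Python analogue of the interaction analysis; the study itself used linear mixed models from the R package 'GENESIS', and the variable names and synthetic data here are assumptions for illustration only.

```python
# Hypothetical sketch of a nuclear x mitochondrial interaction ("epistasis") test on BMI.
# The actual study used linear mixed models (R package 'GENESIS'); this is a plain
# linear-model analogue with synthetic data for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 366  # cohort size reported in the abstract
df = pd.DataFrame({
    "nuc":  rng.binomial(2, 0.15, n),   # additive coding of a nuclear variant (e.g. rs6701836)
    "mito": rng.binomial(1, 0.05, n),   # carrier status of a mitochondrial variant (e.g. rs3021088)
    "sex":  rng.binomial(1, 0.5, n),
    "age":  rng.normal(40, 10, n),
})
# Synthetic BMI with a purely interaction-driven effect, mimicking the reported pattern
df["bmi"] = 25 - 1.5 * df.nuc * df.mito + rng.normal(0, 3, n)

# Main effects plus interaction term; the interaction p-value is the epistasis signal
model = smf.ols("bmi ~ nuc * mito + sex + age", data=df).fit()
print(model.summary().tables[1])
```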

Keywords: body mass index, epistasis, mitochondria, type 1 diabetes

Procedia PDF Downloads 151
386 Digitization and Economic Growth in Africa: The Role of Financial Sector Development

Authors: Abdul Ganiyu Iddrisu, Bei Chen

Abstract:

Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. The significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa, because extant studies that explicitly evaluate the digitization-economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment: obstacles to access to financing, for instance physical distance, minimum balance requirements, and low income flows, among others, can be circumvented. Savings have increased, micro-savers have opened bank accounts, and banks are now able to price short-term loans. This has the potential to develop the financial sector. However, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies maintain that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro country-level data from African countries and using fixed effects, random effects and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa, focusing on the role of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa. From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions; this nexus is rarely examined empirically in the literature. Second, we examine the effect of financial sector development, proxied by domestic credit to the private sector and stock market capitalization as a percentage of GDP, on economic growth. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found. First, digitalization propels financial sector development in Africa. Second, financial sector development enhances economic growth. Finally, contrary to our expectation, the results also indicate that digitalization conditioned on financial sector development tends to reduce economic growth in Africa; however, the net effects suggest that digitalization, overall, improves economic growth in Africa. We therefore conclude that digitalization in Africa not only develops the financial sector but also unconditionally contributes to the growth of the continent's economies.
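
The fixed-effects leg of the estimation strategy described above can be sketched with a simple within (entity-demeaning) estimator. The snippet below is a minimal Python illustration on made-up country-year data; the variable names (ICT imports, private credit) and the data are assumptions, and the paper's actual estimates also rely on random-effects and Hausman-Taylor estimators.

```python
# Minimal sketch of a fixed-effects (within) estimator for a growth regression,
# using synthetic country-year panel data; variables and data are illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
countries, years = 40, 15
df = pd.DataFrame({
    "country": np.repeat(np.arange(countries), years),
    "ict_imports": rng.normal(0, 1, countries * years),     # digitization proxy
    "private_credit": rng.normal(0, 1, countries * years),  # financial development proxy
})
df["gdp_growth"] = (0.5 * df.ict_imports + 0.8 * df.private_credit
                    + np.repeat(rng.normal(0, 1, countries), years)  # country fixed effects
                    + rng.normal(0, 1, len(df)))

# Within transformation: demean every variable by country to sweep out fixed effects
demeaned = df.groupby("country").transform(lambda x: x - x.mean())
X = demeaned[["ict_imports", "private_credit"]].to_numpy()
y = demeaned["gdp_growth"].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["ict_imports", "private_credit"], beta.round(3))))
```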

Keywords: digitalization, financial sector development, Africa, economic growth

Procedia PDF Downloads 109
385 MAFB Expression in LPS-Induced Exosomes: Revealing the Connection to Sepsis-Triggered Hepatic Injury

Authors: Gizaw Mamo Gebeyehu, Marianna Pap, Geza Makkai, Tibor Z. Janosi, Shima Rashidian, Tibor A. Rauch

Abstract:

Sepsis poses a significant global health threat, necessitating extensive exploration of indicators tied to its pathological mechanisms and multi-organ dysfunction. While murine studies have shed light on sepsis, the intricate cellular and molecular landscape in human sepsis remains enigmatic. Exploring the influence of activated monocyte-derived exosomes in sepsis opens a promising pathway for understanding the cellular and molecular mechanisms involved in this condition in humans. In sepsis, exosome-borne mRNA and miRNA orchestrate immune response gene expression in recipient cells. Yet the specifics of exosome-mediated cell-to-cell communication, especially how mRNA cargoes modulate gene expression in recipient cells, remain poorly understood. This study aims to elucidate the precise molecular pathways through which exosomal mRNA cargo, particularly MAFB, contributes to the development of sepsis-induced molecular aberrations in liver tissue, employing rigorously defined cell culture conditions. THP-1 cells were treated with LPS to induce changes in exosomal RNA profiles. Exosomes were isolated and characterized using microscopy and mass spectrometry. RNA was extracted from the exosomes and sequenced. The most abundant exosomal mRNAs were subjected to GO analysis for functional annotation and to KEGG database analysis to identify enriched pathways. PCR (polymerase chain reaction), RNA sequencing, and Western blotting were used to analyze changes in gene expression, protein levels, and signaling pathways within liver cells (HepG2) after exposure to exosomal MAFB. This study pinpoints exosomal MAFB as a potential key regulator linked to liver cell damage during sepsis; together with associated genes (miR155HG, H3F3A, and possibly JARD2), it forms a molecular pathway that plays a significant role in the emergence of liver cell injury during sepsis. These findings suggest the importance of further research on these components for potential therapeutic interventions in managing acute liver damage in sepsis.

Keywords: sepsis, exosome, exosomal MAFB, LPS-induced THP-1 cells, RNA profiles, sepsis-triggered liver injury

Procedia PDF Downloads 36
384 Working Conditions and Occupational Health: Analyzing the Stressing Factors in Outsourced Employees

Authors: Cledinaldo A. Dias, Isabela C. Santos, Marcus V. S. Siqueira

Abstract:

In contemporary globalization, the competitiveness generated by the search for new markets, aimed at increasing productivity and, consequently, profits, implies the redefinition of productive processes and new forms of work organization. As a result of this restructuring, unemployment, labor force turnover and increases in outsourcing and informal work occur. Considering the particular relationships and working conditions of outsourced employees, this study aims to identify the most prevalent stressors among outsourced service providers at a federal institution of higher education in Brazil. To reach this objective, a descriptive, exploratory study with a quantitative approach was carried out. A qualitative perspective was also adopted to provide an in-depth analysis of the occupational conditions of outsourced workers, since this method focuses on the social realm as a world of investigated meanings and on the language or speech of each subject as its object. The survey was conducted in the city of Montes Claros, Minas Gerais (Brazil) and involved eighty workers from companies hired by the institution, including armed security guards, porters, cleaners, drivers, gardeners, and administrative assistants. The choice of professionals followed non-probabilistic criteria of convenience or accessibility. Data collection was performed by means of a structured questionnaire composed of sixty questions in a Likert-type frequency scale format, in order to identify potential organizational stressors. The results show that the stress factors pointed out by the workers are, in most cases, determining factors in low productive performance at work. Among the factors associated with stress, those that stood out most were related to organizational communication failures, incentives to compete, lack of expectations of professional growth, insecurity and job instability. Based on the results, there is a need for greater concern and organizational responsibility for the well-being and mental health of outsourced workers, recognition of their physical and psychological limitations, and care that goes beyond their functional capacity for work. Specifically for the preservation of mental health, physical health and quality of life, it is concluded that professionals need to be embedded in an external environment that supports them internally, since these dimensions complement each other so that individuals remain in balance and obtain satisfaction in their work.

Keywords: occupational health, outsourced, organizational studies, stressors

Procedia PDF Downloads 76
383 ‘Doctor Knows Best’: Reconsidering Paternalism in the NICU

Authors: Rebecca Greenberg, Nipa Chauhan, Rashad Rehman

Abstract:

Paternalism, in its traditional form, seems largely incompatible with Western medicine. In contrast, family-centred care, a partial response to historically authoritative paternalism, carries its own challenges, particularly when operationalized as family-directed care. Specifically, in neonatology, decision-making is left entirely to substitute decision-makers (most commonly parents). Most models of shared decision-making employ both the parents' and the medical team's perspectives but do not recognize the inherent asymmetry of information and experience: they ask parents to act like physicians in evaluating technical data and encourage physicians to refrain from strong medical opinions and proposals. They also do not fully appreciate the difficulties in adjudicating which perspective to prioritize and, moreover, how to mitigate disagreement. Introducing a mild form of paternalism can harness the unique skill sets both parents and clinicians bring to shared decision-making and ultimately work towards decisions in the best interest of the child. The notion expressed here is that within the model of shared decision-making, mild paternalism is prioritized inasmuch as optimal care is prioritized. This mild form of paternalism is known as Beneficent Paternalism and justifies our encouragement for physicians to root down in their own medical expertise and propose treatment plans informed by that expertise, standards of care, and the parents' values. This does not mean that we forget that paternalism was historically justified on 'beneficent' grounds; however, our recommendation is that a re-integration of mild paternalism is appropriate within the current Western healthcare climate. Through illustrative examples from the NICU, this paper explores the appropriateness and merits of Beneficent Paternalism and, ultimately, its use in promoting family-centred care and patients' best interests and in reducing moral distress. A distinctive feature of the NICU is that communication regarding a patient's treatment is conducted exclusively with substitute decision-makers and not with the patient, i.e., the neonate themselves. This leaves the burden of responsibility entirely on substitute decision-makers and the clinical team; the patient in the NICU does not have any prior wishes, values, or beliefs that can guide decision-making on their behalf. Therefore, the wishes, values, and beliefs of the parents become the map upon which clinical proposals are made, giving extra weight to the family's decision-making responsibility. This helps explain why family-directed care is common in the NICU, where shared decision-making is mandatory. However, the zone of parental discretion is not as all-encompassing as it is currently considered; there are appropriate times when the clinical team should root down firmly in medical expertise and perhaps take the lead in guiding family decision-making: this is just what it means to adopt Beneficent Paternalism.

Keywords: care, ethics, expertise, NICU, paternalism

Procedia PDF Downloads 114
382 Obesity and Cancer: Current Scientific Evidence and Policy Implications

Authors: Martin Wiseman, Rachel Thompson, Panagiota Mitrou, Kate Allen

Abstract:

Since 1997 World Cancer Research Fund (WCRF) International and the American Institute for Cancer Research (AICR) have been at the forefront of synthesising and interpreting the accumulated scientific literature on the link between diet, nutrition, physical activity and cancer, and deriving evidence-based Cancer Prevention Recommendations. The 2007 WCRF/AICR 2nd Expert Report was a landmark in the analysis of evidence linking diet, body weight and physical activity to cancer and led to the establishment of the Continuous Update Project (CUP). In 2018, as part of the CUP, WCRF/AICR will publish a new synthesis of the current evidence and update the Cancer Prevention Recommendations. This will ensure that everyone - from policymakers and health professionals to members of the public - has access to the most up-to-date information on how to reduce the risk of developing cancer. Overweight and obesity play a significant role in cancer risk, and rates of both are increasing in many parts of the world. This session will give an overview of new evidence relating obesity to cancer since the 2007 report. For example, since the 2007 Report, the number of cancers for which obesity is judged to be a contributory cause has increased from seven to eleven. The session will also shed light on the well-established mechanisms underpinning obesity and cancer links. Additionally, the session will provide an overview of diet and physical activity related factors that promote positive energy imbalance, leading to overweight and obesity. Finally, the session will highlight how policy can be used to address overweight and obesity at a population level, using WCRF International’s NOURISHING Framework. NOURISHING formalises a comprehensive package of policies to promote healthy diets and reduce obesity and non-communicable diseases; it is a tool for policymakers to identify where action is needed and assess if an approach is sufficiently comprehensive. The framework brings together ten policy areas across three domains: food environment, food system, and behaviour change communication. The framework is accompanied by a regularly updated database providing an extensive overview of implemented government policy actions from around the world. In conclusion, the session will provide an overview of obesity and cancer, highlighting the links seen in the epidemiology and exploring the mechanisms underpinning these, as well as the influences that help determine overweight and obesity. Finally, the session will illustrate policy approaches that can be taken to reduce overweight and obesity worldwide.

Keywords: overweight, obesity, nutrition, cancer, mechanisms, policy

Procedia PDF Downloads 130
381 Thermal Characterisation of Multi-Coated Lightweight Brake Rotors for Passenger Cars

Authors: Ankit Khurana

Abstract:

Sufficient heat storage capacity, or the ability to dissipate heat, is the most decisive parameter for the effective and efficient functioning of friction-based brake disc systems. The primary aim of the research was to analyse the effect of multiple coatings on the surface of lightweight disc rotors, which not only reduces vehicle mass but also augments heat transfer. This research is intended to give the automotive community a clearer view of the thermal aspects of a braking system. The results of the project indicate that with modern coating technologies a brake system's thermal limitations can be removed and, together with forced convection, heat transfer can improve drastically, leading to an increased lifetime of the brake rotor. Other advantages of modifying the surface of a lightweight rotor substrate are a reduced overall vehicle weight, a decreased risk of thermal brake failure (brake fade and fluid vaporization), longer component life, and lower noise and vibration. A mathematical model was constructed in MATLAB encompassing the thermal characteristics of the proposed coatings and substrate materials, required to approximate the heat flux values in free and forced convection environments resembling a real-time braking event; this could then be modelled as a full-scale alloy brake rotor part in ABAQUS. The finite element model of the brake rotor was built in a constrained environment such that the nodal temperatures between the contact surfaces of the coatings and the substrate (wrought aluminum alloy) resemble an amalgamated solid brake rotor element. The initial results were obtained for a plasma electrolytic oxidized (PEO) substrate, in which the aluminum alloy has a hard ceramic oxide layer grown on its transitional phase. The rotor was modelled and then evaluated in real time for a constant-g braking event (based on the mathematical heat flux input and convective surroundings), which showed the need to deposit a conducting (sacrificial) coat above the PEO layer in order to prevent premature thermal degradation of the barrier coating. A Taguchi study was then used to identify the critical factors that may influence the maximum operating temperature of a multi-coated brake disc by simulating brake tests: a) an Alpine descent lasting 50 seconds; b) an Autobahn stop lasting 3.53 seconds; c) six repeated high-speed stops in accordance with FMVSS 135, lasting 46.25 seconds. Thermal barrier coating thickness and vane heat transfer coefficient were the two most influential factors, and, given their design and manufacturing constraints, a final optimized model was obtained that survived the six-high-speed-stop test per the FMVSS 135 specifications. The simulation data highlighted the merits of preferring wrought aluminum alloy 7068 over grey cast iron and aluminum metal matrix composite under multiple coating depositions.
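
The kind of heat-flux and convection balance described above can be illustrated with a lumped-capacitance model of a single constant-deceleration stop. The Python sketch below is a simplified stand-in for the MATLAB/ABAQUS models in the study; all material properties, vehicle parameters and the convection coefficient are assumed placeholder values.

```python
# Lumped-capacitance sketch of one brake rotor during a constant-deceleration stop.
# A simplified stand-in for the MATLAB/ABAQUS models described above; all parameter
# values are assumptions for illustration only.
import numpy as np

m_veh, v0, g_dec = 1500.0, 33.3, 0.8 * 9.81    # vehicle mass [kg], initial speed [m/s], decel [m/s2]
share = 0.7 / 2                                 # fraction of braking energy into one front rotor
m_rot, cp = 6.0, 900.0                          # rotor mass [kg], specific heat of Al alloy [J/kgK]
h, A, T_amb = 120.0, 0.12, 25.0                 # convection coeff [W/m2K], wetted area [m2], ambient [C]

t_stop = v0 / g_dec
dt, T = 1e-3, T_amb
for t in np.arange(0.0, t_stop, dt):
    v = v0 - g_dec * t
    q_in = share * m_veh * g_dec * v            # instantaneous friction power into the rotor [W]
    q_out = h * A * (T - T_amb)                 # convective loss [W]
    T += dt * (q_in - q_out) / (m_rot * cp)     # explicit Euler update of rotor temperature
print(f"Peak rotor temperature after a {t_stop:.1f} s stop: {T:.0f} degC")
```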

Keywords: lightweight brakes, surface modification, simulated braking, PEO, aluminum

Procedia PDF Downloads 386
380 Method of Complex Estimation of Text Perusal and Indicators of Reading Quality in Different Types of Commercials

Authors: Victor N. Anisimov, Lyubov A. Boyko, Yazgul R. Almukhametova, Natalia V. Galkina, Alexander V. Latanov

Abstract:

Modern commercials presented on billboards, TV and the Internet contain a lot of information about the product or service in text form. However, this information cannot always be perceived and understood by consumers. Typical sociological focus-group studies often cannot reveal important features of how the information read in text messages is interpreted and understood. In addition, there is no reliable method to determine the degree of understanding of the information contained in a text: the mere fact of viewing a text does not mean that the consumer has perceived and understood its meaning. At the same time, tools based on marketing analysis allow only an indirect estimate of the process of reading and understanding a text. Therefore, the aim of this work is to develop a valid method of recording objective indicators in real time for assessing the fact of reading and the degree of text comprehension. Psychophysiological parameters recorded during text reading can form the basis for this objective method. We studied the relationship between multimodal psychophysiological parameters and the process of text comprehension during reading using correlation analysis. We used eye-tracking technology to record eye movement parameters to estimate visual attention, electroencephalography (EEG) to assess cognitive load, and polygraphic indicators (skin-galvanic reaction, SGR) that reflect the emotional state of the respondent during text reading. We revealed reliable interrelations between perceiving the information and the dynamics of psychophysiological parameters during reading of the text in commercials. Eye movement parameters reflected the difficulties respondents had in perceiving ambiguous parts of the text. EEG dynamics in the alpha band were related to the cumulative effect of cognitive load. SGR dynamics were related to the emotional state of the respondent and to the meaning of the text and the type of commercial. EEG and polygraph parameters together also reflected the mental difficulties of respondents in understanding the text and showed significant differences between cases of low and high text comprehension. We also revealed differences in psychophysiological parameters for different types of commercials (static vs. video; financial vs. cinema vs. pharmaceutics vs. mobile communication, etc.). Conclusions: Our methodology allows a multimodal evaluation of text perusal and of the quality of text reading in commercials. In general, our results indicate the possibility of designing an integral model to estimate the comprehension of commercial text on a percentage scale based on all of the markers noted.
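
The correlation step at the heart of the method can be sketched as follows: per-respondent features from each modality are correlated with a comprehension score. The feature names and data below are hypothetical; the study's actual pipeline works on eye-tracking, EEG and SGR recordings.

```python
# Hypothetical sketch of the correlation analysis: per-respondent psychophysiological
# features vs. a text-comprehension score (all names and data are illustrative only).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 60
comprehension = rng.normal(0, 1, n)                                   # e.g. quiz score after reading
features = {
    "fixation_duration": 0.5 * comprehension + rng.normal(0, 1, n),   # eye-tracking feature
    "alpha_power":      -0.4 * comprehension + rng.normal(0, 1, n),   # EEG alpha-band feature
    "sgr_amplitude":     0.1 * comprehension + rng.normal(0, 1, n),   # skin-galvanic reaction feature
}
for name, values in features.items():
    r, p = pearsonr(values, comprehension)
    print(f"{name:>18}: r = {r:+.2f}, p = {p:.3f}")
```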

Keywords: reading, commercials, eye movements, EEG, polygraphic indicators

Procedia PDF Downloads 142
379 The Strategic Importance of Technology in the International Production: Beyond the Global Value Chains Approach

Authors: Marcelo Pereira Introini

Abstract:

The global value chains (GVC) approach contributes to a better understanding of the organization of international production amid globalization's second unbundling from the 1970s onward. Mainly because of the tools it offers for understanding the importance of critical competences, technological capabilities, and the functions performed by each player, GVC research has flourished in recent years, rooted in discussions of the possibilities of integration and repositioning along regional and global value chains. In this context, part of the literature endorsed the more optimistic view that engaging in fragmented production networks could represent learning opportunities for developing countries' firms, since the relationship with transnational corporations could allow them to build skills and competences. Increasing recognition that GVCs are based on asymmetric power relations, however, provided a different view of the benefits, costs, and development possibilities. Since leading companies tend to restrict the replication of their technologies and capabilities by their suppliers, alternative strategies beyond functional specialization, seen as a way to integrate value chains, began to be widely highlighted. This paper organizes a coherent narrative about the shortcomings of the GVC analytical framework while recognizing its multidimensional contributions and recent developments. We adopt two different and complementary perspectives to explore the idea of integration in international production. On one hand, we emphasize obstacles beyond production components, analyzing the role played by intangible assets and intellectual property regimes. On the other hand, we consider the importance of domestic production and innovation systems for technological development. In order to provide a deeper understanding of the restrictions on technological learning of developing countries' firms, we first build on the notion of intellectual monopoly to analyze how flagship companies can prevent subordinated firms from improving their positions in fragmented production networks. Based on intellectual property protection regimes, we discuss the increasing asymmetries between these players and the decreasing access of some of them to strategic intangible assets. Second, we discuss the role of productive-technological ecosystems and of interactive and systemic technological development processes, as concepts of the innovation systems approach. Supporting the idea that endogenous advantages are not only important for the international competitiveness of developing countries' firms but that the building of these advantages can itself be a source of technological learning, we focus on local efforts as a crucial element that cannot be replaced by technology imported from abroad. Finally, the paper contributes to the discussion of technological development as a two-dimensional dynamic. If GVC analysis tends to emphasize a company-based perspective, stressing the learning opportunities associated with GVC integration, the historical involvement of national states brings up the debate about technology as a central aspect of interstate disputes. In this sense, technology is seen as part of military modernization before also being used in civil contexts, which presupposes its role in national security and productive autonomy strategies. From this outlook, it is important to consider technology as an asset that, incorporated in sophisticated machinery, can be the target of state policies beyond the protection provided by intellectual property regimes, such as export controls and inward-investment restrictions.

Keywords: global value chains, innovation systems, intellectual monopoly, technological development

Procedia PDF Downloads 52
378 High Purity Lignin for Asphalt Applications: Using the Dawn Technology™ Wood Fractionation Process

Authors: Ed de Jong

Abstract:

Avantium is a leading technology development company and a frontrunner in renewable chemistry. Avantium develops disruptive technologies that enable the production of sustainable high-value products from renewable materials and actively seeks out collaborations and partnerships with like-minded companies and academic institutions globally to speed up the introduction of chemical innovations into the marketplace. In addition, Avantium helps companies accelerate their catalysis R&D to improve efficiencies and deliver increased sustainability, growth, and profits by providing proprietary systems and services. Many chemical building blocks and materials can be produced from biomass, nowadays mainly from first-generation carbohydrates, but the potential for competition with the human food chain leads brand owners to look for strategies to transition from first- to second-generation feedstock. The use of non-edible lignocellulosic feedstock is an equally attractive route to chemical intermediates and an important part of the solution to these global issues (the Paris targets). Avantium's Dawn Technology™ separates the glucose, mixed sugars, and lignin available in non-food agricultural and forestry residues such as wood chips, wheat straw, bagasse, empty fruit bunches or corn stover. The resulting very pure lignin is dense in energy and can be used for energy generation; however, such a material is preferably deployed in higher added-value applications. Bitumen, which is fossil-based, is mostly used for paving applications. Traditional hot-mix asphalt emits large quantities of the greenhouse gases CO₂, CH₄, and N₂O, which is unfavorable for obvious environmental reasons. Another challenge for the bitumen industry is that the petrochemical industry is becoming more and more efficient at breaking down higher-chain hydrocarbons into lower-chain hydrocarbons with higher added value than bitumen, which reduces the availability of bitumen. The asphalt market, as well as governments, is looking for more sustainable alternatives in terms of GHG emissions. The use of alternative sustainable binders, which can (partly) replace bitumen, contributes to reducing GHG emissions and at the same time broadens the availability of binders. As lignin is a major component (around 25-30%) of lignocellulosic material, which includes terrestrial plants (e.g., trees, bushes, and grass) and agricultural residues (e.g., empty fruit bunches, corn stover, sugarcane bagasse, straw, etc.), it is highly available globally. Its chemical structure resembles that of bitumen, and it could therefore be used as an alternative to bitumen in applications such as roofing or asphalt. Applications such as the use of lignin in asphalt need both fundamental research and practical proof under relevant use conditions. From a fundamental point of view, rheological aspects, as well as mixing, are key criteria; from a practical point of view, behavior under real road conditions is key (how easily the asphalt can be prepared, how easily it can be applied on the road, what its durability is, etc.). The paper will discuss the fundamentals of using lignin as a bitumen replacement as well as the status of the different demonstration projects in Europe using lignin as a partial bitumen replacement in asphalt, and will in particular present the results of using Dawn Technology™ lignin as a partial replacement of bitumen.

Keywords: biorefinery, wood fractionation, lignin, asphalt, bitumen, sustainability

Procedia PDF Downloads 129
377 Numerical Modeling of Phase Change Materials Walls under Reunion Island's Tropical Weather

Authors: Lionel Trovalet, Lisa Liu, Dimitri Bigot, Nadia Hammami, Jean-Pierre Habas, Bruno Malet-Damour

Abstract:

The MCP-iBAT¹ project studies the behavior of phase change materials (PCM) integrated in building envelopes in a tropical environment. Through the phase transitions (melting and freezing) of the material, thermal energy can be absorbed or released. This process enables the regulation of indoor temperatures and the improvement of thermal comfort for the occupants. Most of the commercially available PCMs are more suitable for temperate climates than for tropical climates. The case of Reunion Island is noteworthy, as there are multiple micro-climates. This leads to our key question: developing one or several bio-based PCMs that cover the thermal needs of the different locations on the island. The present paper focuses on the numerical approach to selecting the PCM properties relevant to tropical areas. Numerical simulations have been carried out with two software tools: EnergyPlus™ and Isolab. The latter has been developed in the laboratory, with the implicit finite-difference method, in order to evaluate different physical models. Both are thermal dynamic simulation (TDS) tools that predict the building's thermal behavior with one-dimensional heat transfer. The parameters used in this study are the construction's characteristics (dimensions and materials) and the description of the environment (meteorological data and building surroundings). The building is modeled in accordance with the experimental setup. It is divided into two rooms, cells A and B, with the same dimensions. Cell A is the reference, while in cell B a layer of commercial PCM (Thermo Confort, MCI Technologies) has been applied to the inner surface of the north wall. Sensors are installed in each room to record temperatures, heat flows, and humidity rates. The collected data are used for comparison with the numerical results. Our strategy is to set up two similar buildings at different altitudes (Saint-Pierre: 70 m and Le Tampon: 520 m) to measure different temperature ranges; we are therefore able to collect data for various seasons within a condensed time period. The following methodology is used to validate the numerical models: calibration of the thermal and PCM models in EnergyPlus™ and Isolab based on the experimental measurements, followed by numerical testing with a sensitivity analysis of the parameters to reach the targeted indoor temperatures. The calibration relies on the past ten months of measurements (from September 2020 to June 2021), with a focus on a one-week study in November (beginning of summer), when the effect of PCM on inner surface temperatures is more visible. A first simulation with the PCM model of EnergyPlus gave results approaching the measurements with a mean error of 5%. The property studied in this paper is the melting temperature of the PCM. By determining the representative temperatures of winter, summer and the inter-seasons from past annual weather data, it is possible to build a numerical model of multi-layered PCM; the combined properties of the materials will then provide an optimal scenario for the application of PCM in tropical areas. Future work will focus on the development of bio-based PCMs with the selected properties, followed by experimental and numerical validation of the materials. ¹Matériaux à Changement de Phase, une innovation pour le Bâti Tropical (Phase Change Materials, an innovation for tropical buildings)
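
The Isolab model mentioned above solves one-dimensional conduction with an implicit finite-difference scheme; a minimal sketch of that approach, with the PCM represented by an effective (apparent) heat-capacity peak around its melting temperature, is given below. All material properties, layer thickness and boundary temperatures are assumed values, not those of the experimental walls.

```python
# Minimal sketch of a 1D implicit finite-difference conduction model of a PCM layer,
# using the effective (apparent) heat-capacity method; all properties are assumed values.
import numpy as np

L, N = 0.02, 21                      # layer thickness [m], number of nodes
dx = L / (N - 1)
dt, t_end = 60.0, 6 * 3600.0         # time step [s], simulated duration [s]
k, rho = 0.2, 850.0                  # conductivity [W/mK], density [kg/m3]
cp_s, latent, T_m, dT_m = 2000.0, 180e3, 26.0, 1.0   # sensible cp, latent heat, melt temp, half-width

def cp_eff(T):
    # Apparent heat capacity: sensible part plus a Gaussian latent-heat peak around T_m
    return cp_s + latent / (dT_m * np.sqrt(np.pi)) * np.exp(-((T - T_m) / dT_m) ** 2)

T = np.full(N, 22.0)                 # initial temperature field [C]
T_out, T_in = 32.0, 24.0             # fixed boundary temperatures [C] (Dirichlet, for simplicity)
for _ in range(int(t_end / dt)):
    r = k * dt / (rho * cp_eff(T) * dx**2)          # nodal Fourier numbers (cp evaluated at old T)
    A = np.zeros((N, N)); b = T.copy()
    A[0, 0] = A[-1, -1] = 1.0; b[0], b[-1] = T_out, T_in
    for i in range(1, N - 1):
        A[i, i - 1] = A[i, i + 1] = -r[i]
        A[i, i] = 1.0 + 2.0 * r[i]
    T = np.linalg.solve(A, b)                       # backward-Euler (implicit) update
print(f"Mid-layer temperature after {t_end/3600:.0f} h: {T[N//2]:.2f} degC")
```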

Keywords: energyplus, multi-layer of PCM, phase changing materials, tropical area

Procedia PDF Downloads 69
376 Gene Expression Meta-Analysis of Potential Shared and Unique Pathways Between Autoimmune Diseases Under anti-TNFα Therapy

Authors: Charalabos Antonatos, Mariza Panoutsopoulou, Georgios K. Georgakilas, Evangelos Evangelou, Yiannis Vasilopoulos

Abstract:

The extensive tissue damage and severe clinical outcomes of autoimmune diseases, together with their high annual cost to the overall healthcare system, highlight the need for efficient therapy. Increasing knowledge of the pathophysiology of specific chronic inflammatory diseases, namely psoriasis (PsO), the inflammatory bowel diseases (IBD), consisting of Crohn's disease (CD) and ulcerative colitis (UC), and rheumatoid arthritis (RA), has provided insights into the underlying mechanisms that maintain the inflammation, such as tumor necrosis factor alpha (TNF-α). Hence, anti-TNFα biological agents pose an ideal therapeutic approach. Despite the efficacy of anti-TNFα agents, several clinical trials have shown that 20-40% of patients do not respond to treatment. Nowadays, high-throughput technologies have been recruited to elucidate the complex interactions in multifactorial phenotypes, the most ubiquitous referring to transcriptome quantification analyses. In this context, a random-effects meta-analysis of available gene expression cDNA microarray datasets was performed between responders and non-responders to anti-TNFα therapy in patients with IBD, PsO, and RA. Publicly available datasets were systematically searched from inception to 10 November 2020 and selected for further analysis if they assessed the response to anti-TNFα therapy with clinical score indexes from inflamed biopsies. Specifically, 4 IBD (79 responders/72 non-responders), 3 PsO (40 responders/11 non-responders) and 2 RA (16 responders/6 non-responders) datasets were selected. After the separate pre-processing of each dataset, 4 separate meta-analyses were conducted: three disease-specific and a single combined meta-analysis on the disease-specific results. The MetaVolcano R package (v.1.8.0) was utilized for a random-effects meta-analysis through the restricted maximum likelihood (REML) method. The top 1% of the most consistently perturbed genes in the included datasets was highlighted through the TopConfects approach while maintaining a 5% false discovery rate (FDR). Genes were considered differentially expressed (DEGs) if P ≤ 0.05, |log2(FC)| ≥ log2(1.25), and they were perturbed in at least 75% of the included datasets. Over-representation analysis was performed using Gene Ontology and Reactome pathways for both up- and down-regulated genes in all 4 meta-analyses. Protein-protein interaction networks were also incorporated in the subsequent analyses with STRING v11.5 and Cytoscape v3.9. The disease-specific meta-analyses detected multiple distinct pro-inflammatory and immune-related down-regulated genes for each disease, such as NFKBIA, IL36, and IRAK1, respectively. Pathway analyses revealed unique and shared pathways between the diseases, such as Neutrophil Degranulation and Signaling by Interleukins. The combined meta-analysis unveiled 436 DEGs, 86 of which were up- and 350 down-regulated, confirming the aforementioned shared pathways and genes, as well as uncovering genes that participate in anti-inflammatory pathways, namely IL-10 signaling. The identification of key biological pathways and regulatory elements is imperative for the accurate prediction of the patient's response to biological drugs. Meta-analysis of such gene expression data could aid the challenging effort to unravel the complex interactions implicated in the response to anti-TNFα therapy in patients with PsO, IBD, and RA, as well as to distinguish gene clusters and pathways that are altered across this heterogeneous phenotype.
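
The per-gene random-effects combination of study-level fold-changes can be illustrated with the classic DerSimonian-Laird estimator; the study itself used MetaVolcano with REML, and the effect sizes and standard errors in the Python sketch below are made up for illustration.

```python
# Sketch of a per-gene random-effects meta-analysis of log2 fold-changes across studies,
# using the DerSimonian-Laird estimator (the study used MetaVolcano/REML in R).
# Effect sizes and standard errors below are illustrative, not real data.
import numpy as np
from scipy.stats import norm

log2fc = np.array([-0.45, -0.60, -0.30, -0.52])    # one gene, four datasets
se     = np.array([ 0.15,  0.20,  0.18,  0.25])

w_fixed = 1.0 / se**2
mu_fixed = np.sum(w_fixed * log2fc) / np.sum(w_fixed)
Q = np.sum(w_fixed * (log2fc - mu_fixed) ** 2)                 # Cochran's Q
df = len(log2fc) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / c)                                  # between-study variance

w_rand = 1.0 / (se**2 + tau2)                                  # random-effects weights
mu = np.sum(w_rand * log2fc) / np.sum(w_rand)
se_mu = np.sqrt(1.0 / np.sum(w_rand))
p = 2 * norm.sf(abs(mu / se_mu))
print(f"summary log2FC = {mu:.2f} +/- {1.96*se_mu:.2f}, p = {p:.2e}, tau2 = {tau2:.3f}")
```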

Keywords: anti-TNFα, autoimmune, meta-analysis, microarrays

Procedia PDF Downloads 148
375 Numerical Simulation of Waves Interaction with a Free Floating Body by MPS Method

Authors: Guoyu Wang, Meilian Zhang, Chunhui LI, Bing Ren

Abstract:

In recent decades, a variety of floating structures have played a crucial role in ocean and marine engineering, such as ships, offshore platforms, floating breakwaters, fish farms and floating airports. Floating structures commonly suffer loadings under waves, and the responses of structures mounted in marine environments are strongly related to wave impacts. The interaction between surface waves and floating structures is therefore one of the important issues in ship and marine structure design for increasing performance and efficiency. With the progress of computational fluid dynamics, a number of numerical models based on the NS equations in the time domain, such as the finite difference method or the finite volume method, have been developed to explore this problem. These traditional numerical simulation techniques for moving bodies are grid-based and may encounter difficulties when treating large free-surface deformations and moving boundaries. In such models, the moving structures, in a Lagrangian formulation, need to be appropriately described on grids, and special treatment of the moving boundary is inevitable. Moreover, in mesh-based models, the movement of the grid near the structure, or the communication between the moving Lagrangian structure and the Eulerian meshes, increases the complexity of the algorithm. Fortunately, these challenges can be avoided by meshless particle methods. In the present study, a moving particle semi-implicit (MPS) model is explored for the numerical simulation of fluid-structure interaction with surface flows, especially the coupling of fluid and a moving rigid body. An equivalent momentum transfer method is proposed and derived for this coupling. The structure is discretized into a group of solid particles, which are treated as fluid particles in the solution of the NS equations together with the surrounding fluid particles. Momentum conservation is ensured by the transfer from those fluid particles to the corresponding solid particles. Then, the positions of the solid particles are updated to keep the initial shape of the structure. Using the proposed method, the motions of a free-floating body in regular waves are studied numerically. The wave surface elevation and the dynamic response of the floating body are presented. Good agreement is found when the numerical results, such as the sway, heave, and roll of the floating body, are compared with experimental and other numerical data. It is demonstrated that the presented MPS model is effective for the numerical simulation of fluid-structure interaction.
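
The momentum-transfer step described above can be sketched as a rigid-body projection: after the fluid solve gives provisional velocities to the solid particles, their total linear and angular momentum define the body's translational and angular velocity, and the particle velocities are reset accordingly so the shape stays rigid. The 2D Python sketch below illustrates this idea on assumed data; it is not the authors' implementation.

```python
# 2D sketch of a rigid-body momentum-transfer step for solid particles in an MPS-type
# coupling: provisional (fluid-solved) velocities are projected onto a rigid-body motion.
# Positions, velocities and masses are assumed illustrative data.
import numpy as np

rng = np.random.default_rng(3)
r = rng.uniform(0.0, 0.1, size=(50, 2))     # solid-particle positions [m]
v = rng.normal(0.0, 0.05, size=(50, 2))     # provisional velocities from the fluid solve [m/s]
m = np.full(50, 0.02)                       # particle masses [kg]

M = m.sum()
rc = (m[:, None] * r).sum(axis=0) / M                       # centre of mass
vc = (m[:, None] * v).sum(axis=0) / M                       # translational velocity (P / M)
dr = r - rc
Lz = np.sum(m * (dr[:, 0] * v[:, 1] - dr[:, 1] * v[:, 0]))  # angular momentum about rc (z-component)
Iz = np.sum(m * (dr**2).sum(axis=1))                        # moment of inertia about rc
omega = Lz / Iz                                             # rigid-body angular velocity

# Reset particle velocities to the rigid-body field v_i = vc + omega x (r_i - rc)
v_rigid = vc + omega * np.column_stack((-dr[:, 1], dr[:, 0]))
dt = 1e-3
r_new = r + dt * v_rigid                                    # advance positions with the rigid-body field
print("body velocity:", vc.round(4), " omega:", round(omega, 4))
```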

Keywords: floating body, fluid structure interaction, MPS, particle method, waves

Procedia PDF Downloads 47
374 Efficiency of Maritime Simulator Training in Oil Spill Response Competence Development

Authors: Antti Lanki, Justiina Halonen, Juuso Punnonen, Emmi Rantavuo

Abstract:

Marine oil spill response operation requires extensive vessel maneuvering and navigation skills. At-sea oil containment and recovery include both single vessel and multi-vessel operations. Towing long oil containment booms that are several hundreds of meters in length, is a challenge in itself. Boom deployment and towing in multi-vessel configurations is an added challenge that requires precise coordination and control of the vessels. Efficient communication, as a prerequisite for shared situational awareness, is needed in order to execute the response task effectively. To gain and maintain adequate maritime skills, practical training is needed. Field exercises are the most effective way of learning, but especially the related vessel operations are resource-intensive and costly. Field exercises may also be affected by environmental limitations such as high sea-state or other adverse weather conditions. In Finland, the seasonal ice-coverage also limits the training period to summer seasons only. In addition, environmental sensitiveness of the sea area restricts the use of real oil or other target substances. This paper examines, whether maritime simulator training can offer a complementary method to overcome the training challenges related to field exercises. The objective is to assess the efficiency and the learning impact of simulator training, and the specific skills that can be trained most effectively in simulators. This paper provides an overview of learning results from two oil spill response pilot courses, in which maritime navigational bridge simulators were used to train the oil spill response authorities. The simulators were equipped with an oil spill functionality module. The courses were targeted at coastal Fire and Rescue Services responsible for near shore oil spill response in Finland. The competence levels of the participants were surveyed before and after the course in order to measure potential shifts in competencies due to the simulator training. In addition to the quantitative analysis, the efficiency of the simulator training is evaluated qualitatively through feedback from the participants. The results indicate that simulator training is a valid and effective method for developing marine oil spill response competencies that complement traditional field exercises. Simulator training provides a safe environment for assessing various oil containment and recovery tactics. One of the main benefits of the simulator training was found to be the immediate feedback the spill modelling software provides on the oil spill behaviour as a reaction to response measures.

Keywords: maritime training, oil spill response, simulation, vessel manoeuvring

Procedia PDF Downloads 146
373 Optimizing Solids Control and Cuttings Dewatering for Water-Powered Percussive Drilling in Mineral Exploration

Authors: S. J. Addinell, A. F. Grabsch, P. D. Fawell, B. Evans

Abstract:

The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled tubing based greenfields mineral exploration drilling system utilising down-hole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. This system has shown superior rates of penetration in water-rich, hard rock formations at depths exceeding 500 metres. With fluid flow rates of up to 120 litres per minute at 200 bar operating pressure to energise the bottom hole tooling, excessive quantities of high quality drilling fluid (water) would be required for a prolonged drilling campaign. As a result, drilling fluid recovery and recycling has been identified as a necessary option to minimise costs and logistical effort. While the majority of the cuttings report as coarse particles, a significant fines fraction will typically also be present. To maximise tool life longevity, the percussive bottom hole assembly requires high quality fluid with minimal solids loading and any recycled fluid needs to have a solids cut point below 40 microns and a concentration less than 400 ppm before it can be used to reenergise the system. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process shows a strong power law relationship for particle size distributions. This data is critical in optimising solids control strategies and cuttings dewatering techniques. Optimisation of deployable solids control equipment is discussed and how the required centrate clarity was achieved in the presence of pyrite-rich metasediment cuttings. Key results were the successful pre-aggregation of fines through the selection and use of high molecular weight anionic polyacrylamide flocculants and the techniques developed for optimal dosing prior to scroll decanter centrifugation, thus keeping sub 40 micron solids loading within prescribed limits. Experiments on maximising fines capture in the presence of thixotropic drilling fluid additives (e.g. Xanthan gum and other biopolymers) are also discussed. As no core is produced during the drilling process, it is intended that the particle laden returned drilling fluid is used for top-of-hole geochemical and mineralogical assessment. A discussion is therefore presented on the biasing and latency of cuttings representivity by dewatering techniques, as well as the resulting detrimental effects on depth fidelity and accuracy. Data pertaining to the sample biasing with respect to geochemical signatures due to particle size distributions is presented and shows that, depending on the solids control and dewatering techniques used, it can have unwanted influence on top-of-hole analysis. Strategies are proposed to overcome these effects, improving sample quality. Successful solids control and cuttings dewatering for water-powered percussive drilling is presented, contributing towards the successful advancement of coiled tubing based greenfields mineral exploration.
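
The power-law particle-size relationship mentioned above can be characterised by a straight-line fit in log-log space; the sketch below shows this on synthetic sieve data (sizes and passing fractions are assumed, not measured values).

```python
# Sketch of fitting a power-law exponent to a cuttings particle-size distribution
# (Gaudin-Schuhmann style: percent passing ~ (d/d_max)^n); data below are synthetic.
import numpy as np

size_um = np.array([38, 75, 150, 300, 600, 1180, 2360])        # sieve sizes [micron]
passing = np.array([4.0, 8.5, 17.0, 33.0, 58.0, 82.0, 100.0])  # cumulative percent passing

slope, intercept = np.polyfit(np.log10(size_um), np.log10(passing), 1)
print(f"power-law (distribution) exponent n = {slope:.2f}")
print(f"predicted percent passing 40 micron: {10**(intercept + slope*np.log10(40)):.1f}")
```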

Keywords: cuttings, dewatering, flocculation, percussive drilling, solids control

Procedia PDF Downloads 221
372 Contribution at Dimensioning of the Energy Dissipation Basin

Authors: M. Aouimeur

Abstract:

The environmental risks of a dam, and particularly safety in the valley downstream of it, constitute a very complex problem. Integrated management and risk-sharing are becoming more and more indispensable. The concept of 'vulnerability' can help in assessing the efficiency of protective measures and in characterizing each valley with respect to flood risk. Safety can be enhanced through integrated land management, and the social sciences can be associated with operational civil protection systems, in particular warning networks. The passage of an extreme flood at the site of a dam can cause the rupture of the structure and important damage downstream. The river bed can be damaged by erosion if it is not well protected, and scouring and flooding problems may be encountered in the area downstream of the dam. Therefore, the protection of the dam is crucial: it must have an energy dissipator in a specific place. The dissipation basin plays a very important role in the safety of the dam and in protecting the environment against floods downstream of the dam. It dissipates the potential energy created by the dam as the extreme flood passes over the weir, regulates in a natural and safer manner the discharge or elevation of the water surface at the crest of the weir, and reduces the flow velocity downstream of the dam to a value identical to that of the river bed. The problem in dimensioning a classic dissipation basin lies in determining the parameters necessary for sizing this structure. This communication presents a simple, fast and complete graphical method, and a methodology that determines the main features of the hydraulic jump, the parameters necessary for sizing the classic dissipation basin. The graphical method takes into account the constraints imposed by the reality of the terrain and of practice, such as those related to the topography of the site, the preservation of environmental equilibrium, and technical and economic considerations. The methodology is to impose, as a hypothesis (free design), the head loss ΔH dissipated by the hydraulic jump in order to determine all the other parameters of the classic dissipation basin: ΔH can be set equal to a selected value or to a certain percentage of the upstream total head created by the dam. With the dimensionless parameter ΔH⁺ = ΔH/k, where k is the critical depth, the graphical representation developed here allows the other parameters to be found; multiplying these parameters by k gives the main characteristics of the hydraulic jump, the parameters necessary for dimensioning the classic dissipation basin. This solution is often preferred for sizing the dissipation basins of small concrete dams. Verification of the results and their comparison with practical data confirm the validity and reliability of the graphical method developed.
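
The quantities the graphical method delivers (conjugate depths and the dissipated head) obey the standard hydraulic-jump relations for a rectangular channel; the sketch below evaluates those textbook formulas for assumed inflow conditions, as a numerical counterpart to the chart-based procedure rather than a reproduction of it.

```python
# Standard rectangular-channel hydraulic-jump relations (Belanger equation and head loss),
# evaluated for assumed inflow conditions; a numerical counterpart to the graphical method.
import math

q = 8.0        # unit discharge [m^2/s] (assumed)
y1 = 0.60      # supercritical depth entering the basin [m] (assumed)
g = 9.81

v1 = q / y1
Fr1 = v1 / math.sqrt(g * y1)                           # inflow Froude number
y2 = 0.5 * y1 * (math.sqrt(1.0 + 8.0 * Fr1**2) - 1.0)  # Belanger: conjugate (sequent) depth
dH = (y2 - y1) ** 3 / (4.0 * y1 * y2)                  # head dissipated by the jump
k = (q**2 / g) ** (1.0 / 3.0)                          # critical depth
print(f"Fr1 = {Fr1:.2f}, y2 = {y2:.2f} m, dH = {dH:.2f} m, dH+ = dH/k = {dH/k:.2f}")
```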

Keywords: dimensioning, energy dissipation basin, hydraulic jump, protection of the environment

Procedia PDF Downloads 562
371 A Profile of the Patients at the Hearing and Speech Clinic at the University of Jordan: A Retrospective Study

Authors: Maisa Haj-Tas, Jehad Alaraifi

Abstract:

The significance of the study: This retrospective study examined the speech and language profiles of patients who received clinical services at the University of Jordan Hearing and Speech Clinic (UJ-HSC) from 2009 to 2014. The UJ-HSC clinic is located in the capital Amman and was established in the late 1990s. It is the first hearing and speech clinic in Jordan and one of first speech and hearing clinics in the Middle East. This clinic provides services to an annual average of 2000 patients who are diagnosed with different communication disorders. Examining the speech and language profiles of patients in this clinic could provide an insight about the most common disorders seen in patients who attend similar clinics in Jordan. It could also provide information about community awareness of the role of speech therapists in the management of speech and language disorders. Methodology: The researchers examined the clinical records of 1140 patients (797 males and 343 females) who received clinical services at the UJ-HSC between the years 2009 and 2014 for the purpose of data analysis for this study. The main variables examined in the study were disorder type and gender. Participants were divided into four age groups: children, adolescents, adults, and older adults. The examined disorders were classified as either speech disorders, language disorders, or dysphagia (i.e., swallowing problems). The disorders were further classified as childhood language impairments, articulation disorders, stuttering, cluttering, voice disorders, aphasia, and dysphagia. Results: The results indicated that the prevalence for language disorders was the highest (50.7%) followed by speech disorders (48.3%), and dysphagia (0.9%). The majority of patients who were seen at the JU-HSC were diagnosed with childhood language impairments (47.3%) followed consecutively by articulation disorders (21.1%), stuttering (16.3%), voice disorders (12.1%), aphasia (2.2%), dysphagia (0.9%), and cluttering (0.2%). As for gender, the majority of patients seen at the clinic were males in all disorders except for voice disorders and cluttering. Discussion: The results of the present study indicate that the majority of examined patients were diagnosed with childhood language impairments. Based on this result, the researchers suggest that there seems to be a high prevalence of childhood language impairments among children in Jordan compared to other types of speech and language disorders. The researchers also suggest that there is a need for further examination of the actual prevalence data on speech and language disorders in Jordan. The fact that many of the children seen at the UJ-HSC were brought to the clinic either as a result of parental concern or teacher referral indicates that there seems to an increased awareness among parents and teachers about the services speech pathologists can provide about assessment and treatment of childhood speech and language disorders. The small percentage of other disorders (i.e., stuttering, cluttering, dysphasia, aphasia, and voice disorders) seen at the UJ-HSC may indicate a little awareness by the local community about the role of speech pathologists in the assessment and treatment of these disorders.

Keywords: clinic, disorders, language, profile, speech

Procedia PDF Downloads 293
370 Generalized Synchronization in Systems with a Complex Topology of Attractor

Authors: Olga I. Moskalenko, Vladislav A. Khanadeev, Anastasya D. Koloskova, Alexey A. Koronovskii, Anatoly A. Pivovarov

Abstract:

Generalized synchronization is one of the most intricate phenomena in nonlinear science. It can be observed in systems with both unidirectional and mutual types of coupling, including complex networks. Such a phenomenon has a number of practical applications, for example, for secure information transmission through a communication channel with a high level of noise. Known methods for secure information transmission require increasing the privacy of data transmission, which raises the question of observing this phenomenon in systems with a complex topology of the chaotic attractor, i.e., systems possessing two or more positive Lyapunov exponents. The present report is devoted to the study of this phenomenon in two unidirectionally and mutually coupled dynamical systems being in chaotic (with one positive Lyapunov exponent) and hyperchaotic (with two or more positive Lyapunov exponents) regimes, respectively. As the systems under study, we have used two mutually coupled modified Lorenz oscillators and two unidirectionally coupled time-delayed generators. We have shown that in both cases the generalized synchronization regime can be detected by means of the calculation of Lyapunov exponents and the phase tube approach, whereas, due to the complex topology of the attractor, the nearest neighbor method is misleading. Moreover, the auxiliary system approach, which is the standard method for observing the synchronous regime, gives incorrect results for the mutual type of coupling. To calculate the Lyapunov exponents in time-delayed systems, we have proposed an approach based on a modification of the Gram-Schmidt orthogonalization procedure in the context of the time-delayed system. We have studied in detail the mechanisms resulting in the onset of the generalized synchronization regime, paying particular attention to the region where one positive Lyapunov exponent has already become negative whereas the second one is still positive. We have found intermittency here and studied its characteristics. To detect the laminar phase lengths, a method based on the calculation of local Lyapunov exponents has been proposed. The efficiency of the method has been verified using the example of two unidirectionally coupled Rössler systems being in the band chaos regime. We have revealed the main characteristics of intermittency, i.e., the distribution of the laminar phase lengths and the dependence of the mean length of the laminar phases on the criticality parameter, for all systems studied in the report. This work has been supported by the Russian President's Council grant for the state support of young Russian scientists (project MK-531.2018.2).
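For readers unfamiliar with the auxiliary system approach mentioned above, the sketch below shows its standard form for a unidirectionally coupled pair of Rössler oscillators: an identical copy of the response is driven by the same signal from different initial conditions, and convergence of the copy onto the response indicates generalized synchronization. The parameter values, coupling form, step counts, and threshold are illustrative assumptions, not the configuration used in the report.

```python
import numpy as np

A, B, C = 0.15, 0.2, 10.0   # Rössler parameters (assumed)
EPS = 0.2                    # coupling strength (assumed)

def rhs(state):
    """Drive x, response u, auxiliary v (same equations as u, different initial state)."""
    x, u, v = state[0:3], state[3:6], state[6:9]

    def rossler(s, drive=None):
        ds = np.array([-s[1] - s[2],
                       s[0] + A * s[1],
                       B + s[2] * (s[0] - C)])
        if drive is not None:                # diffusive coupling in the first variable
            ds[0] += EPS * (drive[0] - s[0])
        return ds

    return np.concatenate([rossler(x), rossler(u, x), rossler(v, x)])

def rk4_step(s, dt):
    k1 = rhs(s)
    k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2)
    k4 = rhs(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
state = np.concatenate([rng.uniform(-1, 1, 3),   # drive
                        rng.uniform(-1, 1, 3),   # response
                        rng.uniform(-1, 1, 3)])  # auxiliary copy of the response
dt, n_transient, n_measure = 0.01, 100_000, 20_000

for _ in range(n_transient):                     # discard the transient
    state = rk4_step(state, dt)

dist = 0.0
for _ in range(n_measure):                       # average response-auxiliary distance
    state = rk4_step(state, dt)
    dist += np.linalg.norm(state[3:6] - state[6:9])
dist /= n_measure

print("mean |response - auxiliary| =", dist)
print("generalized synchronization detected" if dist < 1e-6
      else "no generalized synchronization at this coupling")
```

As the abstract stresses, this test is only meaningful for unidirectional coupling; for mutually coupled systems the report relies on Lyapunov exponents and the phase tube approach instead.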

Keywords: complex topology of attractor, generalized synchronization, hyperchaos, Lyapunov exponents

Procedia PDF Downloads 245
369 Green Synthesis of Silver and Silver-Gold Alloy Nanoparticle Using Cyanobacteria as Bioreagent

Authors: Piya Roychoudhury, Ruma Pal

Abstract:

Cyanobacteria, commonly known as blue-green algae, were found to be an effective bioreagent for nanoparticle synthesis. Nowadays, silver nanoparticles (AgNPs) are very popular due to their antimicrobial and anti-proliferative activity. To exploit these characteristics in different biotechnological fields, it is essential to synthesize more stable, non-toxic nano-silver. For this reason, silver-gold alloy nanoparticles (Ag-AuNPs) are of great interest, as they are more stable, harder, and more effective than single-metal nanoparticles. In the present communication, we describe a simple technique for the rapid synthesis of biocompatible AgNPs and Ag-AuNPs employing the cyanobacteria Leptolyngbya and Lyngbya, respectively. For the synthesis of AgNPs, the biomass of Leptolyngbya valderiana (200 mg fresh weight) was exposed to a 9 mM AgNO3 solution (pH 4). For the synthesis of Ag-AuNPs, Lyngbya majuscula (200 mg fresh weight) was exposed to an equimolar solution of hydrogen tetrachloroaurate and silver nitrate (1 mM, pH 4). After 72 hrs of exposure, the thallus of Leptolyngbya turned brown and the filaments of Lyngbya turned pink, indicating the synthesis of nanoparticles. The produced particles were extracted from the cyanobacterial biomass using the nano-capping agent sodium citrate. First, the extracted brown and pink suspensions were subjected to Energy Dispersive X-ray (EDAX) analysis to confirm the presence of silver in the brown suspension and the presence of both gold and silver in the pink suspension. The extracted nanoparticles showed a distinct single plasmon band (AgNP at 411 nm; Ag-AuNP at 481 nm) in UV-vis spectroscopy. Transmission electron microscopy (TEM) revealed that all the synthesized particles were spherical, with a size range of ~2-25 nm. In X-ray powder diffraction (XRD) analysis, four intense peaks appeared at 38.2°, 44.5°, 64.8° and 77.8°, which confirmed the crystalline nature of the synthesized particles. The presence of different functional groups, viz. N-H, C=C, C–O, C=O, on the surface of the nanoparticles was recorded by Fourier transform infrared spectroscopy (FTIR). Scanning electron microscopy (SEM) images showed the surface topography of the metal-treated filaments of the cyanobacteria. The stability of the particles was assessed by a zeta potential study. The antibiotic property of the synthesized particles was tested by the agar well diffusion method against the gram-negative bacterium Pseudomonas aeruginosa. Overall, this green technique requires little energy, has a low manufacturing cost, and rapidly produces eco-friendly metal nanoparticles.

Keywords: cyanobacteria, silver nanoparticles, silver-gold alloy nanoparticles, spectroscopy

Procedia PDF Downloads 296
368 On-Chip Ku-Band Bandpass Filter with Compact Size and Wide Stopband

Authors: Jyh Sheen, Yang-Hung Cheng

Abstract:

This paper presents the design of a microstrip bandpass filter with a compact size and wide stopband using a 0.15-μm GaAs pHEMT process. The wide stopband is achieved by suppressing the first and second harmonic resonance frequencies. A slow-wave coupling stepped impedance resonator with a cross-coupled structure is adopted to design the bandpass filter. A two-resonator filter was fabricated, achieving a 13.5 GHz center frequency and 11% bandwidth. The devices are simulated using the ADS design software. The device has a compact size and a very low insertion loss of 2.6 dB. Microstrip planar bandpass filters have been widely adopted in various communication applications due to the attractive features of compact size and ease of fabrication. Various planar resonator structures have been suggested. In order to achieve a wide stopband and reduce interference outside the passband, various designs of planar resonators have also been proposed to suppress the higher-order harmonics of the design center frequency. Various modifications of the traditional hairpin structure have been introduced to reduce the large design area of hairpin designs. The stepped-impedance, slow-wave open-loop, and cross-coupled resonator structures have been studied to miniaturize hairpin resonators. In this study, to suppress the spurious harmonic bands and further reduce the filter size, a modified hairpin-line bandpass filter with a cross-coupled structure is proposed by introducing the stepped impedance resonator design as well as the slow-wave open-loop resonator structure. In this way, a very compact circuit size as well as a very wide upper stopband can be achieved and realized on a Rogers 4003C substrate. On the other hand, filters constructed with integrated circuit technology become more attractive for enabling the integration of the microwave system on a single chip (SoC). To examine the performance of this design structure as an integrated circuit, the filter is fabricated in the 0.15-μm GaAs pHEMT integrated circuit process. This pHEMT process can also provide much better circuit performance for high-frequency designs than those made on a PCB board. The design example was implemented in GaAs with a center frequency of 13.5 GHz to examine the performance at higher frequency in detail. The occupied area is only about 1.09 × 0.97 mm². The ADS software is used to design the modified filters to suppress the first and second harmonics.
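As a rough companion to the design figures quoted above (13.5 GHz center frequency, 11% bandwidth, two resonators), the sketch below shows the generic first step of coupled-resonator synthesis: deriving the external quality factors and the inter-resonator coupling coefficient from a Chebyshev lowpass prototype. The 0.1 dB ripple level is an assumption for illustration, and this snippet is not a substitute for the full-wave ADS design described in the abstract.

```python
import math

def chebyshev_g(n, ripple_db):
    """Element values g1..g_{n+1} of the Chebyshev lowpass prototype (g0 = 1)."""
    beta = math.log(1.0 / math.tanh(ripple_db / 17.37))
    gamma = math.sinh(beta / (2 * n))
    a = [math.sin((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]
    b = [gamma ** 2 + math.sin(k * math.pi / n) ** 2 for k in range(1, n + 1)]
    g = [2 * a[0] / gamma]
    for k in range(2, n + 1):
        g.append(4 * a[k - 2] * a[k - 1] / (b[k - 2] * g[-1]))
    g.append(1.0 if n % 2 else 1.0 / math.tanh(beta / 4) ** 2)  # load value g_{n+1}
    return g

# Two-pole filter at 13.5 GHz with 11% fractional bandwidth (values from the abstract);
# the 0.1 dB ripple is an assumed specification.
n, fbw, f0 = 2, 0.11, 13.5e9
g = chebyshev_g(n, ripple_db=0.1)

qe_in = g[0] / fbw                       # input external quality factor (g0 = 1)
qe_out = g[n - 1] * g[n] / fbw           # output external quality factor
k12 = fbw / math.sqrt(g[0] * g[1])       # coupling coefficient between the two resonators

print(f"Qe,in = {qe_in:.2f}, Qe,out = {qe_out:.2f}, k12 = {k12:.3f}")
```

In a layout-level design flow, these target values are then matched to physical gaps and feed positions by electromagnetic simulation, which is the role the ADS software plays in the abstract.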

Keywords: microstrip resonator, bandpass filter, harmonic suppression, GaAs

Procedia PDF Downloads 308
367 Stuttering Persistence in Children: Effectiveness of the Psicodizione Method in a Small Italian Cohort

Authors: Corinna Zeli, Silvia Calati, Marco Simeoni, Chiara Comastri

Abstract:

Developmental stuttering affects about 10% of preschool children; despite the high percentage of natural recovery, a quarter of them will become adults who stutter. An effective early intervention should help those children with a high persistence risk. The Psicodizione method for early stuttering is an Italian indirect behavioral treatment for preschool children who stutter, in which parents act as good guides for communication by modeling their own fluency. In this study, we give a preliminary measure of the long-term effectiveness of the Psicodizione method on stuttering preschool children with a high persistence risk. Among all Italian children treated with the Psicodizione method between 2018 and 2019, we selected 8 children with at least three high persistence risk factors from the Illinois Prediction Criteria proposed by Yairi and Seery. The factors chosen for the selection were: one parent who stutters (1 pt for the mother; 1.5 pt for the father), male gender, ≥ 4 years old at onset, and ≥ 12 months from the onset of symptoms before treatment. For this study, the families were contacted after an average period of 14.7 months (range 3-26 months). Parental reports were gathered with a standard online questionnaire in order to obtain data reflecting fluency across a wide range of the children's life situations. The minimum worthwhile outcome was set at 'mild evidence' on a 5-point Likert scale (1 = mild evidence, 5 = high severity evidence). A second group of 6 children, among those treated with the Psicodizione method, was selected as having high potential for spontaneous remission (low persistence risk). The children in this group had to fulfill all the following criteria: female gender, symptoms for less than 12 months (before treatment), age of onset < 4 years old, and neither parent with persistent stuttering. At the time of this follow-up, the children in the high persistence risk group were aged 6-9 years, with a mean of 15 months post-treatment; 2 of them (25%) no longer stuttered and 3 (37.5%) had a mild stutter, based on parental reports. In the low persistence risk group, the children were aged 4-6 years, with a mean of 14 months post-treatment, and 5 (84%) no longer stuttered (for the past 16 months on average). Thus, 62.5% of children at high risk of persistence showed at most mild evidence of stuttering after Psicodizione treatment, and 75% of parents reported better fluency than before the treatment. The low persistence risk group seemed to be representative of spontaneous recovery. This study's design could help to better evaluate the success of proposed interventions for stuttering preschool children and provides a preliminary measure of the effectiveness of the Psicodizione method on children at high persistence risk.

Keywords: early treatment, fluency, preschool children, stuttering

Procedia PDF Downloads 188
366 Media Impression and Its Impact on Foreign Policy Making: A Study of India-China Relations

Authors: Rosni Lakandri

Abstract:

With the development of science and technology, there has been a complete transformation in the domain of information technology. Particularly after the Second World War and the Cold War period, the role of media and communication technology in shaping political, economic, and socio-cultural proceedings across the world has been tremendous. The media perform as a channel between the governing bodies of the state and the general masses. While the international community constantly talks about the onset of an Asian Century, India and China happen to be the major players in it. Both have civilizational histories, both are neighboring countries, both are witnessing huge economic growth and, most important of all, both are considered the rising powers of Asia. It cannot be ignored that the two countries went to war with each other in 1962, and common people and even policy makers on both sides still view each other through this prism. A large contribution to this perception goes to the media coverage on both sides: even where there are spaces of cooperation which they share, the negative impact of the media has tended to influence people's opinions and the governments' perceptions of each other. Therefore, analysis of the media's impression in both countries becomes important in order to understand its effect on the larger implications of their foreign policies towards each other. It is usually said that the media not only act as an information provider but also act as an ombudsman to the government, providing a kind of check and balance to governments in taking proper decisions for the people of the country. In attempting to examine this hypothesis, we have to analyze whether the media really help in shaping the political landscape of a country. Therefore, this study rests on the following questions: 1) How do China and India depict each other through their respective news media? 2) How much and what influence do they have on the policy-making process of each country, and how do they shape public opinion in both countries? In order to address these enquiries, the study employs both primary and secondary sources; in generating data and other statistical information, primary sources like reports, government documents, cartography, and agreements between the governments have been used. Secondary sources like books, articles, and other writings collected from various sources, and opinions from visual media sources like news clippings and videos on this topic, are also included as a source of on-the-ground information, as this study is not based on fieldwork. As the findings suggest, in the case of China and India the media have certainly affected people's knowledge about political and diplomatic issues and at the same time have affected the foreign policy making of both countries. They have a considerable impact on foreign policy formulation, and we can say there is some mediatization happening in foreign policy issues in both countries.

Keywords: China, foreign policy, India, media, public opinion

Procedia PDF Downloads 132
365 Fuzzy Multi-Objective Approach for Emergency Location Transportation Problem

Authors: Bidzina Matsaberidze, Anna Sikharulidze, Gia Sirbiladze, Bezhan Ghvaberidze

Abstract:

In the modern world, emergency management decision support systems are actively used by state organizations, which are interested in extreme and abnormal processes and provide optimal and safe management of the supplies needed for civil and military facilities in geographical areas affected by disasters, earthquakes, fires and other accidents, weapons of mass destruction, terrorist attacks, etc. Obviously, these kinds of extreme events cause significant losses and damage to the infrastructure. In such cases, the usage of intelligent support technologies is very important for the quick and optimal location-transportation of emergency services in order to avoid new losses caused by these events. Timely servicing from emergency service centers to the affected disaster regions (the response phase) is a key task of the emergency management system, and scientific research in this field occupies an important place in decision-making problems. Our goal was to create an expert knowledge-based intelligent support system, which will serve as an assistant tool providing optimal solutions for the above-mentioned problem. The inputs to the mathematical model of the system are objective data as well as expert evaluations. The outputs of the system are solutions of the Fuzzy Multi-Objective Emergency Location-Transportation Problem (FMOELTP) for disaster regions. The development and testing of the intelligent support system were done on the example of an experimental disaster region (for a geographical zone of Georgia), which was generated using simulation modeling. Four objectives are considered in our model. The first objective is to minimize the expectation of the total transportation duration of the needed products. The second objective is to minimize the total selection unreliability index of the opened humanitarian aid distribution centers (HADCs). The third objective minimizes the number of agents needed to operate the opened HADCs. The fourth objective minimizes the non-covered demand over all demand points. Possibility chance constraints and objective constraints were constructed based on objective-subjective data. The FMOELTP was constructed in a static and fuzzy environment, since the decisions to be made are taken immediately after the disaster (within a few hours) with the information available at that moment. It is assumed that the requests for products are estimated by homeland security organizations, or their experts, based upon their experience and their evaluation of the disaster's seriousness. Estimated transportation times take into account the routing access difficulty of the region and the infrastructure conditions. We propose an epsilon-constraint method for finding exact solutions to the problem. It is proved that this approach generates the exact Pareto front of the multi-objective location-transportation problem addressed. For large dimensions of the problem, the exact method sometimes requires long computing times; thus, we propose an approximate method that imposes a number of stopping criteria on the exact method. For large dimensions of the FMOELTP, an Estimation of Distribution Algorithm (EDA) approach is developed.
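To make the epsilon-constraint idea concrete, the toy sketch below minimizes total transportation time while bounding the number of opened distribution centers by a varying epsilon, then keeps the non-dominated optima as a Pareto front. The data and the reduction to two deterministic objectives are invented for illustration; the actual FMOELTP is fuzzy, four-objective, and far larger, and is solved with dedicated optimization machinery rather than enumeration.

```python
from itertools import combinations

# travel_time[c][d]: travel time from candidate centre c to demand point d (hours, assumed)
travel_time = [
    [2, 5, 9, 4],
    [6, 1, 3, 8],
    [7, 6, 2, 3],
]
n_centres, n_demand = len(travel_time), len(travel_time[0])

def total_time(open_centres):
    """Serve every demand point from its fastest opened centre."""
    return sum(min(travel_time[c][d] for c in open_centres) for d in range(n_demand))

candidates = []
for eps in range(1, n_centres + 1):
    # epsilon-constraint subproblem: minimise time subject to (#open centres) <= eps
    best_time, best_subset = min(
        (total_time(s), s)
        for k in range(1, eps + 1)
        for s in combinations(range(n_centres), k)
    )
    candidates.append((best_time, len(best_subset)))

# keep only the non-dominated (time, #centres) points among the subproblem optima
pareto = sorted({p for p in candidates
                 if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in candidates)})
print("Pareto front (total time, opened centres):", pareto)
```

Sweeping epsilon and solving one constrained single-objective problem per value is exactly the mechanism that, in the full model, provably traces the exact Pareto front.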

Keywords: epsilon-constraint method, estimation of distribution algorithm, fuzzy multi-objective combinatorial programming problem, fuzzy multi-objective emergency location/transportation problem

Procedia PDF Downloads 293
364 The Impact of Brand Hate and Love: A Thematic Analysis of Online Emotions in Response to Disney’s Corporate Activism

Authors: Roxana D. Maiorescu-Murphy

Abstract:

Companies have recently embraced political activism as an alleged responsibility toward the communities they operate in. As a result of its recency, there is little understanding of the impact of corporate activism on consumers. In addition, embracing corporate activism engenders polarizing opinions, potentially leading to a crisis of morality, which past literature has shown to flourish in online settings. The present study contributes to the literature on communication management, which currently lacks research on stakeholder perceptions of corporate activism in general and, in particular, from the perspective of the emotions of brand hate versus brand love that stakeholders display before a specific act of corporate activism. For this purpose, the study analyzed online reactions on Twitter following Disney's stance against Florida's House Bill 1557, enacted in April 2022. Dubbed the 'Don't Say Gay Bill' by the left wing and the 'Parental Rights Bill' by the conservative movement, the legislation triggered polarizing opinions in society and among Disney's stakeholders as the company announced it was taking action against it. Given the scarcity of research on corporate political activism and crises of morality, the current study adopted the case study methodology. Consequently, it answered the research questions of how online stakeholders responded to Disney's stance and why they formed such opinions. The data were collected from Twitter over a seven-day period of analysis, namely from March 28 to April 3, 2022. The period of analysis started on the day Disney announced its stance (March 28, 2022) and ended when the reactions to its announcement had petered out significantly (April 3, 2022). The final sample consisted of N=1,344 Twitter comments posted in response to the company's political announcement. The data were analyzed using the grounded theory methodology, which implied multiple exposures to the text and an inductive-deductive approach that led to the emergence of several recurrent themes. The findings revealed that the stakeholders' prior emotions toward the company (brand hate versus brand love) played a smaller role in their (dis)agreement with the company's activism than the users' political stances. Specifically, whether they loved or hated Disney prior to this incident was less significant than their personal political stances. Above all, users were inclined to transition from brand love to brand hate and vice versa based on the political side under which they viewed Disney to fall.

Keywords: corporate political advocacy, crisis management, brand hate, brand love

Procedia PDF Downloads 81
363 Safety of Implementation the Gluten - Free Diet in Children with Autism Spectrum Disorder

Authors: J. Jessa

Abstract:

Background: Autism is a pervasive developmental disorder, the incidence of which has significantly increased in recent years. Children with autism have impairments in social skills, communication, and imagination. Children with autism also have feeding problems more commonly than healthy children: food selectivity and gastrointestinal problems such as diarrhea, constipation, abdominal pain, reflux and others. Many parents of autistic children report that after the implementation of a gluten-, casein- and sugar-free diet those symptoms disappear and even cognitive functions improve. Some children begin to understand speech and to communicate with their parents, regain eye contact, become calmer, sleep better and have better concentration. At the root of this phenomenon probably lies the elimination from the diet of peptides whose structure is similar to that of opiates. Enhanced gut permeability causes the absorption of not fully digested opioid-like peptides from food proteins such as gluten and casein, and probably others (proteins from soy and corn), which affect the brain of autistic children. Aim of the study: The aim of the study is to assess the safety of a gluten-free diet in children with autism aged 2.5-7. Methods: Participants of the study (n=70), children aged 2.5-7 with autism, are divided into 3 groups. The first group (research group) consists of patients whose parents want to implement a gluten-free diet. The second group consists of patients who have been recommended to eliminate artificial substances, such as preservatives, artificial colors and flavors, and others, from the diet (control group 1). The third group (control group 2) consists of children whose parents did not agree to the implementation of the diet. Caregivers of children on the diet are educated about the specifics of the diet and how to avoid malnutrition. At the start of the study we excluded celiac disease. Before the implementation of the diet we performed blood tests for the patients (morphology, ferritin, total cholesterol, dried peripheral blood spots to detect some genetic metabolic diseases, plasma aminogram) and urine tests (excretion of the ions Mg, Na, Ca; the profile of organic acids in urine), which assess nutritional status, as well as a psychological test assessing the degree of the child's psychological functioning (PEP-R). All of these tests will be repeated one year after the implementation of the diet. Results: To date, we have examined 42 children with autism, 12 of whom are on a gluten-free diet. Our preliminary results are promising. Parents of 9 of them report a big improvement in behavior and concentration, fewer aggression incidents, better eye contact and better verbal skills. Conclusion: Our preliminary results suggest that dietary intervention may positively affect the developmental outcome of some children diagnosed with ASD.

Keywords: gluten free diet, autism spectrum disorder, autism, blood test

Procedia PDF Downloads 303
362 The Second Column of Origen’s Hexapla and the Transcription of BGDKPT Consonants: A Confrontation with Transliterated Hebrew Names in Greek Documents

Authors: Isabella Maurizio

Abstract:

This research analyses the pronunciation of the Hebrew consonants 'bgdkpt' in the second and third centuries C.E. in Palestine through the comparison of two kinds of data: the fragments of the transliteration of the Old Testament into the Greek alphabet from the second column of Origen's synopsis, called the Hexapla, and Hebrew names transliterated in Greek documents, especially epigraphs. Origen is a very important author, and not only for his theological and exegetic works: the Hexapla, a six-column synopsis for a critical edition of the Septuagint, has a relevant role in the attempt to reconstruct the pronunciation of the Hebrew language before the Masoretic punctuation. For this reason, it is important first to analyze the column in order to study its phonetic and linguistic phenomena. Among the most problematic data is the evidence from the bgdkpt consonants, which are always represented as Greek aspirated graphemes. This transcription raises the question of whether their pronunciation was exclusively spirant and, consequently, whether the double pronunciation, that is, the stop/spirant contrast, was introduced by the Masoretes. However, the phonetic and linguistic examination of the column alone is not enough to establish the real pronunciation of the language: this paper is significant because it carries out a comparison between the second column's transliteration and Hebrew names found in Greek documents, mainly epigraphic ones. Palestine in the second and third centuries was a bilingual country: Greek and Aramaic lived together, the former as the official language, the latter as the principal means of communication between people. For this reason, Hebrew names are often found in Greek documents of the same geographical area: a deep examination of the transliteration of bgdkpt can help us to understand better what the real pronunciation of these consonants was, or at least allows a phonetic tendency to be evidenced. Consequently, the research considers both documents contemporary with Origen and earlier ones: the former testify to a specific stage of pronunciation, the latter reflect the evolution of the phonemes. Alexandrian documents are also examined: Origen came from there, and the influence of the Greek spoken in his native country must be considered. The epigraphs have another implication: because of their popular origin, they are totally free from the morphological criteria probably used by Origen in his column. Thus, a comparison between the hexaplaric transliteration and Hebrew names is absolutely required in Hexapla studies: first of all, it can provide a second clue to a pronunciation already noted in the column; secondly, because of the specific nature of the documents, it is more likely to be real, reflecting the daily use of the language. The examination of the data shows a general tendency to employ the aspirated graphemes for the transliteration of the bgdkpt consonants. This probably means that they were closer to the Greek aspirated consonants than to the plosive ones. The exceptions are linked to the particular status of a name, i.e., its history and origin. In this way, this paper also contributes to onomastic studies: indeed, the research may help to verify the diffusion and treatment of Jewish names in the Hellenized world and in the koine language.
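A comparison of this kind ultimately reduces to tallying which Greek graphemes render each Hebrew consonant in the two data sets. The sketch below shows such a tally for a handful of invented (Hebrew consonant, Greek grapheme) pairs; the sample data, the romanization of the Hebrew consonants, and the grouping into aspirated versus plain graphemes are placeholders for illustration, not corpus results.

```python
from collections import Counter, defaultdict

ASPIRATED = {"φ", "θ", "χ"}               # Greek aspirated graphemes
PLAIN = {"π", "τ", "κ", "β", "γ", "δ"}    # plain stop graphemes

# Hypothetical aligned pairs (Hebrew consonant, Greek grapheme) from transliterated names
sample_pairs = [
    ("p", "φ"), ("t", "θ"), ("k", "χ"),
    ("p", "φ"), ("t", "τ"), ("k", "χ"),
    ("b", "β"), ("g", "γ"), ("d", "δ"),
]

by_consonant = defaultdict(Counter)
for heb, grk in sample_pairs:
    by_consonant[heb][grk] += 1

for heb in "bgdkpt":
    counts = by_consonant.get(heb, Counter())
    total = sum(counts.values())
    if not total:
        continue
    aspirated = sum(n for g, n in counts.items() if g in ASPIRATED)
    print(f"{heb}: {dict(counts)}  aspirated share = {aspirated / total:.0%}")
```

Run over the hexaplaric fragments and the epigraphic names separately, such a tally makes the 'general tendency' claimed in the abstract directly comparable across the two corpora.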

Keywords: bgdkpt consonants, Greek epigraphs, Jewish names, origen's Hexapla

Procedia PDF Downloads 113
361 Attitude in Academic Writing (CAAW): Corpus Compilation and Annotation

Authors: Hortènsia Curell, Ana Fernández-Montraveta

Abstract:

This paper presents the creation, development, and analysis of a corpus designed to study the presence of attitude markers and the author's stance in research articles in two different areas of linguistics (theoretical linguistics and sociolinguistics). These two disciplines are expected to behave differently in this respect, given the disparity in their discursive conventions. Attitude markers in this work are understood as the linguistic elements (adjectives, nouns and verbs) used to convey the writer's stance towards the content presented in the article, and they are crucial in understanding writer-reader interaction and the writer's position. These attitude markers are divided into three broad classes: assessment, significance, and emotion. In addition to them, we also consider first-person singular and plural pronouns and possessives, modal verbs, and passive constructions, which are other linguistic elements expressing the author's stance. The corpus, the Corpus of Attitude in Academic Writing (CAAW), comprises 21 articles collected from six journals indexed in the JCR. These articles were originally written in English by a single native-speaker author from the UK or USA and were published between 2022 and 2023. The total number of words in the corpus is approximately 222,400, with 106,422 from theoretical linguistics journals (Lingua, Linguistic Inquiry and Journal of Linguistics) and 116,022 from sociolinguistics journals (International Journal of the Sociology of Language, Language in Society and Journal of Sociolinguistics). Together with the corpus, we present the tool created for its compilation and storage, along with a tool for automatic annotation. The steps followed in the compilation of the corpus were as follows. First, the articles were selected according to the parameters explained above. Second, they were downloaded and converted to txt format. Finally, examples, direct quotes, section titles and references were eliminated, since they do not involve the author's stance. The resulting texts were the input for the annotation of the linguistic features related to stance. As for the annotation, two articles (one from each subdiscipline) were annotated manually by the two researchers. An existing list was used as a baseline, and other attitude markers were identified, together with the other elements mentioned above. Once a consensus was reached, the rest of the articles were annotated automatically using the tool created for this purpose. The annotated corpus will serve as a resource for scholars working in discourse analysis (both in linguistics and communication) and related fields, since it offers new insights into the expression of attitude. The tools created for the compilation and annotation of the corpus will be useful for studying the author's attitude and stance in articles from any academic discipline: new data can be uploaded and the list of markers can be enlarged. Finally, the tool can be expanded to other languages, which will allow cross-linguistic studies of the author's stance.
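As an illustration of what such lexicon-based automatic annotation can look like in its simplest form, the sketch below scans a cleaned txt article against short marker lists for the three attitude classes, plus self-mentions and modal verbs, and counts the hits per class. The marker lists and the sample sentence are invented stand-ins, not the project's actual lexicon or tool.

```python
import re
from collections import Counter

# Illustrative marker lists grouped by stance class (placeholders, not the CAAW lexicon)
MARKERS = {
    "assessment":   {"accurate", "consistent", "convincing", "plausible", "robust"},
    "significance": {"crucial", "essential", "important", "notable", "significant"},
    "emotion":      {"surprising", "striking", "interesting", "remarkable"},
    "self_mention": {"i", "we", "my", "our"},
    "modal":        {"may", "might", "must", "should", "could"},
}

def annotate(text):
    """Return a Counter of attitude-marker hits per class for one article."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for cls, markers in MARKERS.items():
        counts[cls] = sum(1 for tok in tokens if tok in markers)
    return counts

sample = ("We argue that this finding is crucial: it provides a convincing and "
          "surprising account of the data, although other analyses might be possible.")
print(annotate(sample))
```

Per-class counts of this kind, normalised by article length, are the sort of figures that allow the two subdisciplines to be compared for their use of attitude markers.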

Keywords: academic writing, attitude, corpus, English

Procedia PDF Downloads 40
360 Pre- and Post-Brexit Experiences of the Bulgarian Working Class Migrants: Qualitative and Quantitative Approaches

Authors: Mariyan Tomov

Abstract:

Bulgarian working class immigrants are increasingly concerned with the UK's recent immigration policies in the context of Brexit. The new ID system would exclude many people currently working in Britain and would break the usual immigrant travel patterns. Post-Brexit Britain aims to keep out seasonal immigrants: measures for keeping long-term and life-long immigrants have been implemented, and migrants who aim to remain in Britain and establish a household there would be more privileged than temporary or seasonal workers. The results of such regulating mechanisms come at the expense of migrants' longings for a 'normal' existence, especially for those coming from Central and Eastern Europe. Based on in-depth interviews with Bulgarian working class immigrants, the study found that their major concerns following the decision of the UK to leave the EU are related to the freedom to travel, reside and work in the UK. Furthermore, many of the interviewed women are concerned that they could lose some of the EU's fundamental rights, such as maternity rights and the protection of pregnant women from unlawful dismissal. The rise in commodity prices and university fees and the limited access to public services, healthcare and social benefits in the UK are also discussed in the paper. The most serious problem, according to the interviews, is that the attitude towards Bulgarians and other immigrants in the UK is deteriorating. Both traditional and social media in the UK often portray migrants negatively by claiming that they take British jobs while simultaneously abusing the welfare system. As a result, Bulgarian migrants often face social exclusion, which might have a negative influence on their health and welfare. In this sense, some of the interviewees stress that the most important changes after Brexit must take place in British society itself. The aim of the proposed study is to provide a better understanding of the Bulgarian migrants' economic, health and sociocultural experiences in the context of Brexit. Methodologically, the proposed paper leans on: 1. analysing ethnographic materials dedicated to the pre- and post-migratory experiences of Bulgarian working class migrants, using SPSS; 2. semi-structured interviews conducted with more than 50 Bulgarian working class migrants [N > 50] in the UK, aged between 18 and 65, with communication via Viber/Skype or face-to-face interaction; 3. an analysis guided by theoretical frameworks. The paper has been developed within the framework of the research projects of the National Scientific Fund of Bulgaria: DCOST 01/25-20.02.2017 supporting COST Action CA16111 'International Ethnic and Immigrant Minorities Survey Data Network'.

Keywords: Bulgarian migrants in UK, economic experiences, sociocultural experiences, Brexit

Procedia PDF Downloads 95
359 The 2017 Summer Campaign for Night Sky Brightness Measurements on the Tuscan Coast

Authors: Andrea Giacomelli, Luciano Massetti, Elena Maggi, Antonio Raschi

Abstract:

The presentation will report on the activities carried out during the summer of 2017 by a team composed of staff from a university department, a National Research Council institute, and an outreach NGO, collecting measurements of night sky brightness and other information on artificial lighting in order to characterize light pollution issues on portions of the Tuscan coast, in Central Italy. These activities combine measurements collected by the principal scientists, citizen science observations led by students, and outreach events targeting a broad audience. This campaign aggregates the efforts of three actors: the BuioMetria Partecipativa project, which started collecting light pollution data on a national scale in 2008 with an environmental engineering and free/open source GIS core team; the Institute of Biometeorology of the National Research Council, with ongoing studies on light and urban vegetation and a consolidated track record in environmental education and citizen science; and the Department of Biology of the University of Pisa, which started experiments to assess the impact of light pollution in coastal environments in 2015. While the core of the activities concerns in situ data, the campaign also accounts for remote sensing data, thus considering heterogeneous data sources. The aim of the campaign is twofold: (1) to test actions of citizen and student engagement in monitoring sky brightness, and (2) to collect night sky brightness data and test a protocol for applications to studies on the ecological impact of light pollution, with a special focus on marine coastal ecosystems. The collaboration of an interdisciplinary team in the study of artificial lighting issues is not common in Italy, and the possibility of undertaking the campaign in Tuscany has the added value of operating in one of the territories where it is possible to observe both sites with extremely high lighting levels and areas with extremely low light pollution, especially in the southern part of the region. By combining environmental monitoring and communication actions in the context of the campaign, this effort will contribute to the promotion of good-quality night skies as an important asset for the sustainability of coastal ecosystems, as well as to increased citizen awareness through star gazing, night photography and active participation in field campaign measurements.
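One small but recurring step when pooling sky brightness readings from several observers is averaging them correctly: values in mag/arcsec² are logarithmic, so they should be averaged on the linear flux scale and then converted back rather than averaged directly. The sketch below illustrates this; the instrument type (an SQM-style meter) and the sample readings are assumptions for illustration, not campaign data.

```python
import math

def mean_sky_brightness(mags):
    """Average readings in mag/arcsec^2 on the linear flux scale."""
    fluxes = [10 ** (-0.4 * m) for m in mags]          # convert magnitudes to relative flux
    return -2.5 * math.log10(sum(fluxes) / len(fluxes))  # convert the mean flux back

# Hypothetical readings from two measurement sites (mag/arcsec^2)
site_readings = {
    "urban waterfront": [18.9, 19.1, 18.8],
    "rural coast":      [21.3, 21.4, 21.2],
}

for site, mags in site_readings.items():
    print(f"{site}: {mean_sky_brightness(mags):.2f} mag/arcsec^2 "
          f"(naive arithmetic mean {sum(mags) / len(mags):.2f})")
```

For nearly identical readings the two means almost coincide, but for heterogeneous sets, such as measurements pooled across observers and nights, the flux-weighted mean avoids a systematic bias towards the darker values.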

Keywords: citizen science, light pollution, marine coastal biodiversity, environmental education

Procedia PDF Downloads 152