Search results for: embedded sensors
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2205

405 A Study of Predicting Judgments on Causes of Online Privacy Invasions: Based on U.S. Judicial Cases

Authors: Minjung Park, Sangmi Chai, Myoung Jun Lee

Abstract:

As concerns about online privacy grow, enterprises can become involved in various personal privacy infringement cases that carry legal consequences. For companies doing business online, it is important to pay extra attention to protecting users’ privacy. If firms are aware of the consequences of possible online privacy invasion cases, they can more actively prevent future infringements. This study attempts to predict the probability of ruling types resulting from various invasion cases under the U.S. Personal Privacy Act. More specifically, it explores online privacy invasion cases that ended in guilty verdicts to identify types of criminal punishment, such as penalties, imprisonment, and probation, as well as compensation in civil cases. Based on 853 U.S. judicial cases related to data privacy, ranging from January 2000 to May 2016, this research examines the relationship between personal information infringement cases and adjudications. Analyzing 41,724 words extracted from the 853 legal cases, the study examined online users’ privacy invasion cases to predict the probability of conviction for a firm as an offender under both criminal and civil law. It specifically examines how a cause of privacy infringement relates to the judgment type, that is, whether it leads to civil or criminal liability, in U.S. courts. The study applies network text analysis (NTA), regarded as a useful method for discovering social trends embedded within texts. According to the results, certain online privacy infringement cases caused by online spamming and adware carry a high probability that firms will be found liable. The results provide meaningful insights to academia as well as industry. First, the study offers a new perspective by applying Big Data analytics to legal cases so that the cause of invasions and the legal consequences can be predicted.
Since few studies apply big data analytics to the domain of law, specifically online privacy, this study suggests a new area for future research. Second, by adopting the NTA method, the study reflects social influences on judicial case analysis, such as the development of privacy invasion technologies and changes in users’ awareness of online privacy. The results indicate that firms need to improve their technical and managerial systems for protecting users’ online privacy in order to avoid negative legal consequences.
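Network text analysis of the kind described above is commonly implemented as a word co-occurrence graph whose centrality scores surface the dominant themes in a corpus. The following is a minimal sketch of that idea only, not the authors' actual pipeline; the sample case summaries, the window size, and the centrality measure are all illustrative assumptions:

```python
from collections import Counter, defaultdict

def cooccurrence_network(docs, window=5):
    """Build a word co-occurrence network: nodes are words, and each
    edge weight counts how often two words appear within `window`
    tokens of each other in the same document."""
    edges = Counter()
    for doc in docs:
        tokens = doc.lower().split()
        for i, w in enumerate(tokens):
            for v in tokens[i + 1 : i + window]:
                if v != w:
                    edges[tuple(sorted((w, v)))] += 1
    return edges

def weighted_degree(edges):
    """Rank words by weighted degree centrality (sum of incident edge
    weights); words central to the corpus's themes score high."""
    score = defaultdict(int)
    for (w, v), c in edges.items():
        score[w] += c
        score[v] += c
    return sorted(score.items(), key=lambda kv: -kv[1])

# Hypothetical case summaries, not drawn from the actual 853-case corpus
docs = [
    "spam adware installed without consent invaded user privacy",
    "adware vendor held liable for privacy invasion damages",
]
edges = cooccurrence_network(docs)
ranking = weighted_degree(edges)
```

In a real NTA pipeline the ranking would be computed over stemmed, stop-word-filtered case text, and the resulting network clusters would be read as latent themes.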

Keywords: network text analysis, online privacy invasions, personal information infringements, predicting judgements

Procedia PDF Downloads 204
404 Sexual Diversity Training for Hong Kong Teachers: Preliminary Themes Identified from Qualitative Interviews

Authors: Diana K. Kwok

Abstract:

Although the Hong Kong government aims to develop an inclusive society, sexual minority students continue to encounter sexual prejudice without legal protection. They also have difficulty accessing relevant services from mental health and educational professionals, who do not receive systematic training to work with sexual minority students. Informed by the literature on sexual prejudice, heterosexual hegemony, and genderism, as well as the code of practice for frontline practitioners, the authors explored the self-perceived knowledge of teachers and sexual minorities regarding sexuality and sexual prejudice, and how they perceive prejudice toward sexual minorities in a Chinese cultural context. Semi-structured qualitative interviews were carried out with 31 school personnel informants (school teachers and counseling team members) and 25 sexual minority informants on their understanding of sexuality, their perception of sexual prejudice within the Hong Kong school context, and their suggested themes for teacher training on sexual prejudice reduction. This presentation focuses specifically on transcripts from sexual minority informants. Data analysis was carried out in NVivo, following the procedures spelled out in the qualitative research literature. Trustworthiness of the study was addressed through various strategies. Preliminary themes emerged from transcript content analysis: 1) a gap in knowledge between sexual minority informants and teachers; 2) perceptions of sexual prejudice within the cultural context; 3) heterosexual hegemony and genderism within the school system; and 4) needs for mandatory training: contents and strategies. The sexual minority informants found that the teachers they encountered predominantly adopted concepts of binary sex and dichotomous gender.
Informants also indicated that Confucian cultural values and religiosity in Hong Kong may well be important cultural forces contributing to the sexual prejudice manifested in the school context. Although human rights and social justice concepts are embedded in the professional codes of practice of teachers and school helping professionals, informants found that the teachers they encountered may face a dilemma when supporting sexual minority students navigating heterosexual hegemony and genderism in schools, as a consequence of their personal, institutional, cultural, and religious backgrounds. Acknowledgments: The sexual prejudice project was funded by the Hong Kong Research Grant Council (ECS28401614), 2015 to 2017.

Keywords: sexual prejudice, Chinese teachers, Chinese sexual minorities, teacher training

Procedia PDF Downloads 257
403 Effects of Machining Parameters on the Surface Roughness and Vibration of the Milling Tool

Authors: Yung C. Lin, Kung D. Wu, Wei C. Shih, Jui P. Hung

Abstract:

High-speed and high-precision machining have become the most important technologies in the manufacturing industry. The surface roughness of high-precision components is regarded as an important characteristic of product quality. However, machining chatter can damage the machined surface and restricts process efficiency, so selecting appropriate cutting conditions is important to prevent the occurrence of chatter. In addition, vibration of the spindle tool also affects surface quality, which implies that surface precision can be controlled by monitoring the vibration of the spindle tool. Based on this concept, this study investigated the influence of machining conditions on surface roughness and spindle tool vibration. To this end, a series of machining tests were conducted on aluminum alloy. In the tests, the vibration of the spindle tool was measured using acceleration sensors, and the surface roughness of the machined parts was examined using a white light interferometer. Response surface methodology (RSM) was employed to establish mathematical models for predicting surface finish and tool vibration, respectively. The correlation between surface roughness and spindle tool vibration was also analyzed by ANOVA. Based on the machining tests, machined surfaces with and without chatter were marked on the stability lobes diagram as verification of the machining conditions. Using multivariable regression analysis, mathematical models for predicting surface roughness and tool vibration were developed from the machining parameters: cutting depth (a), feed rate (f), and spindle speed (s). The predicted roughness agrees well with the measured roughness, with an average error of 10%; the average error between measured and predicted tool vibration is about 7.39%.
In addition, tool vibration under various machining conditions was found to correlate positively with surface roughness (r = 0.78). In conclusion, mathematical models were successfully developed for predicting the surface roughness and vibration level of the spindle tool under different cutting conditions, which can help in selecting appropriate cutting parameters and in monitoring machining conditions to achieve high surface quality in milling operations.
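A multivariable regression over cutting depth, feed rate, and spindle speed of the kind described above can be sketched with ordinary least squares. The milling data below are invented placeholders, not the study's measurements, and the sketch keeps only the first-order terms (RSM models typically also carry interaction and quadratic terms):

```python
import numpy as np

# Hypothetical milling trials: cutting depth a (mm), feed rate f (mm/tooth),
# spindle speed s (rpm), and surface roughness Ra (um). Synthetic data.
a = np.array([0.5, 1.0, 1.5, 0.5, 1.0, 1.5, 0.5, 1.0, 1.5])
f = np.array([0.05, 0.05, 0.05, 0.10, 0.10, 0.10, 0.15, 0.15, 0.15])
s = np.array([6000, 8000, 10000, 8000, 10000, 6000, 10000, 6000, 8000])
Ra = 0.2 + 0.3 * a + 4.0 * f - 1e-5 * s   # synthetic "truth" for the demo

# Design matrix for the first-order model Ra = b0 + b1*a + b2*f + b3*s
X = np.column_stack([np.ones_like(a), a, f, s])
coef, *_ = np.linalg.lstsq(X, Ra, rcond=None)

pred = X @ coef
mape = np.mean(np.abs((pred - Ra) / Ra)) * 100   # average percentage error
```

With real measurements, `mape` is the "average percentage of error" quoted in the abstract, and the fitted `coef` gives the sensitivity of roughness to each cutting parameter.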

Keywords: machining parameters, machining stability, regression analysis, surface roughness

Procedia PDF Downloads 206
402 Realization and Characterizations of Conducting Ceramics Based on ZnO Doped by TiO₂, Al₂O₃ and MgO

Authors: Qianying Sun, Abdelhadi Kassiba, Guorong Li

Abstract:

ZnO with the wurtzite structure is a well-known semiconducting oxide (SCO), applied in thermoelectric devices, varistors, gas sensors, transparent electrodes, solar cells, liquid crystal displays, and piezoelectric and electro-optical devices. Intrinsically, ZnO is a weakly n-type SCO due to native defects (Znᵢ, Vₒ). However, substitutional doping with metallic elements such as Al and Ti gives rise to a high n-type conductivity ensured by donor centers. Under a CO+N₂ sintering atmosphere, the Schottky barriers of ZnO ceramics are suppressed by lowering the concentration of acceptors at grain boundaries, inducing a large increase in the Hall mobility and thereby increasing the conductivity. The present work concerns ZnO-based ceramics fabricated with TiO₂ (0.50 mol%), Al₂O₃ (0.25 mol%), and MgO (1.00 mol%) doping and sintered in different atmospheres: air (A), N₂ (N), and CO+N₂ (C). We obtained uniform, dense ceramics with ZnO as the main phase and Zn₂TiO₄ spinel as a minor secondary phase. An important increase of the conductivity was observed for samples A, N, and C sintered under the different atmospheres. The highest conductivity (σ = 1.52×10⁵ S·m⁻¹) was obtained under the reducing atmosphere (CO). The role of doping was investigated with the aim of identifying the local environment and valence states of the doping elements. Electron paramagnetic resonance (EPR) spectroscopy was used to determine the concentration of defects and the effects of charge carriers in the ZnO ceramics as a function of the sintering atmosphere. The relation between conductivity and defect concentration shows an inverse behavior between these parameters, suggesting that defects act as traps for charge carriers. For Al ions, the nuclear magnetic resonance (NMR) technique was used to identify their local coordination.
Beyond the six- and four-coordinated Al, an additional NMR signature of the ZnO-based TCO requires an analysis that takes into account the grain boundaries and the conductivity through Knight shift effects. From the thermal evolution of the conductivity as a function of the sintering atmosphere, we succeeded in defining the conditions for realizing ZnO-based TCO ceramics with an important temperature coefficient of resistance (TCR), which is promising for the electrical safety of devices.

Keywords: ceramics, conductivity, defects, TCO, ZnO

Procedia PDF Downloads 163
401 Investigating the Effect of Metaphor Awareness-Raising Approach on the Right-Hemisphere Involvement in Developing Japanese Learners’ Knowledge of Different Degrees of Politeness

Authors: Masahiro Takimoto

Abstract:

The present study explored how the metaphor awareness-raising approach affects the involvement of the right hemisphere in developing EFL learners’ knowledge of the different degrees of politeness embedded within different request expressions. The study was motivated by theoretical considerations regarding conceptual projection and the proposed metaphorical idea that politeness is distance; it applied these considerations to develop Japanese learners’ knowledge of the different politeness degrees and to explore the connection between metaphorical concept projection and right-hemisphere dominance. Japanese EFL learners do not know certain language strategies (e.g., that English requests can be mitigated with biclausal downgraders, including the if-clause with past-tense modal verbs) and have difficulty adjusting the degree of politeness attached to request expressions according to the situation. The present study used a pre-/post-test design to reaffirm the efficacy of the cognitive technique and its connection to right-hemisphere involvement via the mouth asymmetry technique. Mouth asymmetry measurement is used because speech articulation, normally controlled mainly by one side of the brain, causes the muscles on the opposite side of the mouth to move more during speech production. The research did not administer a delayed post-test because it emphasized determining whether metaphor awareness-raising approaches for developing EFL learners’ pragmatic proficiency entail right-hemisphere activation. Each test contained an acceptability judgment test (AJT), along with a speaking test in the post-test. The results show that the metaphor awareness-raising group performed significantly better than the control group on the post-test acceptability judgment and speaking tests.
These data revealed that the metaphor awareness-raising approach can promote L2 learning because it aids input enhancement and concept projection; through these, the participants were able to comprehend an abstract concept, the degree of politeness, in terms of the spatial concept of distance. Accordingly, the proximal-distal metaphor enabled the participants to connect the newly spatio-visualized concept of distance to the different politeness degrees attached to different request expressions; furthermore, they could recall them with the left side of the mouth opening wider than the right. This supports findings from previous studies indicating the possible involvement of the brain's right hemisphere in metaphor processing.

Keywords: metaphor awareness-raising, right hemisphere, L2 politeness, mouth asymmetry

Procedia PDF Downloads 124
400 Lung Tissue Damage under Diesel Exhaust Exposure: Modification of Proteins, Cells and Functions in Just 14 Days

Authors: Ieva Bruzauskaite, Jovile Raudoniute, Karina Poliakovaite, Danguole Zabulyte, Daiva Bironaite, Ruta Aldonyte

Abstract:

Introduction: Air pollution is a growing global problem that has been shown to be responsible for various adverse health outcomes. Immunotoxicity, such as dysregulated inflammation, has been proposed as one of the main mechanisms in air pollution-associated diseases. Chronic obstructive pulmonary disease (COPD) is among the major causes of morbidity and mortality worldwide and is characterized by persistent airflow limitation caused by small airways disease (obstructive bronchiolitis) and irreversible parenchymal destruction (emphysema). The exact pathways of air pollution-induced and -mediated disease states are still not clear. However, modern societies understand the dangers of polluted air, seek to mitigate their effects, and need reliable biomarkers of air pollution. We hypothesise that post-translational modifications of structural proteins, e.g. citrullination, might be a good candidate biomarker. We therefore designed this study, in which mice were exposed to diesel exhaust and the resulting protein modifications and inflammation in lungs and other tissues were assessed. Materials and Methods: To assess the effects of diesel exhaust, an in vivo study was designed. Mice (n=10) were subjected to a daily 2-hour exposure to diesel exhaust for 14 days. Control mice were treated the same way without diesel exhaust. The effects within the lung and other tissues were assessed by immunohistochemistry of formalin-fixed, paraffin-embedded tissues. Levels of inflammation- and citrullination-related markers were investigated, and levels of parenchymal damage were measured. Results: The in vivo study corroborates our own in vitro data and reveals a diesel exhaust-initiated inflammatory shift and modulation of lung peptidyl arginine deiminase 4 (PAD4), a citrullination-associated enzyme. In addition, high levels of citrulline were observed in exposed lung tissue sections, co-localising with increased parenchymal destruction.
Conclusions: Subacute exposure to diesel exhaust renders mice lungs inflammatory and modifies certain structural proteins. Such structural changes may pave a pathway to lost or gained function of the affected molecules and may also propagate autoimmune processes within the lung and systemically.

Keywords: air pollution, citrullination, in vivo, lungs

Procedia PDF Downloads 120
399 Student Authenticity: A Foundation for First-Year Experience Courses

Authors: Amy L. Smith

Abstract:

This study investigates the impact of student authenticity during academic exploration on students' sense of belonging, autonomy, and persistence. The research questions are: How does incorporating authenticity in first-year academic exploration courses impact 1) first-year students’ sense of belonging, autonomy, and persistence? 2) first-year students’ sense of belonging, autonomy, and persistence during the first and second halves of the fall semester? 3) first-year students’ sense of belonging, autonomy, and persistence among various student demographics? Course redesign grounded the curriculum and instruction in student authenticity and created opportunities for students to explore, define, and reflect upon their authenticity during academic exploration. First-year students completed a Likert-type survey at the conclusion of the eight-week academic exploration courses (offered in both the first and second eight weeks of the fall semester). Data analysis included an entropy balancing matching method and t-tests. The findings indicate that integrating authenticity into academic exploration courses for first-year students has a positive impact on students' autonomy and persistence: there is a significant difference between authenticity and first-year students' autonomy (p = 0.00) and persistence (p = 0.01). Academic exploration courses with the underpinnings of authenticity are more effective in the second half of the fall semester: there is a significant difference between courses offered M8A (first half, fall semester) and M8B (second half, fall semester) (p = 0), with M8B courses showing an increase in students' sense of belonging, autonomy, and persistence.
Integrating authenticity into academic exploration courses for first-year students also has a positive impact across student demographics (p = 0.00). There is a significant difference between authenticity and low-income (p = 0.04), first-generation (p = 0.00), Caucasian (p = 0.02), and American Indian/Alaskan Native (p = 0.05) first-year students' sense of belonging, autonomy, and persistence. Academic exploration courses embedded in authenticity help develop first-year students’ sense of belonging, autonomy, and persistence, traits associated with effective college students. As first-year students engage in content courses, professors can empower students to engage more deeply in their learning process by relating content to students' authenticity and helping students think critically about how content is authentic to them: how students' authenticity relates to the content, and how students can take their content expertise into the future in ways that, to the student, authentically contribute to the greater good. A broader conversation within higher education needs to include 1) designing courses that allow students to develop and reflect upon their authenticity, formulating answers to the questions: who am I, who am I becoming, and how will I move my authentic self forward; and 2) a discussion of how to shift from the university shaping students to the university facilitating the process of students shaping themselves.
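The group comparisons reported above rest on two-sample t-tests. A minimal sketch of Welch's t-statistic on invented Likert-scale scores (these numbers are placeholders, not the study's survey data, and the sketch omits the entropy-balancing weights the study applies first):

```python
from statistics import mean, variance

def welch_t(x, y):
    """Welch's two-sample t-statistic (robust to unequal variances):
    difference of means divided by the pooled standard error."""
    nx, ny = len(x), len(y)
    vx, vy = variance(x), variance(y)
    return (mean(x) - mean(y)) / ((vx / nx + vy / ny) ** 0.5)

# Hypothetical autonomy scores (1-5 scale) for redesigned vs. standard
# course sections; placeholder values for illustration only.
redesigned = [4.2, 4.5, 3.9, 4.8, 4.1, 4.6, 4.3, 4.0]
standard = [3.5, 3.8, 3.2, 3.9, 3.6, 3.4, 3.7, 3.3]
t = welch_t(redesigned, standard)
```

A p-value would then be read from the t-distribution with Welch-Satterthwaite degrees of freedom; in practice a statistics package handles that step.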

Keywords: authenticity, first-year experience, sense of belonging, autonomy, persistence

Procedia PDF Downloads 107
398 Indirect Genotoxicity of Diesel Engine Emission: An in vivo Study Under Controlled Conditions

Authors: Y. Landkocz, P. Gosset, A. Héliot, C. Corbière, C. Vendeville, V. Keravec, S. Billet, A. Verdin, C. Monteil, D. Préterre, J-P. Morin, F. Sichel, T. Douki, P. J. Martin

Abstract:

Air pollution produced by automobile traffic is one of the main sources of pollutants in the urban atmosphere and is largely due to the exhaust of diesel-powered vehicles. In 2012, the International Agency for Research on Cancer, part of the World Health Organization, classified diesel engine exhaust as carcinogenic to humans (Group 1), based on sufficient evidence that exposure is associated with an increased risk of lung cancer. Among the strategies aimed at limiting exhaust emissions in light of the health impact of automobile pollution, filtration of the emissions and the use of biofuels are being developed, but their toxicological impact is largely unknown. Diesel exhausts are complex mixtures of toxic substances that are difficult to study from a toxicological point of view, owing to the necessary characterization of the pollutants, sampling difficulties, potential synergy between the compounds, and the wide variety of biological effects. Here, we studied the potential indirect genotoxicity of diesel engine emissions through on-line exposure of rats in inhalation chambers to a subchronic, high but realistic dose. Following exposure to standard gasoil with or without rapeseed methyl ester, sampled either upstream or downstream of a particle filter, or control treatment, rats were sacrificed and their lungs collected. The following indirect genotoxicity parameters were measured: (i) telomerase activity and telomere length, together with rTERT and rTERC gene expression by RT-qPCR, on frozen lungs; and (ii) γH2AX quantification, representing double-strand DNA breaks, by immunohistochemistry on formalin-fixed, paraffin-embedded (FFPE) lung samples.
These preliminary results will then be combined with global cellular responses analyzed by pan-genomic microarrays, monitoring of oxidative stress, and quantification of primary DNA lesions, in order to identify biological markers associated with a potential pro-carcinogenic response to diesel or biodiesel, with or without filters, in a relevant in vivo exposure system.
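Relative gene expression from RT-qPCR of the kind planned for rTERT and rTERC is conventionally computed with the Livak 2^-ΔΔCt method. A sketch with invented Ct values (the gene names come from the abstract; the numbers and the choice of reference gene are placeholders):

```python
def fold_change(ct_target_sample, ct_ref_sample,
                ct_target_control, ct_ref_control):
    """Livak 2^-ddCt relative expression.
    dCt normalizes the target gene to a reference (housekeeping) gene
    within each group; ddCt compares exposed vs. control samples;
    2**-ddCt converts the cycle difference into a fold change."""
    d_sample = ct_target_sample - ct_ref_sample
    d_control = ct_target_control - ct_ref_control
    return 2 ** -(d_sample - d_control)

# Hypothetical Ct values: a target gene (e.g. rTERT) in exposed vs.
# control lungs, each normalized to a housekeeping gene.
fc = fold_change(ct_target_sample=24.0, ct_ref_sample=18.0,
                 ct_target_control=26.0, ct_ref_control=18.0)
# dCt_exposed = 6, dCt_control = 8, ddCt = -2, so fold change = 4.0
```

The method assumes near-100% amplification efficiency for both genes; efficiency-corrected variants exist when that assumption fails.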

Keywords: diesel exhaust exposed rats, γH2AX, indirect genotoxicity, lung carcinogenicity, telomerase activity, telomeres length

Procedia PDF Downloads 371
397 Care at the Intersection of Biomedicine and Traditional Chinese Medicine: Narratives of Integration, Negotiation, and Provision

Authors: Jessica Ding

Abstract:

The field of global health is currently advocating a resurgence in the use of traditional medicines to improve people-centered care. Healthcare policies are rapidly changing in response; in China, the increasing presence of TCM in the same spaces as biomedicine has led to a new term: integrative medicine. However, the existence of TCM as a part of integrative medicine creates a pressing paradoxical tension in which TCM is both a marginalized system within ‘modern’ hospitals and a modality worth integrating. Additionally, the impact of such shifts has not been fully explored: the World Health Organization, for one, focuses on only three angles (practices, products, and practitioners) with regard to traditional medicines. Through ten weeks of fieldwork conducted at an urban hospital in Shanghai, China, this research expands the perspective of existing strategies by looking at integrative care through a fourth lens: patients and families. Understanding self-care, health-seeking behavior, and non-professional caregiving structures is critical to grasping the significance of traditional medicine for people-centered care. Indeed, these individual and informal healthcare expectations align with the very spaces and needs that traditional medicine filled before such ideas of integration. The research specifically examines three processes that operationalize experiences of care: (1) how aspects of TCM are valued within integrative medicine, (2) how negotiations of care occur between patients and doctors, and (3) how 'good quality' caregiving presents in integrative clinical spaces. This research hopes to lend insight into how culturally embedded traditions, bureaucratic and institutional rationalities, and social patterns of health-seeking behavior shape illness experiences at the intersection of two medical modalities.
This analysis of patients’ clinical and illness experiences serves to enrich narratives of integrative medical care’s ability to provide patient-centered care and to determine how international policies are realized at the individual level. This anthropological study of the integration of traditional Chinese medicine in local contexts can reveal the extent to which global strategies, as promoted by the WHO and the Chinese government, actually align with the expectations and perspectives of patients receiving care. Ultimately, this ethnographic analysis of a local Chinese context hopes to inform global policies regarding the future use and integration of traditional medicines.

Keywords: emergent systems, global health, integrative medicine, traditional Chinese medicine, TCM

Procedia PDF Downloads 122
396 A Selective and Fast Hydrogen Sensor Using Doped-LaCrO₃ as Sensing Electrode

Authors: He Zhang, Jianxin Yi

Abstract:

As a clean energy carrier, hydrogen shows many advantages, such as renewability, a high heating value, and abundant sources, and may play an important role in future society. However, hydrogen is a combustible gas with a low ignition energy (0.02 mJ) and wide explosive limits (4%–74% in air). It is very likely to cause a fire or explosion if a leak occurs and is not detected in time. Mixed-potential type sensors have attracted much attention for monitoring and detecting hydrogen due to their high response, simple support electronics, and long-term stability. Typically, this kind of sensor consists of a sensing electrode (SE), a reference electrode (RE), and a solid electrolyte. The SE and RE materials usually display different electrocatalytic activities toward hydrogen, so hydrogen can be detected by measuring the EMF change between the two electrodes. Previous reports indicate that a high-performance sensing electrode is important for improving the sensing characteristics of the sensor. In this report, a planar mixed-potential hydrogen sensor using La₀.₈Sr₀.₂Cr₀.₅Mn₀.₅O₃₋δ (LSCM) as the SE, Pt as the RE, and yttria-stabilized zirconia (YSZ) as the solid electrolyte was developed. LSCM was selected as the sensing electrode because it shows high electrocatalytic activity toward hydrogen in solid oxide fuel cells. The sensing performance of the fabricated LSCM/YSZ/Pt sensor was tested systematically. The experimental results show that the sensor displays a high response to hydrogen: the response values for 100 ppm and 1000 ppm hydrogen at 450 °C are -70 mV and -118 mV, respectively. Response time is an important parameter for evaluating a sensor; here, the response time decreases with increasing hydrogen concentration and saturates above 500 ppm. The steady response time at 450 °C is as short as 4 s, indicating that the sensor shows great potential for practical hydrogen monitoring.
Excellent response repeatability to 100 ppm hydrogen at 450 °C and good reproducibility among three sensors were also observed. Meanwhile, the sensor exhibits excellent selectivity to hydrogen over several interfering gases, such as NO₂, CH₄, CO, C₃H₈, and NH₃. Polarization curves were measured to investigate the sensing mechanism, and the results indicate that the sensor obeys the mixed-potential mechanism.
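Mixed-potential sensors of this type typically follow a log-linear response law, EMF = a + b·log₁₀(C). Taking the two response values reported above (-70 mV at 100 ppm and -118 mV at 1000 ppm, 450 °C) gives a sensitivity near -48 mV per decade. A sketch of that calibration; the log-linear form is a standard assumption for this sensor class, not something the abstract states explicitly:

```python
import math

def fit_log_linear(points):
    """Fit EMF = a + b*log10(C) through two calibration points,
    the usual response law for mixed-potential gas sensors."""
    (c1, e1), (c2, e2) = points
    b = (e2 - e1) / (math.log10(c2) - math.log10(c1))  # mV per decade
    a = e1 - b * math.log10(c1)                        # intercept (mV)
    return a, b

def concentration(emf, a, b):
    """Invert the calibration to estimate H2 concentration (ppm)."""
    return 10 ** ((emf - a) / b)

# Calibration from the reported responses at 450 degC
a, b = fit_log_linear([(100, -70.0), (1000, -118.0)])
# b = -48 mV/decade sensitivity, a = 26 mV intercept
ppm = concentration(-94.0, a, b)   # midpoint EMF maps to ~316 ppm
```

The midpoint EMF lands on the geometric mean of the two calibration concentrations, as expected for a logarithmic response.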

Keywords: fire hazard, H₂ sensor, mixed-potential, perovskite

Procedia PDF Downloads 155
395 Awarding Copyright Protection to Artificial Intelligence Technology for its Original Works: The New Way Forward

Authors: Vibhuti Amarnath Madhu Agrawal

Abstract:

Artificial Intelligence (AI) and Intellectual Property are two emerging fields that are growing at a fast pace and have the potential to make a huge impact on the economy in the coming years. In simple words, AI is work done by a machine without human intervention: coded software embedded in a machine which, over a period of time, develops its own intelligence and begins to make its own decisions and judgments by studying various patterns of how people think, react to situations, and perform tasks, among others. Intellectual Property, especially Copyright Law, on the other hand, protects the rights of individuals and companies in content creation, which primarily involves the application of intellect, originality, and the expression of the same in some tangible form. According to recent media reports, ChatGPT, an AI-powered chatbot, has been involved in the creation of a wide variety of original content, including but not limited to essays, emails, plays, and poetry. There have also been instances in which AI technology has given creative inputs for background, lighting, and costumes, among others, for films. Copyright Law offers protection to all of these kinds of content and much more. Considering the two key parameters of Copyright, application of intellect and originality, the question therefore arises: will awarding Copyright protection to a person who has not directly invested his or her intellect in the creation of that content go against the basic spirit of Copyright law? This study aims to analyze the current scenario and answer the following questions: a) If the content generated by AI technology satisfies the basic criteria of originality and expression in a tangible form, why should such content be denied protection in the name of its creator, i.e., the specific AI tool or technology?
b) Considering the increasing role and development of AI technology in our lives, should it be given the status of a ‘Legal Person’ in law? c) If yes, what should be the modalities of awarding protection to the works of such a Legal Person and of managing the same? Considering current trends and the pace at which AI is advancing, the day is not far off when AI will start functioning autonomously in the creation of new works. Current data and opinions on this issue are globally divided and lack uniformity. To fill the existing gaps, data obtained from the Copyright offices of the world's top economies have been analyzed, and the role and functioning of various Copyright Societies in these countries have been studied in detail. This paper provides a roadmap that can be adopted to satisfy the various objectives, constraints, and dynamic conditions related to AI technology and its protection under Copyright Law.

Keywords: artificial intelligence technology, copyright law, copyright societies, intellectual property

Procedia PDF Downloads 52
394 Hybrid Nanostructures of Acrylonitrile Copolymers

Authors: A. Sezai Sarac

Abstract:

Acrylonitrile (AN) copolymers with typical comonomers such as vinyl acetate (VAc) or methyl acrylate (MA) exhibit better mechanical behavior than the homopolymer. To increase the processability of the conjugated polymer and to obtain a hybrid nanostructure, multi-step emulsion polymerization was applied. Such products could be used in, e.g., drug-delivery systems, biosensors, gas sensors, and electronic components. Incorporation of flexible comonomers weakens the dipolar interactions among CN groups and thereby decreases the melting point or increases the decomposition temperature of PAN-based copolymers. Hence, it is important to consider the effect of the comonomer on the properties of PAN-based copolymers. The comonomer content of acrylonitrile–vinyl acetate (AN–VAc) copolymers has a significant effect on their thermal behavior, and these copolymers are also of interest as precursors in the production of high-strength carbon fibers. AN is copolymerized with one or two comonomers, particularly with vinyl acetate. The copolymer of AN and VAc can be used either as a plastic (VAc > 15 wt %) or as microfibers (VAc < 15 wt %). AN provides the copolymer with good processability and electrochemical and thermal stability; VAc provides mechanical stability. The free radical copolymerization of AN and VAc, core-shell structured polypyrrole composites, and nanofibers of poly(m-anthranilic acid)/polyacrylonitrile blends were recently studied. Free radical copolymerization of acrylonitrile with different comonomers, i.e., acrylates and styrene, was carried out using ammonium persulfate (APS) in the presence of a surfactant, and in-situ polymerization of conjugated polymers was performed in this reaction medium to obtain core-shell nanoparticles. Nanofibers of such nanoparticles were obtained by electrospinning. The morphological properties of the nanofibers were investigated by scanning electron microscopy (SEM) and atomic force microscopy (AFM).
The nanofibers were characterized using Fourier transform infrared–attenuated total reflectance spectroscopy (FTIR-ATR), nuclear magnetic resonance spectroscopy (1H-NMR), differential scanning calorimetry (DSC), thermogravimetric analysis (TGA), and electrochemical impedance spectroscopy. The electrochemical impedance results of the nanofibers were fitted to an equivalent circuit model (ECM).

Keywords: core shell nanoparticles, nanofibers, acrylonitrile copolymers, hybrid nanostructures

Procedia PDF Downloads 364
393 Expression of Ki-67 in Multiple Myeloma: A Clinicopathological Study

Authors: Kangana Sengar, Sanjay Deb, Ramesh Dawar

Abstract:

Introduction: Ki-67 can be a useful marker for determining proliferative activity in patients with multiple myeloma (MM). However, using Ki-67 alone results in the erroneous inclusion of non-myeloma cells, leading to falsely high counts. We used dual IHC (immunohistochemistry) staining with Ki-67 and CD138 to enhance specificity in assessing the proliferative activity of bone marrow plasma cells. Aims and objectives: To estimate the proportion of proliferating (Ki-67-expressing) plasma cells in patients with MM and to correlate Ki-67 with other known prognostic parameters. Materials and Methods: Fifty FFPE (formalin-fixed paraffin-embedded) blocks of trephine biopsies of cases diagnosed as MM from 2010 to 2015 were subjected to H&E staining and dual IHC staining for CD138 and Ki-67. H&E staining was done to evaluate various histological parameters such as the percentage of plasma cells, the pattern of infiltration (nodular, interstitial, mixed, and diffuse), and routine parameters of marrow cellularity and hematopoiesis. Clinical data were collected from patient records in the Medical Record Department. Each CD138-expressing cell (cytoplasmic, red) was scored as a proliferating plasma cell (containing a brown Ki-67-positive nucleus) or a non-proliferating plasma cell (containing a blue, counter-stained, Ki-67-negative nucleus). Ki-67 was measured as percentage positivity, with a maximum score of one hundred percent and a minimum of zero percent. The intensity of staining was not relevant. Results: Statistically significant correlations of Ki-67 with D-S stage (Durie & Salmon stage) I vs. III (p=0.026), ISS (International Staging System) stage I vs. III (p=0.019), β2m (p=0.029), and percentage of plasma cells (p < 0.001) were seen. No statistically significant correlation was seen between Ki-67 and hemoglobin, platelet count, total leukocyte count, total protein, albumin, serum calcium, serum creatinine, serum LDH, blood urea, or pattern of infiltration. 
Conclusion: The Ki-67 index correlated with other known prognostic parameters. However, it is not determined routinely in patients with MM because of the little information available regarding its relevance and the paucity of studies correlating it with other known prognostic factors in MM patients. To the best of our knowledge, this is the first study in India using dual IHC staining for Ki-67 and CD138 in MM patients. Routine determination of Ki-67 will help to identify patients who may benefit from more aggressive therapy. Recommendation: This study did not include patient follow-up, and the sample size was small. A study with a larger sample size and long follow-up is advocated to establish Ki-67 as a prognostic marker of survival in patients with multiple myeloma.
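The dual-stain scoring described above reduces to a simple proportion: among CD138-positive plasma cells, the percentage bearing a Ki-67-positive nucleus. A minimal sketch of that calculation (the counts below are hypothetical, not study data):

```python
def ki67_index(ki67_positive: int, ki67_negative: int) -> float:
    """Ki-67 index: percentage of CD138+ plasma cells whose nucleus is
    Ki-67-positive (brown) rather than counter-stained blue."""
    total = ki67_positive + ki67_negative
    if total == 0:
        raise ValueError("no plasma cells counted")
    return 100.0 * ki67_positive / total

# Hypothetical counts for one biopsy: 12 proliferating and 188
# non-proliferating plasma cells
print(ki67_index(12, 188))  # → 6.0
```

Staining intensity is deliberately ignored, matching the scoring rule stated above.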

Keywords: bone marrow, dual IHC, Ki-67, multiple myeloma

Procedia PDF Downloads 121
392 Sub-Optimum Safety Performance of a Construction Project: A Multilevel Exploration

Authors: Tas Yong Koh, Steve Rowlinson, Yuzhong Shen

Abstract:

In construction safety management, safety climate has long been linked to workers' safety behaviors and performance. For this reason, safety climate concepts and tools have been used as heuristics to diagnose a range of safety-related issues by some progressive contractors in Hong Kong and elsewhere. However, as a diagnostic tool, safety climate tends to treat the different components of the climate construct in a linear fashion. Safety management in construction projects is, in reality, a multi-faceted and multilevel phenomenon that resembles a complex system. Hence, understanding safety management in construction projects requires not only an understanding of safety climate but also of the organizational-systemic nature of the phenomenon. Our involvement in, diagnoses of, and interpretations of a range of safety climate-related issues that culminated in sub-optimum safety performance on an infrastructure construction project brought about this revelation. In this study, a range of data types was collected from various hierarchies of the project site organization, including frontline workers and supervisors from the main and sub-contractors and client supervisory personnel. Data collection was performed through the administration of a safety climate questionnaire, interviews, observation, and document study. The findings collectively indicate that what emerged in parallel with the seemingly linear climate-based exploration is an exposition of the organizational-systemic nature of the phenomenon. The results indicate that mismatched climate perceptions, insufficient work planning and risk management, mixed safety leadership, negative workforce attributes, lapsed safety enforcement, and resource shortages collectively gave rise to the project's sub-optimum safety performance. 
From the dynamic causation and multilevel perspectives, the analyses show that individual-, group-, and organizational-level issues are interrelated and that these interrelationships are linked to a negative safety climate. Hence, the adoption of both perspectives has enabled a fuller understanding of the phenomenon of safety management, pointing to the need for an organizational-systemic intervention strategy. The core message is that intervention at the individual level will meet with only limited success if the risks embedded at the higher levels of the group and project organization are not addressed. The findings can be used to guide the effective development of safety infrastructure by linking the different levels of systems in a construction project organization.

Keywords: construction safety management, dynamic causation, multilevel analysis, safety climate

Procedia PDF Downloads 152
391 Study into the Interactions of Primary Limbal Epithelial Stem Cells and hTCEpi Using a Tissue Engineered Cornea

Authors: Masoud Sakhinia, Sajjad Ahmad

Abstract:

Introduction: Though knowledge of the compositional makeup and structure of the limbal niche has progressed exponentially during the past decade, much is yet to be understood. Identifying the precise profile and role of the stromal makeup that spans the ocular surface may inform researchers of the optimum conditions needed to effectively expand LESCs in vitro while preserving their differentiation status and phenotype. Limbal fibroblasts, as opposed to corneal fibroblasts, are thought to form an important component of the microenvironment where LESCs reside. Methods: The corneal stroma was tissue-engineered in vitro using both limbal and corneal fibroblasts embedded within a 3D collagen matrix. The effects of these two different fibroblast types on LESCs and the hTCEpi corneal epithelial cell line were then determined using phase contrast microscopy, histological analysis, and PCR for specific stem cell markers. The study aimed to develop an in vitro model that could be used to determine whether limbal, as opposed to corneal, fibroblasts maintain the stem cell phenotype of LESCs and the hTCEpi cell line. Results: Tissue culture analysis was inconclusive, and further quantitative analysis is required before remarks can be made on cell proliferation within the varying stroma. Histological analysis of the tissue-engineered cornea showed a structure comparable to that of the human cornea, though with limited epithelial stratification. PCR results for epithelial cell markers of cells cultured on limbal fibroblasts showed reduced expression of CK3, a negative marker for LESCs, while also exhibiting a relatively low expression level of p63, a marker for undifferentiated LESCs. 
Conclusion: We have shown the potential for the construction of a tissue-engineered human cornea using a 3D collagen matrix and described some preliminary results from the analysis of the effects of varying stroma, consisting of limbal and corneal fibroblasts respectively, on the proliferation and stem cell phenotype of primary LESCs and hTCEpi corneal epithelial cells. Although no definitive marker exists to conclusively demonstrate the presence of LESCs, the combination of positive and negative stem cell markers in our study was inconclusive. Though less translatable to the human corneal model, the use of conditioned medium from limbal and corneal fibroblasts may provide a simpler avenue. Moreover, combinations of extracellular matrices could be used as a surrogate in these culture models.

Keywords: cornea, limbal stem cells, tissue engineering, PCR

Procedia PDF Downloads 256
390 Projected Uncertainties in Herbaceous Production Result from Unpredictable Rainfall Pattern and Livestock Grazing in a Humid Tropical Savanna Ecosystem

Authors: Daniel Osieko Okach, Joseph Otieno Ondier, Gerhard Rambold, John Tenhunen, Bernd Huwe, Dennis Otieno

Abstract:

Increased human activities such as grazing, logging, and agriculture, alongside unpredictable rainfall patterns, have been detrimental to ecosystem service delivery, compromising the ecosystem's productivity potential. This study aimed at simulating the impact of drought (50%) and enhanced rainfall (150%) on future herbaceous CO2 uptake, biomass production, and soil C:N dynamics in a humid savanna ecosystem influenced by livestock grazing. Rainfall patterns were simulated using manipulation experiments set up to reduce (50%) and increase (150%) ambient (100%) rainfall amounts in grazed and non-grazed plots. The impact of the manipulated rainfall regime on herbaceous CO2 fluxes, biomass production, and soil C:N dynamics was measured against volumetric soil water content (VWC) logged every 30 minutes using 5TE soil moisture sensors (Decagon Devices Inc., Washington, USA) installed at 20 cm soil depth in every plot. Herbaceous biomass was estimated using the destructive method augmented by standardized photographic imaging. CO2 fluxes were measured using the ecosystem chamber method, and the gas was analyzed using an LI-820 gas analyzer (USA). The C:N ratio was calculated from the soil carbon and nitrogen contents (analyzed using EA2400 CHNS/O and EA2410 N elemental analyzers, respectively) of the different plots under study. The patterning of VWC was directly influenced by rainfall amount, with lower VWC observed in the grazed compared to the non-grazed plots. Rainfall variability, grazing, and their interaction significantly affected changes in VWC (p < 0.05) and subsequently total biomass and CO2 fluxes. VWC had a strong influence on CO2 fluxes under 50% rainfall reduction in the grazed plots (r² = 0.91; p < 0.05) and under ambient rainfall in the ungrazed plots (r² = 0.77; p < 0.05). The dependence of biomass on VWC across plots was stronger under grazed conditions (r² = 0.78–0.87; p < 0.05) than under ungrazed conditions (r² = 0.44–0.85; p < 0.05). 
The C:N ratio was, however, not correlated with VWC across plots. This study provides insight into how humid savannas will respond to the predicted changes in rainfall variability and livestock grazing, and consequently informs the sustainable management of such ecosystems.
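The r² values reported above describe how much of the variance in CO2 flux (or biomass) is explained by VWC in a simple linear fit. A minimal sketch of that computation; the `vwc` and `flux` arrays below are synthetic illustrations, not the study's measurements:

```python
import numpy as np

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    """Coefficient of determination for a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = float(np.sum(residuals ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    return 1.0 - ss_res / ss_tot

# Synthetic illustration (NOT study data): flux rising roughly linearly
# with soil water content, plus noise.
rng = np.random.default_rng(0)
vwc = rng.uniform(0.05, 0.35, 60)                   # volumetric soil water content
flux = 2.0 + 15.0 * vwc + rng.normal(0.0, 0.5, 60)  # CO2 flux, arbitrary units
print(round(r_squared(vwc, flux), 2))
```

The same function applied to the biomass series per plot would reproduce the per-treatment r² comparisons reported in the abstract.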

Keywords: CO2 fluxes, rainfall manipulation, soil properties, sustainability

Procedia PDF Downloads 108
389 Human Capital Divergence and Team Performance: A Study of Major League Baseball Teams

Authors: Yu-Chen Wei

Abstract:

The relationship between organizational human capital and organizational effectiveness has been a common topic of interest to organization researchers. Much of this research has concluded that higher human capital predicts greater organizational outcomes. Whereas human capital research has traditionally focused on organizations, the current study turns to team-level human capital. In addition, there are no known empirical studies assessing the effect of human capital divergence on team performance. Team human capital refers to the sum of the knowledge, ability, and experience embedded in team members. Team human capital divergence is defined as the variation of human capital within a team. This study is among the first to assess the role of human capital divergence as a moderator of the effect of team human capital on team performance. From the traditional perspective, team human capital represents the collective ability of all team members to solve problems and reduce operational risk. Hence, the higher the team human capital, the higher the team performance. This study further employs social learning theory to explain the relationship between team human capital and team performance. According to this theory, individuals seek progress by learning from teammates. They expect to acquire greater human capital and, in turn, to achieve high productivity, obtain great rewards, and eventually attain career success. Therefore, an individual has more chances to improve his or her capability by learning from peers if the team members have higher average human capital. As a consequence, all team members can develop a quick and effective learning path in their work environment and in turn enhance their knowledge, skill, and experience, leading to higher team performance. This is the first argument of this study. Furthermore, the current study argues that human capital divergence is detrimental to team development. 
Individuals with lower human capital in the team constantly feel pressure from their outstanding colleagues. Under this pressure, they cannot give full play to their own jobs and steadily lose confidence. The most capable members, in turn, are reluctant to work alongside teammates who are less capable than themselves. Besides, they may have lower motivation to move forward because they are already prominent compared with their teammates. Therefore, human capital divergence will moderate the relationship between team human capital and team performance. These two arguments were tested in 510 team-seasons drawn from Major League Baseball (1998–2014). Results demonstrate a positive relationship between team human capital and team performance, consistent with previous research. In addition, the variation of human capital within a team weakens this relationship. That is to say, individuals working with teammates comparable to themselves produce better performance than those working with people whose human capital is far above or far below their own.
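The moderation hypothesis above corresponds to a regression with an interaction term: performance on team human capital, divergence, and their product, where a negative interaction coefficient indicates that divergence weakens the capital–performance link. A sketch on simulated data; the effect sizes are invented for illustration, and only the sample size mirrors the study's 510 team-seasons:

```python
import numpy as np

def ols_coefs(X, y):
    """Ordinary least squares via numpy's least-squares solver (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta  # [intercept, b_capital, b_divergence, b_interaction]

rng = np.random.default_rng(1)
n = 510  # mirrors the 510 team-seasons analyzed in the study
capital = rng.normal(0, 1, n)      # team human capital (standardized)
divergence = rng.uniform(0, 1, n)  # within-team variation in human capital
# Simulated effect (invented): capital raises performance, but divergence
# weakens that relationship via a negative interaction.
performance = (0.6 * capital - 0.3 * capital * divergence
               + rng.normal(0, 0.5, n))

X = np.column_stack([capital, divergence, capital * divergence])
b = ols_coefs(X, performance)
print(b[1] > 0, b[3] < 0)  # expect a positive main effect, negative interaction
```

With real data, the sign and significance of `b[3]` would carry the moderation test.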

Keywords: human capital divergence, team human capital, team performance, team level research

Procedia PDF Downloads 211
388 Practical Challenges of Tunable Parameters in Matlab/Simulink Code Generation

Authors: Ebrahim Shayesteh, Nikolaos Styliaras, Alin George Raducu, Ozan Sahin, Daniel Pombo VáZquez, Jonas Funkquist, Sotirios Thanopoulos

Abstract:

One of the important requirements in many code generation projects is defining some of the model parameters as tunable. This makes it possible to update the model parameters without performing the code generation again. This paper studies the concept of embedded code generation by the MATLAB/Simulink coder targeting the TwinCAT Simulink system. The generated runtime modules are then tested and deployed to the TwinCAT 3 engineering environment. However, defining parameters as tunable in MATLAB/Simulink code generation targeting TwinCAT is not very straightforward. This paper focuses on this subject and reviews some of the techniques tested here to make parameters tunable in the generated runtime modules. Three techniques are examined for this purpose: normal tunable parameters, callback functions, and mask subsystems. Moreover, some test Simulink models are developed and used to evaluate the proposed approaches. A brief summary of the study results is presented in the following. First of all, parameters defined as tunable and used to define the values of other Simulink elements (e.g., the gain value of a gain block) can be changed after the code generation, and such an update affects the values of all elements defined based on the tunable parameter. For instance, if parameter K=1 is defined as a tunable parameter in the code generation process and this parameter is used as the gain of a gain block in Simulink, the gain value of the block is equal to 1 in the TwinCAT environment after the code generation. However, the value of K can be changed to a new value (e.g., K=2) in TwinCAT (without doing any new code generation in MATLAB), and the gain value of the gain block will then change to 2. Secondly, adding a callback function in the form of a 'pre-load function,' 'post-load function,' or 'start function' will not help to make the parameters tunable without performing a new code generation. 
This is because such MATLAB files must be run before performing the code generation; the parameters defined or calculated in these files are used as fixed values in the generated code. Thus, adding these files as callback functions to the Simulink model will not make the parameters flexible, since the MATLAB files are not attached to the generated code. Therefore, to change the parameters defined or calculated in these files, the code generation must be done again. However, adding these files as callback functions forces MATLAB to run them before the code generation, so there is no need to define the parameters mentioned in these files separately. Finally, using a tunable parameter to define or calculate the values of other parameters through a mask subsystem is an efficient method for changing the values of the latter parameters after the code generation. For instance, if tunable parameter K is used in calculating the values of two other parameters, K1 and K2, and the value of K is updated in the TwinCAT environment after the code generation, the values of K1 and K2 will also be updated (without any new code generation).

Keywords: code generation, MATLAB, tunable parameters, TwinCAT

Procedia PDF Downloads 203
387 Commodifying Things Past: Comparative Study of Heritage Tourism Practices in Montenegro and Serbia

Authors: Jovana Vukcevic, Sanja Pekovic, Djurdjica Perovic, Tatjana Stanovcic

Abstract:

This paper presents a critical inquiry into the role of uncomfortable heritage in nation branding, with a particular focus on the specificities of the politics of memory, forgetting, and revisionism in post-communist post-Yugoslavia. It addresses legacies of unwanted, ambivalent, or unacknowledged pasts and the different strategies employed by the former Yugoslav states and private actors in 'rebranding' their heritage, ensuring its preservation while re-contextualizing the narrative of the past through contemporary tourism practices. It questions the interplay between nostalgia, heritage, and the market, and the role of heritage in polishing the history of totalitarian and authoritarian regimes in the Balkans. It argues that in post-socialist Yugoslavia, the necessity to limit associations with the former ideology and the use of the commercial brush in shaping a marketable version of the past instigated the emergence of profit-oriented heritage practices. Building on that argument, the paper addresses these issues as the 'commodification' and 'Disneyfication' of the Balkans' ambivalent heritage, contributing to the analysis of changing forms of memorialization and heritagization practices in Europe. It questions the process of 'coming to terms with the past' through marketable forms of heritage tourism, blurring the boundary between market-driven nostalgia and state-imposed heritage policies. In order to analyze the plurality of ways of dealing with the controversial, ambivalent, and unwanted heritage of dictatorships in the Balkans, the paper considers two prominent examples of heritage commodification in Serbia and Montenegro and the re-appropriation of those narratives for nation branding purposes. 
The first is the story of Tito's Blue Train, a landmark of the socialist past and a symbol of Yugoslavia that is nowadays used for birthday parties and marriage celebrations; the second emphasizes the unusual business arrangement turning the fortress of Mamula, a concentration camp during the Second World War, into a luxurious Mediterranean resort. Questioning how this 'uneasy' past was acknowledged and embedded into official heritage institutions and tourism practices, the study examines the changing relation towards the legacies of dictatorships, inviting us to rethink the economic models of things past. Analysis of these processes should contribute to a better understanding of the new mnemonic strategies and (converging?) ways of 'doing' the past in Europe.

Keywords: commodification, heritage tourism, totalitarianism, Serbia, Montenegro

Procedia PDF Downloads 225
386 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2

Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk

Abstract:

Monitoring the trade-off between biomass production and biodiversity in grasslands is critical to evaluating the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands' characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models may be case-specific. In this study, biomass production and biodiversity indices (species richness and Fisher's α) are modeled in 150 grassland plots at three sites across Germany. These sites represent a north–south gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. The predictors used are derived from Sentinel-1 and -2 and a set of topoedaphic variables. The transferability of the models is tested by training and validating at different sites. The performance of feed-forward deep neural networks (DNN) is compared to that of a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r² = 0.5), predictions of biodiversity indices were poor (r² = 0.14). DNN showed higher generalization capacity than random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for DNN vs. 0.85 for random forest). DNN also achieved high performance when using the Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topoedaphic variables, simplifying dimensionality. This study demonstrates the necessity of training biomass and biodiversity models using a broad range of environmental conditions and of ensuring spatial independence in order to obtain realistic and transferable models in which plot-level information can be upscaled to the landscape scale.
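The DNN-vs-random-forest comparison above is reported in terms of relative root mean squared error. A minimal sketch of that metric, assuming RMSE is normalized by the mean of the observations (the abstract does not state which normalization convention is used), on invented numbers:

```python
import numpy as np

def relative_rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean squared error normalized by the mean of the observations
    (one common convention; the abstract does not specify which is used)."""
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    return rmse / float(np.mean(y_true))

# Hypothetical biomass observations and model predictions (not study data)
y_true = np.array([2.0, 4.0, 6.0, 8.0])
y_pred = np.array([2.5, 3.5, 6.5, 7.5])
print(relative_rmse(y_true, y_pred))  # → 0.1
```

Lower values indicate better fit, so the reported 0.5 (DNN) vs. 0.85 (random forest) favors the DNN.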

Keywords: ecosystem services, grassland management, machine learning, remote sensing

Procedia PDF Downloads 190
385 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data

Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton

Abstract:

The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. As part of this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, the processes must first be digitized: the conversion of information from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data. Research papers were classified according to the type of research and the type of contribution to the research area. Upon analyzing the 54 papers identified in this area, it was noted that 23 of them originated in Germany. This is an unsurprising finding, as Industry 4.0 is originally a German strategy, with strong supporting policy instruments being utilized in Germany to support its implementation. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing institution, with three papers published. The literature suggested future research directions and highlighted one specific gap in the area: an unresolved gap between data science experts and manufacturing process experts in industry. Data analytics expertise is not useful unless the manufacturing process information is utilized. 
A legitimate understanding of the data is crucial to performing accurate analytics and gaining true, valuable insights into the manufacturing process. There lies a gap between manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test whether this gap exists, the researcher initiated an industrial case study in which they embedded themselves between the subject matter expert of the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 contributed a framework, another 12 were based on a case study, and 11 focused on theory. However, only three papers contributed a methodology. This provides further evidence of the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.

Keywords: analytics, digitization, industry 4.0, manufacturing

Procedia PDF Downloads 85
384 Automated Manual Handling Risk Assessments: Practitioner Experienced Determinants of Automated Risk Analysis and Reporting Being a Benefit or Distraction

Authors: S. Cowley, M. Lawrance, D. Bick, R. McCord

Abstract:

Technology that automates manual handling (musculoskeletal disorder, or MSD) risk assessments is increasingly available to ergonomists, engineers, and generalist health and safety practitioners alike. The risk assessment process is generally based on the use of wearable motion sensors that capture information about worker movements for real-time or post-hoc analysis. Traditionally, MSD risk assessment is undertaken with the assistance of a checklist, such as that from the SafeWork Australia code of practice, with the expert assessor observing the task and ideally engaging with the worker in a discussion about the detail. Automation enables the non-expert to complete assessments and does not always require the assessor to be present. This clearly has cost and time benefits for the practitioner, but is it an improvement on assessment by a human? Human risk assessments draw on the knowledge and expertise of the assessor but, like all risk assessments, are highly subjective. The complexity of the checklists and models used in the process can be off-putting and sometimes leads to the assessment becoming the focus and the end rather than a means to an end; the focus on risk control is lost. Automated risk assessment handles the complexity of the assessment for the assessor and delivers a simple risk score that enables decision-making regarding risk control. Being machine-based, such assessments are objective and will deliver the same result each time an identical task is assessed. However, the WHS professional needs to know whether this emergent technology asks the right questions and delivers the right answers, and whether it improves the risk assessment process and its results or simply distances the professional from the task and the worker. They need clarity as to whether automation of manual task risk analysis and reporting leads to risk control or merely to a focus on the worker. 
Critically, they need evidence as to whether automation in this area of hazard management leads to better risk control or just a bigger collection of assessments. This examination of the practitioner-experienced determinants of automated manual task risk analysis and reporting being a benefit or a distraction will provide an understanding of emergent risk assessment technology, its use, and the things to consider when making decisions about adopting and applying these technologies.

Keywords: automated, manual-handling, risk-assessment, machine-based

Procedia PDF Downloads 97
383 Examination of Relationship between Internet Addiction and Cyber Bullying in Adolescents

Authors: Adem Peker, Yüksel Eroğlu, İsmail Ay

Abstract:

As information and communication technologies have become embedded in the everyday life of adolescents, both their possible benefits and their risks to adolescents are being identified. Information and communication technologies provide opportunities for adolescents to connect with peers and to access information. However, as with other social connections, users of information and communication devices have the potential to meet and interact with each other in harmful ways. One emerging example of such interaction is cyber bullying. Cyber bullying occurs when someone uses information and communication technologies to harass or embarrass another person. It can take the form of malicious text messages and e-mails, spreading rumours, and excluding people from online groups. Cyber bullying has been linked to psychological problems for both cyber bullies and victims. Therefore, it is important to determine how internet addiction contributes to cyber bullying. Building on this question, this study takes a closer look at the relationship between internet addiction and cyber bullying. For this purpose, based on a descriptive relational model, it was hypothesized that loss of control, excessive desire to stay online, and negativity in social relationships, which are dimensions of internet addiction, would be positively associated with cyber bullying and victimization. Participants were 383 high school students (176 girls and 207 boys; mean age 15.7 years). Internet addiction was measured using the Internet Addiction Scale, and the Cyber Victim and Bullying Scale was utilized to measure cyber bullying and victimization. The scales were administered to the students in groups in their classrooms. Stepwise regression analyses were utilized to examine the relationships between the dimensions of internet addiction and cyber bullying and victimization. Before applying stepwise regression analysis, the assumptions of regression were verified. 
According to the stepwise regression analysis, cyber bullying was predicted by loss of control (β=.26, p<.001) and negativity in social relationships (β=.13, p<.001). These variables accounted for 9% of the total variance, with loss of control explaining the larger share (8%). On the other hand, cyber victimization was predicted by loss of control (β=.19, p<.001) and negativity in social relationships (β=.12, p<.001). These variables altogether accounted for 8% of the variance in cyber victimization, with loss of control the best predictor (7% of the total variance). The results demonstrated that, as expected, loss of control and negativity in social relationships positively predicted cyber bullying and victimization. However, excessive desire to stay online did not emerge as a significant predictor of either cyber bullying or victimization. Consequently, this study enhances our understanding of the predictors of cyber bullying and victimization, since the results suggest that internet addiction is related to cyber bullying and victimization.
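The stepwise procedure reported above can be sketched as greedy forward selection by incremental R². The code below is an illustrative reconstruction on simulated data, not the study's dataset; the predictor names are taken from the abstract, but the effect sizes are assumptions chosen so that, as in the study, `excessive_desire` contributes little:

```python
import numpy as np

def r2(cols, y):
    """R² of an OLS fit of y on the given predictor columns (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def forward_select(predictors, y, threshold=0.01):
    """Greedy forward selection: repeatedly add the predictor that most
    improves R², stopping when the best gain falls below the threshold."""
    chosen, best = [], 0.0
    remaining = dict(predictors)
    while remaining:
        gains = {name: r2([predictors[c] for c in chosen] + [col], y) - best
                 for name, col in remaining.items()}
        name, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain < threshold:
            break
        chosen.append(name)
        best += gain
        del remaining[name]
    return chosen, best

# Simulated data: sample size mirrors the study's 383 students; the effect
# sizes below are invented for illustration only.
rng = np.random.default_rng(2)
n = 383
predictors = {
    "loss_of_control": rng.normal(0, 1, n),
    "negativity": rng.normal(0, 1, n),
    "excessive_desire": rng.normal(0, 1, n),
}
bullying = (0.3 * predictors["loss_of_control"]
            + 0.15 * predictors["negativity"]
            + rng.normal(0, 1, n))
chosen, total_r2 = forward_select(predictors, bullying)
print(chosen, round(total_r2, 3))
```

Real stepwise procedures typically use F-tests or p-value thresholds to enter and remove predictors; the simple R²-gain rule here is a simplification of that idea.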

Keywords: cyber bullying, internet addiction, adolescents, regression

Procedia PDF Downloads 292
382 On Grammatical Metaphors: A Corpus-Based Reflection on the Academic Texts Written in the Field of Environmental Management

Authors: Masoomeh Estaji, Ahdie Tahamtani

Abstract:

Considering the necessity of conducting research and publishing academic papers during Master’s and Ph.D. programs, graduate students are in dire need of improving their writing skills through either writing courses or self-study. One key feature that can make academic papers look more sophisticated is the application of grammatical metaphors (GMs). These metaphors represent ‘non-congruent’ and ‘implicit’ ways of realizing meaning, through which one grammatical category is replaced by another, more implied counterpart, which can alter the readers’ understanding of the text as well. Although a number of studies have been conducted on the application of GMs across various disciplines, almost none has been devoted to the field of environmental management, and the scope of the previous studies has been relatively limited compared to the present work. In the current study, attempts were made to analyze the different types of GMs used in academic papers published in top-tier journals in the field of environmental management and to compile a list of the most frequently used GMs based on their functions in this particular discipline, so as to make the teaching of academic writing courses more explicit and the composition of academic texts better structured. To fulfill these purposes, a corpus-based analysis was run based on the two theoretical models of Martin et al. (1997) and Liardet (2014). Through two stages of manual analysis and concordancing, ten recent academic articles comprising 132,490 words, published in two prestigious journals, were precisely scrutinized. The results showed that, across the whole IMRaD sections of the articles, material processes were the most frequent type of ideational GM. The relational and mental categories ranked second and third, respectively. Regarding the use of interpersonal GMs, objective expanding metaphors were the most numerous.
In contrast, subjective interpersonal metaphors, whether expanding or contracting, were the least frequent. This suggests that scholars in the field of environmental management tend to focus on the main procedures and explain technical phenomena in detail, rather than compare and contrast other statements and subjective beliefs. Moreover, since no instances of verbal ideational metaphors were detected, it could be deduced that the act of ‘saying or articulating’ something might be against the standards of the academic genre. Another assumption would be that the application of ideational GMs is context-embedded and that the more technical they are, the less frequent they become. For further studies, it is suggested that the employment of GMs be studied in a wider scope and in other disciplines, and that the third type of GM, known as ‘textual’ metaphors, be included as well.
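As a rough illustration of the concordancing stage, the following heuristic flags candidate nominalizations, one common surface trace of ideational GM. This is not the study's coding scheme, which relied on manual functional analysis under Martin et al. (1997) and Liardet (2014); the suffix list and the sample sentence are invented for illustration.

```python
import re
from collections import Counter

# Suffixes that often signal a process re-construed as a noun
# (one common surface trace of an ideational grammatical metaphor).
NOMINAL_SUFFIXES = ("tion", "sion", "ment", "ness", "ance", "ence", "ity")

def gm_candidates(text):
    # Flag candidate nominalizations; a human coder must still confirm
    # each hit, since suffix matching yields false positives.
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words
                   if len(w) > 6 and w.endswith(NOMINAL_SUFFIXES))

sample = ("The implementation of the assessment caused rapid degradation "
          "of water quality and required continuous measurement of salinity.")
counts = gm_candidates(sample)
```

Note the false positives ("quality", "salinity" are flagged alongside genuine nominalizations like "implementation"), which is precisely why a manual confirmation stage is indispensable in this kind of analysis.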

Keywords: English for specific purposes, grammatical metaphor, academic texts, corpus-based analysis

Procedia PDF Downloads 146
381 Study on Control Techniques for Adaptive Impact Mitigation

Authors: Rami Faraj, Cezary Graczykowski, Błażej Popławski, Grzegorz Mikułowski, Rafał Wiszowaty

Abstract:

Progress in the fields of sensors, electronics and computing results in increasingly frequent applications of adaptive techniques for dynamic response mitigation. When it comes to systems excited by mechanical impacts, the control system has to take into account the significant limitations of the actuators responsible for system adaptation. The paper provides a comprehensive discussion of the problem of appropriate design and implementation of adaptation techniques and mechanisms. Two case studies are presented in order to compare completely different adaptation schemes. The first example concerns a double-chamber pneumatic shock absorber with a fast piezo-electric valve and parameters corresponding to the suspension of a small unmanned aerial vehicle, whereas the second considered system is a safety air cushion applied for the evacuation of people from heights during a fire. For both systems it is possible to ensure adaptive performance, but the realization of the system’s adaptation is completely different. The reason for this is the technical limitations corresponding to the specific types of shock-absorbing devices and their parameters. Impact mitigation using the pneumatic shock absorber involves much higher pressures and small mass flow rates, which can be achieved with minimal changes of valve opening. In turn, mass flow rates in safety air cushions relate to gas release areas measured in thousands of square centimetres. Because of these facts, the two shock-absorbing systems are controlled with completely different approaches. The pneumatic shock absorber takes advantage of real-time control, with the valve opening recalculated at least every millisecond. In contrast, the safety air cushion is controlled using a semi-passive technique, where adaptation is provided by predicting the entire impact mitigation process. Similarities of both approaches, including the applied models, algorithms and equipment, are discussed.
The entire study is supported by numerical simulations and experimental tests, which prove the effectiveness of both adaptive impact mitigation techniques.
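The real-time strategy described for the pneumatic shock absorber can be illustrated with a toy control loop in which the valve opening is recomputed at every millisecond step from the current pressure error. The single-chamber dynamics, gains and pressure values below are invented for illustration and are not the authors' model.

```python
def simulate_absorber(p0=300.0, p_target=100.0, steps=200,
                      dt=0.001, k=20.0, gain=0.02):
    # Toy model: chamber pressure decays at a rate proportional to the
    # valve opening; every millisecond the controller recomputes the
    # opening from the pressure error (all parameter values invented).
    p = p0
    history = []
    for _ in range(steps):
        error = p - p_target
        u = min(1.0, max(0.0, gain * error))   # valve opening in [0, 1]
        p -= k * u * p * dt                    # outflow reduces pressure
        history.append((p, u))
    return history

history = simulate_absorber()
final_pressure = history[-1][0]
```

In this sketch the valve saturates fully open while the error is large, then closes proportionally as the pressure approaches the target, which is the qualitative behaviour a millisecond-rate controller makes possible; the semi-passive air-cushion approach would instead precompute the release schedule before the impact.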

Keywords: adaptive control, adaptive system, impact mitigation, pneumatic system, shock-absorber

Procedia PDF Downloads 66
380 Artificial Neural Network Based Model for Detecting Attacks in Smart Grid Cloud

Authors: Sandeep Mehmi, Harsh Verma, A. L. Sangal

Abstract:

Ever since the idea was floated of using computing services as a commodity that can be delivered like other utilities, e.g., electricity and telephone, the scientific fraternity has diverted its research towards a new area called utility computing. New paradigms like cluster computing and grid computing came into existence while edging closer to utility computing. With the advent of the internet, the demand for anytime, anywhere access to resources that could be provisioned dynamically as a service gave rise to the next-generation computing paradigm known as cloud computing. Today, cloud computing has become one of the most rapidly growing computing paradigms, resulting in a growing rate of applications in the area of IT outsourcing. Besides catering to computational and storage demands, cloud computing has economically benefitted almost all fields: education, research, entertainment, medicine, banking, military operations, weather forecasting, business and finance, to name a few. The smart grid is another discipline that stands to benefit greatly from the advantages of cloud computing. The smart grid is a new technology that has revolutionized the power sector by automating the transmission and distribution system and integrating smart devices. A cloud-based smart grid can fulfill the storage requirements of the unstructured and uncorrelated data generated by smart sensors, as well as the computational needs of self-healing, load balancing and demand response features. However, security issues such as confidentiality, integrity, availability, accountability and privacy need to be resolved for the development of the smart grid cloud. In recent years, a number of intrusion prevention techniques have been proposed for the cloud, but hackers/intruders still manage to bypass the security of the cloud. Therefore, precise intrusion detection systems need to be developed in order to secure critical information infrastructure like the smart grid cloud.
Considering the success of artificial neural networks in building robust intrusion detection systems, this research proposes an artificial neural network based model for detecting attacks in the smart grid cloud.
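The proposed network architecture is not specified in this abstract, so as a minimal sketch of the neural approach, the following trains a single logistic neuron (the smallest possible neural network) to separate "normal" from "attack" traffic. The two features, their names, and all data points are invented placeholders.

```python
import math

def train_neuron(samples, labels, epochs=500, lr=0.5):
    # A single logistic neuron trained with plain stochastic gradient
    # descent on the log-loss; a real detector would use a deeper network.
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                       # dLoss/dz for log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    # Classify by the sign of the pre-activation.
    return 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# Invented, pre-normalized features per connection:
# [packet_rate, failed_login_ratio]; label 1 = attack.
X = [[0.10, 0.00], [0.20, 0.10], [0.15, 0.05], [0.30, 0.00],   # normal
     [0.90, 0.80], [0.80, 0.90], [0.95, 0.70], [0.85, 0.85]]   # attacks
y = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = train_neuron(X, y)
```

On this linearly separable toy set, the trained neuron classifies all training connections correctly; multi-layer networks extend the same gradient machinery to non-linear attack boundaries.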

Keywords: artificial neural networks, cloud computing, intrusion detection systems, security issues, smart grid

Procedia PDF Downloads 295
379 An Open-Source Guidance System for an Autonomous Planter Robot in Precision Agriculture

Authors: Nardjes Hamini, Mohamed Bachir Yagoubi

Abstract:

Precision agriculture has revolutionized farming by enabling farmers to monitor their crops remotely in real time. By utilizing technologies such as sensors, farmers can detect the state of growth, hydration levels, and nutritional status, and even identify diseases affecting their crops. With this information, farmers can make informed decisions regarding irrigation, fertilization, and pesticide application. Automated agricultural tasks, such as plowing, seeding, planting, and harvesting, are carried out by autonomous robots and have helped reduce costs and increase production. Despite the advantages of precision agriculture, its high cost makes it inaccessible to small and medium-sized farms. To address this issue, this paper presents an open-source guidance system for an autonomous planter robot. The system is composed of a Raspberry Pi-type nanocomputer equipped with Wi-Fi, a GPS module, a gyroscope, and a power supply module. The accompanying application allows users to enter and calibrate maps with at least four coordinates, enabling the localized contour of the parcel to be captured. The application comprises several modules, such as the mission entry module, which traces the planting trajectory and points, and the action plan entry module, which creates an ordered list of pre-established tasks such as loading, following the plan, returning to the garage, and entering sleep mode. A remote control module enables users to control the robot manually, visualize its location on the map, and use a real-time camera. Wi-Fi coverage is provided by an outdoor access point covering a 2 km circle. This open-source system offers a low-cost alternative for small and medium-sized farms, enabling them to benefit from the advantages of precision agriculture.
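The mission entry module described above traces a planting trajectory from the parcel's corner coordinates; a serpentine (boustrophedon) pattern is a common choice for such coverage. The sketch below assumes an axis-aligned rectangular parcel with coordinates already in metres; a real system would first project GPS fixes and handle arbitrary quadrilaterals, and the function name and spacings are hypothetical.

```python
def planting_waypoints(corners, row_spacing, plant_spacing):
    # Generate planting points row by row, reversing direction on
    # alternate rows so the robot never retraces an empty pass.
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    waypoints = []
    left_to_right = True
    y = y0
    while y <= y1 + 1e-9:
        row = []
        x = x0
        while x <= x1 + 1e-9:
            row.append((round(x, 3), round(y, 3)))
            x += plant_spacing
        if not left_to_right:
            row.reverse()
        waypoints.extend(row)
        left_to_right = not left_to_right
        y += row_spacing
    return waypoints

# A 4 m x 2 m toy parcel: rows 1 m apart, plants 2 m apart.
corners = [(0, 0), (4, 0), (4, 2), (0, 2)]
waypoints = planting_waypoints(corners, row_spacing=1.0, plant_spacing=2.0)
```

The resulting ordered list of points is exactly the kind of mission the action plan module could hand to the robot before it switches to plan-following mode.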

Keywords: autonomous robot, guidance system, low-cost, medium farms, open-source system, planter robot, precision agriculture, real-time monitoring, remote control, small farms

Procedia PDF Downloads 83
378 Automated Transformation of 3D Point Cloud to BIM Model: Leveraging Algorithmic Modeling for Efficient Reconstruction

Authors: Radul Shishkov, Orlin Davchev

Abstract:

The digital era has revolutionized architectural practices, with building information modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly in the context of existing structures. This research introduces a technical approach to bridge this gap through the development of algorithms that facilitate the automated transformation of 3D point cloud data into detailed BIM models. The core of this research lies in the application of algorithmic modeling and computational design methods to interpret and reconstruct point cloud data (a collection of data points in space, typically produced by 3D scanners) into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models for existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes the integration of parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. Our methodology has been tested on several real-world case studies, demonstrating its capability to handle diverse architectural styles and complexities. The results showcase a substantial reduction in time and resources required for BIM model generation while maintaining high levels of accuracy and detail.
This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for the integration of existing structures into the BIM framework. It paves the way for more seamless and integrated workflows in renovation and heritage conservation projects, where the accuracy of existing conditions plays a critical role. The implications of this study extend beyond architectural practices, offering potential benefits in urban planning, facility management, and historic preservation.
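The feature-extraction stage (identifying walls and other planar elements in the cloud) is commonly built on plane segmentation; a minimal pure-Python RANSAC sketch on a synthetic cloud is shown below. The algorithm, tolerances, and data are illustrative assumptions, not the authors' implementation.

```python
import random

def plane_from_points(p1, p2, p3):
    # Unit-normal plane (a, b, c, d) with ax + by + cz + d = 0 through
    # three points; returns None for degenerate (near-collinear) triples.
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    a = uy * vz - uz * vy
    b = uz * vx - ux * vz
    c = ux * vy - uy * vx
    norm = (a * a + b * b + c * c) ** 0.5
    if norm < 1e-12:
        return None
    d = -(a * p1[0] + b * p1[1] + c * p1[2]) / norm
    return (a / norm, b / norm, c / norm, d)

def ransac_plane(points, iterations=200, tol=0.02, seed=0):
    # Repeatedly fit a plane to 3 random points and keep the plane
    # supported by the most inliers (points within tol of it).
    rng = random.Random(seed)
    best_plane, best_inliers = None, []
    for _ in range(iterations):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue
        a, b, c, d = plane
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c * p[2] + d) < tol]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = plane, inliers
    return best_plane, best_inliers

# Synthetic cloud: 80 points on the wall x = 1 (with mm-level noise)
# plus 20 scattered clutter points.
rng = random.Random(1)
wall = [(1.0 + rng.uniform(-0.005, 0.005), rng.uniform(0, 5), rng.uniform(0, 3))
        for _ in range(80)]
clutter = [(rng.uniform(0, 5), rng.uniform(0, 5), rng.uniform(0, 3))
           for _ in range(20)]
plane, inliers = ransac_plane(wall + clutter)
```

The recovered plane (normal close to the x-axis) and its inlier set are the kind of intermediate result that a downstream step could translate into a parametric BIM wall element.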

Keywords: BIM, 3D point cloud, algorithmic modeling, computational design, architectural reconstruction

Procedia PDF Downloads 25
377 A Literature Study on IoT Based Monitoring System for Smart Agriculture

Authors: Sonu Rana, Jyoti Verma, A. K. Gautam

Abstract:

In most developing countries like India, the majority of the population heavily relies on agriculture for their livelihood. The yield of agriculture is heavily dependent on uncertain weather conditions like the monsoon, soil fertility, the availability of irrigation facilities and fertilizers, as well as support from the government. The agricultural yield is quite low compared to the effort put in, due to inefficient agricultural facilities and obsolete farming practices on the one hand and a lack of knowledge on the other, and ultimately the agricultural community does not prosper. It is therefore essential for farmers to improve their harvest yield by acquiring related data such as soil condition, temperature, humidity, the availability of irrigation facilities, the availability of manure, etc., and to adopt smart farming techniques using modern agricultural equipment. Nowadays, using IoT technology in agriculture is the best solution to improve the yield with less effort and lower economic cost. The primary focus of this work is IoT technology in the agriculture field. By using IoT, all these parameters can be monitored by mounting sensors at different places in an agricultural field; the sensors collect real-time data that can be transmitted by a transmitting device like an antenna. To improve the system, IoT will interact with other useful systems like wireless sensor networks. As IoT reaches into every aspect of life, the radio frequency spectrum is getting crowded due to the increasing demand for wireless applications. Therefore, the Federal Communications Commission is reallocating the spectrum for various wireless applications. An antenna is also an integral part of newly designed IoT devices. The main aim is to propose a new antenna structure for IoT agricultural applications that is compatible with this new unlicensed frequency band. The main focus of this paper is to present work related to these technologies in the agriculture field.
The paper also presents their challenges and benefits. It can help in understanding the role of data gathered through IoT and communication technologies in the agriculture sector, and it may motivate and educate unskilled farmers to act on the insights provided by big data analytics and smart technology.
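As a sketch of the sensor-monitoring side described above, the following assembles field readings into a JSON payload ready for transmission to a gateway. The node id, sensor values, and alert thresholds are hypothetical placeholders; a real node would query soil moisture, temperature and humidity sensors over GPIO/I2C rather than return constants.

```python
import json
import time

def read_sensors():
    # Hypothetical placeholder readings standing in for real hardware.
    return {"soil_moisture_pct": 34.5,
            "temperature_c": 27.1,
            "humidity_pct": 61.0}

def build_payload(node_id, readings, thresholds):
    # Flag readings outside configured (low, high) bounds so the receiving
    # side can raise irrigation or fertilization alerts.
    alerts = [k for k, (lo, hi) in thresholds.items()
              if not lo <= readings[k] <= hi]
    return json.dumps({"node": node_id,
                       "timestamp": int(time.time()),
                       "readings": readings,
                       "alerts": alerts}, sort_keys=True)

payload = build_payload(
    "field-07", read_sensors(),
    {"soil_moisture_pct": (40, 80), "temperature_c": (5, 45)})
```

Here the low soil-moisture reading trips an alert, illustrating how raw sensor data becomes actionable information for the farmer once it reaches the analytics side.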

Keywords: smart agriculture, IoT, agriculture technology, data analytics, smart technology

Procedia PDF Downloads 87
376 Developing Manufacturing Process for the Graphene Sensors

Authors: Abdullah Faqihi, John Hedley

Abstract:

Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. They can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, in addition to environmental monitoring. The development of biosensors aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample; it carries out biological detection via a linked transducer and converts the biological response into an electrical signal. Stability, selectivity, and sensitivity are the dynamic and static characteristics that affect and dictate the quality and performance of biosensors. This research presents an experimental study of the laser scribing technique for processing graphene oxide inside a vacuum chamber. The processing of graphene oxide (GO) was achieved using the laser scribing technique, and the effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. GO solvent was coated onto a LightScribe DVD. The laser scribing technique was applied to reduce the GO layers and generate rGO. The morphological microstructures of rGO and GO were examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode model, made under normal atmospheric conditions, whereas the second model was a graphene electrode fabricated under vacuum using a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process.
The parameters assessed include the layer thickness and the controlled environment. The results presented show high accuracy and repeatability, achieved at low production cost.

Keywords: laser scribing, lightscribe DVD, graphene oxide, scanning electron microscopy

Procedia PDF Downloads 94