Search results for: powerful
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1010

230 Using Corpora in Semantic Studies of English Adjectives

Authors: Oxana Lukoshus

Abstract:

The methods of corpus linguistics, a well-established field of research, are increasingly being applied in cognitive linguistics. Corpus data are especially useful for quantitative studies of grammatical and other aspects of language. The main objective of this paper is to demonstrate how present-day corpora can be applied in semantic studies in general and in semantic studies of adjectives in particular. Polysemantic adjectives have been the subject of numerous studies, but most of them have been based on dictionary data. Dictionaries are undoubtedly one of the basic data sources, but only in the initial stages of research: the author usually starts with an analysis of the lexicographic data and then formulates a hypothesis. In the research conducted, three polysemantic synonyms, true, loyal and faithful, were analyzed for differences and similarities in their semantic structure. The corpus-based approach to these adjectives proceeded as follows. After the analysis of the dictionary data, two corpora were consulted to study the distributional patterns of the words under study: the British National Corpus (BNC) and the Corpus of Contemporary American English (COCA). These corpora are continually updated and contain thousands of examples of the words under research, which makes them a useful and convenient data source. For the purpose of this study, there were no special requirements regarding the genre, mode or time of the texts included in the corpora. Of the range of possibilities offered by corpus-analysis software (e.g. word lists, word-frequency statistics, etc.), the most useful tool for the semantic analysis was extracting a list of co-occurrences for a given search word. Searching by lemmas (e.g. true, true to) and grouping the results by lemmas proved to be the most efficient corpus features for the adjectives under study.
Following the search process, the corpora provided a list of co-occurrences, which were then analyzed and classified. Not every co-occurrence was relevant to the analysis. For example, phrases like 'An enormous sense of responsibility to protect the minds and hearts of the faithful from incursions by the state was perceived to be the basic duty of the church leaders' or ''True,' said Phoebe, 'but I'd probably get to be a Union Official immediately'' were left out, since in the first example the faithful is a substantivized adjective and in the second true is used alone, with no other parts of speech. The subsequent analysis of the corpus data provided the basis for the distribution groups of the adjectives under study, which were then investigated with the help of a semantic experiment. To sum up, the corpus-based approach has proved to be a powerful, reliable and convenient way to obtain data for further semantic study.
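
The co-occurrence extraction described above can be illustrated with a minimal sketch (a toy corpus and window size stand in for the BNC/COCA query interface; the sentences are invented examples, not corpus data):

```python
import re
from collections import Counter

def cooccurrences(sentences, lemma_forms, window=2):
    """Collect words appearing within `window` tokens of any form of the
    search lemma, mimicking a corpus collocate query."""
    counts = Counter()
    forms = {f.lower() for f in lemma_forms}
    for sent in sentences:
        tokens = re.findall(r"[a-z']+", sent.lower())
        for i, tok in enumerate(tokens):
            if tok in forms:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[tokens[j]] += 1
    return counts

# Toy "corpus" standing in for query results.
corpus = [
    "He remained a true friend through every crisis.",
    "She stayed true to her principles.",
    "A true friend is hard to find.",
]
collocates = cooccurrences(corpus, ["true"])
print(collocates.most_common(2))
```

The returned counts can then be grouped and classified by hand, as in the study's distribution groups.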

Keywords: corpora, corpus-based approach, polysemantic adjectives, semantic studies

Procedia PDF Downloads 306
229 A Unified Approach for Digital Forensics Analysis

Authors: Ali Alshumrani, Nathan Clarke, Bogdan Ghite, Stavros Shiaeles

Abstract:

Digital forensics has become an essential tool in the investigation of cyber and computer-assisted crime. Arguably, given the prevalence of technology and the digital footprints it leaves, it could play a significant role across almost all crimes. However, the variety of technology platforms (such as computers, mobile devices, Closed-Circuit Television (CCTV), Internet of Things (IoT) devices, databases, drones, and cloud computing services), the heterogeneity and volume of data, forensic tool capability, and investigative cost make investigations both technically challenging and prohibitively expensive. Forensic tools also tend to be siloed into specific technologies, e.g., File System Forensic Analysis Tools (FS-FAT) and Network Forensic Analysis Tools (N-FAT), and many data sources have little to no specialist forensic tooling. It is also increasingly essential to compare and correlate evidence across data sources, and to do so efficiently and effectively, enabling an investigator to answer high-level questions of the data in a timely manner without having to trawl through it and perform the correlation manually. This paper proposes a Unified Forensic Analysis Tool (U-FAT), which aims to establish a common language for electronic information and permit multi-source forensic analysis. Core to this approach is the identification and development of forensic analyses that automate complex data correlations, enabling investigators to investigate cases more efficiently. The paper presents a systematic analysis of major crime categories and identifies which forensic analyses could be used.
For example, in a child abduction case, an investigation team might have evidence from a range of sources, including computing devices (mobile phone, PC), CCTV (potentially a large number of cameras), ISP records, and mobile network cell-tower data, in addition to third-party databases such as the National Sex Offender Registry and tax records, with the desire to auto-correlate across sources and visualize the results in a cognitively effective manner. U-FAT provides a holistic, flexible, and extensible approach to digital forensics in a technology-, application-, and data-agnostic manner, providing powerful and automated forensic analysis.
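
The cross-source correlation described above can be illustrated with a toy sketch (not the U-FAT implementation; the record schema, sources, and time window are hypothetical):

```python
from datetime import datetime, timedelta

# Hypothetical evidence records from heterogeneous sources, normalised
# into a common schema: (source, timestamp, entity).
records = [
    ("cctv", datetime(2024, 5, 1, 14, 2), "vehicle_sighting"),
    ("cell", datetime(2024, 5, 1, 14, 3), "phone_in_cell_tower_area"),
    ("isp",  datetime(2024, 5, 1, 18, 0), "account_login"),
]

def correlate(records, window=timedelta(minutes=10)):
    """Pair up events from different sources that fall within the same
    time window -- a toy stand-in for automated evidence correlation."""
    hits = []
    ordered = sorted(records, key=lambda r: r[1])
    for i, a in enumerate(ordered):
        for b in ordered[i + 1:]:
            if b[1] - a[1] > window:
                break  # records are time-sorted, so no later match exists
            if a[0] != b[0]:
                hits.append((a, b))
    return hits

pairs = correlate(records)
print(len(pairs))
```

Mapping each source into one schema first is the step the paper's "common language for electronic information" would formalise.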

Keywords: digital forensics, evidence correlation, heterogeneous data, forensics tool

Procedia PDF Downloads 180
228 Smart Construction Sites in KSA: Challenges and Prospects

Authors: Ahmad Mohammad Sharqi, Mohamed Hechmi El Ouni, Saleh Alsulamy

Abstract:

Due to the emerging-technologies revolution worldwide, the need to exploit and employ innovative technologies for new functions and purposes has become a remarkable matter. Saudi Arabia is considered one of the most powerful economies in the world, and the construction sector participates effectively in that economy. Thus, the construction sector in KSA should keep pace with the rapid digital revolution and transformation and implement smart devices on sites. A Smart Construction Site (SCS) includes smart devices, artificial intelligence, the Internet of Things, augmented reality, building information modeling, geographical information systems, and cloud computing. This paper aims to study the level of implementation of SCS in KSA, analyze the obstacles and challenges of adopting SCS, and find out the critical success factors for its implementation. A survey of close-ended questions (scale and multiple-choice) was conducted among professionals in the construction sector of Saudi Arabia. A total of twenty-nine questions were prepared for respondents. Twenty-four were scale questions, categorized into several themes: quality, scheduling, cost, occupational safety and health, technologies and applications, and general perception; the 5-point Likert scale (very low to very high) was adopted for these. In addition, five close-ended multiple-choice questions were derived from a previous study conducted in the United Kingdom (UK) and the Dominican Republic (DR); these were rearranged and organized to fit the structured survey, in order to place the Kingdom of Saudi Arabia in comparison with the UK and the DR.
A total of one hundred respondents participated in this survey from all regions of the Kingdom of Saudi Arabia: southern, central, western, eastern, and northern. The drivers, obstacles, and success factors for implementing smart devices and technologies in KSA's construction sector were investigated and analyzed. It is concluded that KSA is on the right path toward adopting smart construction sites, with results comparable to, and for some factors even better than, those of the UK.
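
As a minimal illustration of how such 5-point Likert responses can be ranked, the sketch below uses the Relative Importance Index, a common choice in construction-survey analysis; the factor names and response values are hypothetical, and the abstract does not state which ranking statistic the study used:

```python
def relative_importance_index(ratings, scale_max=5):
    """RII = sum(ratings) / (scale_max * number_of_respondents);
    values closer to 1.0 indicate a more important factor."""
    return sum(ratings) / (scale_max * len(ratings))

# Hypothetical 5-point Likert responses for two candidate SCS drivers.
responses = {
    "cost_reduction":    [5, 4, 4, 5, 3],
    "safety_monitoring": [3, 3, 4, 2, 3],
}
ranking = sorted(responses,
                 key=lambda k: relative_importance_index(responses[k]),
                 reverse=True)
print(ranking)
```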

Keywords: artificial intelligence, construction projects management, internet of things, smart construction sites, smart devices

Procedia PDF Downloads 143
227 Automated, Objective Assessment of Pilot Performance in Simulated Environment

Authors: Maciej Zasuwa, Grzegorz Ptasinski, Antoni Kopyt

Abstract:

Nowadays, flight simulators offer tremendous possibilities for safe and cost-effective pilot training through the use of powerful computational tools. Because technology has outpaced methodology, the vast majority of training-related work is still done by human instructors. This makes assessment inefficient and vulnerable to instructors' subjectivity. This research presents an objective assessment tool (gOAT) developed at the Warsaw University of Technology and tested on an SW-4 helicopter flight simulator. The tool uses a database of predefined manoeuvres, defined and integrated into the virtual environment. These were implemented based on the Aeronautical Design Standard Performance Specification Handling Qualities Requirements for Military Rotorcraft (ADS-33), with predefined Mission-Task-Elements (MTEs). The core of gOAT is an enhanced algorithm that provides the instructor with a new set of information: objective flight parameters fused with a report on the psychophysical state of the pilot. While the pilot performs the task, the gOAT system automatically calculates performance using the embedded algorithms, data registered by the simulator software (position, orientation, velocity, etc.), and measurements of physiological changes in the pilot's psychophysiological state (temperature, sweating, heart rate). The complete set of measurements is presented online at the instructor's station in a dedicated graphical interface. The tool is based on open-source solutions and is flexible to edit. Additional manoeuvres can easily be added using a guide developed by the authors, and MTEs can be changed by the instructor even during an exercise. The algorithms and measurements used allow not only basic stress-level measurement but also a significant reduction of the instructor's workload. The tool can be used for training purposes as well as for periodic checks of aircrew.
Flexibility and ease of modifications allow the further development to be wide ranged, and the tool to be customized. Depending on simulation purpose, gOAT can be adjusted to support simulator of aircraft, helicopter, or unmanned aerial vehicle (UAV).
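
The automatic performance scoring can be illustrated with a toy sketch of ADS-33-style tolerance-band grading (the parameter, band limits, and scoring rule are hypothetical simplifications, not the actual gOAT algorithm):

```python
def mte_score(samples, desired, adequate):
    """Fraction of time a recorded flight parameter stays inside the
    'desired' and 'adequate' tolerance bands, loosely following the
    ADS-33 idea of grading a Mission-Task-Element.
    Bands are (low, high) tuples."""
    in_desired = sum(1 for s in samples if desired[0] <= s <= desired[1])
    in_adequate = sum(1 for s in samples if adequate[0] <= s <= adequate[1])
    n = len(samples)
    return in_desired / n, in_adequate / n

# Hypothetical altitude trace (metres) during a hover task.
altitude = [10.2, 10.5, 9.8, 11.4, 10.0, 12.6]
desired_frac, adequate_frac = mte_score(
    altitude, desired=(9.5, 10.5), adequate=(9.0, 12.0))
print(desired_frac, adequate_frac)
```

In the real system, such per-parameter scores would be fused with the physiological measurements before being shown at the instructor's station.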

Keywords: automated assessment, flight simulator, human factors, pilot training

Procedia PDF Downloads 138
226 Characterizing Solid Glass in Bending, Torsion and Tension: High-Temperature Dynamic Mechanical Analysis up to 950 °C

Authors: Matthias Walluch, José Alberto Rodríguez, Christopher Giehl, Gunther Arnold, Daniela Ehgartner

Abstract:

Dynamic mechanical analysis (DMA) is a powerful method for characterizing viscoelastic properties and phase transitions in a wide range of materials. It is often used to characterize polymers and their temperature-dependent behavior, including thermal transitions such as the glass transition temperature Tg, via determination of storage and loss moduli in tension (Young's modulus, E), in shear or torsion (shear modulus, G), or in other testing modes. While production and application temperatures for polymers are often limited to a few hundred degrees, the material properties of glasses usually require characterization at temperatures exceeding 600 °C. This contribution highlights a high-temperature setup for rotational and oscillatory rheometry as well as for DMA in different modes. The implemented standard convection oven enables the characterization of glass in different loading modes at temperatures up to 950 °C. Three-point bending, tension and torsion measurements on different glasses, with E and G moduli as functions of frequency and temperature, are presented. Additional tests superimpose several frequencies in a single temperature sweep ("multiwave"). This type of test considerably reduces experiment time and allows structural changes of the material, and their frequency dependence, to be evaluated. Furthermore, DMA in torsion and tension was performed to determine the complex Poisson's ratio as a function of frequency and temperature within a single test definition. Tests were performed in a frequency range from 0.1 to 10 Hz and at temperatures up to the glass transition. While variations in frequency did not reveal significant changes in the complex Poisson's ratio of the glass, a monotonic increase of this parameter was observed with increasing temperature. This contribution outlines the possibilities of DMA in bending, tension and torsion over an extended temperature range.
It allows the precise mechanical characterization of material behavior from room temperature up to the glass transition and the softening temperature interval. Compared to other thermo-analytical methods, such as differential scanning calorimetry (DSC), where mechanical stress is neglected, the frequency dependence links measurement results (e.g. relaxation times) to real applications.
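
The complex Poisson's ratio determined from combined tension and torsion data follows, for an isotropic material, the relation ν* = E*/(2G*) − 1. A minimal sketch (the modulus values are hypothetical, not the measured data):

```python
def complex_poisson_ratio(E_storage, E_loss, G_storage, G_loss):
    """Complex Poisson's ratio from tensile and torsional DMA moduli
    via the isotropic relation nu* = E*/(2 G*) - 1, with E* = E' + iE''
    and G* = G' + iG''."""
    E = complex(E_storage, E_loss)
    G = complex(G_storage, G_loss)
    return E / (2 * G) - 1

# Hypothetical moduli in GPa for a glass well below Tg.
nu = complex_poisson_ratio(70.0, 0.7, 29.0, 0.3)
print(round(nu.real, 3))
```

The real part is the familiar elastic Poisson's ratio; a nonzero imaginary part reflects the phase lag between tensile and shear responses.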

Keywords: dynamic mechanical analysis, oscillatory rheometry, Poisson's ratio, solid glass, viscoelasticity

Procedia PDF Downloads 70
225 Modelling Soil Inherent Wind Erodibility Using Artificial Intelligence and Hybrid Techniques

Authors: Abbas Ahmadi, Bijan Raie, Mohammad Reza Neyshabouri, Mohammad Ali Ghorbani, Farrokh Asadzadeh

Abstract:

In recent years, vast areas of Urmia Lake in Dasht-e-Tabriz have dried up, exposing saline sediments at the surface; the lake's coastal areas are therefore highly susceptible to wind erosion. This study was conducted to investigate wind erosion and its relationship with soil physicochemical properties, and to model wind erodibility (WE) using artificial intelligence techniques. For this purpose, 96 soil samples were collected from the 0-5 cm depth over 414,000 hectares using a stratified random sampling method. To measure WE, all samples (<8 mm) were exposed to 5 different wind velocities (9.5, 11, 12.5, 14.1 and 15 m s-1 at a height of 20 cm) in a wind tunnel, and the relationship of WE with soil physicochemical properties was evaluated. According to the results, WE varied within the range of 9.98-76.69 (g m-2 min-1)/(m s-1), with a mean of 10.21 and a coefficient of variation of 94.5%, showing relatively high variation in the studied area. WE was significantly (P<0.01) affected by soil physical properties, including mean weight diameter, erodible fraction (secondary particles smaller than 0.85 mm) and the percentages of the secondary particle-size classes 2-4.75, 1.7-2 and 0.1-0.25 mm. Mean weight diameter, erodible fraction and the percentage of the 0.1-0.25 mm size class showed the strongest relationships with WE (coefficients of determination of 0.69, 0.67 and 0.68, respectively). This study also compared the efficiency of multiple linear regression (MLR), gene expression programming (GEP), an artificial neural network (MLP), an artificial neural network tuned by a genetic algorithm (MLP-GA) and an artificial neural network tuned by the whale optimization algorithm (MLP-WOA) in predicting soil wind erodibility in Dasht-e-Tabriz. Among the 32 measured soil variables, the percentages of fine sand, of the 1.7-2.0 and 0.1-0.25 mm size classes (secondary particles) and of organic carbon were selected as model inputs by stepwise regression.
The findings showed MLP-WOA to be the most powerful of the artificial intelligence techniques (R2=0.87, NSE=0.87, ME=0.11 and RMSE=2.9) for predicting soil wind erodibility in the study area, followed by MLP-GA, MLP, GEP and MLR; the differences between these methods were significant according to the MGN test. Based on these findings, MLP-WOA may be used as a promising method to predict soil wind erodibility in the study area.
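
The skill scores quoted above (NSE, RMSE) can be reproduced with a short sketch; the observed/predicted values here are hypothetical, not the study's data:

```python
import math

def nse(obs, pred):
    """Nash-Sutcliffe efficiency: 1 - SS_res / SS_tot,
    where 1.0 means a perfect prediction."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

def rmse(obs, pred):
    """Root-mean-square error of the predictions."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

# Hypothetical observed vs. predicted erodibility values.
obs  = [5.0, 12.0, 8.0, 20.0, 15.0]
pred = [6.0, 11.0, 9.0, 18.0, 16.0]
print(round(nse(obs, pred), 3), round(rmse(obs, pred), 3))
```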

Keywords: wind erosion, erodible fraction, gene expression programming, artificial neural network

Procedia PDF Downloads 54
224 Placebo Analgesia in Older Age: Evidence from Event-Related Potentials

Authors: Angelika Dierolf, K. Rischer, A. Gonzalez-Roldan, P. Montoya, F. Anton, M. Van der Meulen

Abstract:

Placebo analgesia is a powerful cognitive endogenous pain modulation mechanism with high relevance in pain treatment. Older people would especially benefit from non-pharmacological pain interventions, since this age group is disproportionately affected by acute and chronic pain, while pharmacological treatments are less suitable due to polypharmacy and age-related changes in drug metabolism. Although aging is known to affect neurobiological and physiological aspects of pain perception, such as changes in pain threshold and pain tolerance, its effects on cognitive pain modulation strategies, including placebo analgesia, have hardly been investigated so far. In the present study, we are assessing placebo analgesia in 35 older adults (60 years and older) and 35 younger adults (between 18 and 35 years). Acute pain was induced with short transdermal electrical pulses to the inner forearm, using a concentric stimulating electrode. Stimulation intensities were individually adjusted to each participant's threshold. Next to the stimulation site, we applied sham transcutaneous electrical nerve stimulation (TENS). Participants were informed that sometimes the TENS device would be switched on (placebo condition) and sometimes switched off (control condition); in reality, it was always switched off. Participants received alternating blocks of painful stimuli in the placebo and control conditions and were asked to rate the intensity and unpleasantness of each stimulus on a visual analog scale (VAS). Pain-related evoked potentials were recorded with a 64-channel EEG. Preliminary results show a reduced placebo effect in older compared to younger adults in both behavioral and neurophysiological data. Older people experienced less subjective pain reduction under sham TENS treatment compared to younger adults, as evidenced by the VAS ratings. The N1 and P2 event-related potential components were generally reduced in the older group.
While younger adults showed a reduced N1 and P2 under sham TENS treatment, this reduction was considerably smaller in older people. This reduced placebo effect in the older group suggests that cognitive pain modulation is altered in aging and may at least partly explain why older adults experience more pain. Our results highlight the need for a better understanding of the efficacy of non-pharmacological pain treatments in older adults and how these can be optimized to meet the specific requirements of this population.
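
ERP components such as the N1 and P2 are typically quantified as mean amplitudes within a latency window of the averaged waveform; the sketch below illustrates this on a hypothetical signal (window bounds and values are illustrative, not the study's data):

```python
def mean_amplitude(eeg, times, window):
    """Mean EEG amplitude (microvolts) inside a latency window (ms),
    a standard way to quantify ERP components such as N1 or P2."""
    vals = [v for v, t in zip(eeg, times) if window[0] <= t <= window[1]]
    return sum(vals) / len(vals)

# Hypothetical single-channel average waveform sampled every 50 ms.
times  = [0, 50, 100, 150, 200, 250, 300]
signal = [0.0, -1.0, -4.0, -2.0, 3.0, 6.0, 2.0]
n1 = mean_amplitude(signal, times, (80, 160))   # N1: negative deflection
p2 = mean_amplitude(signal, times, (180, 280))  # P2: positive deflection
print(n1, p2)
```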

Keywords: placebo analgesia, aging, acute pain, TENS, EEG

Procedia PDF Downloads 135
223 The Relations Between Hans Kelsen’s Concept of Law and the Theory of Democracy

Authors: Monika Zalewska

Abstract:

Hans Kelsen was a versatile legal thinker whose achievements in the fields of legal theory, international law, and the theory of democracy are remarkable. All of the fields tackled by Kelsen are regarded as part of his “pure theory of law.” While the link between international law and Kelsen’s pure theory of law is apparent, the same cannot be said about the link between the theory of democracy and his pure theory of law. On the contrary, the general thinking concerning Kelsen’s thought is that it can be used to legitimize authoritarian regimes. The aim of this presentation is to address this concern by identifying the common ground between Kelsen’s pure theory of law and his theory of democracy and to show that they are compatible in a way that his pure theory of law and authoritarianism cannot be. The conceptual analysis of the purity of Kelsen’s theory and his goal of creating ideology-free legal science hints at how Kelsen’s pure theory of law and the theory of democracy are brought together. The presentation will first demonstrate that these two conceptions have common underlying values and meta-ethical convictions. Both are founded on relativism and a rational worldview, and the aim of both is peaceful co-existence. Second, it will be demonstrated that the separation of law and morality provides the maximum space for deliberation within democratic processes. The conclusion of this analysis is that striking similarities exist between Kelsen’s legal theory and his theory of democracy. These similarities are grounded in the Enlightenment tradition and its values, including rationality, a scientific worldview, tolerance, and equality. This observation supports the claim that, for Kelsen, legal positivism and the theory of democracy are not two separate theories but rather stem from the same set of values and from Kelsen’s relativistic worldview. Furthermore, three main issues determine Kelsen’s orientation toward a positivistic and democratic outlook. 
The first, which is associated with personality type, is the distinction between absolutism and relativism. The second, which is associated with the values that Kelsen favors in the social order, is peace. The third is legality, which creates the necessary condition for democracy to thrive and reveals that democracy is capable of fulfilling Kelsen’s ideal of law at its fullest. The first two categories exist in the background of Kelsen’s pure theory of law, while the latter is an inherent part of Kelsen’s concept of law. The analysis of the text concerning natural law doctrine and democracy indicates that behind the technical language of Kelsen’s pure theory of law is a strong concern with the trends that appeared after World War I. Despite his rigorous scientific mind, Kelsen was deeply humanistic. He tried to create a powerful intellectual weapon to provide strong arguments for peaceful coexistence and a rational outlook in Europe. The analysis provided by this presentation facilitates a broad theoretical, philosophical, and political understanding of Kelsen’s perspectives and, consequently, urges a strong endorsement of Kelsen’s approach to constitutional democracy.

Keywords: hans kelsen, democracy, legal positivism, pure theory of law

Procedia PDF Downloads 94
222 Nondestructive Prediction and Classification of Gel Strength in Ethanol-Treated Kudzu Starch Gels Using Near-Infrared Spectroscopy

Authors: John-Nelson Ekumah, Selorm Yao-Say Solomon Adade, Mingming Zhong, Yufan Sun, Qiufang Liang, Muhammad Safiullah Virk, Xorlali Nunekpeku, Nana Adwoa Nkuma Johnson, Bridget Ama Kwadzokpui, Xiaofeng Ren

Abstract:

Enhancing starch gel strength and stability is crucial. However, traditional gel property assessment methods are destructive, time-consuming, and resource-intensive. Thus, understanding ethanol treatment effects on kudzu starch gel strength and developing a rapid, nondestructive gel strength assessment method is essential for optimizing the treatment process and ensuring product quality consistency. This study investigated the effects of different ethanol concentrations on the microstructure of kudzu starch gels using a comprehensive microstructural analysis. We also developed a nondestructive method for predicting gel strength and classifying treatment levels using near-infrared (NIR) spectroscopy, and advanced data analytics. Scanning electron microscopy revealed progressive network densification and pore collapse with increasing ethanol concentration, correlating with enhanced mechanical properties. NIR spectroscopy, combined with various variable selection methods (CARS, GA, and UVE) and modeling algorithms (PLS, SVM, and ELM), was employed to develop predictive models for gel strength. The UVE-SVM model demonstrated exceptional performance, with the highest R² values (Rc = 0.9786, Rp = 0.9688) and lowest error rates (RMSEC = 6.1340, RMSEP = 6.0283). Pattern recognition algorithms (PCA, LDA, and KNN) successfully classified gels based on ethanol treatment levels, achieving near-perfect accuracy. This integrated approach provided a multiscale perspective on ethanol-induced starch gel modification, from molecular interactions to macroscopic properties. Our findings demonstrate the potential of NIR spectroscopy, coupled with advanced data analysis, as a powerful tool for rapid, nondestructive quality assessment in starch gel production. This study contributes significantly to the understanding of starch modification processes and opens new avenues for research and industrial applications in food science, pharmaceuticals, and biomaterials.
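
As a minimal illustration of the KNN pattern-recognition step, the sketch below classifies gels from two hypothetical spectral features (e.g. the first two PCA scores); the feature values and class labels are illustrative only, and the study's full pipeline (CARS/GA/UVE variable selection plus PLS/SVM/ELM regression) is far richer:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Tiny k-nearest-neighbours classifier. `train` is a list of
    (feature_vector, label) pairs; the majority label among the k
    closest training points is returned."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    top = [label for _, label in dists[:k]]
    return Counter(top).most_common(1)[0][0]

# Hypothetical 2-feature "spectral scores" per gel sample.
train = [
    ((0.1, 0.2), "0% ethanol"),
    ((0.2, 0.1), "0% ethanol"),
    ((0.9, 1.0), "40% ethanol"),
    ((1.0, 0.8), "40% ethanol"),
]
print(knn_predict(train, (0.95, 0.9), k=3))
```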

Keywords: kudzu starch gel, near-infrared spectroscopy, gel strength prediction, support vector machine, pattern recognition algorithms, ethanol treatment

Procedia PDF Downloads 18
221 Detection and Identification of Antibiotic Resistant Bacteria Using Infra-Red-Microscopy and Advanced Multivariate Analysis

Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel

Abstract:

Antimicrobial drugs have an important role in controlling illness associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global health-care problem. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing, like disk diffusion, are time-consuming, and other methods, including the E-test and genotyping, are relatively expensive. Fourier transform infrared (FTIR) microscopy is a rapid, safe, and low-cost method that has been widely and successfully used in various studies for the identification of biological samples, including bacteria. Modern infrared (IR) spectrometers with high spectral resolution enable the measurement of unprecedented biochemical information from cells at the molecular level. Moreover, combining new bioinformatic analyses with IR spectroscopy yields a powerful technique that enables the detection of structural changes associated with resistance. The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics within a time span of a few minutes. The bacterial samples, identified at the species level by MALDI-TOF and examined for susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories of Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods.
Our results, based on 550 E. coli samples, were promising and showed that, using infrared spectroscopy together with multivariate analysis, it is possible to classify the tested bacteria as sensitive or resistant with a success rate higher than 85% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing FTIR microscopy as a rapid and reliable method for identifying antibiotic susceptibility.
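
A minimal stand-in for such multivariate classification is a nearest-centroid rule on a few spectral features; the absorbance values below are hypothetical, and the study's actual classifiers are more sophisticated:

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(spectrum, centroids):
    """Assign a spectrum to the nearest class centroid (squared
    Euclidean distance) -- a toy multivariate classifier."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(spectrum, centroids[label]))

# Hypothetical 3-band absorbance features for two training classes.
centroids = {
    "sensitive": centroid([[0.2, 0.5, 0.1], [0.3, 0.4, 0.2]]),
    "resistant": centroid([[0.8, 0.1, 0.6], [0.7, 0.2, 0.5]]),
}
print(classify([0.75, 0.15, 0.55], centroids))
```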

Keywords: antibiotics, E. coli, FTIR, multivariate analysis, susceptibility

Procedia PDF Downloads 252
220 Sustainability Assessment Tool for the Selection of Optimal Site Remediation Technologies for Contaminated Gasoline Sites

Authors: Connor Dunlop, Bassim Abbassi, Richard G. Zytner

Abstract:

Life cycle assessment (LCA) is a powerful tool, standardized by the International Organization for Standardization (ISO), that can be used to assess the environmental impacts of a product or process from cradle to grave. Many studies apply the LCA methodology within the site-remediation field to compare decontamination methods, including bioremediation, soil vapor extraction, and excavation with off-site disposal. However, to the best of the authors' knowledge, limited information is available in the literature on a sustainability tool that could assist in selecting the optimal remediation technology. Such a tool, based on the LCA methodology, would consider site conditions alongside environmental, economic, and social impacts. Accordingly, this project was undertaken to develop a tool to assist with the selection of the optimal sustainable technology. Developing a proper tool requires a large amount of data, so data were collected from previous LCA studies of site-remediation technologies; this step identified knowledge gaps and limitations in the project data. Next, using the data obtained from the literature review and other organizations, an extensive LCA study is being completed following the ISO 14040 requirements. The initial technologies being compared are bioremediation, excavation with off-site disposal, and a no-remediation option for a generic gasoline-contaminated site. The LCA study is modelled in the SimaPro software. A sensitivity analysis of the LCA results will also be incorporated to evaluate its impact on the overall results. Finally, the economic and social impacts associated with each option will be reviewed to understand how they vary across different sites. All results will then be summarized, and an interactive Excel-based tool will be developed to help select the best sustainable site-remediation technology.
Preliminary LCA results show improved sustainability for each decontamination technology compared to the no-remediation option for a gasoline-contaminated site. Sensitivity analyses are now being completed on site parameters, including soil type and transportation distance, to determine how the environmental impacts change at other contaminated gasoline locations. Additionally, the social improvements and overall economic costs associated with each technology are being reviewed. Using these results, the sustainability tool created to assist in selecting the overall best option will be refined.
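
An Excel-style weighted-sum decision matrix of the kind such a selection tool might use can be sketched as follows (the pillar weights and option scores are hypothetical, not the study's results):

```python
def weighted_score(scores, weights):
    """Weighted-sum sustainability score for one remediation option;
    criterion scores are normalised so that higher = more sustainable."""
    return sum(scores[c] * weights[c] for c in weights)

# Hypothetical normalised scores (0-1) per sustainability pillar.
weights = {"environmental": 0.4, "economic": 0.35, "social": 0.25}
options = {
    "bioremediation":       {"environmental": 0.8, "economic": 0.7, "social": 0.6},
    "excavate_and_dispose": {"environmental": 0.4, "economic": 0.5, "social": 0.7},
}
best = max(options, key=lambda o: weighted_score(options[o], weights))
print(best)
```

In an interactive tool, the weights would be the user-adjustable inputs reflecting the priorities of a given site.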

Keywords: life cycle assessment, site remediation, sustainability tool, contaminated sites

Procedia PDF Downloads 52
219 The Ethics of Documentary Filmmaking: The Ethical Considerations and Responsibilities of Documentary Filmmakers When Portraying Real-Life Events and Subjects

Authors: Batatunde Kolawole

Abstract:

Documentary filmmaking stands as a distinctive medium within the cinematic realm, commanding a unique responsibility: the portrayal of real-life events and subjects. This research delves into the profound ethical considerations and responsibilities that documentary filmmakers shoulder as they embark on the quest to unveil truth and weave compelling narratives. The study undertakes a comprehensive review of ethical frameworks and real-world case studies, illuminating the intricate web of challenges that documentarians confront. These challenges encompass an array of ethical intricacies, from securing informed consent to safeguarding privacy, maintaining unwavering objectivity, and sidestepping the snares of narrative manipulation when crafting stories from reality. It also dissects the contemporary ethical terrain, acknowledging the emergence of novel dilemmas in the digital age, such as deepfakes and digital alteration. Through a meticulous analysis of the ethical quandaries faced by distinguished documentary filmmakers and their strategies for ethical navigation, this study offers valuable insights into the evolving role of documentaries in molding public discourse. It underscores the indispensable significance of transparency, integrity, and an unwavering commitment to capturing the intricacies of reality in ethical documentary filmmaking. In a world increasingly reliant on visual narratives, an understanding of the subtle ethical dimensions of documentary filmmaking holds relevance not only for those behind the camera but also for the diverse audiences who engage with and interpret the realities unveiled on screen. This research stands as a rigorous examination of the moral compass that steers this potent form of cinematic expression.
It emphasizes the capacity of ethical documentary filmmaking to enlighten, challenge, and inspire, all while unwaveringly upholding the core principles of truthfulness and respect for the human subjects under scrutiny. Through this holistic analysis, they illuminate the enduring significance of upholding ethical integrity while uncovering the truths that shape our world. Ethical documentary filmmaking, as exemplified by "Rape" and countless other powerful narratives, serves as a testament to the enduring potential of cinema to inform, challenge, and drive meaningful societal discourse.

Keywords: filmmaking, documentary, human rights, film

Procedia PDF Downloads 53
218 Effects of Exercise Training in the Cold on Browning of White Fat in Obese Rats

Authors: Xiquan Weng, Chaoge Wang, Guoqin Xu, Wentao Lin

Abstract:

Objective: Cold exposure and exercise serve as two powerful physiological stimuli that launch the conversion of fat-accumulating white adipose tissue (WAT) into energy-dissipating brown adipose tissue (BAT). So far, it remains to be elucidated whether exercise plus cold exposure can produce an additive effect on promoting WAT browning. Methods: 64 SD rats were subjected to high-fat, high-sugar diets for 9 weeks, successfully establishing an obesity model. They were randomly divided into the following groups: normal control group (NC), normal exercise group (NE), continuous cold control group (CC), continuous cold exercise group (CE), intermittent cold control group (IC) and intermittent cold exercise group (IE). For continuous cold exposure, the rats stayed in a cold environment all day; for intermittent cold exposure, the rats were exposed to cold for only 4 h per day. The treadmill exercise protocol was as follows: 25 m/min (speed), 0° (slope), 30 min each session, with a 10-min interval between two sessions, twice every two days, lasting for 5 weeks. Sampling was conducted on the 5th weekend. The body length and weight of the rats were measured, and the Lee's index was calculated. The visceral fat rate (VFR), subcutaneous fat rate (SFR), brown fat rate (BrFR) and body fat rate (BoFR) were measured by Micro-CT LCT200, and the expression of UCP1 protein in inguinal fat was examined by Western blot. SPSS 22.0 was used for statistical analysis of the experimental results, with ANOVA performed between groups (P < 0.05 considered significant). Results: (1) Compared with the NC group, the weight of obese rats significantly declined in the NE, CE and IE groups (P < 0.05), and the Lee's index significantly declined in the CE group (P < 0.05). Compared with the NE group, the weight of obese rats significantly declined in the CE and IE groups (P < 0.05). 
(2) Compared with the NC group, the VFR and BoFR of the rats significantly declined in the NE, CE and IE groups (P < 0.05), the SFR significantly declined in the CE and IE groups (P < 0.05), and the BrFR was significantly higher in the CC and IC groups (P < 0.05). Compared with the NE group, the VFR and BoFR significantly declined in the CE group (P < 0.05), the SFR was significantly higher in the CC and IC groups (P < 0.05), and the BrFR was significantly higher in the IC group (P < 0.05). (3) Compared with the NC group, the up-regulation of UCP1 protein expression in inguinal fat was significant in the NE, CC, CE, IC and IE groups (P < 0.05). Compared with the NE group, the up-regulation of UCP1 protein expression in inguinal fat was significant in the CC, CE and IE groups (P < 0.05). Conclusions: Exercise in continuous or intermittent cold, especially the former, can effectively reduce the weight and body fat rate of obese rats. This is related to the effect of cold and exercise on the browning of white fat in rats.
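The between-group comparison described above (one-way ANOVA in SPSS, followed by pairwise checks at P < 0.05) can be sketched in Python; the group names mirror the abstract, but all numbers below are illustrative placeholders, not the study's measurements.

```python
# Illustrative sketch of the between-group statistical comparison.
# Group names follow the abstract (NC, NE, CE, IE); the body-weight
# values are random placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    "NC": rng.normal(520, 30, 8),  # body weight in g, invented values
    "NE": rng.normal(480, 30, 8),
    "CE": rng.normal(450, 30, 8),
    "IE": rng.normal(460, 30, 8),
}

# One-way ANOVA across all groups
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    # Pairwise follow-up, e.g. NE vs CE, with an independent t-test
    t, p = stats.ttest_ind(groups["NE"], groups["CE"])
    print(f"NE vs CE: t = {t:.2f}, p = {p:.4f}")
```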

Keywords: cold, browning of white fat, exercise, obesity

Procedia PDF Downloads 127
217 Prioritizing Ecosystem Services for South-Central Regions of Chile: An Expert-Based Spatial Multi-Criteria Approach

Authors: Yenisleidy Martinez Martinez, Yannay Casas-Ledon, Jo Dewulf

Abstract:

The ecosystem services (ES) concept has helped to draw attention to the benefits ecosystems generate for people and to how necessary natural resources are for human well-being. The identification and prioritization of ES constitute the first steps in undertaking conservation and valuation initiatives on behalf of people. Additionally, mapping the supply of ES is a powerful tool to support decision making regarding the sustainable management of landscapes and natural resources. In this context, the present study aimed to identify, prioritize and map the primary ES in the Biobio and Nuble regions using a methodology that combines expert judgment, multi-attribute evaluation methods, and Geographic Information Systems (GIS). Firstly, scores for the capacity of different land use/cover types to supply ES, and for the importance attributed to each service, were obtained from experts and stakeholders via an online survey. Afterward, the ES assessment matrix was constructed, and the weighted linear combination (WLC) method was applied to map the overall supply capacity of provisioning, regulating and maintenance, and cultural services. Finally, prioritized ES for the study area were selected and mapped. The results suggest that native forests, wetlands, and water bodies have the highest ES supply capacities, while urban and industrial areas and bare areas have a very low supply of services. On the other hand, fourteen out of twenty-nine services were selected by experts and stakeholders as the most relevant for the regions. The spatial distribution of ES has shown that the Andean Range and part of the Coastal Range have the highest ES supply capacity, mostly for regulation and maintenance and cultural ES. This performance is related to the presence of native forests, water bodies, and wetlands in those zones. 
This study provides specific information about the most relevant ES in Biobio and Nuble according to the opinion of local stakeholders, together with the spatial identification of areas with a high capacity to provide services. These findings could serve as a reference for planners and policymakers in developing landscape management strategies oriented to preserving the supply of services in both regions.
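The weighted linear combination (WLC) step described above reduces, for each land-cover class, to a weighted sum of expert scores. A minimal sketch, with invented scores and weights standing in for the survey results:

```python
# Minimal sketch of weighted linear combination (WLC) for ES supply.
# Scores (0-5) per land-cover class and the service weights are
# illustrative assumptions, not the study's survey data.
land_cover_scores = {
    # (provisioning, regulating & maintenance, cultural)
    "native_forest": (4, 5, 4),
    "wetland":       (3, 5, 4),
    "urban":         (1, 0, 1),
}
weights = (0.3, 0.5, 0.2)  # assumed relative importance, summing to 1

def wlc(scores, weights):
    """Weighted linear combination: overall ES supply capacity."""
    return sum(s * w for s, w in zip(scores, weights))

for cover, scores in land_cover_scores.items():
    print(cover, round(wlc(scores, weights), 2))
```

In a GIS this sum is evaluated per raster cell, producing the supply maps the abstract describes.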

Keywords: ecosystem services, expert judgment, mapping, multi-criteria decision making, prioritization

Procedia PDF Downloads 120
216 Examination of Teacher Candidates' Attitudes Towards Disabled Individuals' Employment in Terms of Various Variables

Authors: Tuna Şahsuvaroğlu

Abstract:

The concept of disability has been the subject of many studies in the national and international literature, with its social, sociological, political, anthropological and economic dimensions as well as its individual and social consequences. A disabled person is defined as a person who has difficulties in adapting to social life and meeting daily needs due to loss of physical, mental, spiritual, sensory or social abilities to various degrees, whether from birth or for any later reason, and who is in need of protection, care, rehabilitation, counseling and support services. The industrial revolution and the rapid industrialization it brought with it led to an increase in the rate of disabilities resulting from work accidents, in addition to congenital disabilities. This increase resulted in disabled people being included in the employment policies of nations as a disadvantaged group. Although the participation of disabled individuals in the workforce is of great importance both for increasing their quality of life and for their integration with society, and although disabled individuals are willing to participate in the workforce, they encounter many problems. One of these problems is the negative attitudes and prejudices that develop in society towards the employment of disabled individuals. One of the most powerful ways to turn these negative attitudes and prejudices into positive ones is education. Education is a way of guiding societies and transferring existing social characteristics to future generations. This is maintained thanks to teachers, one of the most dynamic parts of society, who act as the locomotive of education in its mission to give direction, to transfer knowledge and, fundamentally, to help and teach. For this reason, there is a strong relationship between the teaching profession and the attitudes formed in society towards the employment of disabled individuals, as they can influence each other. 
Therefore, the purpose of this study is to examine teacher candidates' attitudes towards the employment of disabled individuals in terms of various variables. The participants consist of 665 teacher candidates studying in various departments of the Marmara University Faculty of Education in the 2022-2023 academic year. The descriptive survey model, a form of the general survey model, was used, as the study intends to determine the attitudes of teacher candidates towards the employment of disabled individuals in terms of different variables. The Attitude Scale Towards Employment of Disabled People was used to collect data. The data were analyzed according to the variables of age, gender, marital status, department, and whether there is a disabled relative in the family, and the findings were discussed in the context of further research.

Keywords: teacher candidates, disabled, attitudes towards the employment of disabled people, attitude scale towards the employment of disabled people

Procedia PDF Downloads 55
215 Bioinformatics High Performance Computation and Big Data

Authors: Javed Mohammed

Abstract:

Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists for the first time to gain a profound understanding of the deepest biological functions. Solving biological problems may require High-Performance Computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data. 
It illustrates the indispensability of HPC in meeting the scientific and engineering challenges of the twenty-first century, and shows how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. The article also indicates solutions to optimization problems and the benefits of Big Data for computational biology, and surveys the current state of the art and future generations of HPC computing with Big Data in biology.
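The "all-to-all comparison" workload mentioned above is easy to sketch: n sequences yield n(n-1)/2 pairwise comparisons, a polynomial-time job that becomes HPC-scale as n grows. Here Python's multiprocessing stands in for a real HPC scheduler, and the toy Hamming-style similarity is an assumption for illustration, not a method from the paper:

```python
# Sketch of an all-to-all pairwise comparison over sequences,
# parallelized with a worker pool. Sequences and the similarity
# measure are toy placeholders.
from itertools import combinations
from multiprocessing import Pool

def hamming_similarity(pair):
    """Fraction of matching positions between two sequences."""
    a, b = pair
    matches = sum(x == y for x, y in zip(a, b))
    return a, b, matches / min(len(a), len(b))

sequences = ["ACGTACGT", "ACGTTCGT", "TTGTACGA", "ACGAACGT"]

if __name__ == "__main__":
    # n*(n-1)/2 = 6 comparisons, farmed out across 2 workers
    with Pool(2) as pool:
        for a, b, score in pool.map(hamming_similarity,
                                    combinations(sequences, 2)):
            print(f"{a} vs {b}: {score:.2f}")
```

On a cluster, the same pattern maps onto MPI ranks or a batch scheduler; the per-pair work is independent, which is why such jobs parallelize so well.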

Keywords: high performance, big data, parallel computation, molecular data, computational biology

Procedia PDF Downloads 357
214 Traumatic Events, Post-traumatic Symptoms, Personal Resilience, Quality of Life, and Organizational Commitment Among Midwives: A Cross-Sectional Study

Authors: Kinneret Segal

Abstract:

The work of a midwife is emotionally challenging, both positively and negatively. Midwives share moments of joy when a baby is welcomed into the world, and also attend difficult events of loss and trauma. The relationship that develops with the mother is the essence of the midwife's care, and it is a fundamental source of motivation and professional satisfaction. This close relationship may become a double-edged sword in cases of exposure to traumatic events at birth. Birth complications, exposure to emergencies and traumatic events, and loss can affect the professional quality of life and the compassion satisfaction of the midwife. The issue of traumatic experiences in the work of midwives seems not to have been sufficiently explored. The present study examined the associations between exposure to traumatic events, personal resilience, post-traumatic symptoms, professional quality of life and organizational commitment among midwives in Israeli hospitals. 131 midwives from three hospitals in central Israel participated in this study. The data were collected during 2021 using a self-report questionnaire that examined sociodemographic characteristics, the degree of exposure to traumatic events in the delivery room, personal resilience, post-traumatic symptoms, professional quality of life, and organizational commitment. The three most difficult traumatic events for the midwives were the death or feared death of a newborn, the death or feared death of a mother, and a silent birth (stillbirth). The higher the frequency of exposure to traumatic events, the more numerous and intense the post-trauma symptoms. The more numerous and powerful the post-trauma symptoms, the higher the level of professional burnout and/or compassion fatigue, and the lower the level of compassion satisfaction. High levels of compassion satisfaction and/or low professional burnout were expressed in a heightened sense of organizational commitment. 
Personal resilience, country of birth, traumatic symptoms and organizational commitment predicted compassion satisfaction. Midwives are exposed to traumatic events associated with dissatisfaction and impairment of the professional quality of life that accompany burnout and compassion fatigue. Exposure to traumatic events leads to the appearance of traumatic symptoms and to a decrease in organizational commitment and in psychological and mental well-being. The issue needs to be addressed by implementing training programs, organizational support, and policies to improve well-being and quality of care among midwives.

Keywords: traumatic experiences, midwives, quality of life, burnout, organizational commitment, personal resilience

Procedia PDF Downloads 81
213 Organized Crime: A Social Challenge for Kosovo towards European Union Integration

Authors: Samedin Mehmeti

Abstract:

The very tense political and economic situation, and in particular the armed conflicts that accompanied the break-up of the former Yugoslavia, drove migrations and the displacement of population. The imposition of international sanctions and embargoes in particular encouraged the creation of organized criminal groups. Many members of the former Yugoslav security apparatus, in collaboration with ordinary criminal groups, engaged in smuggling of goods, petroleum and arms, the sale and transport of drugs, contract killings, damage to public property, kidnappings, extortion, racketeering, etc. This tradition of criminality, in other forms and with other methods, continued after the conflicts and continues with high intensity even today. One of the most delicate problems of organized crime activity is its impact on the economic sphere, where organized crime opposes and severely damages national security and the economy by criminalizing certain sectors. Organized crime groups that find Kosovo a suitable place to develop their criminal activities are characterized by the loyalty of many people, especially through family and kinship connections, in carrying out criminal activities, and by the existence of a powerful leadership hierarchy which in many cases includes corrupt officials of the state apparatus. Groups have a clear hierarchy and a flexible structure of command, and each member within the criminal group knows his concrete duties. According to statistics presented in police reports, it is notable that Kosovo has a large number of cases of organized crime, cultivation, trafficking and possession of narcotics. As is very well known, one of the primary conditions that must be fulfilled on the track toward integration into the European Union is precisely to prevent and combat organized crime. Kosovo has serious problems with its prosecutorial and judicial system. 
The misuse of public funds, even those coming directly from the EU budget or the budgets of European Union member states, has a negative impact on this process. The economic crisis that has gripped some EU countries has created an environment in which there are far fewer resources and opportunities to invest in preventing and combating organized crime within member states. This automatically reduces the level of financial support for other countries in the fight against organized crime. Kosovo, as a poor country, is now less likely to benefit from the support tools eventually offered by the EU in this area.

Keywords: police, European integration, organized crime, narcotics

Procedia PDF Downloads 427
212 Development of mHealth Information in Community Based on Geographical Information: A Case Study from Saraphi District, Chiang Mai, Thailand

Authors: Waraporn Boonchieng, Ekkarat Boonchieng, Wilawan Senaratana, Jaras Singkaew

Abstract:

A geographical information system (GIS) is a system designed for collecting and analyzing geographical data, and is widely used for that purpose. Since the introduction of ultra-mobile 'smart' devices, investigators, clinicians, and even the general public have had powerful new tools for collecting, uploading and accessing information in the field. Epidemiology paired with GIS will increase the efficacy of preventive health care services. The objective of this study is to link the GPS location services available on common mobile devices with district health systems, storing data on a private cloud system. The mobile application has been developed for iOS, Android, and web-based platforms. The system consists of two parts of district health information: recorded resident data forms and individual health record forms, which were developed and approved through opinion sharing and public hearings. The application's graphical user interface was developed using HTML5 and PHP, with MySQL as the database management system (DBMS). The reporting module of the developed software displays data in a variety of views, from traditional tables to various types of high-resolution, layered graphics, incorporating map location information with street views from Google Maps. Export to multiple standard formats, such as PDF, PNG, JPG, and XLS, is also supported. Data collection in the database began in March 2013, carried out by district health volunteers and district youth volunteers who had completed the application training program. District health information consisted of patients' household coordinates, individual health data, and social and economic information, combined with Google Street View data collected in March 2014. The system covered 16,085 of the 23,701 households (67.87%) and 47,811 of the 79,855 people (59.87%) in Saraphi district, Chiang Mai Province. 
The reports generated by the system have directly benefited Saraphi District Hospital. Healthcare providers are able to use the basic health data to provide specific home health care services and to create health promotion activities according to the medical needs of people in the community.

Keywords: health, public health, GIS, geographic information system

Procedia PDF Downloads 324
211 Maintenance Optimization for a Multi-Component System Using Factored Partially Observable Markov Decision Processes

Authors: Ipek Kivanc, Demet Ozgur-Unluakin

Abstract:

Over the past years, technological innovations and advancements have played an important role in the industrial world. Due to technological improvements, the degree of complexity of systems has increased. Hence, systems are becoming more uncertain as a result of this increased complexity, resulting in more cost, and it is challenging to cope with this situation. Implementing efficient planning of maintenance activities in such systems is therefore becoming essential. Partially Observable Markov Decision Processes (POMDPs) are powerful tools for stochastic sequential decision problems under uncertainty. Although maintenance optimization in a dynamic environment can be modeled as such a sequential decision problem, POMDPs are not widely used for tackling maintenance problems. However, they can be well-suited frameworks for obtaining optimal maintenance policies. In the classical representation of the POMDP framework, the system is denoted by a single node which has multiple states. The main drawback of this classical approach is that the state space grows exponentially with the number of state variables. On the other hand, the factored representation of POMDPs makes it possible to simplify the complexity of the states by taking advantage of the factored structure already available in the nature of the problem. The main idea of factored POMDPs is that they can be compactly modeled through dynamic Bayesian networks (DBNs), which are graphical representations of stochastic processes, by exploiting the structure of this representation. This study aims to demonstrate how maintenance planning for dynamic systems can be modeled with factored POMDPs. An empirical maintenance planning problem is designed for a dynamic system consisting of four partially observable components deteriorating in time. To solve the empirical model, we resort to the Symbolic Perseus solver, one of the state-of-the-art factored POMDP solvers enabling approximate solutions. 
We also generate predefined policies based on corrective or proactive maintenance strategies, execute the policies on the empirical problem for many replications, and compare their performances under various scenarios. The results show that the policies computed from the POMDP model are superior to the others. Acknowledgment: This work is supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK) under grant no: 117M587.
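The gain from factoring can be illustrated with a minimal sketch: instead of one joint belief over all four components (2^4 joint states), each component keeps its own small belief, updated with its own transition and observation model, as the DBN factorization allows. All probabilities below are invented for illustration and are not the paper's model:

```python
# Illustrative factored belief update for four independent components.
# Each component has states {OK, FAILED}; transition and observation
# probabilities are placeholders, not the study's parameters.
import numpy as np

n_components = 4
# Row-stochastic transition matrix P(next | current): deterioration only
T = np.array([[0.95, 0.05],   # OK  -> OK / FAILED
              [0.00, 1.00]])  # FAILED stays FAILED without repair
# Partial observability: P("alarm" observed | state)
P_alarm = np.array([0.1, 0.8])

def update_belief(belief, alarm_seen):
    """One predict-then-correct step for a single component's belief."""
    predicted = belief @ T                      # time update
    likelihood = P_alarm if alarm_seen else 1 - P_alarm
    posterior = predicted * likelihood          # measurement update
    return posterior / posterior.sum()

beliefs = [np.array([1.0, 0.0]) for _ in range(n_components)]
observations = [False, True, False, False]      # alarms this period
beliefs = [update_belief(b, o) for b, o in zip(beliefs, observations)]
for i, b in enumerate(beliefs):
    print(f"component {i}: P(FAILED) = {b[1]:.3f}")
```

The factored version tracks 4 beliefs of size 2 instead of one belief of size 16; with more components or states per component, this gap is what makes solvers like Symbolic Perseus tractable.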

Keywords: factored representation, maintenance, multi-component system, partially observable Markov decision processes

Procedia PDF Downloads 129
210 Mobile and Hot Spot Measurement with Optical Particle Counting Based Dust Monitor EDM264

Authors: V. Ziegler, F. Schneider, M. Pesch

Abstract:

With the EDM264, GRIMM offers a solution for mobile short- and long-term measurements in outdoor areas and at production sites, for research as well as for permanent areal observations at near-reference quality. The EDM264 features a powerful and robust measuring cell based on the optical particle counting (OPC) principle, with all the advantages that users of GRIMM's portable aerosol spectrometers are used to. The system is embedded in a compact weather-protection housing with all-weather sampling, a heated inlet system, a data logger, and a meteorological sensor. With TSP, PM10, PM4, PM2.5, PM1, and PMcoarse, the EDM264 provides all fine dust fractions in real time, valid for outdoor applications and calculated with the proven GRIMM enviro-algorithm, as well as six additional dust mass fractions (pm10, pm2.5, pm1, inhalable, thoracic and respirable) for IAQ and workplace measurements. This highly versatile instrument performs real-time monitoring of particle number and particle size, and provides information on particle surface distribution as well as dust mass distribution. GRIMM's EDM264 has 31 equidistant size channels, which are PSL traceable. A high-end data logger enables data acquisition and wireless communication via LTE or WLAN, or wired communication via Ethernet. Backup copies of the measurement data are stored directly in the device. The rinsing-air function, which protects the laser and detector in the optical cell, further increases the reliability and long-term stability of the EDM264 under different environmental and climatic conditions. The entire sample volume flow of 1.2 L/min is analyzed in full (100%) in the optical cell, which assures excellent counting efficiency at low and high concentrations and complies with the ISO 21501-1 standard for OPCs. With all these features, the EDM264 is a world-leading dust monitor for precise monitoring of particulate matter and particle number concentration. 
This highly reliable instrument is an indispensable tool for many users who need to measure aerosol levels and air quality outdoors, on construction sites, or at production facilities.
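For illustration, the general idea behind converting OPC size channels into PM mass fractions (which the EDM264 does with its proprietary enviro-algorithm) can be sketched as follows; the bin edges, counts, and assumed unit density are placeholders, not the instrument's actual algorithm or calibration:

```python
# Hedged sketch: approximate PM mass fractions from OPC size bins by
# treating particles as spheres of assumed density. All numbers are
# illustrative; real instruments apply calibrated, proprietary methods.
import math

# (lower_um, upper_um, particle_count_per_litre) for a few example bins
bins = [(0.25, 0.5, 12000), (0.5, 1.0, 3000),
        (1.0, 2.5, 400), (2.5, 10.0, 50)]
DENSITY = 1.65e3  # assumed particle density, kg/m^3

def pm_mass(bins, cutoff_um):
    """Approximate mass concentration (ug/m^3) of particles below cutoff."""
    total = 0.0
    for lo, hi, count in bins:
        if hi > cutoff_um:
            continue                                  # bin above cutoff
        d = (lo + hi) / 2 * 1e-6                      # mean diameter, m
        volume = math.pi / 6 * d ** 3                 # sphere volume, m^3
        total += count * 1000 * volume * DENSITY      # counts/L -> /m^3
    return total * 1e9                                # kg/m^3 -> ug/m^3

print(f"PM1   ~ {pm_mass(bins, 1.0):.2f} ug/m^3")
print(f"PM2.5 ~ {pm_mass(bins, 2.5):.2f} ug/m^3")
```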

Keywords: aerosol research, aerial observation, fence line monitoring, wild fire detection

Procedia PDF Downloads 142
209 Learners’ Perceptions of Tertiary Level Teachers’ Code Switching: A Vietnamese Perspective

Authors: Hoa Pham

Abstract:

The literature on language teaching and second language acquisition has been largely driven by monolingual ideology with a common assumption that a second language (L2) is best taught and learned in the L2 only. The current study challenges this assumption by reporting learners' positive perceptions of tertiary level teachers' code switching practices in Vietnam. The findings of this study contribute to our understanding of code switching practices in language classrooms from a learners' perspective. Data were collected from student participants who were working towards a Bachelor degree in English within the English for Business Communication stream through the use of focus group interviews. The literature has documented that this method of interviewing has a number of distinct advantages over individual student interviews. For instance, group interactions generated by focus groups create a more natural environment than that of an individual interview because they include a range of communicative processes in which each individual may influence or be influenced by others - as they are in their real life. The process of interaction provides the opportunity to obtain the meanings and answers to a problem that are "socially constructed rather than individually created" leading to the capture of real-life data. The distinct feature of group interaction offered by this technique makes it a powerful means of obtaining deeper and richer data than those from individual interviews. The data generated through this study were analysed using a constant comparative approach. Overall, the students expressed positive views of this practice indicating that it is a useful teaching strategy. Teacher code switching was seen as a learning resource and a source supporting language output. This practice was perceived to promote student comprehension and to aid the learning of content and target language knowledge. 
This practice was also believed to scaffold the students' language production in different contexts. However, the students indicated their preference for teacher code switching to be constrained, as extensive use was believed to negatively impact their L2 learning and trigger cognitive reliance on the L1 for L2 learning. The students also perceived that when the L1 was used to a great extent, their ability to develop as autonomous learners was negatively impacted. This study found that teacher code switching was supported in certain contexts by learners, thus suggesting that the widespread assumption favoring a monolingual teaching approach needs to be reconsidered.

Keywords: codeswitching, L1 use, L2 teaching, learners’ perception

Procedia PDF Downloads 306
208 The Motivational Factors of Learning Languages for Specific Purposes

Authors: Janos Farkas, Maria Czeller, Ildiko Tar

Abstract:

A remarkable feature of today's language teaching is its attention to learners' language learning motivation, which is always considered a very important factor and has been widely discussed and investigated. This paper presents a research study conducted in higher education institutions among students majoring in business and administration in Hungary. The aim of the research was to investigate the motivational factors of students learning languages for business purposes, to set up a multivariate statistical model of language learning motivation, and to examine the model's main components across different social background variables. The research question asked whether the motivation of business students learning LSP could be characterized through some main components. The principal components of LSP motivation were created, and their correlations with social background variables were explored. The main principal components of learning a language for business purposes were "professional future", "abroad", "performance", and "external". In the online voluntary questionnaire, 28 questions were asked about students' motivational attitudes, and 449 students filled in the questionnaire. Descriptive statistical calculations were performed, then the difference between the highest and lowest means was analyzed by a one-sample t-test. The assessment of LSP learning among students whose parents have different qualifications was examined by one-way analysis of variance and the Tukey post-hoc test. The correlations between student motivation statements and various social background variables and other variables related to LSP learning motivation (gender, place of residence, mother's education, father's education, family financial situation, etc.) were also examined. 
The attitudes related to motivation were separated by principal component analysis, and then differences in language learning motivation across socio-economic and other variables were examined, using the principal component values, with an independent two-sample t-test. The descriptive statistical analysis of language learning motivation revealed that students learn LSP because this knowledge will come in handy in the future. It can be concluded that students consider learning a language for business purposes to be essential and see its future benefits. Therefore, LSP teaching has an important role and place in higher education. The results support the L2 Motivational Self System, in which the ideal L2 self embraces the ideas and desires that the foreign language learner wants to achieve in the future. One such desire is the recognition that students will need technical language skills in the future, and this is a powerful motivation for them to learn a language.
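The analysis pipeline described above, principal component analysis on motivation items followed by an independent two-sample t-test on component scores across a background variable, can be sketched with random placeholder data (not the survey's 449 responses):

```python
# Illustrative sketch of the PCA + two-sample t-test pipeline.
# The Likert-type answers and the binary background variable are
# random placeholders, not the study's survey data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_students, n_items = 100, 8
answers = rng.integers(1, 6, size=(n_students, n_items)).astype(float)

# PCA via eigen-decomposition of the correlation matrix
z = (answers - answers.mean(0)) / answers.std(0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
first_pc = z @ eigvecs[:, -1]   # scores on the strongest component

# Compare component scores across a binary background variable
group = rng.integers(0, 2, n_students)
t, p = stats.ttest_ind(first_pc[group == 0], first_pc[group == 1])
print(f"t = {t:.2f}, p = {p:.3f}")
```

With real survey data, the component loadings (`eigvecs`) are what get interpreted and named, e.g. "professional future" or "abroad".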

Keywords: higher education, language learning motivation, LSP, statistical analysis

Procedia PDF Downloads 88
207 Manufacturing and Calibration of Material Standards for Optical Microscopy in Industrial Environments

Authors: Alberto Mínguez-Martínez, Jesús De Vicente Y Oliva

Abstract:

It seems that we live in a world in which the trend in industrial environments is the miniaturization of systems and materials and the fabrication of parts at the micro- and nano-scale. The problem arises when manufacturers want to study the quality of their production. This characteristic is becoming crucial due to the evolution of the industry and the development of Industry 4.0. As Industry 4.0 is based on digital models of production and processes, having accurate measurements becomes essential. At this point, the field of metrology plays an important role, as it is a powerful tool to ensure more stable production and to reduce scrap and the cost of non-conformities. The most widespread measuring instruments that allow us to carry out accurate measurements at these scales are optical microscopes, whether traditional, confocal, or focus variation microscopes, profile projectors, or any other similar measurement system. However, the accuracy of measurements depends on their traceability to the SI unit of length (the meter). Providing adequate traceability to 2D and 3D dimensional measurements at the micro- and nano-scale in industrial environments is a problem that is still being studied, and it does not have a unique answer. In addition, if commercial material standards for the micro- and nano-scale are considered, two main problems appear. On the one hand, those material standards that could be considered complete and very interesting do not provide traceability of dimensional measurements; on the other hand, their calibration is very expensive. This situation implies that these kinds of standards will not succeed in industrial environments and, as a result, manufacturers will work in the absence of traceability.
To solve this problem in industrial environments, it becomes necessary to have material standards that are easy to use, agile, adaptable to different forms, cheap to manufacture and, of course, traceable to the definition of the meter through simple methods. By using these ‘customized standards’, it would be possible to adapt and design measuring procedures for each application, and manufacturers would work with some traceability. It is important to note that, although this traceability is clearly incomplete, this situation is preferable to working in the absence of it. Recently, the versatility and utility of laser technology and other additive manufacturing (AM) technologies for producing customized material standards have been demonstrated. In this paper, the authors propose to manufacture a customized material standard using an ultraviolet laser system, together with a method to calibrate it. To conclude, the results of the calibration carried out in an accredited dimensional metrology laboratory are presented.
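As an illustration of the kind of traceable calibration described, a minimal uncertainty budget in the style of the GUM (root-sum-of-squares combination of contributions) might look like the following. All numbers are hypothetical and are not results from the accredited laboratory:

```python
import math

# Hypothetical calibration of a line-spacing material standard on an
# optical microscope; all values are illustrative, in micrometres
nominal = 50.0     # nominal pitch of the manufactured standard
measured = 50.12   # mean of repeated measurements

u_repeat = 0.03    # Type A: standard deviation of the mean
u_scale = 0.04     # Type B: traceability of the microscope scale
u_temp = 0.01      # Type B: thermal expansion contribution

correction = measured - nominal
u_combined = math.sqrt(u_repeat**2 + u_scale**2 + u_temp**2)
U_expanded = 2.0 * u_combined   # coverage factor k = 2 (approx. 95 %)
```

The calibration certificate would then report the measured pitch together with the expanded uncertainty, which is what carries the traceability chain down to the user's measuring procedure.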

Keywords: industrial environment, material standards, optical measuring instrument, traceability

Procedia PDF Downloads 112
206 Origins of the Tattoo: Decoding the Ancient Meanings of Terrestrial Body Art to Establish a Connection between the Natural World and Humans Today

Authors: Sangeet Anand

Abstract:

Body art and tattooing have been practiced as a form of self-expression for centuries, and this study analyzes the pertinence of tattoo culture in our everyday lives and ancient past. Individuals of different cultures represent ideas, practices, and elements of their cultures through symbolic representation. These symbols come in all shapes and sizes and can be as simple as the makeup you put on every day or as permanent as a tattoo. In the long run, individuals who choose to display art on their bodies are seeking to express their individuality. In addition, these visuals are ultimately a reflection of what our respective cultures deem beautiful, important, and powerful to the human eye. They make us known to the world and give us a plausible identity in an ever-changing world. We have lived through and seen a rise in hippie culture today; the bodily decoration displayed by this fad has made it seem as though body art is a relatively new visual language. Quite to the contrary, it is not. Through cultural symbolic exploration, we can answer key questions about ideas that have been raised for centuries. Through careful, in-depth interviews, this study takes a broad subject matter, art and symbolism, and culminates it into a deeper philosophical connection between the world and its past. The basic methodologies used in this sociocultural study include interview questionnaires and textual analysis, which encompass a subject and interviewer as well as source material. The major findings of this study reveal a distinct connection between cultural heritage and the day-to-day likings of an individual. The participant studied during this project demonstrated a clear passion for hobbies that were practiced even by her ancestors. We can conclude, through these findings, that there is a deeper cultural connection between modern-day humans, the first humans, and the surrounding environments.
Our symbols today are a direct reflection of the elements of nature that our human ancestors were exposed to, and, through cultural acceptance, we can adorn ourselves with these representations to help others identify our pasts. Body art embraces different aspects of different cultures and holds significance, tells stories, and persists, even as the human population rapidly integrates. Following this pattern, our human descendants will continue to represent their cultures and identities in the future. Body art is an integral element in understanding how and why people identify with certain aspects of life over others, and it broadens the scope for conducting more analysis cross-culturally.

Keywords: natural, symbolism, tattoo, terrestrial

Procedia PDF Downloads 100
205 Noncovalent Antibody-Nanomaterial Conjugates: A Simple Approach to Produce Targeted Nanomedicines

Authors: Nicholas Fletcher, Zachary Houston, Yongmei Zhao, Christopher Howard, Kristofer Thurecht

Abstract:

One promising approach to enhance nanomedicine therapeutic efficacy is to include a targeting agent, such as an antibody, to increase accumulation at the tumor site. However, the application of such targeted nanomedicines remains limited, in part due to the difficulties involved in conjugating biomolecules to synthetic nanomaterials. One approach recently developed to overcome this has been to engineer bispecific antibodies (BsAbs) with dual specificity, whereby one portion binds to methoxy polyethylene glycol (mPEG) epitopes present on synthetic nanomedicines, while the other binds to molecular disease markers of interest. In this way, noncovalent complexes of a nanomedicine core, comprising a hyperbranched polymer (HBP) of primarily mPEG, decorated with targeting ligands can be produced by simple mixing. Further work in this area has demonstrated that such complexes targeting the breast cancer marker epidermal growth factor receptor (EGFR) show enhanced binding to tumor cells both in vitro and in vivo. Indeed, the enhanced accumulation at the tumor site resulted in improved therapeutic outcomes compared to untargeted nanomedicines and free chemotherapeutics. The current work on these BsAb-HBP conjugates focuses on further probing antibody-nanomaterial interactions and demonstrating broad applicability to a range of cancer types. Herein, BsAb-HBP materials targeted towards prostate-specific membrane antigen (PSMA) are reported, together with a study of their behavior in vivo using ⁸⁹Zr positron emission tomography (PET) in a dual-tumor prostate cancer xenograft model. In this model, mice bearing both PSMA+ and PSMA- tumors allow PET imaging to discriminate between nonspecific and targeted uptake in tumors and to better quantify the increased accumulation following BsAb conjugation. Also examined is the potential for forming these targeted complexes in situ following injection of the individual components.
The aim of this approach is to avoid the undesirable clearance of proteinaceous complexes upon injection, which limits the available therapeutic. Ultimately, these results demonstrate BsAb-functionalized nanomaterials as a powerful and versatile approach for producing targeted nanomedicines for a variety of cancers.

Keywords: bioengineering, cancer, nanomedicine, polymer chemistry

Procedia PDF Downloads 132
204 A First Step towards Automatic Evolutionary for Gas Lifts Allocation Optimization

Authors: Younis Elhaddad, Alfonso Ortega

Abstract:

Oil production by means of gas lift is a standard technique in the oil production industry. Optimizing the total amount of oil produced in terms of the amount of gas injected is a key question in this domain. Different methods have been tested to propose a general methodology: many apply well-known numerical methods, and some have taken into account the power of evolutionary approaches. Our goal is to provide domain experts with a powerful automatic search engine into which they can introduce their knowledge in a format close to the one used in their domain, and from which they get solutions comprehensible in the same terms. These proposals introduced into the genetic engine the most expressive formal models to represent the solutions to the problem. These algorithms have proven to be as effective as other genetic systems but more flexible and comfortable for the researcher, although they usually require huge search spaces to justify their use due to the computational resources involved in the formal models. The first step in evaluating the viability of applying our approaches to this realm is to fully understand the domain and to select an instance of the problem (gas lift optimization) in which applying genetic approaches seems promising. After analyzing the state of the art of this topic, we decided to choose a previous work from the literature that faces the problem by means of numerical methods. This contribution includes enough detail to be reproduced and complete data to be carefully analyzed. We designed a classical, simple genetic algorithm just to try to reproduce the same results and to understand the problem in depth. We could easily incorporate the well models and data used by the authors and translate their mathematical model, to be numerically optimized, into a proper fitness function.
We analyzed the 100 curves they use in their experiment and observed similar results; in addition, our system automatically inferred an optimum total amount of injected gas for the field compatible with the sum of the optimum gas injected in each well as reported by them. We have identified several constraints that could be interesting to incorporate into the optimization process but that could be difficult to express numerically. It could also be interesting to automatically propose other mathematical models to fit both individual well curves and the behaviour of the complete field. All these facts and conclusions justify continuing to explore the viability of applying the more sophisticated approaches previously proposed by our research group.
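A classical, simple genetic algorithm of the kind described above, with gas-allocation vectors as chromosomes and a fitness function built from well performance curves plus a penalty for exceeding the available gas, might be sketched as below. The curve model, penalty weight, and GA parameters are illustrative assumptions, not those of the cited work:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical field: 6 wells sharing a fixed daily gas budget
N_WELLS, GAS_BUDGET = 6, 10.0

# Stand-in well performance curves (oil rate vs. injected gas),
# concave with diminishing returns; coefficients are illustrative
a = rng.uniform(2.0, 5.0, N_WELLS)
b = rng.uniform(0.3, 0.8, N_WELLS)

def total_oil(alloc):
    return np.sum(a * alloc / (1.0 + b * alloc), axis=-1)

def fitness(pop):
    # Penalize any allocation that exceeds the available gas
    excess = np.maximum(pop.sum(axis=1) - GAS_BUDGET, 0.0)
    return total_oil(pop) - 10.0 * excess

# Chromosome = vector of gas injection rates, one per well
pop = rng.uniform(0.0, GAS_BUDGET / N_WELLS, (60, N_WELLS))
for _ in range(200):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-30:]]                 # truncation selection
    i, j = rng.integers(0, 30, (2, 60))
    mix = rng.random((60, N_WELLS))
    pop = mix * parents[i] + (1.0 - mix) * parents[j]  # blend crossover
    pop += rng.normal(0.0, 0.05, pop.shape)            # Gaussian mutation
    pop = np.clip(pop, 0.0, None)                      # gas rates stay >= 0

best = pop[np.argmax(fitness(pop))]
```

The appeal of this formulation is that constraints that are hard to express numerically (the ones mentioned above) can simply be added to the fitness function as further penalty terms.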

Keywords: evolutionary automatic programming, gas lift, genetic algorithms, oil production

Procedia PDF Downloads 157
203 Metabolic Profiling in Breast Cancer Applying Micro-Sampling of Biological Fluids and Analysis by Gas Chromatography – Mass Spectrometry

Authors: Mónica P. Cala, Juan S. Carreño, Roland J.W. Meesters

Abstract:

Recently, the collection of biological fluids on special filter papers has become a popular micro-sampling technique. The dried blood spot (DBS) micro-sampling technique, in particular, has gained much attention and is currently applied in various life sciences research areas. As a result of this popularity, DBS are not only intensively competing with the venous blood sampling method but are at this moment widely applied in numerous bioanalytical assays, in particular in the screening of inherited metabolic diseases, pharmacokinetic modeling, and therapeutic drug monitoring. Recently, micro-sampling techniques were also introduced in “omics” areas, including metabolomics. For a metabolic profiling study, we applied micro-sampling of biological fluids (blood and plasma) from healthy controls and from women with breast cancer. From blood samples, dried blood and plasma samples were prepared by spotting 8 µL of sample onto pre-cut 5-mm paper disks, followed by drying of the disks for 100 minutes. Dried disks were then extracted with 100 µL of methanol. From liquid blood and plasma samples, 40 µL were deproteinized with methanol, followed by centrifugation and collection of the supernatants. Supernatants and extracts were evaporated to dryness under nitrogen gas, and the residues were derivatized with O-methoxyamine and MSTFA. The C17:0 methyl ester in heptane (10 ppm) was used as internal standard. Deconvolution and alignment of full-scan (m/z 50-500) MS data were done by AMDIS and SpectConnect (http://spectconnect.mit.edu) software, respectively. Statistical data analysis was done by Principal Component Analysis (PCA) using R software. The results obtained from our preliminary study indicate that the use of dried blood/plasma on paper disks could be a powerful new tool in metabolic profiling. Many of the metabolites observed in plasma (liquid/dried) were also positively identified in whole blood samples (liquid/dried).
Whole blood could be a potential substitute matrix for plasma in metabolomic profiling studies, as could micro-sampling techniques for the collection of samples in clinical studies. It was concluded that the separation of the different sampling methodologies (liquid vs. dried) as observed by PCA was due to the different sample treatment protocols applied. More experiments need to be done to confirm the obtained observations, and a more rigorous validation of these micro-sampling techniques is also needed. The novelty of our approach lies in the application of different biological fluid micro-sampling techniques for metabolic profiling.
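The normalization and PCA steps described (peak areas ratioed to the C17:0 methyl ester internal standard, then autoscaling and projection onto principal components) can be sketched as follows. The peak matrix here is synthetic, and the sample and feature counts are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical peak-area matrix: 20 samples x 50 metabolite features,
# plus a final column for the C17:0 methyl ester internal standard
peaks = rng.lognormal(mean=2.0, sigma=0.3, size=(20, 51))
istd = peaks[:, -1]

# Ratio every metabolite to the internal standard, then log-transform
norm = np.log(peaks[:, :-1] / istd[:, None])

# Autoscale and project onto the first two principal components
z = (norm - norm.mean(axis=0)) / norm.std(axis=0)
_, s, vt = np.linalg.svd(z, full_matrices=False)
pc_scores = z @ vt[:2].T
explained = s[:2] ** 2 / np.sum(s ** 2)
```

A scores plot of `pc_scores` colored by sampling methodology (liquid vs. dried) is the kind of view in which the separation attributed above to the different sample treatment protocols would appear.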

Keywords: biofluids, breast cancer, metabolic profiling, micro-sampling

Procedia PDF Downloads 404
202 Construction of Graph Signal Modulations via Graph Fourier Transform and Its Applications

Authors: Xianwei Zheng, Yuan Yan Tang

Abstract:

The classical windowed Fourier transform has been widely used in signal processing, image processing, machine learning, and pattern recognition. The related Gabor transform is powerful enough to capture the texture information of any given dataset. Recently, in the emerging field of graph signal processing, researchers have devoted themselves to developing a theory to handle so-called graph signals. Within this developing theory, a windowed graph Fourier transform has been constructed to establish a time-frequency analysis framework for graph signals. The windowed graph Fourier transform is defined using the translation and modulation operators of graph signals, following calculations similar to those of the classical windowed Fourier transform. Specifically, the translation and modulation operators of graph signals are defined using the Laplacian eigenvectors. For a given graph signal, its translation is defined in a manner similar to its definition in classical signal processing: where the classical translation operator can be defined using the Fourier atoms, the graph signal translation is defined analogously using the Laplacian eigenvectors. The modulation of a graph signal can also be established using the Laplacian eigenvectors. The windowed graph Fourier transform based on these two operators has been applied to obtain time-frequency representations of graph signals. Fundamentally, the existing modulation operator is defined, like classical modulation, by multiplying a graph signal with the entries of a Fourier atom; however, a single Laplacian eigenvector entry cannot play the same role as a Fourier atom, and this definition ignores the relationship between the translation and modulation operators. In this paper, a new definition of the modulation operator is proposed, and thus another time-frequency framework for graph signals is constructed.
Specifically, the relationship between the translation and modulation operations can be established through the Fourier transform: for any signal, the Fourier transform of its translation is the modulation of its Fourier transform. Thus, the modulation of any signal can be defined as the inverse Fourier transform of the translation of its Fourier transform. Analogously, the graph modulation of any graph signal can be defined as the inverse graph Fourier transform of the translation of its graph Fourier transform. This novel definition of the graph modulation operator establishes a relationship between the translation and modulation operations. The new modulation operation and the original translation operation are applied to construct a new framework for graph signal time-frequency analysis. Furthermore, a windowed graph Fourier frame theory is developed: necessary and sufficient conditions for constructing windowed graph Fourier frames, tight frames, and dual frames are presented in this paper. The novel graph signal time-frequency analysis framework is applied to signals defined on well-known graphs, e.g., the Minnesota road graph and random graphs. Experimental results show that the novel framework captures new features of graph signals.
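A minimal sketch of this construction, assuming the combinatorial Laplacian and the standard generalized translation defined from the Laplacian eigenvectors, with the new modulation taken as the inverse graph Fourier transform of the translated spectrum, might look like the following (graph size and signals are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 8

# Random undirected graph and its combinatorial Laplacian L = D - A
A = np.triu((rng.random((N, N)) < 0.4).astype(float), 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A

# Laplacian eigenvectors form the graph Fourier basis U
lam, U = np.linalg.eigh(L)   # lam: the graph frequencies

def gft(f):
    return U.T @ f           # graph Fourier transform

def igft(F):
    return U @ F             # inverse graph Fourier transform

def translate(f, n):
    # Generalized translation: each Fourier coefficient is
    # weighted by the eigenvector value at vertex n
    return np.sqrt(N) * igft(gft(f) * U[n, :])

def modulate(f, k):
    # Proposed modulation: inverse GFT of the translated spectrum,
    # mirroring the classical translation/modulation duality
    return igft(translate(gft(f), k))

f = rng.standard_normal(N)
g = modulate(f, 2)
```

The point of the duality-based definition is visible in the code: `modulate` is built entirely from `translate` and the transform pair, so the two operators are related by construction rather than defined independently.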

Keywords: graph signals, windowed graph Fourier transform, windowed graph Fourier frames, vertex frequency analysis

Procedia PDF Downloads 331
201 Comics as an Intermediary for Media Literacy Education

Authors: Ryan C. Zlomek

Abstract:

The value of using comics in the literacy classroom has been explored since the 1930s. At that point in time, researchers had begun to implement comics into daily lesson plans and, in some instances, had started developing comics-supported curricula. In the mid-1950s, this type of research was cut short due to the work of psychiatrist Fredric Wertham, whose research purported to show a correlation between comic readership and juvenile delinquency. Since Wertham’s allegations, the comics medium has had a hard time finding its way back into education. Now, over fifty years later, the definition of literacy is in mid-transition, as the world has become more visually oriented and students require the ability to interpret images as often as words. Through this transition, comics have found a place in literacy education research as the focus shifts from traditional print to multimodal and media literacies. Comics are now believed to be an effective resource for bridging the gap between these different types of literacy. This paper seeks to better understand what students learn from the process of reading comics and how those skills line up with the core principles of media literacy education in the United States. In the first section, comics are defined to determine the exact medium being examined, and the different conventions the medium utilizes are discussed. In the second section, the comics reading process is explored through a dissection of the ways a reader interacts with the page, panel, gutter, and different comic conventions found within a traditional graphic narrative. The concepts of intersubjective acts and visualization are attributed to the comics reading process as readers draw on real-world knowledge to decode meaning. In the next section, the learning processes that comics encourage are explored in parallel with the core principles of media literacy education.
Each principle is explained, and the extent to which comics can act as an intermediary for this type of education is theorized. In the final section, the author examines the use of comics in his computer science and technology classroom, laying out different theories he draws from Scott McCloud’s text Understanding Comics and how he uses them to break down media literacy strategies with his students. The article concludes with examples of how comics have positively impacted classrooms around the United States. Integrating comics into the classroom will not solve all issues related to literacy education; rather, comics can be a powerful multimodal resource for educators looking for new mediums to explore with their students.

Keywords: comics, graphic novels, mass communication, media literacy, metacognition

Procedia PDF Downloads 289