Search results for: tightly coupled memory
631 Just Not Seeing It: Exploring the Relationship between Inattention Blindness and Banner Blindness
Authors: Carie Cunningham, Kristen Lynch
Abstract:
Despite a viewer’s belief that they are paying attention, they often miss items in their surroundings, a phenomenon referred to as inattentional blindness. Inattentional blindness is the failure of an individual to orient attention to a particular item in their visual field, and it is well defined in the psychology literature. A related phenomenon has been studied in advertising media: failing to comprehend or remember items in one’s field of vision is there known as banner blindness. Banner blindness occurs when individuals habitually see banners in specific areas of a webpage and thus condition themselves to ignore those habitual areas. Another reason individuals avoid these habitual areas (usually at the top or sides of a webpage) is the lack of personal relevance or pertinent information for the viewer. Banner blindness, while a web-based concept, may therefore relate to inattentional blindness. This paper proposes an analysis of the true similarities and differences between these concepts, bridging the two lines of thinking. Forty participants completed an eye-tracking and post-survey experiment testing attention and memory measures in both a banner blindness and an inattentional blindness condition. The two conditions were conducted between subjects in a semi-randomized order: half of the participants were first told to search through the content ignoring the advertising banners, while the other half were first told to search through the content ignoring the distractor icon. The groups were switched after 5 trials, and 5 more trials were then completed. In a review of the literature, sustainability communication was found to have many inconsistencies between message production and viewer awareness. For the purpose of this study, advertising materials were used as stimuli.
Results suggest that there are gaps between the two concepts and that more research should be done testing these effects in a real-world setting versus an online environment. This contributes to theory by exploring the overlap between inattentional blindness and banner blindness, and it provides the advertising industry with evidence that viewers can still ignore items in their field of view, even if not consciously, which will impact message development.
Keywords: attention, banner blindness, eye movement, inattention blindness
Procedia PDF Downloads 275
630 Distant Speech Recognition Using Laser Doppler Vibrometer
Authors: Yunbin Deng
Abstract:
Most existing applications of automatic speech recognition rely on cooperative subjects at a short distance from a microphone. Standoff speech recognition using microphone arrays can extend the subject-to-sensor distance somewhat, but it is still limited to only a few feet. As such, most deployed applications of standoff speech recognition are limited to indoor use at short range. Moreover, these applications require an air passage between the subject and the sensor to achieve a reasonable signal-to-noise ratio. This study reports long-range (50 feet) automatic speech recognition experiments using a Laser Doppler Vibrometer (LDV) sensor. It shows that the LDV sensor modality can extend the speech acquisition standoff distance far beyond microphone arrays, to hundreds of feet. In addition, the LDV enables 'listening' through windows for uncooperative subjects. This enables new capabilities in automatic audio and speech intelligence, surveillance, and reconnaissance (ISR) for law enforcement, homeland security, and counter-terrorism applications. The Polytec LDV model OFV-505 is used in this study. To investigate the impact of different vibrating materials, five parallel LDV speech corpora, each consisting of 630 speakers, were collected from the vibrations of a glass window, a metal plate, a plastic box, a wood slat, and a concrete wall; these are common materials the application could encounter in daily life. These data were compared with their microphone counterparts to show the impact of the various materials on the spectrum of the LDV speech signal. State-of-the-art deep neural network modeling approaches are used to conduct continuous, speaker-independent speech recognition on these LDV speech datasets. Preliminary phoneme recognition results using time-delay neural networks, bidirectional long short-term memory, and model fusion show great promise for using the LDV for long-range speech recognition.
To the authors’ best knowledge, this is the first time an LDV has been reported for a long-distance speech recognition application.
Keywords: covert speech acquisition, distant speech recognition, DSR, laser Doppler vibrometer, LDV, speech intelligence surveillance and reconnaissance, ISR
Procedia PDF Downloads 179
629 Memory Retrieval and Implicit Prosody during Reading: Anaphora Resolution by L1 and L2 Speakers of English
Authors: Duong Thuy Nguyen, Giulia Bencini
Abstract:
The present study examined structural and prosodic factors in the computation of antecedent-reflexive relationships and sentence comprehension in native English speakers (L1) and Vietnamese-English bilinguals (L2). Participants read sentences presented on a computer screen in one of three presentation formats aimed at manipulating prosodic parsing: word-by-word (RSVP), phrase-segment (self-paced), or whole-sentence (self-paced), then completed a grammaticality rating and a comprehension task (following Pratt & Fernandez, 2016). The design crossed three factors: syntactic structure (simple; complex), grammaticality (target-match; target-mismatch), and presentation format. An example item is provided in (1): (1) The actress that (Mary/John) interviewed at the awards ceremony (about two years ago/organized outside the theater) described (herself/himself) as an extreme workaholic. Results showed that overall, both L1 and L2 speakers made use of a good-enough processing strategy at the expense of more detailed syntactic analyses. L1 and L2 speakers’ comprehension and grammaticality judgements were negatively affected by the most prosodically disruptive condition (word-by-word). However, the two groups differed in their performance in the other two reading conditions. For L1 speakers, the whole-sentence and phrase-segment formats were both facilitative in the grammaticality rating and comprehension tasks; for L2 speakers, compared with the whole-sentence condition, the phrase-segment paradigm did not significantly improve accuracy or comprehension. These findings are consistent with those of Pratt & Fernandez (2016), who found a similar pattern of results in the processing of subject-verb agreement relations using the same experimental paradigm and prosodic manipulation with L1 English speakers and L2 English-Spanish speakers.
The results provide further support for a good-enough cue model of sentence processing that integrates cue-based retrieval and implicit prosodic parsing (Pratt & Fernandez, 2016), and they highlight similarities and differences between L1 and L2 sentence processing and comprehension.
Keywords: anaphora resolution, bilingualism, implicit prosody, sentence processing
Procedia PDF Downloads 152
628 Evaluation of Compatibility between Produced and Injected Waters and Identification of the Causes of Well Plugging in a Southern Tunisian Oilfield
Authors: Sonia Barbouchi, Meriem Samcha
Abstract:
Scale deposition during water injection into the aquifer of oil reservoirs is a serious problem in the oil production industry. One of the primary causes of scale formation and injection well plugging is the mixing of two incompatible waters. Considered individually, the waters may be quite stable at system conditions and present no scale problems; once they are mixed, however, reactions between ions dissolved in the individual waters may form insoluble products. The purpose of this study is to identify the causes of well plugging in a southern Tunisian oilfield, where fresh water has been injected into the producing wells to counteract the salinity of the formation waters and inhibit the deposition of halite. X-ray diffraction (XRD) mineralogical analysis was carried out on scale samples collected from the blocked well. Two samples, one of formation water and one of injected water, were analysed using inductively coupled plasma atomic emission spectroscopy, ion chromatography, and other standard laboratory techniques. The results of the complete water analyses were the input parameters used to determine scaling tendency. Saturation indices for CaCO3, CaSO4, BaSO4, and SrSO4 scales were calculated for the water mixtures at different mixing ratios, under various temperature conditions, using a computerized scale prediction model. The compatibility study showed that mixing the two waters tends to increase the probability of barite deposition. XRD analysis confirmed this result, since the analysed deposits consisted predominantly of barite with minor galena. At the studied temperatures, the tendency for barite scale increases significantly with the share of fresh water in the mixture.
The future scale inhibition and removal strategies to be implemented in this oilfield will be derived in large part from the results of the present study.
Keywords: compatibility study, produced water, scaling, water injection
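The scaling tendency in a compatibility study of this kind is conventionally expressed as a saturation index, SI = log10(IAP/Ksp), where SI > 0 indicates a thermodynamic driving force for precipitation. A minimal sketch follows; the ion activities and the barite solubility product used below are illustrative placeholders, not values measured in this study:

```python
import math

def saturation_index(a_cation, a_anion, ksp):
    """SI = log10(IAP / Ksp) for a 1:1 salt such as barite (BaSO4).
    SI > 0: supersaturated (scale-forming); SI < 0: undersaturated."""
    return math.log10((a_cation * a_anion) / ksp)

# Illustrative mixed-water case: Ba2+ supplied by the formation water,
# SO4^2- by the injected fresh water (activities in mol/kg; Ksp of
# barite near 25 degrees C). SI comes out positive, i.e. scale-forming.
si_barite = saturation_index(2.0e-5, 1.0e-4, 1.08e-10)
```

With these assumed activities the index is about +1.3, reproducing the qualitative finding that mixing promotes barite deposition; a full study would compute activities from the complete water analyses at each mixing ratio and temperature.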
Procedia PDF Downloads 166
627 Ecological Risk Assessment of Informal E-Waste Processing in Alaba International Market, Lagos, Nigeria
Authors: A. A. Adebayo, O. Osibanjo
Abstract:
Informal electronic waste (e-waste) processing is a crude method of recycling that is on the increase in Nigeria. The release of hazardous substances such as heavy metals (HMs) into the environment during informal e-waste processing has been a major concern, yet there is insufficient information on environmental contamination from e-waste recycling and the associated ecological risk in Alaba International Market, a major electronics market in Lagos, Nigeria. The aims of this study were to determine the levels of HMs in soil resulting from e-waste recycling and to assess the associated ecological risks in Alaba International Market. Soil samples (334) were randomly collected seasonally for three years from fourteen selected e-waste activity points and two control sites. The samples were digested using standard methods, and HMs were analysed by inductively coupled plasma optical emission spectrometry. Ecological risk was estimated using the Ecological Risk index (ER), Potential Ecological Risk index (RI), Index of geoaccumulation (Igeo), Contamination factor (Cf), and degree of contamination (Cdeg). The concentration ranges of HMs (mg/kg) in soil were: 16.7-11200.0 (Pb); 14.3-22600.0 (Cu); 1.90-6280.0 (Ni); 39.5-4570.0 (Zn); 0.79-12300.0 (Sn); 0.02-138.0 (Cd); 12.7-1710.0 (Ba); 0.18-131.0 (Cr); 0.07-28.0 (V), while As was below the detection limit. Concentration ranges in control soils were 1.36-9.70 (Pb), 2.06-7.60 (Cu), 1.25-5.11 (Ni), 3.62-15.9 (Zn), BDL-0.56 (Sn), BDL-0.01 (Cd), 14.6-47.6 (Ba), 0.21-12.2 (Cr), and 0.22-22.2 (V). The ecological risk indices followed the order Cu > Pb > Ni > Zn > Cr > Cd > Ba > V. The potential ecological risk indices with respect to informal e-waste activities were: burning > dismantling > disposal > stockpiling. The geoaccumulation indices revealed that the soils were extremely polluted with Cd, Cu, Pb, Zn, and Ni.
The contamination factor indicated that 93% of the studied areas have very high contamination status for Pb, Cu, Ba, Sn, and Co, while Cr and Cd were in the moderately contaminated status. The degree of contamination decreased in the order Sn > Cu > Pb >> Zn > Ba > Co > Ni > V > Cr > Cd. Heavy metal contamination of the Alaba International Market environment resulting from informal e-waste processing was thus established. Proper management of e-waste and remediation of the market environment are recommended to minimize the ecological risks.
Keywords: Alaba international market, ecological risk, electronic waste, heavy metal contamination
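The pollution indices used above follow standard definitions: the Muller geoaccumulation index Igeo = log2(Cn / (1.5 * Bn)) and the contamination factor Cf = Cn / Bn, with the degree of contamination the sum of Cf over metals. A sketch, taking as inputs the paper's reported extremes for Pb (11200 mg/kg at an activity point; control up to 9.70 mg/kg); the class thresholds in the comments are the conventional ones, not re-derived here:

```python
import math

def igeo(c_sample, c_background):
    """Muller geoaccumulation index; Igeo > 5 is the 'extremely
    polluted' class. The factor 1.5 compensates for natural
    background fluctuation."""
    return math.log2(c_sample / (1.5 * c_background))

def contamination_factor(c_sample, c_background):
    """Cf = Cn / Bn; Cf >= 6 denotes very high contamination."""
    return c_sample / c_background

igeo_pb = igeo(11200.0, 9.70)                 # ~9.6 -> extremely polluted
cf_pb = contamination_factor(11200.0, 9.70)   # far above the Cf >= 6 cutoff
```

Both values place the worst Pb measurement well inside the highest pollution classes, consistent with the classifications reported in the abstract.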
Procedia PDF Downloads 198
626 Coils and Antennas Fabricated with Sewing Litz Wire for Wireless Power Transfer
Authors: Hikari Ryu, Yuki Fukuda, Kento Oishi, Chiharu Igarashi, Shogo Kiryu
Abstract:
Recently, wireless power transfer has been developed in various fields. Magnetic coupling is popular for feeding power over relatively short distances at lower frequencies, while electromagnetic wave coupling at high frequencies is used for long-distance power transfer. Wireless power transfer has also attracted attention in the e-textile field: many body-worn electric systems currently require rigid batteries, and the technology enables such batteries to be removed from the systems. Flexible coils have been studied for such applications. Coils with a high Q factor are required for magnetic-coupling power transfer, and antennas with low return loss are needed for electromagnetic coupling. Litz wire is flexible enough for fabricating coils and antennas sewn onto fabric and has low resistivity. In this study, the electrical characteristics of coils and antennas fabricated from Litz wire using two sewing techniques are investigated. As examples, a coil and an antenna are described, both fabricated with 330/0.04 mm Litz wire. The coil was a planar coil with a square shape; the outer side was 150 mm, the number of turns was 15, and the pitch between turns was 5 mm. The Litz wire of the coil was overstitched with a sewing machine. The coil was fabricated as a receiver coil for magnetically coupled wireless power transfer. The Q factor was 200 at a frequency of 800 kHz. A wireless power system was constructed using the coil, driven by a power oscillator. The resonant frequency of the circuit was set to 123 kHz, where the switching loss of the power FETs was small. The power efficiency was 0.44-0.99, depending on the distance between the transmitter and receiver coils. As an example of an antenna made with a sewing technique, a fractal-pattern antenna was stitched on a 500 mm x 500 mm fabric using a needle-punch method. The pattern was the second-order Vicsek fractal.
The return loss of the antenna was -28 dB at a frequency of 144 MHz.
Keywords: e-textile, flexible coils and antennas, Litz wire, wireless power transfer
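The reported coil figures (Q = 200 at 800 kHz; resonance at 123 kHz) can be related through the usual lumped-element formulas Q = 2*pi*f*L/R and f0 = 1/(2*pi*sqrt(L*C)). A sketch with assumed component values; the 40 uH inductance, 1 ohm effective series resistance, and 41.9 nF tank capacitance are illustrative guesses, not measurements from the paper:

```python
import math

def coil_q(freq_hz, inductance_h, esr_ohm):
    """Unloaded quality factor of a coil: Q = 2*pi*f*L / R_esr."""
    return 2.0 * math.pi * freq_hz * inductance_h / esr_ohm

def resonant_frequency(inductance_h, capacitance_f):
    """LC resonance: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Assumed values chosen so the numbers land near the reported figures.
q_800k = coil_q(800e3, 40e-6, 1.0)        # close to the reported Q of 200
f0 = resonant_frequency(40e-6, 41.9e-9)   # close to the 123 kHz operating point
```

In practice the ESR of sewn Litz-wire coils is frequency-dependent (skin and proximity effects), so a single-number Q only holds near the measurement frequency.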
Procedia PDF Downloads 133
625 Urea and Starch Detection on a Paper-Based Microfluidic Device Enabled on a Smartphone
Authors: Shashank Kumar, Mansi Chandra, Ujjawal Singh, Parth Gupta, Rishi Ram, Arnab Sarkar
Abstract:
Milk is one of the most basic and primary sources of food and energy, consumed from birth onward. Hence, checking milk quality and purity and the concentrations of its constituents is a necessary step. Considering the importance of milk purity for human health, the following study has been carried out to simultaneously detect and quantify different adulterants, such as urea and starch, in milk with the help of a paper-based microfluidic device integrated with a smartphone. Detection of the urea and starch concentrations is based on the principle of colorimetry, while fluid flow in the device is driven by the capillary action of the porous medium. The microfluidic channel proposed in the study is equipped with a specialized detection zone and employs a colorimetric indicator that undergoes a visible color change when the milk reacts with a set of reagents, confirming the presence of adulterants in the milk. In our proposed work, we have used iodine to detect the percentage of starch in the milk, whereas, in the case of urea, we have used p-DMAB. A direct correlation was found between the color-change intensity and the concentration of adulterants, and a calibration curve was constructed relating color intensity to starch and urea concentration. The device is inexpensive to produce and easily disposable, which makes it highly suitable for widespread adoption, especially in resource-constrained settings. Moreover, a smartphone application has been developed to detect, capture, and analyze the change in color intensity due to the presence of adulterants in the milk. The low cost of the paper-based sensor, coupled with its smartphone integration, makes it an attractive solution for widespread use.
These devices are affordable, simple to use, and do not require specialized training, making them ideal tools for regulatory bodies and concerned consumers.
Keywords: paper based microfluidic device, milk adulteration, urea detection, starch detection, smartphone application
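Colorimetric quantification of this kind reduces to fitting a calibration curve of intensity change against known adulterant concentrations and inverting it for unknown samples. A minimal sketch; the calibration points below are made-up illustrative numbers, not the study's data:

```python
import numpy as np

# Hypothetical calibration set: urea concentration (% w/v) versus mean
# intensity change extracted from the detection-zone image by the app.
conc = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
intensity = np.array([5.0, 41.0, 78.0, 114.0, 151.0])

# Linear least-squares calibration: intensity = slope * conc + intercept.
slope, intercept = np.polyfit(conc, intensity, 1)

def estimate_concentration(measured_intensity):
    """Invert the linear calibration to quantify the adulterant."""
    return (measured_intensity - intercept) / slope
```

A smartphone implementation would extract `measured_intensity` from a fixed color channel of the detection-zone photo under controlled lighting; the linear model only holds over the range where the color response is unsaturated.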
Procedia PDF Downloads 65
624 Sol-Gel Derived 58S Bioglass Substituted by Li and Mg: A Comparative Evaluation on in vitro Bioactivity, MC3T3 Proliferation and Antibacterial Efficiency
Authors: Amir Khaleghipour, Amirhossein Moghanian, Elhamalsadat Ghaffari
Abstract:
Modified bioactive glass is considered a promising multifunctional candidate in bone repair and regeneration due to its attractive properties. The present study mainly aims to evaluate how the individual substitution of lithium (L-BG) or magnesium (M-BG) for calcium affects the in vitro bioactivity of sol-gel derived substituted 58S bioactive glass (BG), and to identify one composition in each of the 60SiO₂–(36-x)CaO–4P₂O₅–(x)Li₂O and 60SiO₂–(36-x)CaO–4P₂O₅–(x)MgO quaternary systems (where x = 0, 5, 10 mol.%) with improved biocompatibility, enhanced alkaline phosphatase (ALP) activity, and the most efficient antibacterial activity against methicillin-resistant Staphylococcus aureus (MRSA) bacteria. To address these aims and study the effect of CaO/Li₂O and CaO/MgO substitution up to 10 mol.% in 58S-BGs, the samples were characterized by X-ray diffraction, Fourier transform infrared spectroscopy, inductively coupled plasma atomic emission spectrometry, and scanning electron microscopy after immersion in simulated body fluid for up to 14 days. Results indicated that substitution of either CaO/Li₂O or CaO/MgO had a retarding effect on in vitro hydroxyapatite (HA) formation, due to the lower supersaturation degree for nucleation of HA compared with 58S-BG, with magnesium having the more pronounced effect. The 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and alkaline phosphatase (ALP) assays showed that both CaO/Li₂O and CaO/MgO substitutions up to 5 mol.% in 58S-BGs increased biocompatibility and stimulated proliferation of the pre-osteoblast MC3T3 cells with respect to the control. On the other hand, substitution of either Li or Mg for Ca in the 58S-BG composition improved bactericidal efficiency against MRSA bacteria.
Taken together, the 58S-BG sample with 5 mol.% CaO/Li₂O substitution (BG-5L) was considered a multifunctional biomaterial for bone repair/regeneration, with improved biocompatibility and enhanced ALP activity as well as enhanced antibacterial efficiency against methicillin-resistant Staphylococcus aureus (MRSA) bacteria among all of the synthesized L-BGs and M-BGs.
Keywords: alkaline, alkaline earth, bioactivity, biomedical applications, sol-gel processes
Procedia PDF Downloads 190
623 Source Identification Model Based on Label Propagation and Graph Ordinary Differential Equations
Authors: Fuyuan Ma, Yuhan Wang, Junhe Zhang, Ying Wang
Abstract:
Identifying the sources of information dissemination is a pivotal task in the study of collective behaviors in networks, enabling us to discern and intercept the critical pathways through which information propagates from its origins and thereby control its impact at an early stage. Numerous source-detection methods rely on pre-existing propagation models as prior knowledge. Current models that eschew such prior knowledge either harness label propagation algorithms to model the statistical characteristics of propagation states or employ Graph Neural Networks (GNNs) for deep reverse modeling of the diffusion process. These approaches are either deficient in modeling the propagation patterns of information or are constrained by the over-smoothing problem inherent in GNNs, which prevents stacking sufficient model depth to extract global propagation patterns. Consequently, we introduce the ODESI model. Initially, the model employs a label propagation algorithm to delineate the distribution density of infected states within a graph structure and extends the representation of infected states from integers to state vectors, which serve as the initial states of nodes. Subsequently, the model constructs a deep architecture based on GNN-coupled Ordinary Differential Equations (ODEs) to model the global patterns of continuous propagation processes. To address the challenges of solving ODEs on graphs, we approximate the analytical solutions to reduce computational cost. Finally, we conduct simulation experiments on two real-world social network datasets, and the results affirm the efficacy of the proposed ODESI model in source identification tasks.
Keywords: source identification, ordinary differential equations, label propagation, complex networks
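The first stage described above, label propagation turning binary infected/uninfected observations into a continuous density field used as the initial node states, can be sketched as follows. The mixing weight and iteration count are illustrative assumptions, not the paper's hyperparameters:

```python
import numpy as np

def propagate_labels(adj, infected, n_iters=10, alpha=0.5):
    """Smooth binary infection observations over a graph: at each step a
    node mixes its own state with the mean state of its neighbours,
    yielding a continuous 'infection density' per node. Such a vector
    can serve as the initial state for a downstream graph-ODE model."""
    deg = adj.sum(axis=1)
    deg[deg == 0] = 1.0          # guard isolated nodes against divide-by-zero
    state = infected.astype(float)
    for _ in range(n_iters):
        neighbour_mean = (adj @ state) / deg
        state = alpha * state + (1.0 - alpha) * neighbour_mean
    return state
```

On a path graph with an infected block at one end, the converged density decays monotonically with distance from the infected region, which is the shape the downstream model consumes.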
Procedia PDF Downloads 20
622 Improving the Penalty-free Multi-objective Evolutionary Design Optimization of Water Distribution Systems
Authors: Emily Kambalame
Abstract:
Water distribution networks require large construction investments, prompting researchers to seek cost reductions and efficient design solutions, and optimization techniques are employed to address these challenges. In this context, the penalty-free multi-objective evolutionary algorithm (PFMOEA), coupled with pressure-dependent analysis (PDA), was utilized to develop a multi-objective evolutionary search for the optimization of water distribution systems (WDSs). The aim of this research was to determine whether the computational efficiency of the PFMOEA for WDS optimization could be enhanced. This was done by applying a real-coded representation and by retaining different percentages of feasible and infeasible solutions close to the Pareto front in the elitism step of the optimization. Two benchmark network problems, the Two-looped and Hanoi networks, were utilized in the study. A comparative analysis was then conducted to assess the performance of the real-coded PFMOEA against other approaches described in the literature. By adopting real coding, the algorithm demonstrated competitive performance on the two benchmark networks: it achieved the best-known solutions ($419,000 and $6.081 million) and a zero pressure deficit for the two networks while requiring fewer function evaluations than the binary-coded PFMOEA. In previous PFMOEA studies, elitism applied a default retention of 30% of the least-cost feasible solutions and excluded all infeasible solutions. This study found that replacing 10% and 15% of the feasible solutions with infeasible solutions that are close to the Pareto front and have minimal pressure-deficit violations significantly enhanced the computational efficiency of the PFMOEA.
The configuration of 15% feasible and 15% infeasible solutions outperformed other retention allocations by identifying the optimal solution with the fewest function evaluations.
Keywords: design optimization, multi-objective evolutionary, penalty-free, water distribution systems
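The retention rule evaluated above (keep a fixed share of least-cost feasible solutions plus a share of low-violation infeasible solutions near the front) can be sketched as follows. Representing a solution as a plain (cost, pressure_deficit) pair is a simplification for illustration; the actual PFMOEA works on full network designs:

```python
def select_elites(solutions, pop_size, feas_frac=0.15, infeas_frac=0.15):
    """Elitism sketch for a penalty-free MOEA. 'solutions' is a list of
    (cost, pressure_deficit) tuples; deficit == 0.0 means feasible.
    Feasible elites are the cheapest designs; infeasible elites are
    those with the smallest deficit (closest to feasibility), which
    keeps boundary-of-the-front material in the next generation."""
    feasible = sorted((s for s in solutions if s[1] == 0.0),
                      key=lambda s: s[0])
    infeasible = sorted((s for s in solutions if s[1] > 0.0),
                        key=lambda s: (s[1], s[0]))
    n_feas = int(feas_frac * pop_size)
    n_infeas = int(infeas_frac * pop_size)
    return feasible[:n_feas] + infeasible[:n_infeas]
```

With the 15%/15% split studied in the paper, a population of 20 carries 3 feasible and 3 near-feasible elites forward each generation.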
Procedia PDF Downloads 62
621 Role of Onion Extract for Neuro-Protection in Experimental Stroke Model
Authors: Richa Shri, Varinder Singh, Kundan Singh Bora, Abhishek Bhanot, Rahul Kumar, Amit Kumar, Ravinder Kaur
Abstract:
The term ‘neuroprotection’ means preserving or salvaging the function and structure of neurons, and neuroprotection is an adjunctive treatment option for neurodegenerative disorders. Oxidative stress is considered a major culprit in neurodegenerative disorders; hence, management strategies include the use of antioxidants. Our search for a neuroprotective agent began with Allium cepa L., or onion (family Amaryllidaceae), a potent antioxidant. We have investigated the neuroprotective potential of onions in experimental models of ischemic stroke, diabetic neuropathy, neuropathic pain, and dementia. In pre- and post-ischemic stroke models, the methanol extract of the outer scales of onion bulbs (MEOS) prevented memory loss and motor incoordination and reduced oxidative stress and cerebral infarct size. It also prevented and ameliorated diabetic neuropathy in mice. The MEOS was fractionated to yield a flavonoid-rich fraction (FRF) that successfully reversed ischemia-reperfusion-induced neuronal damage, demonstrating that the flavonoids are responsible for the activity. The FRF effectively ameliorated chronic constriction-induced neuropathic pain in rats. The FRF was then subjected to bioactivity-guided fractionation, which showed that the FRF is more effective than its isolated components, probably due to synergism among the constituents (i.e., quercetin and quercetin glucosides). The outer scales of onion bulbs thus have great potential for the prevention as well as the treatment of neuronal disorders. Red onions, with higher amounts of flavonoids than white onions, produced more significant neuroprotection. Thus, the standardized FRF from the waste material of a commonly used vegetable, especially the red variety, may be developed as a valuable neuroprotective agent.
Keywords: Allium cepa, antioxidant activity, flavonoid rich fraction, neuroprotection
Procedia PDF Downloads 152
620 Modelling of Heat Transfer during Controlled Cooling of Thermo-Mechanically Treated Rebars Using Computational Fluid Dynamics Approach
Authors: Rohit Agarwal, Mrityunjay K. Singh, Soma Ghosh, Ramesh Shankar, Biswajit Ghosh, Vinay V. Mahashabde
Abstract:
Thermo-mechanical treatment (TMT) of rebars is a critical process for imparting sufficient strength and ductility to the rebar. TMT rebars are produced by the Tempcore process, which involves an 'in-line' heat treatment in which the hot-rolled bar (at around 1080°C) is passed through water boxes, where it is quenched under high-pressure water jets (at around 25°C). The quenching rate dictates a composite structure consisting of four non-homogeneously distributed phases: pearlite, ferrite, bainite, and tempered martensite (from core to rim). The ferrite and pearlite phases at the core give the rebar ductility, while the martensitic rim provides strength. The TMT process is difficult to model, as the control volume involves a multitude of complex physics: heat transfer, highly turbulent fluid flow, and multicomponent, multiphase flow. Additionally, the film-boiling regime (above the Leidenfrost point) caused by steam formation adds complexity to the domain. A coupled heat transfer and fluid flow model based on computational fluid dynamics (CFD) has been developed at the product technology division of Tata Steel, India, which efficiently predicts the temperature profile and the percentage martensite rim thickness of the rebar during quenching. The model has been validated against 16 mm rolling at the New Bar Mill (NBM) of Tata Steel Limited, India. Furthermore, based on scenario analyses, an optimal configuration of nozzles was found, which enabled a subsequent increase in rolling speed.
Keywords: boiling, critical heat flux, nozzles, thermo-mechanical treatment
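The conduction side of such a quenching model can be illustrated with a 1D radial finite-difference sketch (explicit scheme, symmetry at the centreline, convective boundary at the surface). All property values, the bar radius, and the heat-transfer coefficient below are rough illustrative numbers, not the plant-calibrated CFD inputs, and the full model additionally resolves the turbulent water flow and film boiling:

```python
import numpy as np

def quench_profile(nr=30, radius=8e-3, t_init=1080.0, t_water=25.0,
                   h=20e3, k=30.0, rho=7850.0, cp=600.0,
                   dt=1e-4, nsteps=2000):
    """Transient radial conduction in a quenched bar (1D cylindrical):
    dT/dt = alpha * (d2T/dr2 + (1/r) dT/dr), with -k dT/dr = h (T_s - T_w)
    at the surface. Returns the temperature profile after nsteps."""
    dr = radius / (nr - 1)
    alpha = k / (rho * cp)                     # thermal diffusivity
    r = np.linspace(0.0, radius, nr)
    T = np.full(nr, t_init)
    for _ in range(nsteps):
        Tn = T.copy()
        d2 = (Tn[2:] - 2.0 * Tn[1:-1] + Tn[:-2]) / dr**2
        d1 = (Tn[2:] - Tn[:-2]) / (2.0 * dr * r[1:-1])
        T[1:-1] = Tn[1:-1] + dt * alpha * (d2 + d1)
        T[0] = T[1]                            # symmetry at the centreline
        # Robin boundary: flux balance k*(T[-2]-T[-1])/dr = h*(T[-1]-T_w)
        T[-1] = (k / dr * Tn[-2] + h * t_water) / (k / dr + h)
    return T
```

After 0.2 s of simulated quench the surface has cooled sharply while the core is still near the rolling temperature, which is the gradient that produces the martensitic rim over a soft core.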
Procedia PDF Downloads 215
619 Comparative Evaluation of a Dynamic Navigation System Versus a Three-Dimensional Microscope in Retrieving Separated Endodontic Files: An in Vitro Study
Authors: Mohammed H. Karim, Bestoon M. Faraj
Abstract:
Introduction: Instrument separation is a common challenge in endodontics, and various techniques and technologies have been developed to improve the retrieval success rate. This study aimed to compare the effectiveness of a Dynamic Navigation System (DNS) and a three-dimensional microscope in retrieving broken rotary NiTi files when using trepan burs and an extractor system. Materials and Methods: Thirty maxillary first bicuspids with sixty separate roots were split into two comparable groups based on a comprehensive Cone-Beam Computed Tomography (CBCT) analysis of root length and curvature. After standardised access opening, glide paths, and patency attainment with K files (sizes 10 and 15), the teeth were arranged on 3D models (three per quadrant, six per model). Subsequently, controlled-memory heat-treated NiTi rotary files (#25/0.04) were notched 4 mm from the tips and fractured at the apical third of the roots. The C-FR1 Endo file removal system was employed under both forms of guidance to retrieve the fragments, and the success rate, canal aberration, treatment time, and volumetric changes were measured. Statistical analysis was performed using IBM SPSS software at a significance level of 0.05. Results: The microscope-guided group had a higher success rate than the DNS-guided group, but the difference was insignificant (p > 0.05). In addition, the microscope-guided drills resulted in a substantially lower proportion of canal aberration, required less time to retrieve the fragments, and caused a smaller change in root canal volume (p < 0.05). Conclusion: Although dynamically guided trephining with the extractor can retrieve separated instruments, it is inferior to three-dimensional microscope guidance regarding treatment time, procedural errors, and volume change.
Keywords: dynamic navigation system, separated instruments retrieval, trephine burs and extractor system, three-dimensional video microscope
Procedia PDF Downloads 98
618 A Parallel Computation Based on GPU Programming for a 3D Compressible Fluid Flow Simulation
Authors: Sugeng Rianto, P.W. Arinto Yudi, Soemarno Muhammad Nurhuda
Abstract:
Computing a 3D compressible fluid flow for a virtual environment with haptic interaction is a non-trivial issue, especially reaching good performance while balancing visualization, tactile feedback, and computation. In this paper, we describe our computational approach based on parallel programming on a GPU. The 3D fluid flow solvers were developed for smoke dispersion simulation by combining cubic interpolated propagation (CIP) based fluid flow solvers with the parallelism and programmability of the GPU. The fluid flow solver operates in a GPU-CPU message-passing scheme to allow rapid development of haptic feedback modes for the fluid dynamic data. Applying the CIP scheme allows the multiphase fluid flow equations to be solved simultaneously. To further accelerate the computation, the Navier-Stokes equations (NSEs) are packed into texel channels, where the computation is performed on pixels that can be considered a grid of cells; therefore, despite the complexity of the obstacle geometry, multiple vertices and pixels can be processed simultaneously in parallel. The data are also shared in global memory for the CPU to control the haptics, providing kinaesthetic interaction and feeling. The results show that the GPU-based parallel computation approach provides effective simulation of a compressible fluid flow model for real-time interaction in 3D computer graphics on a PC platform. This report demonstrates the feasibility of a new approach to solving the compressible fluid flow equations on the GPU. The experimental tests proved that compressible fluid flowing over various obstacles, with haptic interaction on the few model obstacles, can be simulated effectively and efficiently at a reasonable frame rate with realistic visualization.
These results confirm that good performance and balance between visualization, tactile feedback interaction, and computation can be achieved successfully.
Keywords: CIP, compressible fluid, GPU programming, parallel computation, real-time visualisation
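The texel packing described above, in which the NSE field variables are stored in the four channels of a texture so that one GPU pass can update every grid cell, can be sketched on the CPU with NumPy. The field names and grid size below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pack_fields_to_texels(u, v, p, rho):
    """Pack four scalar grid fields into the RGBA channels of one
    texel array, so a GPU shader could update all of them per pixel."""
    return np.stack([u, v, p, rho], axis=-1)  # shape (ny, nx, 4)

def unpack_texels(tex):
    """Recover the individual fields from the RGBA texel array."""
    return tex[..., 0], tex[..., 1], tex[..., 2], tex[..., 3]

# hypothetical 64x64 grid: velocity components, pressure, density
ny, nx = 64, 64
u = np.zeros((ny, nx)); v = np.zeros((ny, nx))
p = np.ones((ny, nx));  rho = np.full((ny, nx), 1.2)

tex = pack_fields_to_texels(u, v, p, rho)
u2, v2, p2, rho2 = unpack_texels(tex)
```

On the GPU, each pixel of `tex` would be updated by the same shader program in parallel; the round trip above only illustrates the data layout.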
Procedia PDF Downloads 432
617 Boundary Layer Control Using a Magnetic Field: A Case Study in the Framework of Ferrohydrodynamics
Authors: C. F. Alegretti, F. R. Cunha, R. G. Gontijo
Abstract:
This work investigates the effects of an applied magnetic field on the geometry-driven boundary layer detachment flow of a ferrofluid over a sudden expansion. Both the constitutive equation and the global magnetization equation for a ferrofluid are considered; therefore, the proposed formulation consists of a coupled magnetic-hydrodynamic problem. Computational simulations are carried out in order to explore not only the viability of controlling flow instabilities but also the consistency of theoretical aspects. The unidirectional sudden expansion in a ferrofluid flow is investigated numerically from the perspective of ferrohydrodynamics in a two-dimensional domain using a finite differences method. The boundary layer detachment induced by the sudden expansion results in a recirculating zone, which has been extensively studied in non-magnetic hydrodynamic problems for a wide range of Reynolds numbers. Similar investigations can be found in the literature regarding the sudden expansion in the magnetohydrodynamics framework, but none considering a colloidal suspension of magnetic particles out of the superparamagnetic regime. The vorticity-stream function formulation is implemented and results in a clear coupling between the flow vorticity and its magnetization field. Our simulations indicate a systematic decay in the length of the recirculation zone with increasing physical parameters of the flow, such as the intensity of the applied field and the volume fraction of particles. The results are all discussed from a physical point of view in terms of the dynamical non-dimensional parameters. We argue that the reduction of the recirculation region of the flow is a direct consequence of the magnetic torque balancing the action of the torque produced by the viscous and inertial forces of the flow. In the limit of small Reynolds and magnetic Reynolds numbers, the diffusion of vorticity balances the diffusion of the magnetic torque on the flow. 
These mechanisms control the growth of the recirculation region.
Keywords: boundary layer detachment, ferrofluid, ferrohydrodynamics, magnetization, sudden expansion
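The vorticity diffusion invoked in the closing argument can be illustrated with one explicit finite-difference step of d(omega)/dt = nu * Laplacian(omega) on a uniform grid. This is a minimal sketch under assumed grid spacing and viscosity, not the authors' coupled magnetic solver:

```python
import numpy as np

def diffuse_vorticity(omega, nu, dt, dx):
    """One explicit finite-difference step of 2D vorticity diffusion,
    using the five-point Laplacian on interior points only
    (boundary values are held fixed)."""
    lap = np.zeros_like(omega)
    lap[1:-1, 1:-1] = (omega[1:-1, 2:] + omega[1:-1, :-2]
                       + omega[2:, 1:-1] + omega[:-2, 1:-1]
                       - 4.0 * omega[1:-1, 1:-1]) / dx**2
    return omega + nu * dt * lap

omega = np.zeros((32, 32))
omega[16, 16] = 1.0          # concentrated vortex (illustrative)
omega_new = diffuse_vorticity(omega, nu=1e-3, dt=0.1, dx=0.1)
```

The step is stable here because nu*dt/dx² = 0.01 is well below the explicit-scheme limit of 0.25; a production solver would also advect omega and couple it to the magnetization field.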
Procedia PDF Downloads 203
616 Stability Analysis of Slopes during Pile Driving
Authors: Yeganeh Attari, Gudmund Reidar Eiksund, Hans Peter Jostad
Abstract:
In geotechnical practice, there is no standard method recognized by the industry to account for the reduction of the safety factor of a slope as an effect of soil displacement and pore pressure build-up during pile installation. Pile driving causes large strains and generates excess pore pressures in a zone that can extend many diameters from the installed pile, resulting in a decrease in the shear strength of the surrounding soil. This phenomenon may cause slope failure. Moreover, dissipation of the excess pore pressure set up during installation may cause weakening of areas outside the volume of soil remoulded during installation. Because of the complex interactions between changes in mean stress and shearing, it is challenging to predict the installation-induced pore pressure response. Furthermore, it is a complex task to follow the rate and path of pore pressure dissipation in order to analyze slope stability. In cohesive soils, it is necessary to implement soil models that account for strain softening in the analysis. In the literature, several cases of slope failure due to pile driving activities have been reported: for instance, a landslide in Gothenburg that destroyed more than thirty houses, and the Rigaud landslide in Quebec, which resulted in loss of life. Up to now, several methods have been suggested to predict the effect of pile driving on total and effective stress, pore pressure changes, and their effect on soil strength. However, this is still not well understood or agreed upon. In Norway, the general approaches applied by geotechnical engineers for this problem are based on old empirical methods with little rigorous theoretical background. While the limitations of such methods are discussed, this paper attempts to capture the reduction in the factor of safety of a slope during pile driving, using coupled finite element analysis and the cavity expansion method. 
This is demonstrated by analyzing a case of slope failure due to pile driving in Norway.
Keywords: cavity expansion method, excess pore pressure, pile driving, slope failure
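As a worked example of the cavity expansion method named in the keywords, a classical closed-form cylindrical solution for undrained clay estimates the installation-induced excess pore pressure as Δu = 2·su·ln(Rp/r) inside the plastic radius Rp = r0·√(G/su). The sketch below is that textbook estimate with hypothetical parameter values, not the paper's coupled finite element formulation:

```python
import math

def excess_pore_pressure(r, r0, su, G):
    """Cylindrical cavity expansion estimate of installation-induced
    excess pore pressure (kPa) at radius r (m) from a pile of radius
    r0 in undrained clay with undrained shear strength su (kPa) and
    shear modulus G (kPa). Inside the plastic zone (r < Rp):
    du = 2*su*ln(Rp/r); outside the plastic zone the estimate is 0."""
    Rp = r0 * math.sqrt(G / su)   # plastic radius
    if r < r0:
        raise ValueError("r must be at or outside the pile shaft")
    return 2.0 * su * math.log(Rp / r) if r < Rp else 0.0

# hypothetical 0.4 m diameter pile, su = 30 kPa, rigidity index G/su = 100
du_shaft = excess_pore_pressure(0.2, 0.2, 30.0, 3000.0)
```

At the shaft this gives roughly 4.6 times su, decaying logarithmically to zero at Rp; a coupled FE analysis would additionally track shearing, strain softening, and dissipation over time.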
Procedia PDF Downloads 151
615 Comparative Numerical Simulations of Reaction-Coupled Annular and Free-Bubbling Fluidized Beds Performance
Authors: Adefarati Oloruntoba, Yongmin Zhang, Hongliang Xiao
Abstract:
An annular fluidized bed (AFB) is gaining extensive application in the process industry due to its efficient gas-solids contacting, but a direct evaluation of its reaction performance is still lacking. In this paper, comparative 3D Euler-Lagrange multiphase particle-in-cell (MP-PIC) computations are performed to assess the reaction performance of an AFB relative to a bubbling fluidized bed (BFB) in an FCC regeneration process. By using the energy-minimization multi-scale (EMMS) drag model with a suitable heterogeneity index, the MP-PIC simulation predicts the typical fountain region in the AFB and the solids holdup of the BFB, consistent with experiments. Coke combustion rate, flue gas composition, and temperature profiles are utilized as the performance indicators, while the related bed hydrodynamics are explored to account for the differences in performance under varying superficial gas velocities (0.5 m/s, 0.6 m/s, and 0.7 m/s). Simulation results indicate that the burning rates of coke and its species are relatively the same in both beds, albeit with a marginal increase in the BFB. Similarly, the shapes and evolution times of the flue gas (CO, CO₂, H₂O and O₂) curves are indistinguishable and match the coke combustion rates. However, the AFB shows a greater propensity for temperature gradients, as higher gas and solids temperatures are predicted in the freeboard. Moreover, for both beds, the effect of superficial gas velocity is conspicuous only on the temperature and negligible on combustion efficiency and effluent gas emissions, due to the constant gas volumetric flow rate and bed loading criteria. Cross-flow of solids from the annulus to the spout region, as well as the high primary gas flow in the AFB, constitute the underlying mechanisms for its unique gas-solids hydrodynamics (pressure, solids holdup, velocity, mass flux) and local spatial homogeneity, which in turn influence the reactor performance. 
Overall, the study portrays the AFB as a cost-effective alternative to the BFB for catalyst regeneration.
Keywords: annular fluidized bed, bubbling fluidized bed, coke combustion, flue gas, fountaining, CFD, MP-PIC, hydrodynamics, FCC regeneration
Procedia PDF Downloads 163
614 Development of an Atmospheric Radioxenon Detection System for Nuclear Explosion Monitoring
Authors: V. Thomas, O. Delaune, W. Hennig, S. Hoover
Abstract:
Measurement of the radioactive isotopes of atmospheric xenon is used to detect, locate, and identify confined nuclear tests as part of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In this context, the French Alternative Energies and Atomic Energy Commission (CEA) has developed a fixed device, the SPALAX process, to continuously measure the concentration of these fission products. During its atmospheric transport, the radioactive xenon undergoes significant dilution between the source point and the measurement station. Given the distances between the fixed stations located all over the globe, the typical volume activities measured are near 1 mBq m⁻³. To avoid the constraints induced by atmospheric dilution, the development of a mobile detection system is in progress; this system will allow on-site measurements in order to confirm or refute a suspicious measurement detected by a fixed station. Furthermore, this system will use the beta/gamma coincidence measurement technique in order to drastically reduce the environmental background (which masks such activities). The detector prototype consists of a gas cell surrounded by two large silicon wafers, coupled with two square NaI(Tl) detectors. The gas cell has a sample volume of 30 cm³, and the silicon wafers are 500 µm thick with an active surface area of 3600 mm². In order to minimize leakage current, each wafer has been segmented into four independent silicon pixels. This cell is sandwiched between two low-background NaI(Tl) detectors (70x70x40 mm³ crystals). The expected Minimum Detectable Concentration (MDC) for each radioxenon is on the order of 1-10 mBq m⁻³. Three 4-channel digital acquisition modules (Pixie-NET) are used to process all the signals. Time synchronization is ensured by a dedicated PTP network using the IEEE 1588 Precision Time Protocol. 
We would like to present this system from its simulation to the laboratory tests.
Keywords: beta/gamma coincidence technique, low level measurement, radioxenon, silicon pixels
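The beta/gamma coincidence technique reduces background by keeping only those events in which a beta and a gamma detection fall within a short time window of each other. A minimal timestamp-matching sketch follows; the window and timestamps are hypothetical, whereas the actual system works on hardware-synchronized Pixie-NET data streams:

```python
def coincident_pairs(beta_ts, gamma_ts, window):
    """Pair beta and gamma event timestamps that fall within a
    coincidence window; unmatched (background) events are rejected.
    Both input lists are assumed sorted in ascending time order."""
    pairs, j = [], 0
    for tb in beta_ts:
        # skip gamma events that are too early to match this beta
        while j < len(gamma_ts) and gamma_ts[j] < tb - window:
            j += 1
        if j < len(gamma_ts) and abs(gamma_ts[j] - tb) <= window:
            pairs.append((tb, gamma_ts[j]))
            j += 1          # each gamma event is used at most once
    return pairs

beta = [1.000, 2.500, 4.000]            # seconds (hypothetical)
gamma = [1.0000004, 3.1, 4.0000009]     # seconds (hypothetical)
hits = coincident_pairs(beta, gamma, window=1e-6)
```

Here only the two true coincidences survive; the lone 3.1 s gamma event is rejected as background. The tighter the window, the stronger the background suppression, which is why sub-microsecond time synchronization (IEEE 1588) matters.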
Procedia PDF Downloads 126
613 Relevance Of Cognitive Rehabilitation Amongst Children Having Chronic Illnesses – A Theoretical Analysis
Authors: Pulari C. Milu Maria Anto
Abstract:
Background: The term Cognitive Rehabilitation/Retraining (CR) has been used variously in the research literature to represent non-pharmacological interventions that target cognitive impairments with the goal of ameliorating cognitive function and functional behaviors to optimize quality of life. Along with adults' cognitive impairments, the need to address acquired cognitive impairments (due to chronic illnesses such as CHD, congenital heart disease, or ALL, acute lymphoblastic leukemia) among child populations is inevitable. It has to be emphasized in the same way as we consider the cognitive impairments seen in children with neurodevelopmental disorders. Methods: All published brain imaging studies (Hermann et al., 2002; Khalil et al., 2004; Follin et al., 2016, etc.) and studies emphasizing cognitive impairments in attention, memory, and/or executive function and behavioral aspects (Henkin et al., 2007; Bellinger & Newburger, 2010; Cheung et al., 2016) that could be identified were reviewed. Based on a systematic review of the literature (2000-2021) covering different brain imaging studies, the increased risk of neuropsychological and psychosocial impairments is briefly described, and the clinical and research gap in the area is discussed. Results: 30 papers, both Indian studies and foreign publications (Sage journals, Delhi Psychiatry Journal, Wiley Online Library, APA PsycNet, Springer, Elsevier, Developmental Medicine and Child Neurology), were identified. Conclusions: In India, a very limited number of brain imaging and neuropsychological studies have been conducted indicating the cognitive deficits of children having, or having undergone, chronic illness. None of the studies has emphasized the relevance or the need of implementing CR among such children; even though it is high time to address this, it is still not established. 
This review of the current evidence aims to give rehabilitation professionals insight into establishing child-specific CR and to encourage the publication of new findings regarding the implementation of CR among such children. This study will also raise awareness of the cognitive aspects of a child with an acquired cognitive deficit (due to chronic illness), especially during the critical developmental period.
Keywords: cognitive rehabilitation, neuropsychological impairments, congenital heart diseases, acute lymphoblastic leukemia, epilepsy, and neuroplasticity
Procedia PDF Downloads 180
612 The Fragility of Sense: The Twofold Temporality of Embodiment and Its Role for Depression
Authors: Laura Bickel
Abstract:
This paper aims to investigate to what extent Merleau-Ponty’s philosophy of body memory serves as a viable resource for the enactive approach to cognitive science and its first-person, experience-based research on ‘recurrent depressive disorder’, coded F33 in ICD-10. In pursuit of this goal, the analysis begins by revisiting the neuroreductive paradigm. This paradigm serves biological psychiatry in explaining the condition of vital contact in terms of underlying neurophysiological mechanisms. It is demonstrated that the neuroreductive model cannot sufficiently account for the depressed person’s episodic withdrawal in causal terms. The analysis of the irregular loss of vital resonance requires integrating the body as the subject of experience and its phenomenological time. Then, it is shown that the enactive approach to depression as disordered sense-making is a promising alternative. The enactive model of perception implies that living beings do not register pre-existing meaning ‘out there’ but unfold ‘sense’ in their action-oriented response to the world. For the enactive approach, Husserl’s passive synthesis of inner time consciousness is fundamental for what becomes perceptually present for action. It seems intuitive to bring together the enactive approach to depression with the long-standing view in phenomenological psychopathology that explains the loss of vital contact by appealing to the disruption of the temporal structure of consciousness. However, this paper argues that the disruption of the temporal structure is not conceptually justified. Instead, one may integrate Merleau-Ponty’s concept of the past as the unconscious into the enactive approach to depression. From this perspective, the living being’s experiential and biological past inserts itself in the form of habits and bodily skills and ensures action-oriented responses to the environment. 
Finally, it is concluded that the depressed person’s withdrawal indicates the impairment of this process: the person suffering from F33 cannot actualize sedimented meaning to respond to the valences and tasks of a given situation.
Keywords: depression, enactivism, neuroreductionism, phenomenology, temporality
Procedia PDF Downloads 132
611 GIS Mapping of Sheep Population and Distribution Pattern in the Derived Savannah of Nigeria
Authors: Sosina Adedayo O., Babyemi Olaniyi J.
Abstract:
The location, population, and distribution pattern of sheep pose serious challenges to agribusiness investment and policy formulation in the livestock industry. There is a significant disconnect between farmers' needs and the policy framework for ameliorating sheep production constraints, and information on the population, production, and distribution pattern of sheep remains very scanty. A multi-stage sampling technique was used to elicit information from 180 purposively selected respondents in the study area, comprising Oluyole, Ona-ara, Akinyele, Egbeda, Ido and Ibarapa East LGAs. Global Positioning System (GPS) readings of the farmers' locations (distribution) and average sheep herd sizes in Total Livestock Units (TLU) (population) were recorded, taking the longitude and latitude of the locations in question. The recorded GPS data for the study area were transferred into and processed with the ArcGIS software, version 10.0. Sheep production and distribution (TLU) ranged from 4.1 (Oluyole) to 25.0 (Ibarapa East), with Oluyole, Akinyele, Ona-ara and Egbeda having TLU of 5, 7, 8 and 20, respectively. The herd sizes were classified as less than 8 (smallholder), 9-25 (medium), 26-50 (large), and above 50 (commercial). The majority (45%) of farmers were smallholders. The crude protein (CP) content of the feed resources (FR) ranged from 5.81±0.26% (cassava leaf) to 24.91±0.91% (Amaranthus spinosus); NDF ranged from 22.38±4.43% (Amaranthus spinosus) to 67.96±2.58% (Althemanthe dedentata), while ME ranged from 7.88±0.24 (Althemanthe dedentata) to 10.68±0.18 (cassava leaf). Smallholder sheep farmers were in the majority, evenly distributed across rural areas due to the availability of abundant feed resources (crop residues, tree crops, shrubs, natural pastures, and feed ingredients) coupled with a large expanse of land in the study area. Most of the available feed resources were below the sheep protein requirement level; hence, supplementation is necessary for productivity. 
Bio-informatics can provide relevant information on sheep production for policy frameworks and intervention strategies.
Keywords: sheep enterprise, agribusiness investment, policy, bio-informatics, ecological zone
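The herd-size classes stated in the abstract (less than 8 TLU smallholder, 9-25 medium, 26-50 large, above 50 commercial) can be coded directly. Since a herd of exactly 8 TLU falls between the stated ranges, it is treated as smallholder below, which is an assumption made only to close that gap:

```python
def herd_class(tlu):
    """Classify a flock by Total Livestock Unit (TLU) using the
    thresholds reported in the abstract; a herd of exactly 8 TLU is
    treated as smallholder (assumed, to close the gap in the ranges)."""
    if tlu <= 8:
        return "smallholder"
    elif tlu <= 25:
        return "medium"
    elif tlu <= 50:
        return "large"
    return "commercial"

# mean herd sizes per LGA as reported in the abstract
lga_tlu = {"Oluyole": 5, "Akinyele": 7, "Ona-ara": 8,
           "Egbeda": 20, "Ibarapa East": 25}
classes = {lga: herd_class(t) for lga, t in lga_tlu.items()}
```

Applied to the reported means, three LGAs fall in the smallholder class and two in the medium class, consistent with the finding that smallholders dominate the study area.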
Procedia PDF Downloads 82
610 Investigating the Editing's Effect of Advertising Photos on the Virtual Purchase Decision Based on the Quantitative Electroencephalogram (EEG) Parameters
Authors: Parya Tabei, Maryam Habibifar
Abstract:
Decision-making is an important cognitive function that can be defined as the process of choosing an option among the available options to achieve a specific goal. Consumer ‘need’ is the main driver of purchasing decisions. Human decision-making while buying products online is subject to various factors, one of which is the quality and effect of advertising photos. Advertising photo editing can have a significant impact on people's virtual purchase decisions: this technique helps improve the quality and overall appearance of photos by adjusting various aspects such as brightness, contrast, colors, cropping, resizing, and added filters. By examining the effect of editing advertising photos on the virtual purchase decision using EEG data, this study investigates the effect of edited images on customers' decision-making. A group of 30 participants was asked to react to 24 edited and unedited images while their EEG was recorded. Analysis of the EEG data revealed increased alpha wave activity in the occipital regions (O1, O2) for both edited and unedited images, which is related to visual processing and attention. Additionally, there was an increase in beta wave activity in the frontal regions (FP1, FP2, F4, F8) when participants viewed edited images, suggesting the involvement of cognitive processes such as decision-making and the evaluation of advertising content. Gamma wave activity also increased in various regions when viewing the edited images, especially the frontal and parietal regions, which are associated with higher cognitive functions such as attention, memory, and perception. While the visual processing reflected by alpha waves remained consistent across visual conditions, editing advertising photos appeared to boost neural activity in the frontal and parietal regions associated with decision-making processes. 
These findings suggest that photo editing could potentially influence consumer perceptions during virtual shopping experiences by modulating brain activity related to product assessment and purchase decisions.
Keywords: virtual purchase decision, advertising photo, EEG parameters, decision-making
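The band-specific activity reported above (alpha, beta, gamma) is conventionally quantified as spectral power within fixed frequency bands. A minimal periodogram sketch on a synthetic signal follows; the band limits (8-13 Hz alpha, 13-30 Hz beta) and the 256 Hz sampling rate are common conventions assumed here, not values taken from the study:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean power of one EEG channel in the band [f_lo, f_hi] Hz,
    estimated from the periodogram (squared magnitude of the real FFT)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

fs = 256                                   # Hz, assumed sampling rate
t = np.arange(0, 4, 1.0 / fs)              # 4 s of data
eeg = np.sin(2 * np.pi * 10 * t)           # synthetic 10 Hz alpha rhythm

alpha = band_power(eeg, fs, 8, 13)         # conventional band limits
beta = band_power(eeg, fs, 13, 30)
```

For the pure 10 Hz test signal the alpha-band power dominates the beta-band power, which is the kind of contrast the condition-wise comparisons above rest on; real pipelines would average over epochs and electrodes and use a tapered estimator.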
Procedia PDF Downloads 50
609 Trend of Foot and Mouth Disease and Adopted Control Measures in Limpopo Province during the Period 2014 to 2020
Authors: Temosho Promise Chuene, T. Chitura
Abstract:
Background: Foot and mouth disease is a real challenge in South Africa. The disease is a serious threat to the viability of livestock farming initiatives and affects local and international livestock trade. In Limpopo Province, the Kruger National Park and other game reserves are home to the African buffalo (Syncerus caffer), a notorious reservoir of the picornavirus, which causes foot and mouth disease. Out of the virus’s seven (7) distinct serotypes, Southern African Territories (SAT) 1, 2, and 3 are commonly endemic in South Africa. The broad objective of the study was to establish the trend of foot and mouth disease in Limpopo Province over a seven-year period (2014-2020), as well as the adoption and comprehensive reporting of the measures that are taken to contain disease outbreaks in the study area. Methods: The study used secondary data from the World Organization for Animal Health (WOAH) on reported cases of foot and mouth disease in South Africa. Descriptive analysis (frequencies and percentages) and Analysis of variance (ANOVA) were used to present and analyse the data. Result: The year 2020 had the highest prevalence of foot and mouth disease (3.72%), while 2016 had the lowest prevalence (0.05%). Serotype SAT 2 was the most endemic, followed by SAT 1. Findings from the study demonstrated the seasonal nature of foot and mouth disease in the study area, as most disease cases were reported in the summer seasons. Slaughter of diseased and at-risk animals was the only documented disease control strategy, and information was missing for some of the years. Conclusion: The study identified serious underreporting of the adopted control strategies following disease outbreaks. 
Adoption of comprehensive disease control strategies coupled with thorough reporting can help to reduce outbreaks of foot and mouth disease and prevent losses to the livestock farming sector of South Africa and Limpopo Province in particular.
Keywords: livestock farming, African buffalo, prevalence, serotype, slaughter
Procedia PDF Downloads 64
608 Organic Carbon Pools Fractionation of Lacustrine Sediment with a Stepwise Chemical Procedure
Authors: Xiaoqing Liu, Kurt Friese, Karsten Rinke
Abstract:
Lacustrine sediments archive rich paleoenvironmental information on a lake and its surrounding environment; additionally, modern sediment is used as an effective medium for lake monitoring. Organic carbon (OC) in sediment is a heterogeneous mixture with varying turnover times and qualities, which result from the different biogeochemical processes involved in the deposition of organic material. Therefore, the isolation of different carbon pools is important for research on lacustrine conditions. However, the numerous available fractionation procedures can hardly yield homogeneous carbon pools in terms of stability and age. In this work, a multi-step fractionation protocol that treats sediment with hot water, HCl, H2O2 and Na2S2O8 in sequence was adopted; the treated sediment from each step was analyzed for isotopic and structural composition with an isotope ratio mass spectrometer coupled to an element analyzer (IRMS-EA) and solid-state 13C nuclear magnetic resonance (NMR), respectively. The sequential extractions with hot water, HCl, and H2O2 yielded a more homogeneous, C3 plant-originating OC fraction, characterized by an atomic C/N ratio shift from 12.0 to 20.8 and by 13C and 15N isotopic signatures 0.9‰ and 1.9‰ more depleted than those of the original bulk sediment, respectively. Additionally, the H2O2-resistant residue was dominated by stable components, such as lignins, waxes, cutans, tannins, steroids, and aliphatic proteins and complex carbohydrates. In the acid hydrolysis step, 6M HCl was much more effective than 1M HCl at isolating a sedimentary OC fraction with a higher degree of homogeneity. Owing to its extremely high removal rate of organic matter, the Na2S2O8 oxidation step is suggested only if the isolation of the most refractory OC pool is mandatory. 
We conclude that this multi-step chemical fractionation procedure is effective in isolating more homogeneous OC pools in terms of stability and functional structure, and it can be used as a promising method for the OC pool fractionation of sediment or soil in future lake research.
Keywords: 13C-CPMAS-NMR, 13C signature, lake sediment, OC fractionation
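The reported isotopic signatures and atomic C/N ratios follow standard definitions: δ13C is the per-mil deviation of the sample's 13C/12C ratio from the VPDB standard, and the atomic C/N ratio converts mass fractions via molar masses. A small sketch follows; the sediment C and N contents are hypothetical, chosen so the example C/N lands near the bulk value of 12.0 reported above:

```python
R_VPDB = 0.0111802   # 13C/12C of the VPDB standard (commonly cited value)

def delta13C(r_sample):
    """delta 13C in per mil relative to VPDB: (R/Rstd - 1) * 1000."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

def atomic_cn(c_mass_pct, n_mass_pct):
    """Atomic C/N ratio from carbon and nitrogen mass percentages,
    using molar masses 12.011 and 14.007 g/mol."""
    return (c_mass_pct / 12.011) / (n_mass_pct / 14.007)

# hypothetical bulk sediment: 3.6 % C and 0.35 % N by mass
cn_bulk = atomic_cn(3.6, 0.35)
```

The same two formulas underlie the shift from C/N 12.0 to 20.8 and the 0.9‰ depletion in 13C quoted for the extracted fraction.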
Procedia PDF Downloads 299
607 RPM-Synchronous Non-Circular Grinding: An Approach to Enhance Efficiency in Grinding of Non-Circular Workpieces
Authors: Matthias Steffan, Franz Haas
Abstract:
Grinding is one of the last steps in a value-adding manufacturing chain. Within this step, workpiece geometry and surface roughness are determined. Up to this process stage, considerable costs and energy have already been spent on the components. According to the current state of the art, large safety reserves are therefore calculated in order to guarantee process capability. For non-circular grinding in particular, this fact leads to considerable losses of process efficiency. With present technology, the various non-circular geometries on a workpiece must be ground one after another in an oscillating process in which the X- and Q-axes of the machine are coupled. With the approach of RPM-Synchronous Noncircular Grinding, such workpieces can be machined in an ordinary plunge grinding process, in which the rotational rates of the workpiece and the grinding wheel are held in a fixed ratio. A non-circular grinding wheel is used to transfer its geometry onto the workpiece. The authors use a worldwide unique machine tool that was especially designed for this technology. High rotational rates on the workpiece spindle (up to 4500 rpm) are mandatory for the success of this grinding process. This grinding approach is performed in a two-step process. For roughing, a highly porous vitrified-bonded grinding wheel with medium grain size is used. It ensures the high specific material removal rates needed to produce the non-circular geometry on the workpiece efficiently. This process step is adapted by a force control algorithm, which uses data acquired from a three-component force sensor located in the dead centre of the tailstock. For finishing, a grinding wheel with a fine grain size is used. Roughing and finishing are performed consecutively in the same clamping of the workpiece with two locally separated grinding spindles. The approach of RPM-Synchronous Noncircular Grinding shows great efficiency enhancement in non-circular grinding. 
For the first time, three-dimensional non-circular shapes can be ground, which opens up various fields of application. The automotive industry in particular shows great interest in this emerging finishing technology.
Keywords: efficiency enhancement, finishing machining, non-circular grinding, rpm-synchronous grinding
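The force control loop that adapts the roughing step can be sketched as a simple proportional controller on the infeed rate: reduce the feed when the measured force exceeds the target, raise it when below, clamped to machine limits. The gains, limits, and force values below are hypothetical; the authors' actual algorithm is not described at this level of detail:

```python
def feed_rate_update(feed, f_measured, f_target,
                     kp=0.002, feed_min=0.5, feed_max=5.0):
    """One step of a proportional force controller for the grinding
    infeed (mm/min): the correction is proportional to the force
    error, then clamped to the machine's feed limits.
    All gains and limits are illustrative assumptions."""
    feed += kp * (f_target - f_measured)
    return min(max(feed, feed_min), feed_max)

feed = 2.0                     # mm/min, hypothetical starting infeed
# measured normal force 180 N against a 150 N target: back off the feed
feed = feed_rate_update(feed, f_measured=180.0, f_target=150.0)
```

A production controller would run at the sensor sampling rate and typically add integral action and spindle-load limits, but the proportional core is the same idea.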
Procedia PDF Downloads 283
606 Representation of History in Cinema: Comparative Analysis of Turkish Films Based on the Conquest of Istanbul
Authors: Dilara Balcı Gulpinar
Abstract:
History, which can be defined as the narrative of the past, is a process of reproduction that takes place in the present. The scientific status of historiography is controversial, for reasons such as the fact that the historian makes choices and comments; even the reason for choosing a subject detracts from objectivity. Historians may draw on current values, may not be able to afford to contradict society, and/or may face pressure from dominant groups. In addition, due to the lack of documentation, interpretation and fiction are used to integrate historical events that seem disconnected. In this respect, there are views that relate history to the narrative arts rather than to the positive sciences. Popular historical films, which are visual historical representations, appeal to wider audiences by taking advantage of visuality, dramatic fictional narrative, various effects, music, stars, and other populist elements. The historical film, which does not claim to be scientific and even has the freedom to distort historical reality, can be perceived as reality itself and becomes an indispensable resource for individual and social memory. The ideological discourse of popular films is not only impressive and manipulative but also changeable: socio-cultural and political changes can transform the representation of history in films sharply and rapidly. In accordance with the above-mentioned hypothesis, this study examines Turkish historical films about the conquest of Istanbul, using methods of historical and social analysis. İstanbul’un Fethi (Conquest of Istanbul, Aydin Arakon, 1953), Kuşatma Altında Aşk (Love Under Siege, Ersin Pertan, 1997) and Fetih 1453 (Conquest 1453, Faruk Aksoy, 2012) are the only three films in Turkish cinema that revolve around the said conquest, therefore constituting the sample of this study. 
It was determined that the real and fictional events, as well as the characters that are focused on or ignored, differ from film to film. Such significant differences in the dramatic and cinematographic structure of these three films, shot in the 50s, 90s, and 2010s respectively, show that the representation of history in popular cinema has altered throughout the years, losing its claim to objectivity.
Keywords: cinema, conquest of Istanbul, historical film, representation
Procedia PDF Downloads 135
605 Methodologies, Findings, Discussion, and Limitations in Global, Multi-Lingual Research: We Are All Alone - Chinese Internet Drama
Authors: Patricia Portugal Marques de Carvalho Lourenco
Abstract:
A three-phase, multi-lingual methodological path was designed, constructed, and carried out using the 2020 Chinese internet drama series We Are All Alone as a case study. Phase one, the backbone of the research, comprised secondary data analysis, providing the structure on which the next two phases would be built. It incorporated a Google Scholar and a Baidu Index analysis, the Star Network Influence Index, and the top two drama reviews on Mydramalist.com, along with an article written about the drama and scrutiny of Chinese-related blogs and websites. Phase two was field research carried out across Latin Europe, and phase three was social media focused, taking into account that perceptions are memory-conditioned, based on the recall of past ideas. Overall, the research has shown the poor cultural expression of Chinese entertainment in Latin Europe and demonstrated the absence of Chinese content in French, Italian, Portuguese, and Spanish business-to-consumer retailers; a reflection of its low significance in Latin European markets and of the short life cycle of entertainment products in general: bubble-gum, disposable goods without a mid- to long-term effect on consumers' lives. The process of conducting comprehensive international research was complex and time-consuming, with data not always available in Mandarin, the researcher's linguistic limitations, limited knowledge of Chinese culture, and issues of cultural equivalence. Despite the steps taken to minimize these limitations, theoretical limitations pertaining to Latin Europe and China still occurred: data accuracy was disputable; sampling and data collection/analysis methods were heterogeneous; and ascertaining the data requirements and the method of analysis needed to achieve construct equivalence was challenging and laborious to operationalize. 
Secondary data were also often not readily available in Mandarin; yet, in spite of the array of limitations, the research was done and results were produced.
Keywords: research methodologies, international research, primary data, secondary data, research limitations, online dramas, china, latin europe
Procedia PDF Downloads 68
604 Effect of Classroom Acoustic Factors on Language and Cognition in Bilinguals and Children with Mild to Moderate Hearing Loss
Authors: Douglas MacCutcheon, Florian Pausch, Robert Ljung, Lorna Halliday, Stuart Rosen
Abstract:
Contemporary classrooms are increasingly inclusive of children with mild to moderate disabilities and children from different language backgrounds (bilinguals, multilinguals), but classroom environments and standards have not yet been adapted adequately to meet the challenges brought about by this inclusivity. Additionally, classrooms are becoming noisier as a learner-centered, as opposed to teacher-centered, teaching paradigm is adopted, which prioritizes group work and peer-to-peer learning. Challenging listening conditions with distracting sound sources and background noise are known to have potentially negative effects on children, particularly those prone to struggle with speech perception in noise. Therefore, this research investigates two groups vulnerable to these environmental effects, namely children with mild to moderate hearing loss (MMHL) and sequential bilinguals learning in their second language. In the MMHL study, this group was assessed on speech-in-noise perception and on a number of receptive language and cognitive measures (auditory working memory, auditory attention), and correlations among these were evaluated. Speech reception thresholds were found to be predictive of language and cognitive ability, and the nature of the correlations is discussed. In the bilinguals study, sequential bilingual children's listening comprehension, speech-in-noise perception, listening effort, and release from masking were evaluated under a number of ecologically valid acoustic scenarios in order to pinpoint the extent of the ‘native language benefit’ for Swedish children learning in English, their second language. Scene manipulations included varying target-to-distractor ratios and introducing spatially separated noise. 
This research will contribute to the body of findings from which educational institutions can draw when designing or adapting educational environments in inclusive schools.
Keywords: sequential bilinguals, classroom acoustics, mild to moderate hearing loss, speech-in-noise, release from masking
Procedia PDF Downloads 326
603 Objective Assessment of the Evolution of Microplastic Contamination in Sediments from a Vast Coastal Area
Authors: Vanessa Morgado, Ricardo Bettencourt da Silva, Carla Palma
Abstract:
Environmental pollution by microplastics is well recognized. Microplastics have already been detected worldwide in various matrices from distinct environmental compartments, some from remote areas. Various methodologies and techniques have been used to determine microplastics in such matrices, for instance, sediment samples from the ocean bottom. To determine microplastics in a sediment matrix, the sample is typically sieved through a 5 mm mesh, digested to remove the organic matter, and density-separated to isolate microplastics from the denser part of the sediment. The physical analysis of microplastics consists of visual analysis under a stereomicroscope to determine particle size, colour, and shape. The chemical analysis is performed with an infrared spectrometer coupled to a microscope (micro-FTIR), which identifies the chemical composition of the microplastic, i.e., the type of polymer. Creating legislation and policies to control and manage (micro)plastic pollution is essential to protect the environment, namely coastal areas, and such regulation is defined from the known relevance and trends of the pollution. This work discusses the assessment of contamination trends across a 700 km² oceanic area, accounting for contamination heterogeneity, sampling representativeness, and the uncertainty of the analysis of the collected samples. The methodology developed consists of objectively identifying meaningful variations in microplastic contamination through Monte Carlo simulation of all uncertainty sources. This work allowed us to conclude unequivocally that the contamination level of the studied area did not vary significantly between two consecutive years (2018 and 2019) and that PET microplastics are the major polymer type. The comparison of contamination levels was performed at a 99% confidence level. 
The developed know-how is crucial for the objective and binding determination of microplastic contamination in relevant environmental compartments.
Keywords: measurement uncertainty, micro-ATR-FTIR, microplastics, ocean contamination, sampling uncertainty
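The Monte Carlo comparison described in this abstract can be illustrated with a minimal sketch. The data values, uncertainty magnitudes, and function names below are illustrative assumptions, not the authors' actual procedure: station counts are bootstrap-resampled to represent sampling heterogeneity, each value is perturbed by a relative analytical uncertainty, and the two years differ significantly only if the 99% interval of the simulated difference excludes zero.

```python
import random
import statistics

def simulate_mean_concentration(sample_means, rel_analysis_sd, n_trials, seed):
    """Monte Carlo propagation of sampling and analytical uncertainty.

    sample_means: microplastic concentrations observed at replicate stations
    rel_analysis_sd: relative standard uncertainty of the laboratory analysis
    Returns a list of simulated area-mean concentrations.
    """
    rng = random.Random(seed)
    n = len(sample_means)
    sims = []
    for _ in range(n_trials):
        # Resample stations with replacement: sampling/heterogeneity uncertainty.
        resampled = [rng.choice(sample_means) for _ in range(n)]
        # Perturb each value multiplicatively: analytical uncertainty.
        perturbed = [x * rng.gauss(1.0, rel_analysis_sd) for x in resampled]
        sims.append(statistics.fmean(perturbed))
    return sims

def significant_difference(sims_a, sims_b, confidence=0.99):
    """True if the confidence interval of (b - a) excludes zero."""
    diffs = sorted(b - a for a, b in zip(sims_a, sims_b))
    alpha = 1.0 - confidence
    lo = diffs[int(len(diffs) * alpha / 2)]
    hi = diffs[int(len(diffs) * (1.0 - alpha / 2)) - 1]
    return not (lo <= 0.0 <= hi), (lo, hi)

if __name__ == "__main__":
    year_2018 = [120.0, 135.0, 110.0, 150.0, 128.0]  # items/kg, illustrative
    year_2019 = [118.0, 140.0, 115.0, 145.0, 130.0]
    sims18 = simulate_mean_concentration(year_2018, 0.10, 4000, seed=1)
    sims19 = simulate_mean_concentration(year_2019, 0.10, 4000, seed=2)
    significant, interval = significant_difference(sims18, sims19)
    print(significant, interval)
```

With overlapping station data like the illustrative values above, the 99% interval of the year-to-year difference straddles zero, mirroring the abstract's conclusion that 2018 and 2019 do not differ significantly at that confidence level.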
Procedia PDF Downloads 89
602 Talking Back to Hollywood: Museum Representation in Popular Culture as a Gateway to Understanding Public Perception
Authors: Jessica BrodeFrank, Beka Bryer, Lacey Wilson, Sierra Van Ryck deGroot
Abstract:
Museums are enjoying quite a moment in pop culture. From discussions of labor in Bob’s Burgers to the introduction of cultural repatriation in Black Panther, discussions of museum issues are making their way into popular media. “Talking Back to Hollywood” analyzes the impact museums have had on movies and television. The paper highlights a series of cultural cameos and discusses what each reveals about critical themes in museums: repatriation, labor conditions and inequities, obfuscated histories, curatorial choice and control, institutional legacies, artificial intelligence, and holograms. Using a mixed-methods approach that includes surveys, descriptive research, thematic analysis, and content analysis, the authors explore how we, as museum staff, might begin to cite museums and movies together as texts. Drawing on their experience working in museums and public history, this contingent of mid-career professionals grounds the paper in the cultural zeitgeist of the 2000s and in the message these media portrayals send to the public and the cultural heritage sector. In particular, the paper examines how portrayals of AI, holograms, and other technologies can serve as entry points for necessary discussions with the public on mistrust, misinformation, and emerging technologies. The paper not only exposes the legacy and cultural understanding of the museum field within popular culture but also discusses actionable ways that public historians can use these portrayals as entry points for discussion, citing literature reviews and quantitative and qualitative analysis of survey results. 
As Hollywood is talking about museums, museums can use that to better connect to the audiences who feel comfortable at the cinema but are excluded from the museum.
Keywords: museums, public memory, representation, popular culture
Procedia PDF Downloads 83