Search results for: outermost stationary master robots
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1181

221 The MicroRNA-2110 Suppressed Cell Proliferation and Migration Capacity in Hepatocellular Carcinoma Cells

Authors: Pelin Balcik Ercin

Abstract:

Introduction: The ZEB transcription factor family member ZEB2 has a role in epithelial to mesenchymal transition during development and metastasis. Altered expression of circulating extracellular miRNAs is observed in disease, and extracellular miRNAs have an important role in the cancer cell microenvironment. In a ChIP-Seq study, the expression of miR-2110 was found to be regulated by ZEB2. In this study, the effects of miR-2110 on cell proliferation and migration of hepatocellular carcinoma (HCC) cells were examined. Material and Methods: SNU398 cells were transfected with mimic miR-2110 (20 nM) (HMI0375, Sigma-Aldrich) and negative control miR (HMC0002, Sigma-Aldrich). MicroRNA isolation was accomplished with the miRVANA isolation kit according to the manufacturer's instructions. cDNA synthesis was performed, and expression was calibrated against the Ct values of controls. The real-time quantitative PCR (RT-qPCR) reaction was performed using the TaqMan Fast Advanced Master Mix (Thermo Sci.). Ct values of miR-2110 were normalized to miR-186-5p and miR-16-5p as intracellular reference genes. Cell proliferation was analyzed with the xCELLigence RTCA System. The wound healing assay was analyzed with the ImageJ program, and relative fold change was calculated. Results: The mimic-miR-2110 transfected SNU398 cells expressed nearly nine-fold (log2) more miR-2110 than negative control transfected cells. Proliferation of the mimic-miR-2110 transfected HCC cells was significantly inhibited compared to the negative control cells. Furthermore, miR-2110-SNU398 cell migration capacity was decreased approximately four-fold relative to negative control-miR-SNU398 cells. Conclusion: Our results suggest that miR-2110 inhibited cell proliferation and negatively affected cell migration compared to control groups in HCC cells. These data point to the complexity of microRNA and EMT transcription factor regulation, and these initial results point to the predictive biomarker capacity of miR-2110 in HCC.
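
As a hedged illustration of the normalization and fold-change calculation described above, the sketch below applies the standard 2^-ΔΔCt method on the log2 scale; the Ct numbers and the single averaged reference are hypothetical placeholders, not data from the study.

```python
def log2_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method, returned on the log2 scale."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # normalize to the reference miRNA(s)
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                  # calibrate against the control sample
    return -dd_ct                                        # log2(2^-ddCt) = -ddCt

# Hypothetical Ct values: mimic-miR-2110 vs. negative-control cells, each normalized
# to the mean Ct of the reference miRNAs (e.g. miR-186-5p and miR-16-5p).
print(log2_fold_change(ct_target_treated=22.1, ct_ref_treated=24.0,
                       ct_target_control=31.3, ct_ref_control=24.2))  # ~ +9 on the log2 scale
```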

Keywords: epithelial to mesenchymal transition, EMT, hepatocellular carcinoma cells, micro-RNA-2110, ZEB2

Procedia PDF Downloads 126
220 Contextualization and Localization: Acceptability of the Developed Activity Sheets in Science 5 Integrating Climate Change Adaptation

Authors: Kim Alvin De Lara

Abstract:

The research aimed to assess the level of acceptability of the developed activity sheets in Science 5 integrating climate change adaptation among grade 5 science teachers in the District of Pililla in school year 2016-2017. In this research, participants were able to recognize and understand the importance of environmental education in improving basic education and integrating it into lessons through localization and contextualization. The researcher conducted the study to develop a material for use by science teachers in Grade 5; it also served as a self-learning resource for students. The respondents of the study were the thirteen Grade 5 teachers teaching Science 5 in the District of Pililla. Respondents were selected purposively and identified by the researcher. A descriptive method of research was utilized. The main instrument was a checklist which includes items on the objectives, content, tasks, contextualization and localization of the developed activity sheets. The researcher developed a 2-week lesson in Science 5 for the 4th Quarter based on the curriculum guide, with integration of climate change adaptation. The findings revealed that the majority of respondents are female, 31 years old and above, have taught science for more than 10 years, and have units in a master's degree. With regard to the level of acceptability, the study revealed that the developed activity sheets in Science 5 are very much acceptable. In view of the findings, lessons in Science 5 must be contextualized and localized so that the curriculum responds, conforms, reflects, and is flexible to the needs of the learners, especially the 21st-century learners who need to be holistically and skillfully developed. As revealed by the findings, it is more acceptable to localize and contextualize the learning materials for pupils. Policy formation and re-organization of the lessons and competencies in Science must be reviewed and re-evaluated. Lessons in science must also be integrated with climate change adaptation since, nowadays, people are experiencing changes in climate due to global warming and other factors. Through the developed activity sheets, the researcher strongly supports environmental education and believes this serves as a way to instill environmental literacy in students.

Keywords: activity sheets, climate change adaptation, contextualization, localization

Procedia PDF Downloads 330
219 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood

Authors: Randa Alharbi, Vladislav Vyshemirsky

Abstract:

Systems biology is an important field of science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their function, and their interactions. A well-designed model requires selecting a suitable mechanism which can capture the main features of the system, define the essential components of the system, and represent an appropriate law that can define the interactions between its components. Complex biological systems exhibit stochastic behaviour. Thus, probabilistic models are suitable for describing and analysing biological systems. The Continuous-Time Markov Chain (CTMC) is one of the probabilistic models that describe the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time can be obtained from the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet, inference in such a complex system is challenging as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation is a common approach for tackling this inference problem, which relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper, we discuss the efficiency and possible practical issues of each method, taking into account their computational time. We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
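
As a hedged illustration of the ABC idea mentioned above, the sketch below runs rejection ABC on a deliberately simple CTMC (a birth-death process simulated with the Gillespie algorithm). The model, prior, summary statistic and tolerance are illustrative assumptions and not the Repressilator analysis performed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_birth_death(birth, death, x0=10, t_end=5.0):
    """Simulate a birth-death CTMC (Gillespie algorithm) and return the final count."""
    t, x = 0.0, x0
    while x > 0:
        total = (birth + death) * x
        t += rng.exponential(1.0 / total)
        if t >= t_end:
            break
        x += 1 if rng.random() < birth / (birth + death) else -1
    return x

def abc_rejection(observed, n_samples=500, tol=3):
    """Rejection ABC: keep birth-rate draws whose simulated summary is close to the data."""
    accepted = []
    for _ in range(n_samples):
        birth = rng.uniform(0.0, 0.6)                 # prior over the unknown birth rate
        if abs(gillespie_birth_death(birth, death=0.3) - observed) <= tol:
            accepted.append(birth)
    return np.array(accepted)

posterior = abc_rejection(observed=25)
print(posterior.size, "accepted, posterior mean birth rate:", posterior.mean())
```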

Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)

Procedia PDF Downloads 207
218 Quality Assurance in Higher Education: Doha Institute for Graduate Studies as a Case Study

Authors: Ahmed Makhoukh

Abstract:

Quality assurance (QA) has recently become a common practice endorsed by most Higher Education (HE) institutions worldwide, due to the pressure of internal and external forces. One of the aims of this quality movement is to make the contribution of university education to socio-economic development highly significant. This entails that graduates are currently required to have a high-quality profile, i.e., to be competent and to master the 21st-century skills needed in the labor market. This wave of change, mostly imposed by globalization, means that university education should be learner-centered in order to satisfy the different needs of students and meet the expectations of other stakeholders. Such a shift of focus onto student learning outcomes has led HE institutions to reconsider their strategic planning, their mission, the curriculum, and the pedagogical competence of the academic staff, among other elements. To ensure that the overall institutional performance is on the right track, a QA system should be established to check regularly the extent to which the set evaluation standards are respected. This QA operation has the advantage of demonstrating the accountability of the institution, gaining public trust through transparency, and enjoying international recognition. This is the case of the Doha Institute (DI) for Graduate Studies, in Qatar, the object of the present study. The significance of this contribution is to show that the conception of quality has changed in this digital age, and that a department responsible for QA needs to be integrated into every HE institution to ensure educational quality, support learners, and achieve academic leadership. Thus, to address the issue of QA in the DI for Graduate Studies, an elite university (in the academic sense) that focuses on a small and selected number of students, a qualitative method will be adopted in the description and analysis of the data (document analysis). In an attempt to investigate the extent to which QA is achieved in the Doha Institute for Graduate Studies, three broad indicators will be evaluated (input, process and learning outcomes). This investigation will be carried out in line with the UK Quality Code for Higher Education represented by the Quality Assurance Agency (QAA).

Keywords: accreditation, higher education, quality, quality assurance, standards

Procedia PDF Downloads 149
217 Subtitling in the Classroom: Combining Language Mediation, ICT and Audiovisual Material

Authors: Rossella Resi

Abstract:

This paper describes a project carried out in an Italian school with English-learning pupils, combining three didactic tools which are attested to be relevant for the success of a young learner's language curriculum: the use of technology, intralingual and interlingual mediation (according to the CEFR), and the cultural dimension. The aim of this project was to test a technological hands-on translation activity like subtitling in a formal teaching context and to exploit its potential as a motivational tool for developing listening, writing, translation and cross-cultural skills among language learners. The activities proposed involved the use of the professional subtitling software Aegisub and culture-specific films. The workshop was optional, so motivation was entirely based on the pleasure of engaging with a realistic subtitling program and on the challenge of meeting the constraints that a real-life/work situation might involve. Twelve pupils between the ages of 16 and 18 attended the afternoon workshop. The workshop was organized in three parts: (i) An introduction in which the learners were introduced to the concept and constraints of subtitling and provided with a few basic rules on spotting and segmentation. During this session, learners also had time to familiarize themselves with the main software features. (ii) The second part involved three subtitling activities carried out in plenum or in groups. In the first activity, the learners experienced the technical dimensions of subtitling: they were provided with a short video segment together with its transcription, to be segmented and time-spotted. The second activity also involved oral comprehension: learners had to understand and transcribe a video segment before subtitling it. The third activity embedded a translation task based on a provided transcription, including segmentation and spotting of subtitles. (iii) The workshop ended with a small final project. At this point, learners were able to complete a short subtitling assignment (transcription, translation, segmenting and spotting) on their own with a similar video interview. The results of these assignments were above expectations, since the learners were highly motivated by the authentic and original nature of the assignment. The subtitled videos were evaluated and watched in the regular classroom together with the other students who did not take part in the workshop.

Keywords: ICT, L2, language learning, language mediation, subtitling

Procedia PDF Downloads 417
216 Advanced Magnetic Field Mapping Utilizing Vertically Integrated Deployment Platforms

Authors: John E. Foley, Martin Miele, Raul Fonda, Jon Jacobson

Abstract:

This paper presents the development and implementation of new and innovative data collection and analysis methodologies based on the deployment of total field magnetometer arrays. Our research has focused on the development of a vertically-integrated suite of platforms, all utilizing common data acquisition, data processing and analysis tools. These survey platforms include low-altitude helicopters and ground-based vehicles, including robots, for terrestrial mapping applications. For marine settings, the sensor arrays are deployed either from a hydrodynamic bottom-following wing towed from a surface vessel or from a towed floating platform for shallow-water settings. Additionally, sensor arrays are deployed from tethered remotely operated vehicles (ROVs) for underwater settings where high maneuverability is required. While the primary application of these systems is the detection and mapping of unexploded ordnance (UXO), these systems are also used for various infrastructure mapping and geologic investigations. For each application, success is driven by the integration of magnetometer arrays, accurate geo-positioning, system noise mitigation, and stable deployment of the system in appropriate proximity to expected targets or features. Each of the systems collects geo-registered data compatible with a web-enabled data management system providing immediate access to data and meta-data for remote processing, analysis and delivery of results. This approach allows highly sophisticated magnetic processing methods, including classification based on dipole modeling and remanent magnetization, to be efficiently applied to many projects. This paper also briefly describes the initial development of magnetometer-based detection systems deployed from low-altitude helicopter platforms and the subsequent successful transition of this technology to the marine environment. Additionally, we present examples from a range of terrestrial and marine settings as well as ongoing research efforts related to sensor miniaturization for unmanned aerial vehicle (UAV) magnetic field mapping applications.
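
As a loose illustration of the dipole modeling used for target classification, the sketch below evaluates the field of a point magnetic dipole at a sensor location; the moment, geometry and units are hypothetical and the code is not drawn from the authors' processing chain.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def dipole_field(m, r_obs, r_dip):
    """Magnetic flux density [T] of a point dipole with moment m [A*m^2]."""
    r = np.asarray(r_obs, float) - np.asarray(r_dip, float)
    d = np.linalg.norm(r)
    r_hat = r / d
    return MU0 / (4 * np.pi * d**3) * (3 * np.dot(m, r_hat) * r_hat - np.asarray(m, float))

# Hypothetical buried target: 5 A*m^2 vertical moment, 2 m below the sensor.
b = dipole_field(m=[0.0, 0.0, 5.0], r_obs=[0.0, 0.0, 0.0], r_dip=[0.0, 0.0, -2.0])
print(np.linalg.norm(b) * 1e9, "nT")  # anomaly amplitude in nanotesla
```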

Keywords: dipole modeling, magnetometer mapping systems, sub-surface infrastructure mapping, unexploded ordnance detection

Procedia PDF Downloads 467
215 Carl Wernicke and the Origin of Neurolinguistics in Breslau: A Case Study in the Domain of the History of Linguistics

Authors: Aneta Daniel

Abstract:

The subject of the study is the exploration of the origins and dynamics of the development of the language studies which have been labelled as neurolinguistics. It is worth mentioning that the origins of neurolinguistics are to be found in the research conducted by German scientists before the Second World War at Breslau Universität (presently Wroclaw). The dominant figure in these studies was Professor Carl Wernicke, whose students continued and creatively developed their master's projects in this area. Professor Carl Wernicke, a German physician, anatomist, psychiatrist, and neuropathologist, is primarily known for his influential research on aphasia. His research, as well as that conducted by Professor Paul Broca, led to breakthroughs in the localization of brain functions, particularly speech. Years later, the theses of the pioneers of cognitive neurology (Carl Wernicke and Paul Broca) were developed by other neurolinguists. The main objective of the investigation is the reconstruction of the group of scientists (the students of Carl Wernicke) who contributed to the development of neurolinguistics. The scholars were mainly neurologists and psychiatrists and dealt with a branch of science that had not yet been named neurolinguistics at that time. The profiles of the scholars will be analysed and presented as members of the group of researchers who contributed to the breakthroughs in psychology and neuroscience. The research material consists of archival records documenting the research of Professor Carl Wernicke and the researchers from Breslau (presently Wroclaw), which is one of the fastest growing cities in Europe. In 1870, when Carl Wernicke became a medical doctor, Breslau was full of cultural events: festivals and circus shows were held in the city center. Today we can revisit these events thanks to the 'Breslauer Zeitung' (1870), which precisely describes all the events that took place on particular days. It is worth noting that those were also the beginnings of antisemitism in Breslau. Many theses and articles that have survived in the libraries of Wroclaw and all over the world contribute to the development of neuroscience. The history of research on the brain and speech analysis, including the history of psychology and neuroscience, areas from which neurolinguistics is derived, will be presented.

Keywords: Aphasia, brain injury, Carl Wernicke, language, neurolinguistics

Procedia PDF Downloads 397
214 Numerical Analysis of Charge Exchange in an Opposed-Piston Engine

Authors: Zbigniew Czyż, Adam Majczak, Lukasz Grabowski

Abstract:

The paper presents a description of the geometric models, computational algorithms, and results of numerical analyses of charge exchange in a two-stroke opposed-piston engine. The research engine was a newly designed internal combustion Diesel engine. The unit is characterized by three cylinders in which three pairs of opposed pistons operate. The engine will generate a power output equal to 100 kW at a crankshaft rotation speed of 3800-4000 rpm. The numerical investigations were carried out using the ANSYS FLUENT solver. Numerical research, in contrast to experimental research, allows us to validate project assumptions and avoid costly prototype preparation for experimental tests. This makes it possible to optimize the geometrical model in countless variants with no production costs. The geometrical model includes an intake manifold, a cylinder, and an outlet manifold. The study was conducted for a series of modifications of the manifolds and the intake and exhaust ports in order to optimize the charge exchange process in the engine. The calculations specified a swirl coefficient obtained under stationary conditions for a full opening of the intake and exhaust ports as well as a CA value of 280° for all cylinders. In addition, mass flow rates were identified separately in all of the intake and exhaust ports to achieve the best possible uniformity of flow in the individual cylinders. For the models under consideration, velocity, pressure and streamline contours were generated in important cross sections. The developed models are designed primarily to minimize the flow drag through the intake and exhaust ports while the mass flow rate increases. Firstly, in order to calculate the swirl ratio [-], the tangential velocity v [m/s] and then the angular velocity ω [rad/s] of the charge were calculated as the mean over the mesh elements. The paper contains comparative analyses of all the intake and exhaust manifolds of the designed engine. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK "PZL-KALISZ" S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.
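
The sketch below shows one common way such a swirl ratio can be formed from CFD cell data: a mass-weighted mean angular velocity of the charge about the cylinder axis, normalized by the engine angular speed. The cell arrays and the chosen reference speed are illustrative assumptions rather than the exact definition used in the study.

```python
import numpy as np

def swirl_ratio(x, y, u, v, mass, engine_rpm):
    """Mass-weighted mean angular velocity of the charge / engine angular speed [-]."""
    r = np.hypot(x, y)                                   # radius of each cell from the cylinder axis [m]
    v_t = (x * v - y * u) / np.where(r > 0, r, np.inf)   # tangential velocity [m/s]
    omega_cells = v_t / np.where(r > 0, r, np.inf)       # angular velocity per cell [rad/s]
    omega_charge = np.average(omega_cells, weights=mass)
    omega_engine = 2 * np.pi * engine_rpm / 60.0
    return omega_charge / omega_engine

# Hypothetical cell-centre coordinates [m], velocities [m/s] and masses [kg] for a few cells.
x = np.array([0.01, -0.02, 0.03]); y = np.array([0.02, 0.01, -0.01])
u = np.array([-8.0, -4.0, 3.0]);   v = np.array([4.0, -9.0, 12.0])
mass = np.array([1e-6, 2e-6, 1.5e-6])
print(swirl_ratio(x, y, u, v, mass, engine_rpm=3800))
```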

Keywords: computational fluid dynamics, engine swirl, fluid mechanics, mass flow rates, numerical analysis, opposed-piston engine

Procedia PDF Downloads 200
213 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors

Authors: Jakob Krause

Abstract:

The past financial crisis has shown that contemporary risk management models provide an unjustified sense of security and fail miserably in situations in which they are needed the most. In this paper, we start from the assumption that risk is a notion that changes over time and that past data points therefore have only limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by optimizing between the two adverse forces of estimator convergence, incentivizing us to use as much data as possible, and the aforementioned non-representativeness, doing the opposite. In this endeavor, the cornerstone assumption of having access to identically distributed random variables is weakened and substituted by the assumption that the law of the data generating process changes over time. Hence, in this paper, we give a quantitative theory of how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the latest iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe. Hence, in the formal description of physical systems, the level of assumptions can be much higher. It follows that every concept that is carried over from the natural sciences to economics must be checked for its plausibility in the new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate. However, only dependence has, so far, been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness. Subsequently, the data set is identified that, on average, minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching in a variety of fields. In the paper itself, we apply the results in order to analyze a paragraph in the Basel 3 framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and modeling limited understanding and learning behavior in economics.
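
A minimal sketch of the tradeoff described above, assuming a simple drifting-mean model: the error of a rolling-window estimator is approximated as a variance term that shrinks with more data plus a representativeness (drift-bias) term that grows with window length, and the window minimizing their sum is selected. The error decomposition and drift model are illustrative assumptions, not the paper's formal theory.

```python
import numpy as np

def optimal_window(sigma, drift, n_max):
    """Window length n minimizing variance (sigma^2 / n) + squared drift bias ((drift * n / 2)^2)."""
    n = np.arange(1, n_max + 1)
    mse = sigma**2 / n + (drift * n / 2.0) ** 2
    return int(n[np.argmin(mse)]), float(mse.min())

# Hypothetical daily return volatility and a slow drift of the mean over time.
print(optimal_window(sigma=0.02, drift=1e-4, n_max=500))
```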

Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling

Procedia PDF Downloads 151
212 Impact of Fischer-Tropsch Wax on Ethylene Vinyl Acetate/Waste Crumb Rubber Modified Bitumen: An Energy-Sustainability Nexus

Authors: Keith D. Nare, Mohau J. Phiri, James Carson, Chris D. Woolard, Shanganyane P. Hlangothi

Abstract:

In an energy-intensive world, minimizing energy consumption is paramount to cost saving and reducing the carbon footprint. Improving mixture procedures by utilizing the warm-mix additive Fischer-Tropsch (FT) wax in ethylene vinyl acetate (EVA) modified bitumen highlights a greener and more sustainable approach to modified bitumen. In this study, the impact of FT wax on optimized EVA/waste crumb rubber modified bitumen is assayed with a maximum loading of 2.5%. The rationale for the FT wax loading is to maintain the original maximum loading of EVA in the optimized mixture. The phase change abilities of FT wax enable EVA co-crystallization with the support of the elastomeric backbone of crumb rubber. Less than 1% loading of FT wax worked in the EVA/crumb rubber modified bitumen energy-sustainability nexus. A response surface methodology approach to the mixture design is implemented across the different loadings of FT wax and EVA for a consistent amount of crumb rubber and bitumen. Rheological parameters (complex shear modulus, phase angle and rutting parameter) were the factors used as performance indicators of the different optimized mixtures. The low-temperature behaviour of the optimized mixtures is analyzed using elementary beam theory and the elastic-viscoelastic correspondence principle. Master curves and black space diagrams are developed and used to predict age-induced cracking of the different long-term aged mixtures. Modified binder rheology reveals that the strain response is not linear and that there is substantial re-arrangement of polymer chains as stress is increased; this depends on the age state of the mixture and the FT wax and EVA loadings. Dominance of individual effects over synergistic effects is evident in the co-interaction of EVA and FT wax. All-inclusive FT wax and EVA formulations were best optimized in mixture 4, with mixture 7 reflecting an increase in ease of workability. Findings show that the interaction chemistry of bitumen, crumb rubber, EVA, and FT wax is of first and second order in all cases involving individual contributions and co-interaction amongst the components of the mixture.
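
As a hedged illustration of the rheological performance indicators mentioned above, the snippet below computes the complex shear modulus and the Superpave rutting parameter |G*|/sin δ from hypothetical DSR readings; the values are invented for illustration only.

```python
import numpy as np

def rutting_parameter(stress_amplitude, strain_amplitude, phase_angle_deg):
    """|G*|/sin(delta) [Pa]: higher values indicate better rutting resistance."""
    g_star = stress_amplitude / strain_amplitude    # complex shear modulus |G*| [Pa]
    delta = np.radians(phase_angle_deg)             # phase angle [rad]
    return g_star / np.sin(delta)

# Hypothetical DSR reading at 64 °C for an unaged modified binder.
print(rutting_parameter(stress_amplitude=120.0, strain_amplitude=0.12,
                        phase_angle_deg=78.0))      # ~1.02 kPa
```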

Keywords: bitumen, crumb rubber, ethylene vinyl acetate, FT wax

Procedia PDF Downloads 175
211 PhD Students’ Challenges with Impact-Factor in Kazakhstan

Authors: Duishon Shamatov

Abstract:

This presentation is about Kazakhstani PhD students' experiences with the impact-factor publication requirement. Since the break-up of the USSR, Kazakhstan has been attempting to improve its higher education system at the undergraduate and graduate levels. In March 2010, Kazakhstan joined the Bologna process and entered the European space of higher education. To align with the European system of higher education, a three-level preparation of specialists (undergraduate, master's and PhD) was adopted to replace the Soviet system. The changes were aimed at promoting high-quality higher education that meets the demands of the labor market and the growing needs of the industrial-innovative development of the country, and at meeting international standards. The shift to the European system has brought many benefits, but there are also some serious challenges. One of those challenges is related to the requirements for PhD candidates to publish in national and international journals: a PhD candidate should have 7 publications in total, of which one has to be in an international impact-factor journal. A qualitative study was conducted to explore the PhD students' views of their experiences with impact-factor publications. With the help of purposeful sampling, 30 PhD students from seven universities across Kazakhstan were selected for individual and focus group interviews. The key findings of the study are as follows. While the Kazakh PhD students have no difficulties in publishing in local journals, they face great challenges in attempting to publish in impact-factor journals for a range of reasons. These include, but are not limited to, lack of research and publication skills, poorer knowledge of academic English, unfamiliarity with peer-review publication processes and expectations, and very short time to get published due to their PhD programme requirements. This situation is pushing some of these young scholars to explore alternative ways to get published in impact-factor journals, and they seek to publish by any means and often at any cost (which can even mean paying large sums of money for a publication). This, in turn, creates a myth in scholarly circles in Kazakhstan that to get published in impact-factor journals, one necessarily has to pay much money. This paper offers some policy recommendations on how to improve the preparation of future PhD candidates in Kazakhstan.

Keywords: Bologna process, impact-factor publications, post-graduate education, Kazakhstan

Procedia PDF Downloads 381
210 Usability Testing on Information Design through Single-Lens Wearable Device

Authors: Jae-Hyun Choi, Sung-Soo Bae, Sangyoung Yoon, Hong-Ku Yun, Jiyoung Kwahk

Abstract:

This study was conducted to investigate the effect of ocular dominance on recognition performance using a single-lens smart display designed for cycling. A total of 36 bicycle riders who have been cycling consistently were recruited and participated in the experiment. The participants were asked to perform tasks while riding a bicycle on a stationary stand for safety reasons. Independent variables of interest include ocular dominance, bike usage, age group, and information layout. Recognition time (i.e., the time required to identify specific information, measured with an eye-tracker), error rate (i.e., a false answer or failure to identify the information within 5 seconds), and user preference scores were measured, and statistical tests were conducted to identify significant results. Recognition time and error ratio showed a significant difference by the ocular dominance factor, while the preference score did not. Recognition time was faster when the single-lens see-through display was worn on the dominant eye (average 1.12 s) than on the non-dominant eye (average 1.38 s). The error ratio of the information recognition task was significantly lower when the see-through display was worn on the dominant eye (average 4.86%) than on the non-dominant eye (average 14.04%). The interaction effect of ocular dominance and age group was significant with respect to recognition time and error ratio. The recognition time of the users in their 40s was significantly longer than that of the other age groups when the display was placed on the non-dominant eye, while no difference was observed on the dominant eye. The error ratio showed the same pattern. Although no difference was observed for the main effects of ocular dominance and bike usage, the interaction effect between the two variables was significant with respect to preference score. The preference score of daily bike users was higher when the display was placed on the dominant eye, whereas participants who use bikes for leisure purposes showed the opposite preference pattern. It was found more effective and efficient to wear a see-through display on the dominant eye than on the non-dominant eye, although user preference was not affected by ocular dominance. It is recommended to wear a see-through display on the dominant eye since it is safer, helping the user recognize the presented information faster and more accurately, even if the user may not notice the difference.
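
A minimal sketch of the kind of factorial test described above, assuming the trial-level data sit in a pandas DataFrame with hypothetical column names; it fits an ordinary two-way ANOVA with statsmodels and is not the authors' exact analysis pipeline.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical trial-level data: recognition time [s] by display eye and age group.
df = pd.DataFrame({
    "recognition_time": [1.05, 1.10, 1.18, 1.21, 1.30, 1.25,
                         1.33, 1.40, 1.45, 1.48, 1.70, 1.62],
    "display_eye": ["dominant"] * 6 + ["non_dominant"] * 6,
    "age_group": ["20s", "20s", "30s", "30s", "40s", "40s"] * 2,
})

# Two-way ANOVA: main effects of eye and age group plus their interaction.
model = smf.ols("recognition_time ~ C(display_eye) * C(age_group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```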

Keywords: eye tracking, information recognition, ocular dominance, smart headware, wearable device

Procedia PDF Downloads 274
209 Seismic Retrofit of Tall Building Structure with Viscous, Visco-Elastic, Visco-Plastic Damper

Authors: Nicolas Bae, Theodore L. Karavasilis

Abstract:

Increasingly, a large number of new and existing tall buildings are required to improve their resilient performance against strong winds and earthquakes to minimize direct as well as indirect damage to society. Interruption of the essential functions of tall building structures in metropolitan regions can be severely hazardous in socio-economic terms, which further increases the requirement for advanced seismic performance. To achieve these progressive requirements, seismic reinforcement of some old, conventional buildings has become enormously costly. The methods of increasing a building's resilience against wind or earthquake loads have also become more advanced. Up to now, vibration control devices, such as passive damper systems, are still regarded as an effective and easy-to-install option for improving the seismic resilience of buildings at an affordable price. The main purposes of this paper are to examine 1) the optimization of the shape of the visco-plastic brace damper (VPBD) system, one of the hybrid damper systems, so that it can maximize its energy dissipation capacity in tall buildings against wind and earthquakes; and 2) the verification of the seismic performance of the visco-plastic brace damper system in tall buildings, up to forty-storey steel frame buildings, by comparing the results of Non-Linear Response History Analysis (NLRHA) with and without a damper system. The most significant contribution of this research is to introduce an optimized hybrid damper system that is adequate for high-rise buildings. The efficiency of this visco-plastic brace damper system and the advantages of its use in tall buildings can be verified, since tall buildings tend to be affected by wind load in their normal state and by earthquake load after yielding of the steel plates. The modeling of the prototype tall building will be conducted using the OpenSees software. Three types of models were used to verify the performance of the damper (MRF, MRF with visco-elastic damper, MRF with visco-plastic damper). Twenty-two sets of seismic records were used, and the scaling procedure was followed according to the FEMA code. It is shown that the MRF with viscous or visco-elastic dampers is significantly more effective in reducing inelastic deformation, such as roof displacement, maximum story drift and roof velocity, compared to the MRF alone.
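
As a hedged illustration of the viscous devices discussed above, the snippet below evaluates the standard nonlinear viscous damper force law F = C sign(v) |v|^alpha for a hypothetical damper; the coefficient and exponent are invented for illustration and are not the study's design values.

```python
import numpy as np

def viscous_damper_force(velocity, c=1500e3, alpha=0.4):
    """Nonlinear viscous damper force [N]: F = C * sign(v) * |v|^alpha."""
    return c * np.sign(velocity) * np.abs(velocity) ** alpha

# Force at a few relative velocities [m/s] across the damper.
for v in (0.05, 0.25, 0.50):
    print(f"v = {v:.2f} m/s -> F = {viscous_damper_force(v) / 1e3:.0f} kN")
```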

Keywords: tall steel building, seismic retrofit, viscous, viscoelastic damper, performance based design, resilience based design

Procedia PDF Downloads 193
208 The Numerical Model of the Onset of Acoustic Oscillation in Pulse Tube Engine

Authors: Alexander I. Dovgyallo, Evgeniy A. Zinoviev, Svetlana O. Nekrasova

Abstract:

Most works on pulse tube converters describe the workflow through mathematical models of stationary modes. However, the study of the unsteady behavior of thermoacoustic systems during start-up, stop, and changes of acoustic load is of particular interest. The aim of the present study was to develop a mathematical model of the thermal excitation of acoustic oscillations in a pulse tube engine (PTE), implemented as a small-scale pulse tube engine operating on atmospheric air. Unlike some previous works, this standing-wave configuration is a fully closed system. The improvements over previous mathematical models are the following: the model allows specifying any value of porosity for the regenerator, takes into account the piston weight and the friction in the cylinder and piston unit, and determines the operating frequency. The numerical method is based on the relation equations between the pressure and volume velocity variables at the ends of each element of the PTE, which are recorded through the appropriate transformation matrix. A solution demonstrates that the PTE operating frequency is a complex value, and it depends on the piston mass and the dynamic friction due to its movement in the cylinder. On the basis of the determined frequency, the equations of thermoacoustically induced heat transport and acoustic power generation were solved for a channel with a temperature gradient at its ends. The results of the numerical simulation demonstrate the features of the oscillation initialization process and show that the generated acoustic power exceeds the power of the steady mode by a factor of 3-4. However, this does not imply the possibility of its further continuous utilization, since it exists only in a transient mode which lasts only 30-40 s. The experiments were carried out on a small-scale PTE. The results show that the value of acoustic power is in the range of 0.7-1.05 W for the defined frequency range f = 13-18 Hz and pressure amplitudes of 11-12 kPa. These experimental data correlate satisfactorily with the numerical modeling results. The mathematical model can be straightforwardly applied to thermoacoustic devices with variable temperatures of the thermal reservoirs and variable transduction loads, which are expected to occur in practical implementations of portable thermoacoustic engines.
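
A minimal sketch of the transfer-matrix idea described above, assuming a lossless constant-area duct element: the pressure and volume velocity at the two ends are related by a 2x2 matrix, and matrices of consecutive elements multiply. The geometry, gas properties and the omission of regenerator losses are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def duct_matrix(length, area, freq, rho=1.2, c=343.0):
    """2x2 transfer matrix of a lossless duct relating (p, U) at its two ends."""
    k = 2 * np.pi * freq / c            # wavenumber [1/m]
    z0 = rho * c / area                 # characteristic impedance [Pa*s/m^3]
    kl = k * length
    return np.array([[np.cos(kl), 1j * z0 * np.sin(kl)],
                     [1j * np.sin(kl) / z0, np.cos(kl)]])

# Chain two hypothetical elements at 15 Hz and propagate a (pressure, volume velocity) state.
m_total = duct_matrix(0.20, 3e-4, 15.0) @ duct_matrix(0.10, 1e-4, 15.0)
p_in, u_in = 11e3, 0.0                  # 11 kPa pressure amplitude, zero volume velocity
p_out, u_out = m_total @ np.array([p_in, u_in])
print(abs(p_out), abs(u_out))
```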

Keywords: nonlinear processes, pulse tube engine, thermal excitation, standing wave

Procedia PDF Downloads 380
207 Intensity of Dyspnea and Anxiety in Seniors in the Terminal Phase of the Disease

Authors: Mariola Głowacka

Abstract:

Aim: The aim of this study was to present the assessment of dyspnea and anxiety in seniors staying in a hospice, in the context of the nurse's tasks. Materials and methods: The presented research was carried out at the "Hospicjum Płockie" Association of St. Urszula Ledóchowska in Płock, in a stationary ward for adults. The research group consisted of 100 people, women and men. The study used the diagnostic survey method, the estimation method, and analysis of patient records; the research tools were the numerical rating scale (NRS), the modified Borg scale to assess dyspnea, the Trait Anxiety scale to test the intensity of anxiety, and a sociodemographic assessment of the respondent. Results: Among the patients, the greatest number were people without dyspnoea (38 people) and with average levels of dyspnoea (26 people). People with lung cancer had a higher level of breathlessness than people with other cancers. Half of the patients included in the study felt anxiety at a low level. On average, men had a higher level of anxiety than women. Conclusion: 1) Patients staying in the hospice require comprehensive nursing care due to the underlying disease, comorbidities, and the wide range of medications taken, which aggravate the feeling of dyspnea and anxiety. 2) The study showed that in patients staying in the hospice, the level of dyspnea was of varying severity. The greatest number of people were without dyspnea (38) or had a low level of dyspnea (34); 12 people experienced an average level of dyspnea and 15 a high level. 3) The main factor influencing the severity of dyspnea in patients was the location of the cancer. There was no significant relationship between the intensity of dyspnea and the age or gender of the patient, or the time since diagnosis. 4) The study showed that in patients staying in the hospice, the level of anxiety was of varying severity. Most people experienced a low level of anxiety (51). There were 16 people with a high level of anxiety, while 33 people experienced anxiety at an average level. 5) The patient's gender was the main factor influencing the increase in anxiety intensity. Men had higher levels of anxiety than women. There was no significant correlation between the intensity of anxiety and the age of the respondents, the type of cancer, or the time since diagnosis. 6) The intensity of dyspnea depended on the type of cancer the subjects had. People with lung cancer had a higher level of breathlessness than those with breast cancer and bowel cancer. Anxiety was not found to increase depending on the type of cancer or the comorbidities of the examined person.

Keywords: cancer, shortness of breath, anxiety, senior, hospice

Procedia PDF Downloads 96
206 Development of Immersive Virtual Reality System for Planning of Cargo Loading Operations

Authors: Eugene Y. C. Wong, Daniel Y. W. Mo, Cosmo T. Y. Ng, Jessica K. Y. Chan, Leith K. Y. Chan, Henry Y. K. Lau

Abstract:

Real-time planning visualisation, precise allocation and loading optimisation in air cargo load planning operations are increasingly important as more considerations are needed on dangerous cargo loading, locations of lithium batteries, weight declaration and limited aircraft capacity. The planning of unit load devices (ULDs) can often be carried out only in a limited number of hours before flight departure. A dynamic air cargo load planning system is proposed, with optimisation of the cargo load plan and visualisation of the planning results in virtual reality systems. The system aims to optimise cargo load planning and visualise the simulated loading planning decisions for air cargo terminal operations. Adopting simulation tools, Cave Automatic Virtual Environment (CAVE) and virtual reality technologies, the planning results with reference to weight and balance, Unit Load Device (ULD) dimensions, gateway, cargo nature and aircraft capacity are optimised and presented. The virtual reality system facilitates planning, operations, education and training. Staff in terminals are usually trained with a traditional push-approach demonstration and enormous manual paperwork. With the support of the newly customized immersive visualization environment, users can master the complex air cargo load planning techniques in problem-based training with the results being immersively visualised instantly. The virtual reality system is developed with three-dimensional (3D) projectors, screens, workstations, a truss system, 3D glasses, and a demonstration platform and software. The content focuses on the cargo planning and loading operations in an air cargo terminal. The system can assist the decision-making process during cargo load planning in the complex operations of an air cargo terminal. The processes of cargo loading, cargo build-up, security screening, and system monitoring can be further visualised. Scenarios are designed to support and demonstrate the daily operations of the air cargo terminal, including dangerous goods, pets and animals, and some special cargoes.
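
As a hedged illustration of the weight-and-balance check mentioned above, the snippet below computes the combined centre of gravity of a set of loaded ULDs and tests it against an allowable envelope; the ULD weights, arm positions and limits are hypothetical placeholders.

```python
def combined_cg(loads):
    """Total weight [kg] and centre of gravity [m from datum] for (weight, arm) pairs."""
    total_weight = sum(w for w, _ in loads)
    total_moment = sum(w * arm for w, arm in loads)
    return total_weight, total_moment / total_weight

# Hypothetical ULD weights [kg] and longitudinal arms [m], plus an assumed CG envelope.
ulds = [(1800, 12.4), (2450, 18.9), (950, 25.3)]
weight, cg = combined_cg(ulds)
print(f"total {weight} kg, CG at {cg:.2f} m",
      "OK" if 14.0 <= cg <= 20.0 else "out of envelope")
```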

Keywords: air cargo load planning, optimisation, virtual reality, weight and balance, unit load device

Procedia PDF Downloads 350
205 Influence of Strike-Slip Faulting in the Tectonic Evolution of North-Eastern Tunisia

Authors: Aymen Arfaoui, Abdelkader Soumaya, Ali Kadri, Noureddine Ben Ayed

Abstract:

The major contractional events, characterized by strike-slip faulting, folding, and thrusting, occurred in the Eocene, Late Miocene, and Quaternary along the NE Tunisian domain between Bou Kornine-Ressas-Msella and the Cap Bon Peninsula. During the Plio-Quaternary, the Grombalia and Mornag grabens show a maximum of collapse in parallelism with the NNW-SSE SHmax direction and developed as 3rd-order extensive regions within a regional compressional regime. Using available tectonic and geophysical data supplemented by new fault-kinematic observations, we show that Cenozoic deformations are dominated by first-order N-S fault reactivation; this sinistral wrench system is responsible for the formation of strike-slip duplexes, thrusts, folds, and grabens. Based on our new structural interpretation, the major faults of the N-S Axis, Bou Kornine-Ressas-Messella (MRB), and Hammamet-Korbous (HK) form an N-S first-order restraining stepover within a left-lateral strike-slip duplex. The N-S master MRB fault is dominated by contractional imbricate fans, while the parallel HK fault is characterized by trailing extensional imbricate fans. The Eocene and Miocene compression phases in the study area caused sinistral strike-slip reactivation of pre-existing N-S faults, reverse reactivation of NE-SW trending faults, and normal-oblique reactivation of NW-SE faults, creating a NE-SW to N-S trending system of east-verging folds and overlaps. Seismic tomography images reveal a key role for the lithospheric subvertical tear or STEP fault (Slab Transfer Edge Propagator), evidenced below this region, in the development of the MRB and HK relay zone. The presence of extensive syntectonic Pliocene sequences above this crustal-scale fault may be the result of recent lithospheric vertical motion of this STEP fault due to the rollback and lateral eastward migration of the Calabrian slab.

Keywords: Tunisia, strike-slip fault, contractional duplex, tectonic stress, restraining stepover, STEP fault

Procedia PDF Downloads 133
204 Urban Meetings: Graphic Analysis of the Public Space in a Cultural Building from São Paulo

Authors: Thalita Carvalho Martins de Castro, Núbia Bernardi

Abstract:

Current studies show that our cities are portraits of social relations. In the midst of so many segregations, cultural buildings emerge as places to gather collective activities and expressions. Through theatre, exhibitions, educational workshops and libraries, architecture approaches human relations and seeks to propose meeting places. The purpose of this research is to deepen the discussion about the contributions of cultural buildings to the use of spaces in the contemporary city, based on the data and measurements collected in the master's research in progress. The graphic analysis of the insertion of contemporary cultural buildings seeks to highlight the social use of space. The urban insertions of contemporary cultural buildings in the city of São Paulo (Brazil) will be analyzed to understand the relations between the architectural form and its audience. The collected data describe a dynamic of flows and permanence in the use of these spaces, indicating the contribution of cultural buildings, associated with artistic production, to the dynamics of urban spaces and the social modification of their milieu. Among the case studies, the research in development is based on the registration and graphic analysis of the Praça das Artes building (2012), located in the historical central region of the city, which after a long period of great degradation is undergoing redevelopment. The choice of this building was based on four parameters, on both the architectural and the urban scale: urban insertion, local impact, cultural production and a mix of uses. For the analysis, two graphic analysis methodologies will be applied: one with diagrams accompanied by texts, and another with active analysis for open-space projects using complementary graphic methodologies, with maps, plans, infographics, perspectives, time-lapse videos and analytical tables. This research aims to reinforce the debate between methodologies of form-use analysis and visual synthesis applied to cultural buildings, so that new projects can structure public spaces as catalysts for social use, generating improvements in the daily life of their users and in the cities where they are inserted.

Keywords: cultural buildings, design methodologies, graphic analysis, public spaces

Procedia PDF Downloads 308
203 Continuous Improvement of Teaching Quality through Course Evaluation by the Students

Authors: Valerie Follonier, Henrike Hamelmann, Jean-Michel Jullien

Abstract:

The Distance Learning University in Switzerland (UniDistance) offers bachelor and master courses as well as further education programs. The professors and their assistants work at traditional Swiss universities and give their courses at UniDistance following a blended learning and flipped classroom approach. A standardized course evaluation by the students has been established as a component of a quality improvement process. The students' feedback enables the stakeholders to identify areas of improvement, initiate professional development for the teaching teams and thus continuously improve the quality of instruction. This paper describes the evaluation process, the tools involved and how the approach, involving all stakeholders, helps form a culture of quality in teaching. Additionally, it presents the first evaluation results following the new process. Two software tools have been developed to support all stakeholders in the process of the semi-annual formative evaluation. The first tool allows the creation of the survey and its assignment to the relevant courses and students. The second tool presents the results of the evaluation to the stakeholders, providing specific features for the teaching teams, the dean, the directorate and EDUDL+ (Educational Development Unit Distance Learning). The survey items were selected in accordance with the e-learning strategy of the institution and are formulated to support the professional development of the teaching teams. By reviewing the results, the teaching teams become aware of the opinions of the students and are asked to write feedback for the attention of their dean. The dean reviews the results of the faculty and writes a general report about the situation of the faculty and the improvements intended. Finally, EDUDL+ writes a final report summarising the evaluation results. A mechanism of adjustable warnings allows quality indicators to be generated for each module. These are summarised for each faculty, and globally for the whole institution, in order to increase the vigilance of those responsible. The quality process involves changing the indicators regularly to focus on different areas each semester, to facilitate the professional development of the teaching teams and to progressively increase the overall teaching quality of the institution.

Keywords: continuous improvement process, course evaluation, distance learning, software tools, teaching quality

Procedia PDF Downloads 261
202 Antecedents of Teaching Skill for Students’ Psychological Enhancement in University Lecturers

Authors: Duangduen L. Bhanthumnavin, Duchduen E. Bhanthumnavin

Abstract:

The widening gap between new academic knowledge in all areas and students' habits of exploring and exploiting this precious information is cause for alarm and calls for urgent prevention. At present, all advanced nations are committed to the Sustainable Development Goals (SDGs), which require certain objectives to be achieved by the year 2030 and beyond. This responsibility has been placed on university lecturers, in addition to the higher education learning outcomes (HELO). The two groups of goals (SDGs and HELO) can be realized if most university instructors are capable of inculcating important psychological characteristics and behavioral changes in the new generations. Thus, this study aimed at pinpointing the significant factors for additional teaching skills of instructors regardless of the area of study. University lecturers from various parts of Thailand, 540 persons in total, participated in this cross-sectional study. Based on the interactionism model of behavior antecedents, it covers psychological and situational factors, as well as their interaction. Most measuring instruments were summated rating scales with 10 or more items, each accompanied by a six-point rating scale. All these measures were constructed to acceptable standards. Most of the respondents were volunteers who gave their written responses in a meeting room or conference hall. By applying Multiple Regression Analysis to the total sample as well as to subsamples of these university instructors, predictive percentages of about 70 to 73, with 4 to 6 significant predictors, were found. The major dependent variable was the instructor's teaching behavior for inculcating the psycho-moral strength for academic exploration and knowledge application. By performing ANOVA, the less-active instructors were identified as those with lower education (master's level or lower), minimal research production, and fewer in-service trainings. The preventive factors for these three groups of instructors were the intention to increase the students' psychological as well as moral development in their regular teaching classes. In addition, social support from their supervisors and coworkers was also necessary. Recommendations for further research and training are offered and welcomed.

Keywords: psychological inculcation, at-risk instructors, preventive measures, undergraduate teaching

Procedia PDF Downloads 63
201 Evaluation of Antimicrobial Susceptibility Profile of Urinary Tract Infections in Massoud Medical Laboratory: 2018-2021

Authors: Ali Ghorbanipour

Abstract:

The aim of this study is to investigate the drug resistance pattern and the value of the MIC (minimum inhibitory concentration) method in reducing the impact of infectious diseases and slowing the development of resistance. Method: The study was conducted on clinical specimens collected between 2018 and 2021. Identification of isolates and antibiotic susceptibility testing were performed using conventional biochemical tests. Antibiotic resistance was determined using Kirby-Bauer disk diffusion and MIC by E-test methods, compared with the microdilution plate ELISA method. Results were interpreted according to CLSI. Results: Out of 249,600 different clinical specimens, 18,720 different pathogenic bacteria were detected, an overall detection ratio of 7.7%. Among the pathogens were Gram-negative bacteria (70%, n=13,000) and Gram-positive bacteria (30%, n=5,720). The medically relevant Gram-negative bacteria included a multitude of species such as E. coli, Klebsiella spp., Pseudomonas aeruginosa, Acinetobacter spp. and Enterobacter spp., and the Gram-positive bacteria Staphylococcus spp., Enterococcus spp. and Streptococcus spp. were isolated. Conclusion: Our results highlight that the resistance ratio among Gram-negative and Gram-positive bacteria across different infections is high. This suggests constant screening and follow-up programs for the detection of antibiotic resistance, and underlines the value of MIC drug susceptibility reporting, which provides a new way to use resistant antibiotics in combination with other antibiotics, or an accurate weighting of antibiotics that inhibit or kill bacteria. Evaluation of the role of wrong medication in the expansion of resistance and of the side effects of antibiotic overuse are further goals. Ali Ghorbanipour is presently working as a supervisor in the microbiology department of the Massoud medical laboratory, Iran. Earlier, he worked as head of the department of pulmonary infection in Firoozgar hospital, Iran. He received a master's degree in 2012 from Fergusson College. His prime research objective is biologic wound dressing. To his credit, he has published 10 articles in various international congresses by presenting posters.
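
A minimal sketch of how MIC values can be turned into susceptible/intermediate/resistant calls against breakpoint tables of the kind published by CLSI, and how a resistance ratio can then be tallied; the breakpoint numbers and isolates below are hypothetical placeholders, not actual CLSI values.

```python
# Hypothetical breakpoints (mg/L): (susceptible_max, resistant_min) per drug.
BREAKPOINTS = {"ciprofloxacin": (0.25, 1.0), "gentamicin": (2.0, 8.0)}

def interpret_mic(drug, mic):
    """Classify an isolate as S/I/R from its MIC and the drug's breakpoints."""
    s_max, r_min = BREAKPOINTS[drug]
    if mic <= s_max:
        return "S"
    if mic >= r_min:
        return "R"
    return "I"

isolates = [("E. coli", "ciprofloxacin", 0.125), ("Klebsiella spp.", "gentamicin", 16.0)]
calls = []
for organism, drug, mic in isolates:
    call = interpret_mic(drug, mic)
    calls.append(call)
    print(organism, drug, mic, "->", call)

print("resistant fraction:", calls.count("R") / len(calls))
```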

Keywords: antimicrobial profile, MIC & MBC Method, microplate antimicrobial assay, E-test

Procedia PDF Downloads 135
200 Strategy and Mechanism for Intercepting Unpredictable Moving Targets in the Blue-Tailed Damselfly (Ischnura elegans)

Authors: Ziv Kassner, Gal Ribak

Abstract:

Members of the order Odonata (dragonflies and damselflies) stand out for their maneuverability and superb flight control, which allow them to catch flying prey in the air. These outstanding aerial abilities were fine-tuned during millions of years of an evolutionary arms race between Odonata and their prey, providing an attractive research model for studying the relationship between sensory input and aerodynamic output in a flying insect. The ability to catch a maneuvering target in the air is interesting not just for insect behavioral ecology and neuroethology but also for designing small and efficient robotic air vehicles. While aerial prey interception by dragonflies (suborder: Anisoptera) has been studied before, little is known about how damselflies (suborder: Zygoptera) intercept prey. Here, high-speed cameras (filming at 1000 frames per second) were used to explore how damselflies catch unpredictable targets that move through the air. Blue-tailed damselflies, Ischnura elegans (family: Coenagrionidae), were introduced to a flight arena and filmed while landing on moving targets that were oscillated harmonically. The insects succeeded in capturing targets that were moved with an amplitude of 6 cm and frequencies of 0-2.5 Hz (fastest mean target speed of 0.3 m s⁻¹) and targets that were moved at 1 Hz (an average speed of 0.3 m s⁻¹) but with an amplitude of 15 cm. To land on stationary or slow targets, damselflies either flew directly to the target or flew sideways up to a point at which the target was fixed in the center of the field of view, followed by a direct flight path towards the target. As the targets moved at increasing frequency, damselflies demonstrated an ability to track them while flying sideways and minimizing changes of body direction about the yaw axis. This was likely an attempt to keep the targets at the center of the visual field while minimizing rotational optic flow of the surrounding visual panorama. Stabilizing rotational optic flow helps in the estimation of the velocity and distance of the target. These results illustrate how dynamic visual information is used by damselflies to guide them towards a maneuvering target, enabling the superb aerial hunting abilities of these insects. They also exemplify the plasticity of the damselfly flight apparatus, which enables flight in any direction, irrespective of the direction of the body.

Keywords: bio-mechanics, insect flight, target fixation, tracking and interception

Procedia PDF Downloads 158
199 CyberSteer: Cyber-Human Approach for Safely Shaping Autonomous Robotic Behavior to Comply with Human Intention

Authors: Vinicius G. Goecks, Gregory M. Gremillion, William D. Nothwang

Abstract:

Modern approaches to training intelligent agents rely on prolonged training sessions, high amounts of input data, and multiple interactions with the environment. This restricts the application of these learning algorithms in robotics and real-world applications, in which there is low tolerance for inadequate actions, interactions are expensive, and real-time processing and action are required. This paper addresses this issue by introducing CyberSteer, a novel approach to efficiently design intrinsic reward functions based on human intention to guide deep reinforcement learning agents with no environment-dependent rewards. CyberSteer uses non-expert human operators for an initial demonstration of a given task or desired behavior. The collected trajectories are used to train a behavior cloning deep neural network that runs asynchronously in the background and suggests actions to the deep reinforcement learning module. An intrinsic reward is computed based on the similarity between the actions suggested and those taken by the deep reinforcement learning algorithm commanding the agent. This intrinsic reward can also be reshaped through additional human demonstration or critique. This approach removes the need for environment-dependent or hand-engineered rewards while still being able to safely shape the behavior of autonomous robotic agents, in this case based on human intention. CyberSteer is tested in a high-fidelity unmanned aerial vehicle simulation environment, Microsoft AirSim. The simulated aerial robot performs collision avoidance through a clustered forest environment using forward-looking depth sensing and roll, pitch, and yaw reference angle commands to the flight controller. This approach shows that the behavior of robotic systems can be shaped in a reduced amount of time when guided by a non-expert human who is only aware of the high-level goals of the task. Decreasing the amount of training time required and increasing safety during training maneuvers will allow for faster deployment of intelligent robotic agents in dynamic real-world applications.
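
A minimal sketch of the intrinsic-reward idea described above, assuming continuous action vectors: the reward is a similarity score (here a negative exponential of the Euclidean distance) between the action suggested by the behavior cloning network and the action taken by the RL agent. The similarity function and its scale are illustrative assumptions, not the exact formulation used in CyberSteer.

```python
import numpy as np

def intrinsic_reward(action_rl, action_bc, scale=1.0):
    """Reward in (0, 1]: 1 when the RL action matches the behavior-cloned suggestion."""
    distance = np.linalg.norm(np.asarray(action_rl) - np.asarray(action_bc))
    return float(np.exp(-scale * distance))

# Hypothetical roll/pitch/yaw reference commands (normalized to [-1, 1]).
suggested = [0.10, -0.25, 0.00]   # proposed by the behavior cloning network
taken = [0.05, -0.20, 0.10]       # chosen by the deep RL policy
print(intrinsic_reward(taken, suggested))
```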

Keywords: human-robot interaction, intelligent robots, robot learning, semisupervised learning, unmanned aerial vehicles

Procedia PDF Downloads 260
198 From Achilles to Chris Kyle: Militarized Masculinity and Hollywood in the Post-9/11 Era

Authors: Mary M. Park

Abstract:

Hollywood has had a long and enduring history of showcasing the United States military to civilian audiences, and the portrayals of soldiers in films have had a definite impact on the civilian perception of the US military. The growing gap between the civilian population and the military in the US has allowed certain stereotypes of military personnel to proliferate, especially in the area of militarized masculinity, which has often been harmful to the psychological and spiritual wellbeing of military personnel. Examining Hollywood's portrayal of soldiers can enhance our understanding of how civilians may be influenced in their perception of military personnel. Moreover, it can provide clues as to how male military personnel may also be influenced by Hollywood films as they form their own military identity. The post-9/11 era has seen numerous high-budget films lionizing a particular type of soldier, the 'warrior-hero', who adheres to a traditional form of hegemonic masculinity and exhibits traits such as physical strength, bravery, stoicism, and an eagerness to fight. This paper examines how the portrayal of the 'warrior-hero' perpetuates a negative stereotype that soldiers are a blend of superheroes and emotionless robots and, therefore, inherently different from civilians. The paper examines the portrayal of militarized masculinity in three of the most successful war films of the post-9/11 era: Black Hawk Down (2001), The Hurt Locker (2008), and American Sniper (2014). The characters and experiences of the soldiers depicted in these films are contrasted with the lived experiences of soldiers during the Iraq and Afghanistan wars. Further, there is an analysis of popular films depicting ancient warriors, such as Troy (2004) and 300 (2007), which were released during the early years of the War on Terror. The paper draws on the concept of hegemonic militarized masculinity developed by leading scholars and on feminist international relations theories of militarized masculinity. It uses veteran testimonies collected from a range of public sources, as well as previous studies on the link between traditional masculinity and war-related mental illness. The paper concludes that the seemingly exclusive portrayal of soldiers as 'warrior-heroes' in films of the post-9/11 era is misleading and damaging to civil-military relations and that the reality of most soldiers' experiences is neglected in Hollywood films. As civilians often believe they are being shown true depictions of the US military in Hollywood films, especially in films that portray real events, it is important to identify the differences between the idealized fictional 'warrior-heroes' and the reality of the soldiers on the ground in the War on Terror.

Keywords: civil-military relations, gender studies, militarized masculinity, social psychology

Procedia PDF Downloads 126
197 Frequency Domain Decomposition, Stochastic Subspace Identification and Continuous Wavelet Transform for Operational Modal Analysis of Three Story Steel Frame

Authors: Ardalan Sabamehr, Ashutosh Bagchi

Abstract:

Recently, Structural Health Monitoring (SHM) based on the vibration of structures has attracted the attention of researchers in different fields such as civil, aeronautical, and mechanical engineering. Operational Modal Analysis (OMA) methods have been developed to identify the modal properties of infrastructure such as bridges and buildings. Frequency Domain Decomposition (FDD), Stochastic Subspace Identification (SSI), and Continuous Wavelet Transform (CWT) are the three most common methods in output-only modal identification. FDD, SSI, and CWT operate in the frequency domain, the time domain, and the time-frequency plane, respectively; FDD and SSI therefore cannot represent time and frequency simultaneously. Moreover, FDD and SSI have difficulties in noisy environments and in identifying closely spaced modes. The CWT technique, by contrast, works on the time-frequency plane and offers reasonable performance under such conditions. A further advantage of the wavelet transform over the other techniques is that it can also be applied to non-stationary signals. The aim of this paper is to compare these three modal identification techniques in finding the modal properties (natural frequencies, mode shapes, and damping ratios) of a three-story steel frame built in the Concordia University laboratory, using ambient vibration. The frame is made of galvanized steel, 60 cm long, 27 cm wide, and 133 cm high, with no bracing along either the long or the short span. Three uniaxial wired accelerometers (MicroStrain, 100 mV/g sensitivity) were attached to the middle of each floor; a gateway received the data and sent them to a PC using the Node Commander software. Real-time monitoring was performed for 20 seconds at a 512 Hz sampling rate. The test was repeated five times in each direction, with excitation by hand shaking and by impact hammer. CWT can detect the instantaneous frequency by means of ridge detection. In this paper, a partial-derivative ridge detection technique is applied to the local maxima of the time-frequency plane to detect the instantaneous frequency. The results extracted from all three methods are compared, demonstrating that CWT performs best in terms of accuracy in a noisy environment. Modal parameters such as natural frequencies, damping ratios, and mode shapes are identified by all three methods.
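
To make the frequency-domain step concrete, the following minimal Python sketch (added here as an illustration, with synthetic two-channel data standing in for the measured floor accelerations) shows the core of FDD: estimate the cross power spectral density (CPSD) matrix at each frequency line and take its singular value decomposition; peaks of the first singular value indicate natural frequencies, and the corresponding singular vectors approximate the mode shapes.

```python
import numpy as np
from scipy import signal

fs = 512                                    # sampling rate used in the test (Hz)
t = np.arange(0, 20, 1 / fs)                # 20 s record
# Synthetic stand-in for two floor accelerations sharing a 4 Hz mode plus noise.
x1 = np.sin(2 * np.pi * 4 * t) + 0.3 * np.random.randn(t.size)
x2 = 0.8 * np.sin(2 * np.pi * 4 * t + 0.1) + 0.3 * np.random.randn(t.size)
channels = [x1, x2]

# Cross power spectral density matrix G(f) for every frequency line.
n_ch = len(channels)
f, _ = signal.csd(channels[0], channels[0], fs=fs, nperseg=1024)
G = np.zeros((f.size, n_ch, n_ch), dtype=complex)
for i in range(n_ch):
    for j in range(n_ch):
        _, G[:, i, j] = signal.csd(channels[i], channels[j], fs=fs, nperseg=1024)

# FDD: SVD of G at each frequency; the first singular value peaks at the modes.
s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(f.size)])
peak = f[np.argmax(s1)]
print(f"dominant frequency ~ {peak:.2f} Hz")   # ~4 Hz for this synthetic data
```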

Keywords: ambient vibration, frequency domain decomposition, stochastic subspace identification, continuous wavelet transform

Procedia PDF Downloads 297
196 Extracting Opinions from Big Data of Indonesian Customer Reviews Using Hadoop MapReduce

Authors: Veronica S. Moertini, Vinsensius Kevin, Gede Karya

Abstract:

Customer reviews have been collected by many kinds of e-commerce websites selling products, services, hotel rooms, tickets, and so on. Each website collects its own customer reviews. The reviews can be crawled from those websites and stored as big data. Text analysis techniques can then be used to produce summarized information, such as customer opinions. These opinions can be published by independent service-provider websites and used to help customers choose the most suitable products or services. As the opinions are mined from big data of reviews originating from many websites, the results are expected to be more trusted and accurate. Indonesian customers write reviews in the Indonesian language, which has its own structure and peculiarities. We found that most of the reviews are expressed in "daily language", which is informal, does not follow correct grammar, and contains many abbreviations, slang, and other non-formal words. Hadoop is an emerging platform aimed at storing and analyzing big data in distributed systems. A Hadoop cluster consists of master and slave nodes/computers operating in a network. Hadoop comes with the Hadoop Distributed File System (HDFS) and the MapReduce framework for supporting parallel computation. However, MapReduce is inefficient for iterative computations; specifically, the cost of reading and writing data (I/O cost) is high. Given this fact, we conclude that MapReduce is best suited to "one-pass" computation. In this research, we develop an efficient technique for extracting or mining opinions from big data of Indonesian reviews, based on MapReduce with one-pass computation. In designing the algorithm, we avoid iterative computation and instead adopt a "look-up table" technique. The stages of the proposed technique are: (1) crawling the review data from websites; (2) cleaning the raw reviews and finding root words; (3) computing the frequency of the meaningful opinion words; (4) analyzing customers' sentiments towards defined objects. The experiments for evaluating the performance of the technique were conducted on a Hadoop cluster with 14 slave nodes. The results show that the proposed technique (stages 2 to 4) discovers useful opinions, processes big data efficiently, and is scalable.
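
The one-pass, look-up-table idea can be sketched in a few lines of Python. The snippet below is an illustration only, not the authors' Hadoop code: the root-word and opinion-word tables are tiny placeholders for what stage 2 of the pipeline would produce, and the map/reduce functions are run in-process rather than on a cluster. Each map call normalizes tokens through the look-up table and emits (opinion word, 1) pairs; a single reduce pass sums the counts, so no iteration over the data is needed.

```python
from collections import defaultdict

# Placeholder look-up tables; the real ones would hold Indonesian root words
# and meaningful opinion words derived in stage 2 of the proposed pipeline.
ROOT_WORDS = {"bagus": "bagus", "bgs": "bagus", "jelek": "jelek", "jlk": "jelek"}
OPINION_WORDS = {"bagus": "positive", "jelek": "negative"}

def map_phase(review):
    """Map: normalize each token to its root word and emit (word, 1)."""
    for token in review.lower().split():
        root = ROOT_WORDS.get(token)
        if root in OPINION_WORDS:
            yield root, 1

def reduce_phase(pairs):
    """Reduce: sum counts per opinion word in a single pass (no iteration)."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

reviews = ["hotel bgs dan murah", "kamar jelek", "pelayanan bagus"]
pairs = [kv for r in reviews for kv in map_phase(r)]
print(reduce_phase(pairs))   # {'bagus': 2, 'jelek': 1}
```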

Keywords: big data analysis, Hadoop MapReduce, analyzing text data, mining Indonesian reviews

Procedia PDF Downloads 201
195 The Human Rights Code: Fundamental Rights as the Basis of Human-Robot Coexistence

Authors: Gergely G. Karacsony

Abstract:

Fundamental rights are the result of thousands of years of progress in legislation, adjudication, and legal practice. They serve as the framework for the peaceful cohabitation of people, protecting the individual from any abuse by the government or violation by other people. Artificial intelligence, however, is a development of the very recent past and one of the most important prospects for the future. Artificial intelligence is now capable of communicating and performing actions in the same way as humans; such acts are sometimes impossible to tell from actions performed by flesh-and-blood people. In a world where human-robot interactions are more and more common, a new framework of peaceful cohabitation is to be found. Artificial intelligence, being able to take part, without being recognized as a non-human actor, in almost any kind of interaction where personal presence is not necessary, is now able to break the law, violate people's rights, and disturb social peace in many other ways. Therefore, a code of peaceful coexistence is to be found or created. We should consider whether human rights can serve as the code of ethical and rightful conduct in the new era of artificial intelligence and human coexistence. In this paper, we examine the applicability of fundamental rights to human-robot interactions as well as to actions of artificial intelligence performed without any human interaction whatsoever. Robot ethics was a topic of discussion and debate in philosophy, ethics, computing, legal sciences, and science fiction writing long before the first functional artificial intelligence was introduced. Legal science and legislation have approached artificial intelligence from different angles, regulating different areas (e.g., data protection, telecommunications, copyright issues), but they are only chipping away at the mountain of legal issues concerning robotics. For a widely acceptable and permanent solution, a more general set of rules would be preferable to the detailed regulation of specific issues. We argue that human rights, as recognized worldwide, can be adapted to serve as a guideline and a common basis for the coexistence of robots and humans. This solution has many virtues: people do not need to adjust to a completely unknown set of standards, the system has proved itself able to withstand the trials of time, legislation is easier, and the actions of non-human entities are more easily adjudicated within their own framework. In this paper, we examine the system of fundamental rights (as defined in the most widely accepted source, the 1966 UN Convention on Human Rights) and try to adapt each individual right to the actions of artificial intelligence actors; in each case, we examine the possible effects of such an approach on the legal system and society, and finally its effect on the IT industry.

Keywords: human rights, robot ethics, artificial intelligence and law, human-robot interaction

Procedia PDF Downloads 245
194 Of Digital Games and Dignity: Rationalizing E-Sports Amidst Stereotypes Associated with Gamers

Authors: Sarthak Mohapatra, Ajith Babu, Shyam Prasad Ghosh

Abstract:

The community of gamers has been at the crux of stigmatization and marginalization by the larger society, resulting in dignity erosion. India presents a unique context where e-sports have recently seen large-scale investment, a massive user base, and appreciable demand for gaming as a career option. Yet apprehension towards gaming is salient among parents and non-gamers, who engage in the de-dignification of gamers by advocating the discourse that video games promote violence. Even the government has been relentless in banning games over data privacy issues. Thus, the current study explores the experiences of gamers and how they navigate these de-dignifying circumstances. The study follows an exploratory qualitative approach in which in-depth interviews, guided by a semi-structured questionnaire, are used as the data collection tool. A total of 25 individuals were interviewed, comprising casual gamers, professional gamers, and individuals who are indirectly affected by gaming, including parents, relatives, and friends of gamers. Thematic analysis via three-level coding is used to arrive at broad themes (categories) and their sub-themes. The results indicate that the de-dignification of gamers results from attaching stereotypes of introversion, aggression, low intelligence, and low aspirations to them. It is interesting to note that the intensity of de-dignification varies and is more salient for violent shooting games, which are perceived to require few cognitive resources to master. The moral disengagement of gamers while playing violent video games becomes the basis for de-dignification. Findings reveal that circumventing de-dignification required gamers to engage in several tactics, including playing behind closed doors, consciously hiding their gamer identity, rationalizing their behavior by idolizing professionals, bragging about achievements within the game, and so on. Theoretically, the study contributes to the dignity and social identity literature by focusing on stereotyping and stigmatization. From a policy perspective, improving the legitimacy of gaming is expected to improve the social standing of gamers and professionals. For practitioners, it is important that proper channels of promotion and communication are used to educate non-gamers so that the stereotypes fade away.

Keywords: dignity, social identity, stereotyping, video games

Procedia PDF Downloads 103
193 An Exploratory Study on the Impact of Climate Change on Design Rainfalls in the State of Qatar

Authors: Abdullah Al Mamoon, Niels E. Joergensen, Ataur Rahman, Hassan Qasem

Abstract:

The Intergovernmental Panel on Climate Change (IPCC), in its Fourth Assessment Report (AR4), predicts a more extreme climate towards the end of the century, which is likely to affect the design of engineering infrastructure projects with a long design life. A recent study in 2013 developed new design rainfalls for Qatar, which provide an improved design basis for drainage infrastructure in the State of Qatar under the current climate. The current design standards in Qatar do not consider increased rainfall intensity caused by climate change. The focus of this paper is to update the recently developed design rainfalls for Qatar under changing climatic conditions based on IPCC's AR4, allowing a later revision of the proposed design standards relevant for projects with a longer design life. The future climate has been investigated using the climate models released for IPCC's AR4 and the A2 storyline of the Special Report on Emissions Scenarios (SRES), with a stationary approach. Annual maximum series (AMS) of predicted 24-hour rainfall for both the wet (NCAR-CCSM) and dry (CSIRO-MK3.5) scenarios were extracted at the Qatari grid points of the climate models for three periods: current climate (2010-2039), medium-term climate (2040-2069), and end-of-century climate (2070-2099). A homogeneous region covering the Qatari grid points was formed, and an L-moments-based regional frequency approach was adopted to derive design rainfalls. The results indicate no significant changes in the design rainfall for 2040-2069, but significant changes are expected towards the end of the century (2070-2099). New design rainfalls have been developed taking climate change into account for the 2070-2099 period, averaging the results from the two scenarios. IPCC's AR4 predicts that the rainfall intensity for a 5-year return period event with a duration of 1 to 2 hours will increase by 11% in 2070-2099 compared to the current climate. Similarly, the intensity of more extreme rainfall, with a return period of 100 years and a duration of 1 to 2 hours, will increase by 71% in 2070-2099 compared to the current climate. Infrastructure with a design life exceeding 60 years should incorporate safety factors that take the predicted effects of climate change into due consideration.
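
The core frequency-analysis step can be sketched briefly. The Python snippet below is an illustration only: it fits a GEV distribution to a synthetic annual maximum series by maximum likelihood using scipy, whereas the study itself pools the Qatari grid points into a homogeneous region and fits with L-moments; the rainfall values are invented placeholders, not data from the climate models.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic annual maximum 24-hour rainfalls (mm) standing in for one grid point.
rng = np.random.default_rng(0)
ams = genextreme.rvs(c=-0.1, loc=45, scale=15, size=30, random_state=rng)

# Fit a GEV by maximum likelihood (the paper instead uses a regional L-moments fit).
c, loc, scale = genextreme.fit(ams)

# Design rainfall = quantile with annual exceedance probability 1/T.
for T in (5, 25, 100):
    depth = genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
    print(f"{T:>3}-year 24-hour design rainfall ~ {depth:.1f} mm")
```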

Keywords: climate change, design rainfalls, IDF, Qatar

Procedia PDF Downloads 396
192 Critical Evaluation of the Transformative Potential of Artificial Intelligence in Law: A Focus on the Judicial System

Authors: Abisha Isaac Mohanlal

Abstract:

Amidst all the suspicion and cynicism raised by the legal fraternity, Artificial Intelligence has found its way into the legal system and has revolutionized the conventional forms of legal services delivery. Be it legal argumentation and research or the resolution of complex legal disputes, artificial intelligence has crept into all areas of modern-day legal services. Its impact has been largely felt by way of big data, legal expert systems, prediction tools, e-lawyering, automated mediation, etc., and lawyers around the world are forced to upgrade themselves and their firms to stay in line with the growth of technology in law. Researchers predict that the future of legal services will belong to artificial intelligence and that the age of human lawyers will soon pass. But as far as the judiciary is concerned, even in developed countries, the system has not fully drifted away from the orthodoxy of preferring natural intelligence over artificial intelligence. Since judicial decision-making involves many unstructured and often unprecedented situations with no single correct answer, and since looming questions of legal interpretation arise in most cases, discretion and emotional intelligence play an unavoidable role. Added to that, there are several ethical, moral, and policy issues to be confronted before permitting the intrusion of artificial intelligence into the judicial system. As of today, the human judge is the unrivalled master of most judicial systems around the globe. Yet artificial intelligence researchers claim that robot judges can replace human judges, however daunting the complexity of the issues and however sophisticated the cognitive competence required. They go on to contend that even if the system is too rigid to allow robot judges to substitute for human judges in the near future, artificial intelligence may still aid in other judicial tasks such as drafting judicial documents, intelligent document assembly, and case retrieval, and also promote overall flexibility, efficiency, and accuracy in the disposal of cases. By deconstructing the major challenges that artificial intelligence has to overcome in order to successfully enter the human-dominated judicial sphere, and by critically evaluating the potential differences it would make to the system of justice delivery, the author argues that the penetration of artificial intelligence into the judiciary could be enhancive and reparative, if not fully transformative.

Keywords: artificial intelligence, judicial decision making, judicial systems, legal services delivery

Procedia PDF Downloads 226