Search results for: computing methodologies
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2029

889 Constructions of Teaching English as a Second Language Teacher Trainees’ Professional Identities

Authors: K. S. Kan

Abstract:

The main purpose of this paper is to deepen the current understanding of how a Teaching English as a Second Language (TESL) teacher trainee's professional self is constructed. The present aim of Malaysian TESL teacher education is to train teacher trainees in established English Language Teaching methodologies for the four main language skills (listening, reading, writing and speaking), apart from developing them holistically. Therefore, it is crucial to learn more about how these teacher trainees construct their professional selves during their undergraduate years. The participants come from a class of 17 Semester 6 TESL students who had undergone a 3-month practicum during their fifth semester and were to begin their final 3-month practicum from July 2018 onwards. Findings from a survey, interviews with the participants and lecturers, and documentation such as the participants' practicum record books are consolidated with the supervisory notes and comments. The findings suggest that these teacher trainees negotiate identities and emotions that interact with socio-cultural factors, and that periodic reflection on their practicum practice drives transformation. The findings will be further aligned with the courses that these teacher trainees have to take in order to equip them as future second language practitioners. It is hoped that the findings will help fill the gap in understanding identity construction from the teacher trainees' perspectives. This study is all the more significant in view of the new English Language Curriculum for Primary Schools (widely known as KSSR, its Malay acronym), which was recently introduced and implemented in Malaysian primary schools. 
This research will benefit second language practitioners in the language education field, as well as TESL undergraduates, by showing how teacher trainees respond to and negotiate their professional teaching identities as future second language educators.

Keywords: construction of selves, professional identities, second language, TESL teacher trainees

Procedia PDF Downloads 218
888 Aesthetic Preference and Consciousness in African Theatre: A Performance Appraisal of Tyrone Terrence's a Husband's Wife

Authors: Oluwatayo Isijola

Abstract:

The destructive influence of Europe on Africa has also taken a toll on the aesthetic essence of African Art, which centres on morality and the value of human life. In a parallel vein, the adverse turn of this influence on the dramaturgy of some contemporary African plays impedes audience consciousness in performance engagements. Through the spectrum of African aesthetics, this study attempts a performance appraisal of A Husband's Wife, an unpublished play written by Tyrone Terence for the African audience. The researcher proffers two variant textual interpretations of the play: one that evaluates performance engagement in its default realistic mode, which holds an unresolved 'Medean impulse', and another in which the resolution is treated to a paradigm shift for aesthetic preference. The investigation employs the mixed method, combining quantitative and qualitative methodologies. Keen observation of the reactions and responses of audience members engaged in both performances, and on-the-spot interviews with selected audience members, were the primary sources of the qualitative data. Quantitative data were captured in an on-the-spot survey using a questionnaire served to a sample population of the audience. The study observes that the preference for African aesthetics, as exemplified in the second performance with its paradigm shift, did enhance audience consciousness. Hinging on performance aesthetic theory, the paper recommends that African plays with such shortcomings in African aesthetics be appropriately treated to paradigm shifts in performance engagement, in the interest of enhancing audience consciousness in the Nigerian Theatre.

Keywords: African aesthetics, audience consciousness, paradigm shift, Medean impulse

Procedia PDF Downloads 317
887 Neutron Irradiated Austenitic Stainless Steels: An Applied Methodology for Nanoindentation and Transmission Electron Microscopy Studies

Authors: P. Bublíkova, P. Halodova, H. K. Namburi, J. Stodolna, J. Duchon, O. Libera

Abstract:

Neutron radiation-induced microstructural changes cause degradation of mechanical properties and reduce the lifetime of reactor internals during nuclear power plant operation. Investigating the effects of neutron irradiation on the mechanical properties of irradiated material (hardening, embrittlement) is challenging and time-consuming. Although the fast neutron spectrum has the major influence on microstructural properties, the thermal neutron effect is also widely investigated, owing to the Irradiation-Assisted Stress Corrosion Cracking first observed in BWR stainless steels. In this study, 300-series austenitic stainless steels used as material for NPPs' internals were examined after neutron irradiation to ~15 dpa. Although several experimental publications describe nanoindentation for determining the mechanical properties of ion-irradiated materials, less is available on neutron-irradiated materials at high dpa tested in hot cells. In this work, we present a methodology developed to determine the mechanical properties of neutron-irradiated steels by the nanoindentation technique. Furthermore, radiation-induced damage in the specimens was investigated by High-Resolution Transmission Electron Microscopy (HR-TEM), which showed the defect features, particularly Frank loops, cavity microstructure, radiation-induced precipitates and radiation-induced segregation. The results of the nanoindentation measurements and the associated nanoscale defect features showed the effect of irradiation-induced hardening. We also propose methodologies to optimize sample preparation for nanoindentation and microstructural studies.

Keywords: nanoindentation, thermal neutrons, radiation hardening, transmission electron microscopy

Procedia PDF Downloads 150
886 A Resource Optimization Strategy for CPU (Central Processing Unit) Intensive Applications

Authors: Junjie Peng, Jinbao Chen, Shuai Kong, Danxu Liu

Abstract:

Under traditional resource allocation strategies, the usage of resources on physical servers in a cloud data center is highly uncertain: assigning too few tasks wastes resources, while assigning too many causes overload. This is especially pronounced when the applications are of the same type, because of their shared resource preferences. Considering that CPU intensive applications are among the most common application types in the cloud, we studied an optimization strategy for CPU intensive applications on the same server. We used resource preferences to analyze the case in which multiple CPU intensive applications run simultaneously, and put forward a model that predicts the execution time of CPU intensive applications running simultaneously. Based on the prediction model, we propose a method to select the appropriate number of applications for a machine. Experiments show that the model predicts the execution time of CPU intensive applications accurately. To improve the execution efficiency of applications, we further propose a priority-based scheduling model for CPU intensive applications. Extensive experiments verify the validity of the scheduling model.
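As an illustration of the prediction-plus-selection logic described above, the sketch below uses a hypothetical linear interference model; the abstract does not give the paper's fitted model, so the functional form and the `overhead` parameter are assumptions.

```python
def predict_runtime(solo_time, n_apps, n_cores, overhead=0.05):
    """Predict the runtime of one CPU-bound app when n_apps identical apps
    share a machine with n_cores cores (illustrative model, not the paper's)."""
    contention = 1.0 + overhead * (n_apps - 1)   # interference grows with co-runners
    slowdown = max(1.0, n_apps / n_cores)        # time-slicing once cores are oversubscribed
    return solo_time * slowdown * contention

def max_apps_within_deadline(solo_time, n_cores, deadline, overhead=0.05):
    """Largest number of co-scheduled apps whose predicted runtime meets the deadline."""
    n = 0
    while predict_runtime(solo_time, n + 1, n_cores, overhead) <= deadline:
        n += 1
    return n
```

For example, on a 4-core machine a 10-second job co-located with seven copies of itself is predicted to take 10 × 2 × 1.35 = 27 seconds under these assumed parameters.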

Keywords: cloud computing, CPU intensive applications, resource optimization, strategy

Procedia PDF Downloads 271
885 A Genre Analysis of University Lectures

Authors: Lee Kok Yueh, Fatin Hamadah Rahman, David Hassell, Au Thien Wan

Abstract:

This work reports on a genre-based study of lectures at a university in Brunei, Universiti Teknologi Brunei, to explore their communicative functions and to gain insight into the discourse. It explores these in three different domains: Social Science, Engineering and Computing. Audio recordings from four lecturers comprising 20 lectures were transcribed and analysed, with the duration of each lecture varying between 20 and 90 minutes. This qualitative study found patterns and functions of lectures similar to those found in existing research, which include greetings, housekeeping, and recapping of previous lectures in the lecture introductions. In the lecture content, comprehension checks and the use of examples or analogies are very prevalent. However, the use of examples largely depends on the lecture content; the more technical the content, the harder it was for lecturers to provide examples or analogies. Three functional moves are identified in the lecture conclusions: announcement, summary and future plan, all of which are optional. Despite the relatively small sample size, the present study shows that lectures are interactive and that there are some consistencies in the delivery of lectures in relation to their communicative functions and genre.

Keywords: communicative functions, genre analysis, higher education, lectures

Procedia PDF Downloads 183
884 Design and Development of Data Mining Application for Medical Centers in Remote Areas

Authors: Grace Omowunmi Soyebi

Abstract:

Data mining is the extraction of information from a large database that helps in predicting a trend or behavior, thereby helping management make knowledge-driven decisions. One principal problem of most hospitals in rural areas is their reliance on a paper file management system for keeping records. A lot of time is wasted when a patient visits the hospital, perhaps in an emergency, and the nurse or attendant has to search through voluminous files before the patient's file can be retrieved; the delay may cause an unexpected event to befall the patient. This data mining application is to be designed using the Structured Systems Analysis and Design method, which supports a well-articulated analysis of the existing file management system, a feasibility study, and proper documentation of the design and implementation of a computerized medical record system. The computerized system will replace the file management system and make it easy to retrieve a patient's record, with increased data security, clinical records accessible for decision-making, and a reduction in the time it takes for a patient to be attended to.

Keywords: data mining, medical record system, systems programming, computing

Procedia PDF Downloads 197
883 Scientific Development as Diffusion on a Social Network: An Empirical Case Study

Authors: Anna Keuchenius

Abstract:

Broadly speaking, scientific development is studied either in a qualitative manner, with a focus on the behavior and interpretations of academics (as in the sociology of science and science studies), or in a quantitative manner, with a focus on the analysis of publications (as in scientometrics and bibliometrics). The two come with different sets of methodologies and few cross-references. This paper contributes to bridging this divide by approaching the process of scientific progress from a qualitative sociological angle on the one hand, while employing quantitative and computational techniques on the other. As a case study, we analyze the diffusion of Granovetter's hypothesis from his 1973 paper 'The Strength of Weak Ties.' A network is constructed of all scientists who have referenced this particular paper, with directed edges to all other researchers who are concurrently referenced with Granovetter's 1973 paper. Studying the structure and growth of this network over time, we find that Granovetter's hypothesis is used by distinct communities of scientists, each with their own key narrative into which the hypothesis is fit. The diffusion within the communities shares similarities with the diffusion of an innovation, in which innovators, early adopters, and early and late majorities can clearly be distinguished. Furthermore, the network structure shows that each community is clustered around one or a few hub scientists who are disproportionately often referenced and seem largely responsible for carrying the hypothesis into their scientific subfield. The larger implication of this case study is that the diffusion of scientific hypotheses and ideas is not the spreading of well-defined objects over a network. Rather, diffusion is a process in which the object itself dynamically changes in concurrence with its spread. 
Therefore, it is argued that the methodology presented in this paper has potential beyond the scientific domain, in the study of the diffusion of other not-well-defined objects, such as opinions, behavior, and ideas.
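The network construction described above can be sketched in a few lines; the input format (a mapping from a citing scientist to the works their paper references) and the focal-paper label are illustrative assumptions, not the study's actual data schema.

```python
from collections import defaultdict

def cocitation_network(citations, focal="Granovetter1973"):
    """Build the directed network around a focal paper: for every scientist
    whose paper cites the focal work, add edges from that scientist to all
    other works cited alongside it (a minimal sketch of the construction)."""
    edges = defaultdict(set)
    for citing_scientist, refs in citations.items():
        if focal in refs:                     # only citers of the focal paper become nodes
            for ref in refs:
                if ref != focal:
                    edges[citing_scientist].add(ref)
    return dict(edges)
```

Hub scientists could then be identified by counting how often each target appears across the edge sets.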

Keywords: diffusion of innovations, network analysis, scientific development, sociology of science

Procedia PDF Downloads 298
882 Flexible Cities: A Multisided Spatial Application of Tracking Livability of Urban Environment

Authors: Maria Christofi, George Plastiras, Rafaella Elia, Vaggelis Tsiourtis, Theocharis Theocharides, Miltiadis Katsaros

Abstract:

The rapidly expanding urban areas of the world pose the challenge of how we make the transition to 'the next urbanization', which will be defined by new analytical tools and new sources of data. This paper is about the production of a spatial application, the 'FUMapp', through which space will be available both literally, in meters, and abstractly, at a sensed level. While existing spatial applications typically focus on illustrations of urban infrastructure, the suggested application goes beyond them: it investigates how our perception of the environment adapts to alterations of the built environment, through the construction of a dataset of biophysical measurements (eye tracking, heart rate) and physical metrics (spatial characteristics, size of stimuli, rhythm of mobility). It explores the intersections between architecture, cognition, and computing where future design can be improved, and identifies the flexibility and livability of the 'available space' of specific examined urban paths.

Keywords: biophysical data, flexibility of urban, livability, next urbanization, spatial application

Procedia PDF Downloads 134
881 Automatic Classification of Periodic Heart Sounds Using Convolutional Neural Network

Authors: Jia Xin Low, Keng Wah Choo

Abstract:

This paper presents an automatic normal/abnormal heart sound classification model developed with a deep learning algorithm. MITHSDB heart sound datasets obtained from the 2016 PhysioNet/Computing in Cardiology Challenge database were used in this research, with the assumption that the electrocardiograms (ECG) were recorded simultaneously with the heart sounds (phonocardiogram, PCG). The PCG time series are segmented per heartbeat, and each sub-segment is converted into a square intensity matrix and classified using convolutional neural network (CNN) models. This approach removes the need to provide classification features to the supervised machine learning algorithm; instead, the features are determined automatically through training, from the time series provided. The results show that the prediction model provides reasonable and comparable classification accuracy despite its simple implementation. This approach can be used for real-time classification of heart sounds in the Internet of Medical Things (IoMT), e.g. in remote monitoring applications of the PCG signal.
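The per-heartbeat sub-segments vary in length, so some resampling is needed before they can form square matrices. The abstract does not detail that preprocessing; the sketch below simply zero-pads each 1-D segment to the next perfect-square length and reshapes it row by row, as an illustrative stand-in.

```python
import math

def to_square_matrix(segment):
    """Reshape a 1-D PCG sub-segment into a square intensity matrix by
    zero-padding to the next perfect-square length (illustrative only)."""
    n = math.ceil(math.sqrt(len(segment)))           # side of the smallest square that fits
    padded = list(segment) + [0.0] * (n * n - len(segment))
    return [padded[i * n:(i + 1) * n] for i in range(n)]
```

Each resulting matrix can then be fed to a CNN exactly as a single-channel image would be.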

Keywords: convolutional neural network, discrete wavelet transform, deep learning, heart sound classification

Procedia PDF Downloads 337
880 The Applicability of Just Satisfaction in Inter-State Cases: A Case Study of Cyprus versus Turkey

Authors: Congrui Chen

Abstract:

The European Court of Human Rights (hereinafter ECtHR) delivered its judgment on just satisfaction in the case of Cyprus v. Turkey, ordering a lump sum of 9,000,000 euros as just compensation. It is the first time that the ECtHR applied Article 41 on just compensation in an inter-state case, and the award stands as the highest amount of just compensation in the history of the ECtHR; the Cyprus v. Turkey case thus represents a crucial contribution to European peace in the history of the court. This thesis uses the methodologies of textual research, comparative analysis, and case law study to pursue two questions specifically: (i) whether just compensation is applicable in an inter-state case; (ii) whether such just compensation is of a punitive nature. From the point of view of general international law, the essence of the case is the state's responsibility for the violation of individual rights; in other words, the state takes an approach similar to diplomatic protection to seek relief. In the development of international law today, especially international human rights law, states that have a duty to protect human rights should bear corresponding responsibility for their violations of international human rights law. Under the specific system of the European Court of Human Rights, just compensation under Article 41 is one of the specific ways of assuming responsibility. At the regulatory level, the European Court of Human Rights makes it clear that just satisfaction under Article 41 of the Convention does not include punitive damages, as that would touch on national sovereignty. Nevertheless, it is undeniable that relief to the victim and punishment of the responsible state are two closely integrated aspects of responsibility; in other words, compensatory damages have an inherently punitive dimension.

Keywords: European Court of Human Rights, inter-state cases, just satisfaction, punitive damages

Procedia PDF Downloads 262
879 Reflecting Socio-Political Needs in Education Policy-Making: An Exploratory Study of Vietnam's Key Education Reforms (1945-2017)

Authors: Linh Tong

Abstract:

This paper aims to contribute to the understanding of key education reforms in Vietnam from 1945 to 2017, which reflect the evolving socio-political needs of the Socialist Republic of Vietnam throughout this period. It explores the contextual conditions, motivations and ambitions influencing the formation of the education reforms in Vietnam. It also looks, from an applied practical perspective, at the influence of politics on education policy-making. The research methodology includes a content analysis of curriculum designs proposed by the Ministry of Education and Training, relevant resolutions and executive orders passed by the National Assembly and the Prime Minister, as well as interviews with experts and key stakeholders. The results point to a particular configuration of factors which have been inspiring the shape and substance of these reforms and which have most certainly influenced their implementation. This configuration evolves from the immediate needs to erase illiteracy and cultivate a socialist economic model at the beginning of Vietnam's independence (1945-1975), to a renewed urge to adopt a market-oriented economy in 1986 and cautiously communicate with the outside world until the 2000s, and currently to a demonstrated desire to fully integrate into the global economy and tackle rising concerns about national security (the South China Sea dispute), environmental sustainability, the construction of a knowledge economy, and a rule-of-law society. Overall, the paper attempts to map Vietnam's socio-political needs onto the changing sets of goals and expected outcomes in teaching and learning methodologies and practices as introduced in Vietnam's key education reforms.

Keywords: curriculum development, knowledge society, national security, politics of education policy-making, Vietnam's education reforms

Procedia PDF Downloads 142
878 Cars in a Neighborhood: A Case of Sustainable Living in Sector 22 Chandigarh

Authors: Maninder Singh

Abstract:

The city of Chandigarh is under the strain of exponential growth in car density across its neighborhoods. The consumerist nature of today's society is to blame for this menace: everyone wants to own and ride a car, car manufacturers are busy selling two or more cars per household, and the Regional Transport Offices are busy issuing as many licenses to new vehicles as they can in order to generate revenue in the form of road tax. Car traffic in the neighborhoods of Chandigarh has reached a tipping point. A more empirical and sustainable model of cars per household is needed, based on specific parameters of livable neighborhoods. Sector 22 in Chandigarh is one of the first residential sectors established in the city. There is scope to think, reflect, and work out a method to know how many cars can be sold to citizens before we lose the argument to traffic problems, parking problems, and road rage. This is where the true challenge for a planner or a designer of the city lies. Currently, in Chandigarh, there are no clear visible answers to this problem. The way forward is to look at spatial mapping, planning, and design of car parking units to address the problem, rather than suggesting extreme measures such as banning cars (short-term) or promoting plans for citywide transport (very long-term). This is a chance to resolve the problem with a pragmatic approach from a citizen's perspective, instead of an orthodox development planner's methodology. Since citizens are at the center of how the problem is to be addressed, acceptable solutions are more likely to emerge from the car and traffic problem as defined by the citizens. Thus, the idea and its implementation would be interesting in comparison to the known academic methodologies. The novel and innovative process would lead to a more acceptable and sustainable approach to the number of car parks in the neighborhoods of Chandigarh.

Keywords: cars, Chandigarh, neighborhood, sustainable living, walkability

Procedia PDF Downloads 143
877 Research on the Aero-Heating Prediction Based on Hybrid Meshes and Hybrid Schemes

Authors: Qiming Zhang, Youda Ye, Qinxue Jiang

Abstract:

Accurate prediction of the external flowfield and of aero-heating at the wall of a hypersonic vehicle is crucial for aircraft design. Unstructured/hybrid meshes have more powerful advantages than structured meshes in terms of pre-processing, parallel computing and mesh adaptation, so it is imperative to develop high-resolution numerical methods for calculating the aerothermal environment on unstructured/hybrid meshes. The inviscid flux scheme is one of the most important factors affecting the accuracy of heat flux calculation on unstructured/hybrid meshes. Here, a new hybrid flux scheme is developed, with the following approach to interface type selection: 1) the exact Riemann solution is used to calculate the flux on faces parallel to the wall; 2) the Steger-Warming (S-W) scheme is employed on the other interfaces to improve the stability of the numerical scheme. The computed heat flux fits the experimental observations and shows little dependence on the grid, which indicates great application prospects for unstructured/hybrid meshes.
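The interface-type selection rule above can be expressed as a per-face dispatch. The geometric test below (comparing the face normal against the wall normal) is one plausible way to detect wall-parallel faces; the paper's actual detection criterion is not stated in the abstract.

```python
def select_flux_scheme(face_normal, wall_normal, tol=1e-6):
    """Choose the inviscid flux scheme for a mesh face: exact Riemann on
    faces parallel to the wall, Steger-Warming elsewhere (sketch of the
    hybrid rule; vectors are assumed unit-length)."""
    dot = sum(a * b for a, b in zip(face_normal, wall_normal))
    if abs(abs(dot) - 1.0) < tol:   # normals (anti-)parallel -> face parallel to wall
        return "exact_riemann"
    return "steger_warming"
```

In a solver loop this function would be evaluated once per face during preprocessing, so the selection adds negligible cost to the flux computation itself.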

Keywords: aero-heating prediction, computational fluid dynamics, hybrid meshes, hybrid schemes

Procedia PDF Downloads 230
876 Distributed Manufacturing (DM) - Smart Units and Collaborative Processes

Authors: Hermann Kuehnle

Abstract:

Developments in ICT are totally reshaping manufacturing, as machines, objects and equipment on the shop floor become smart and online. Interactions with virtualizations and models of a manufacturing unit will appear exactly like interactions with the unit itself. These virtualizations may be driven by providers offering novel on-demand ICT services that might jeopardize even well-established business models. Context-aware equipment, autonomous orders, scalable machine capacity and networkable manufacturing units will be the terminology to get familiar with in manufacturing and manufacturing management. Such newly appearing smart abilities, with their impact on network behavior, collaboration procedures and human resource development, will make distributed manufacturing a preferred model of production. The miniaturization of computing and smart devices is revolutionizing manufacturing setups, as the virtualization and atomization of resources unwrap novel manufacturing principles. Processes and resources obey novel specific laws, with strategic impact on manufacturing and major operational implications. Mechanisms from distributed manufacturing engaging interacting smart manufacturing units and decentralized planning and decision procedures already demonstrate important effects of this shift of focus towards collaboration and interoperability.

Keywords: autonomous unit, networkability, smart manufacturing unit, virtualization

Procedia PDF Downloads 518
875 Scalable Cloud-Based LEO Satellite Constellation Simulator

Authors: Karim Sobh, Khaled El-Ayat, Fady Morcos, Amr El-Kadi

Abstract:

Distributed applications deployed on LEO satellites and ground stations require substantial communication between the members of a constellation to overcome the earth coverage barriers that GEO systems do not face. Applications running on LEO constellations suffer from the earth line-of-sight blockage effect and need adequate lab testing before launch to space. We propose a scalable cloud-based network simulation framework to simulate the problems created by earth line-of-sight blockage. The framework utilizes cloud IaaS virtual machines to simulate the distributed software of LEO satellites and ground stations. A factorial ANOVA statistical analysis is conducted to measure the simulator's overhead on overall communication performance. The results showed very low simulator communication overhead. Consequently, the simulation framework is proposed as a candidate for lab testing of LEO constellations with distributed software before space launch.

Keywords: LEO, cloud computing, constellation, satellite, network simulation, netfilter

Procedia PDF Downloads 376
874 Relationships of Driver Drowsiness and Sleep-Disordered Breathing Syndrome

Authors: Cheng-Yu Tsai, Wen-Te Liu, Yin-Tzu Lin, Chen-Chen Lo, Kang Lo

Abstract:

Background: Driving drowsiness related to inadequate or disordered sleep accounts for a major percentage of traffic accidents. Sleep-disordered breathing (SDB) syndrome is a common respiratory disorder during sleep. However, the effects of SDB syndrome on driving fatigue remain unclear. Objective: This study aims to investigate the relationship between SDB patterns and driving drowsiness. Methodology: The physical condition while driving was obtained from questionnaires to classify the state of driving fatigue. SDB syndrome was quantified by polysomnography, with the airflow pattern collected by thermistor and nasal pressure cannula. To evaluate desaturation, the mean hourly number of dips in oxygen saturation greater than 3% was scored by a registered technologist during examination in a hospital in New Taipei City (Taiwan). The independent t-test was used to investigate the associations between sleep-disorder-related indices and driving drowsiness. Results: 880 subjects were recruited in this study; all had undergone polysomnography to evaluate the severity of obstructive sleep apnea syndrome (OSAS) and had completed the driver condition questionnaire. 484 subjects (55%) were classified into the fatigue group, and 396 subjects (45%) served as the control group. Significantly higher values of the snoring index (242.14 ± 205.51 /hour) were observed in the fatigue group (p < 0.01). The respiratory disturbance index (RDI) in the fatigue group (31.82 ± 19.34 /hour) was also significantly higher than in the control group (p < 0.01). Conclusion: We observe a considerable association between SDB syndrome and driving drowsiness. To promote traffic safety, SDB syndrome should be controlled and alleviated.
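The group comparison above relies on the independent t-test. A minimal pure-Python sketch of the pooled-variance (Student's) t statistic is shown below; a real analysis would use a statistics package to obtain degrees of freedom and the p-value.

```python
import statistics

def independent_t(sample_a, sample_b):
    """Two-sample Student's t statistic with pooled variance (stdlib-only
    sketch of the test used to compare fatigue and control groups)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)   # pooled variance
    se = (pooled * (1 / na + 1 / nb)) ** 0.5                   # standard error of mean difference
    return (statistics.mean(sample_a) - statistics.mean(sample_b)) / se
```

Applied to the snoring index or RDI values of the two groups, a large positive t (with a correspondingly small p-value) indicates the fatigue group's mean exceeds the control group's.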

Keywords: driving drowsiness, sleep-disordered breathing syndrome, snoring index, respiratory disturbance index

Procedia PDF Downloads 131
873 Application of the Concept of Comonotonicity in Option Pricing

Authors: A. Chateauneuf, M. Mostoufi, D. Vyncke

Abstract:

Monte Carlo (MC) simulation is a technique that provides approximate solutions to a broad range of mathematical problems. A drawback of the method is its high computational cost, especially in high-dimensional settings, such as estimating the Tail Value-at-Risk for large portfolios or pricing basket options and Asian options. For these types of problems, one can construct an upper bound in the convex order by replacing the copula with the comonotonic copula. This comonotonic upper bound can be computed very quickly, but it gives only a rough approximation. In this paper we introduce Comonotonic Monte Carlo (CoMC) simulation, which uses the comonotonic approximation as a control variate. CoMC is broadly applicable, and numerical results show a remarkable speed improvement. We illustrate the method for estimating Tail Value-at-Risk and for pricing basket options and Asian options when the log-returns follow a Black-Scholes model or a variance gamma model.
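The control-variate mechanism behind CoMC can be sketched generically: a cheap proxy g with known expectation (in CoMC, the comonotonic upper bound) corrects the plain MC mean. The estimator below, with the optimal linear coefficient estimated from the same sample, is a textbook sketch under those assumptions rather than the paper's exact implementation.

```python
import random

def cv_estimate(f, g, g_mean, sampler, n=10_000, seed=42):
    """Control-variate Monte Carlo estimate of E[f(X)], where g is a cheap
    proxy with known expectation g_mean (the comonotonic bound in CoMC)."""
    rng = random.Random(seed)
    xs = [sampler(rng) for _ in range(n)]
    fs = [f(x) for x in xs]
    gs = [g(x) for x in xs]
    mf, mg = sum(fs) / n, sum(gs) / n
    cov = sum((a - mf) * (b - mg) for a, b in zip(fs, gs)) / (n - 1)
    var = sum((b - mg) ** 2 for b in gs) / (n - 1)
    c = cov / var                       # optimal coefficient c* = Cov(f,g)/Var(g)
    return mf - c * (mg - g_mean)       # correct the plain MC mean with the control
```

The stronger the correlation between f and g, the larger the variance reduction; for the comonotonic bound of a basket payoff this correlation is typically very high, which is what drives the reported speed-up.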

Keywords: control variate Monte Carlo, comonotonicity, option pricing, scientific computing

Procedia PDF Downloads 504
872 Knowledge Based Liability for ISPs’ Copyright and Trademark Infringement in the EU E-Commerce Directive: Two Steps Behind the Philosophy of Computing Mind

Authors: Mohammad Sadeghi

Abstract:

The subject matter of this article is whether the current knowledge standard can deliver legal integration of the criteria and approaches to ISPs' knowledge, so as to shield both ISPs and the rights of copyright and trademark holders and other parties in the online information society. The EU recognizes knowledge-based liability for intermediaries in the European Directive on Electronic Commerce, but the implications of all parties' responsibility for combating infringement have been overshadowed by the dominant attention to liability, owing to the lack of an appropriate legal mechanism for allocating responsibility to each party. Moreover, there is a legal challenge to the applicability of knowledge-based liability to hosting services and information location tools. The aim of this contribution is to discuss the advantages and disadvantages of the ECD knowledge standard through case law, with special emphasis on the duty of prevention and the role of constructive knowledge on the part of internet service providers (ISPs), in order to achieve a fair balance between all parties' rights.

Keywords: internet service providers, liability, copyright infringement, hosting, caching, mere conduit service, notice and takedown, E-commerce Directive

Procedia PDF Downloads 514
871 Adsorption of Heavy Metals Using Chemically-Modified Tea Leaves

Authors: Phillip Ahn, Bryan Kim

Abstract:

Copper is perhaps the most prevalent heavy metal used in the manufacturing industries, from food additives to metal-mechanic factories. Common methodologies to remove copper are expensive and produce undesired by-products. A good decontaminating candidate should be environment-friendly, inexpensive, and capable of eliminating low concentrations of the metal. This work suggests chemically modified spent tea leaves of chamomile, peppermint and green tea, in their thiolated, sulfonated and carboxylated forms, as candidates for the removal of copper from solution. Batch experiments were conducted to maximize the adsorption of copper(II) ions. Effects such as acidity, salinity, adsorbent dose, metal concentration, and the presence of surfactant were explored. Experimental data show that maximum adsorption is reached at neutral pH. The results indicate that Cu(II) can be removed up to 53%, 22% and 19% with the thiolated, carboxylated and sulfonated adsorbents, respectively. Maximum adsorption of copper on TPM (53%) is achieved with 150 mg and decreases in the presence of salts and surfactants. Conversely, the sulfonated and carboxylated adsorbents show better adsorption in the presence of surfactants. Time-dependent experiments show that adsorption equilibrium is reached in less than 25 min for TCM and 5 min for SCM. Instrumental analyses determined the presence of active functional groups and thermal resistance, and scanning electron microscopy indicated that both adsorbents are promising materials for the selective recovery and treatment of metal ions from wastewaters. Finally, columns were prepared with these adsorbents to explore their application in scaled-up processes, with very positive results. A long-term goal involves recycling the exhausted adsorbents and/or using them in the preparation of biofuels, given the changes in the materials' structures.

Keywords: heavy metal removal, adsorption, wastewaters, water remediation

Procedia PDF Downloads 282
870 An Integrated Web-Based Workflow System for Design of Computational Pipelines in the Cloud

Authors: Shuen-Tai Wang, Yu-Ching Lin

Abstract:

As more and more workflow systems adopt the cloud as their execution environment, various challenges arise that must be addressed for the cloud to be utilized efficiently. This paper introduces a method for resource provisioning based on our previous research on dynamic allocation and its pipeline processes. We present an abstraction for workload scheduling in which independent tasks are scheduled among the available processors of a distributed computing environment for optimization. We also propose an integrated web-based workflow designer that takes advantage of HTML5 technology and chains together multiple tools. To allow multiple pipelines to execute on the cloud in parallel, we developed a script translator and an execution engine for workflow management in the cloud. All information is known in advance by the workflow engine, and tasks are allocated according to the prior knowledge in the repository. This effort has the potential to provide support for process definition, workflow enactment, and monitoring of workflow processes. Users benefit from a web-based system that allows the creation and execution of pipelines without scripting knowledge.
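The workload-scheduling abstraction described above (independent tasks distributed among available processors) can be illustrated with a classic greedy heuristic; this is only a sketch, not the authors' engine, and the task names are invented:

```python
import heapq

def schedule(tasks, n_workers):
    """Greedy LPT scheduling: assign each independent task to the least-loaded worker.

    tasks: {task_name: estimated_runtime}; returns (makespan, {task_name: worker_id}).
    """
    # Longest tasks first (LPT) tightens the makespan of the greedy rule.
    order = sorted(tasks.items(), key=lambda kv: kv[1], reverse=True)
    heap = [(0.0, w) for w in range(n_workers)]  # (current load, worker id)
    heapq.heapify(heap)
    assignment = {}
    for name, runtime in order:
        load, worker = heapq.heappop(heap)  # least-loaded worker so far
        assignment[name] = worker
        heapq.heappush(heap, (load + runtime, worker))
    makespan = max(load for load, _ in heap)
    return makespan, assignment

makespan, assignment = schedule({"align": 5.0, "trim": 3.0, "qc": 3.0, "report": 1.0}, 2)
print(makespan)  # 6.0
```

A real engine would feed runtime estimates from the pipeline repository into such a rule instead of hand-written numbers.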

Keywords: workflow systems, resource provisioning, workload scheduling, web-based, workflow engine

Procedia PDF Downloads 148
869 House Facades and Emotions: Exploring the Psychological Impact of Architectural Features

Authors: Nour Tawil, Sandra Weber, Kirsten K. Roessler, Martin Mau, Simone Kuhn

Abstract:

The link between “quality” residential environments and human health and well-being has long been proposed. While the physical properties of a sound environment have been fairly well defined, little attention has been given to the psychological impact of architectural elements. Recently, studies have investigated responses to architectural parameters using measures of physiology, brain activity, and emotion. Results showed different aspects of interest: detailed and open versus blank and closed facades, patterns in perceiving different elements, and a visual bias for capturing faces in buildings. However, in the absence of a consensus on methodologies, the available studies remain unsystematic and face many limitations regarding the underpinning psychological mechanisms. To bridge some of these gaps, an online study was launched to investigate the design features that influence the aesthetic judgement and emotional evaluation of house facades, using a well-controlled stimulus set of Canadian houses. A methodical modelling of design features will be performed to extract both high- and low-level image properties, in addition to segmentation of layout-related features. Three hundred participants from Canada, Denmark, and Germany will rate the images on twelve psychological dimensions representing appealing aspects of a house. Subjective ratings are expected to correlate with specific architectural elements while controlling for typicality, familiarity, and other individual differences. Given the scarcity of relevant studies, this research aims to identify architectural elements of beneficial quality that can inform design strategies for optimized residential spaces.

Keywords: architectural elements, emotions, psychological response, residential facades

Procedia PDF Downloads 220
868 The Influence of Service Quality on Customer Satisfaction and Customer Loyalty at a Telecommunication Company in Malaysia

Authors: Noor Azlina Mohamed Yunus, Baharom Abd Rahman, Abdul Kadir Othman, Narehan Hassan, Rohana Mat Som, Ibhrahim Zakaria

Abstract:

Customer satisfaction and customer loyalty are the most important outcomes of marketing, in which both elements serve various stages of consumer buying behavior. Excellent service quality has become a major corporate goal as more companies gradually compete on the quality of their products and services. Therefore, the main purpose of this study is to investigate the influence of service quality on customer satisfaction and customer loyalty at one telecommunication company in Malaysia, namely Telekom Malaysia. The scope of this research is to evaluate satisfaction with the products and services at TMpoint Bukit Raja, Malaysia. The data were gathered through the distribution of questionnaires to a total of 306 respondents who visited and used the products or services. By using correlation and multiple regression analyses, the results revealed a positive and significant relationship between service quality and customer satisfaction. The most influential factor on customer satisfaction was empathy, followed by reliability, assurance and tangibles. However, there was no significant influence of responsiveness on customer satisfaction. The results also showed a positive and significant relationship between service quality and customer loyalty. The most influential factor on customer loyalty was assurance, followed by reliability and tangibles. TMpoint Bukit Raja is recommended to devise excellent strategies to satisfy customers' needs and to adopt an action-oriented approach by focusing on what customers want. It is also recommended that similar studies be carried out in other industries using different methodologies, such as a longitudinal design, a larger sample size, or a qualitative approach.
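The correlation side of such an analysis can be sketched with a plain Pearson coefficient computed over respondents' ratings; the numbers below are hypothetical, not the study's data:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two rating series (e.g., a service-quality
    dimension and overall satisfaction); assumes non-constant inputs."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 5-point ratings from a handful of respondents
empathy      = [4, 5, 3, 4, 2]
satisfaction = [4, 5, 3, 4, 2]
print(pearson(empathy, satisfaction))  # ~1.0: perfectly aligned ratings
```

The multiple-regression step that ranks empathy, reliability, assurance, and tangibles then regresses satisfaction on all dimensions jointly.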

Keywords: customer satisfaction, customer loyalty, service quality, telecommunication company

Procedia PDF Downloads 446
867 [Keynote Talk]: Mathematical and Numerical Modelling of the Cardiovascular System: Macroscale, Mesoscale and Microscale Applications

Authors: Aymen Laadhari

Abstract:

The cardiovascular system is centered on the heart and is characterized by a very complex structure with different physical scales in space (e.g. micrometers for erythrocytes and centimeters for organs) and time (e.g. milliseconds for human brain activity and several years for the development of some pathologies). The development and numerical implementation of mathematical models of the cardiovascular system is a tremendously challenging topic at the theoretical and computational levels, and has consequently attracted growing interest over the past decade. Accurate computational investigations, in both healthy and pathological cases, of processes related to the functioning of the human cardiovascular system hold great potential for tackling several problems of clinical relevance and for improving the diagnosis of specific diseases. In this talk, we focus on the specific task of simulating three particular phenomena related to the cardiovascular system on the macroscopic, mesoscopic and microscopic scales, respectively. Namely, we develop numerical methodologies tailored for the simulation of (i) the haemodynamics (i.e., the fluid mechanics of blood) in the aorta and sinus of Valsalva interacting with highly deformable thin leaflets, (ii) the hyperelastic anisotropic behaviour of cardiomyocytes and the influence of calcium concentrations on the contraction of single cells, and (iii) the dynamics of red blood cells in the microvasculature. For each problem, we present an appropriate fully Eulerian finite element methodology. We report several numerical examples to address in detail the relevance of the mathematical models in terms of physiological meaning and to illustrate the accuracy and efficiency of the numerical methods.

Keywords: finite element method, cardiovascular system, Eulerian framework, haemodynamics, heart valve, cardiomyocyte, red blood cell

Procedia PDF Downloads 241
866 Positive Energy Districts in the Swedish Energy System

Authors: Vartan Ahrens Kayayan, Mattias Gustafsson, Erik Dotzauer

Abstract:

The European Union is introducing the positive energy district concept, which aims to reduce overall carbon dioxide emissions. Other studies have already mapped the make-up of such districts and reviewed their definitions and where they are positioned. The Swedish energy system is unique compared to others in Europe, due to the implementation of low-carbon electricity and heat sources and a high uptake of district heating. The goal of this paper is to start the discussion about how the concept of positive energy districts can best be applied to the Swedish context and meet its mitigation goals. To explore how these differences affect the formation of positive energy districts, two cases were analyzed for their methods and how they integrate into the Swedish energy system: a district in Uppsala with a focus on energy and another in Helsingborg with a focus on climate. The case in Uppsala uses primary-energy calculations, which can be criticized, but adopts a virtual boundary that allows its surrounding system to be considered. The district in Helsingborg has a complex methodology for considering the life-cycle emissions of the neighborhood. It succeeds in considering the energy balance on a monthly basis, but it can be problematized in terms of creating sub-optimized systems because of its tight geographical constraints. The discussion on shaping the definitions and methodologies for positive energy districts is taking place in both Europe and Sweden. We identify three pitfalls that must be avoided if positive energy districts are to meet their mitigation goals in the Swedish context: the goal of pushing out fossil fuels is not relevant in the current energy system, the mismatch between summer electricity production and winter energy demands must be addressed, and further implementations should consider collaboration with the established district heating grid.
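The summer/winter mismatch flagged above is easy to make concrete with a monthly balance; the production and demand profile below is hypothetical, loosely shaped like a PV-heavy district in a Nordic climate:

```python
def monthly_balances(production_kwh, demand_kwh):
    """Net energy balance per month; positive means an export surplus."""
    return [p - d for p, d in zip(production_kwh, demand_kwh)]

# Hypothetical monthly figures (MWh), Jan-Dec: PV peaks in summer, demand in winter
production = [5, 10, 30, 70, 110, 130, 140, 110, 60, 25, 8, 4]
demand     = [90, 80, 70, 50, 35, 25, 20, 25, 40, 60, 80, 95]

balances = monthly_balances(production, demand)
annual_net = sum(balances)
winter_deficit = sum(b for b in balances if b < 0)
print(annual_net, winter_deficit)  # 32 -393: positive over the year, yet importing heavily in winter
```

An annual accounting would call this district "positive"; the monthly view, as used in the Helsingborg case, exposes the seasonal import it still depends on.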

Keywords: positive energy districts, energy system, renewable energy, European Union

Procedia PDF Downloads 70
865 Earthquake Resistant Sustainable Steel Green Building

Authors: Arup Saha Chaudhuri

Abstract:

Structural steel is a very ductile, homogeneous material with high strength-carrying capacity, which makes it well suited to earthquake-resistant buildings. Member sections and the structural system can be made very efficient for an economical design. As steel is recyclable and reusable, it is a green material. The embodied energy of an efficiently designed steel structure is less than that of an RC structure, making steel one of the best materials for sustainable green building today. Moreover, pre-engineered and pre-fabricated fast construction methodologies help complete development work within the stipulated time. In this paper, the usefulness of the Eccentric Bracing Frame (EBF) in steel structures is shown over the Moment Resisting Frame (MRF) and the Concentric Bracing Frame (CBF). Stabilizing steel structures against horizontal forces, especially under seismic conditions, is efficiently possible with eccentric bracing systems and economical connection details. The braces in an EBF are pin-ended, while the beam-column joints may be designed as pinned or fully connected. The EBF has several desirable features for seismic resistance. In comparison with the CBF system, the EBF system can be designed for appropriate stiffness and drift control. The link beam is intended to yield in shear or flexure before yielding or buckling of the bracing member initiates in tension or compression. The behavior of a 2-D steel frame is observed under seismic loading in the present paper. The ductility and brittleness of the frames are compared with respect to the period of vibration and the dynamic base shear. It is observed that the EBF system performs better than the MRF system when comparing the period of vibration and base shear participation.

Keywords: steel building, green and sustainable, earthquake resistant, EBF system

Procedia PDF Downloads 343
864 Unsupervised Echocardiogram View Detection via Autoencoder-Based Representation Learning

Authors: Andrea Treviño Gavito, Diego Klabjan, Sanjiv J. Shah

Abstract:

Echocardiograms serve as pivotal resources for clinicians in diagnosing cardiac conditions, offering non-invasive insights into a heart’s structure and function. When echocardiographic studies are conducted, no standardized labeling of the acquired views is performed. Employing machine learning algorithms for automated echocardiogram view detection has emerged as a promising solution to enhance efficiency in echocardiogram use for diagnosis. However, existing approaches predominantly rely on supervised learning, necessitating labor-intensive expert labeling. In this paper, we introduce a fully unsupervised echocardiographic view detection framework that leverages convolutional autoencoders to obtain lower dimensional representations and the K-means algorithm for clustering them into view-related groups. Our approach focuses on discriminative patches from echocardiographic frames. Additionally, we propose a trainable inverse average layer to optimize decoding of average operations. By integrating both public and proprietary datasets, we obtain a marked improvement in model performance when compared to utilizing a proprietary dataset alone. Our experiments show boosts of 15.5% in accuracy and 9.0% in the F-1 score for frame-based clustering, and 25.9% in accuracy and 19.8% in the F-1 score for view-based clustering. Our research highlights the potential of unsupervised learning methodologies and the utilization of open-sourced data in addressing the complexities of echocardiogram interpretation, paving the way for more accurate and efficient cardiac diagnoses.
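The clustering stage of the pipeline (K-means over the autoencoder's lower-dimensional representations) can be sketched in plain Python; the two-dimensional points below are toy stand-ins for encoder outputs, not real echocardiographic embeddings:

```python
def kmeans(points, k, iters=20):
    """Cluster low-dimensional embeddings (e.g., autoencoder codes) into k groups."""
    def dist2(p, c):
        return sum((a - b) ** 2 for a, b in zip(p, c))
    centers = [tuple(p) for p in points[:k]]  # deterministic init, for the sketch only
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda c: dist2(p, centers[c]))].append(p)
        # Update step: move each center to the mean of its group.
        for i, g in enumerate(groups):
            if g:  # keep the old center if a cluster empties out
                centers[i] = tuple(sum(col) / len(g) for col in zip(*g))
    labels = [min(range(k), key=lambda c: dist2(p, centers[c])) for p in points]
    return labels, centers

# Two well-separated toy "embeddings" stand in for frames of two views
pts = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 4.9)]
labels, _ = kmeans(pts, 2)
print(labels[0] == labels[1] and labels[2] == labels[3])  # True: views grouped together
```

In the framework described above, the grouped clusters correspond to view-related sets of discriminative patches rather than toy points.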

Keywords: artificial intelligence, echocardiographic view detection, echocardiography, machine learning, self-supervised representation learning, unsupervised learning

Procedia PDF Downloads 12
863 Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion

Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro

Abstract:

In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to increased overall maintenance costs, prolonged release delivery times, heightened manual effort, and difficulty meeting tight deadlines. Implementing CI/CD with standard serverless technologies using cloud services overcomes all of the above-mentioned issues and helps organizations improve efficiency and deliver faster, without the need to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while repetitive work and manual effort are reduced. Implementing scalable CI/CD for development using cloud services such as ECS (Elastic Container Service), AWS Fargate, ECR (Elastic Container Registry, to store Docker images with all dependencies), serverless computing (serverless virtual machines), cloud logging (for monitoring errors and logs), security groups (for controlling inside/outside access to the application), Docker containerization (Docker-based images and container techniques), Jenkins (a CI/CD build-management tool), and code-management tools (GitHub, Bitbucket, AWS CodeCommit) can efficiently handle the demands of diverse development environments and accommodate dynamic workloads, increasing efficiency for faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality-assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application for the relevant branches, testing it with a scalable automation-testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure, as it scales based on need.
Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application's scale according to usage, alleviating concerns about scalability, maintenance costs, and resource needs. Building scalable automation testing on cloud services (ECR, ECS Fargate, Docker, EFS, serverless computing) helps organizations run more than 500 test cases in parallel, aiding the detection of race conditions and performance issues while reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands and allowing teams to scale resources up or down as needed. It optimizes costs by paying only for the resources actually used, and it increases reliability. Scalable CI/CD pipelines employ automated testing and validation processes to detect and prevent errors early in the development cycle.
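The stage flow described above (fetch, build, test, deploy, stopping at the first failure) can be sketched as a minimal pipeline runner; the stage names and callables are illustrative, not tied to any specific Jenkins or AWS configuration:

```python
def run_pipeline(stages):
    """Run CI/CD stages in order; stop at the first failure.

    stages: list of (name, callable returning True on success).
    Returns the stage names that ran and the overall status.
    """
    ran = []
    for name, step in stages:
        ran.append(name)
        if not step():
            return ran, "FAILED at " + name
    return ran, "SUCCESS"

# Hypothetical stages standing in for checkout, Docker build, tests, and deploy
pipeline = [
    ("checkout", lambda: True),
    ("build",    lambda: True),
    ("test",     lambda: True),
    ("deploy",   lambda: True),
]
print(run_pipeline(pipeline))  # (['checkout', 'build', 'test', 'deploy'], 'SUCCESS')
```

In the serverless setting described above, each callable would dispatch a containerized job (e.g., on Fargate) rather than run locally, and the "test" stage would fan out into parallel shards.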

Keywords: achieve parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment

Procedia PDF Downloads 30
862 Digital Twin Smart Hospital: A Guide for Implementation and Improvements

Authors: Enido Fabiano de Ramos, Ieda Kanashiro Makiya, Francisco I. Giocondo Cesar

Abstract:

This study investigates the application of Digital Twins (DT) in Smart Hospital Environments (SHE) through a bibliometric study and literature review, including a comparison with the principles of Industry 4.0. It aims to analyze the current state of digital twin implementation in clinical and non-clinical operations in healthcare settings, identifying trends and challenges and comparing these practices with Industry 4.0 concepts and technologies, in order to present a basic framework comprising stages and maturity levels. The bibliometric methodology maps the existing scientific production on the theme, while the literature review synthesizes and critically analyzes the relevant studies, highlighting pertinent methodologies and results. Additionally, the comparison with Industry 4.0 provides insights into how the principles of automation, interconnectivity, and digitalization can be applied to healthcare environments and operations, aiming at improvements in operational efficiency and quality of care. The results of this study contribute to a deeper understanding of the potential of Digital Twins in Smart Hospitals, as well as the future potential of effectively integrating Industry 4.0 concepts in this specific environment, presented through the practical framework. The urgent need for the changes addressed in this article is undeniable, as is their contribution to human sustainability, as envisioned in SDG 3 (Health and Well-Being): ensuring healthy lives and well-being for all citizens, at all ages and in all situations. The validity of these relationships will be constantly discussed, and technology can always change the rules of the game.

Keywords: digital twin, smart hospital, healthcare operations, industry 4.0, SDG3, technology

Procedia PDF Downloads 42
861 Flood Early Warning and Management System

Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare

Abstract:

The Indian subcontinent is severely affected by floods that cause intense, irreversible devastation to crops and livelihoods. With increased incidences of floods and their related catastrophes, an early warning system for flood prediction and an efficient flood management system for the river basins of India are a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce the economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an Early Warning System for Flood Prediction (EWS-FP) using advanced computational tools and methods, viz. high-performance computing (HPC), remote sensing, GIS technologies, and open-source tools, for the Mahanadi River Basin of India. The flood prediction is based on a robust 2-D hydrodynamic model, which solves the shallow water equations using the finite volume method. Considering the complexity of hydrological modeling and the size of the basins in India, there is always a tug of war between better forecast lead time and the optimal resolution at which the simulations are run. High-performance computing provides the computational means to overcome this issue in the construction of national-level or basin-level flash flood warning systems that offer high-resolution, local-level warning analysis with better lead time. High-performance computers with capacities on the order of teraflops and petaflops prove useful when running simulations over such large areas at optimal resolutions. In this study, a free and open-source, HPC-based 2-D hydrodynamic model, with the capability to simulate rainfall run-off, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (the Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing the CPU nodes from 45 to 135, which shows good scalability and performance enhancement.
The simulated flood inundation spread and stage were compared with SAR data and CWC observed gauge data, respectively. The system shows good accuracy and a lead time suitable for near-real-time flood forecasting. To disseminate warnings to end users, a network-enabled solution was developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. It effectively facilitates the management of post-disaster activities caused by floods, for example by displaying spatial maps of the affected area and inundated roads, and it maintains a steady flow of information at all levels, with different access rights depending upon the criticality of the information. It is designed to help users manage information related to flooding during critical flood seasons and analyze the extent of the damage.
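The finite-volume core of such a 2-D hydrodynamic model can be illustrated in one dimension; the Lax-Friedrichs step below is a didactic sketch under simplifying assumptions (1-D, periodic boundaries, no bathymetry or friction), not the authors' HPC solver:

```python
G = 9.81  # gravitational acceleration, m/s^2

def shallow_water_step(h, hu, dx, dt):
    """One Lax-Friedrichs finite-volume step for the 1-D shallow water equations.

    h: water depth per cell; hu: depth times velocity (discharge) per cell.
    Periodic boundaries keep the sketch self-contained.
    """
    n = len(h)
    def flux(hi, hui):
        u = hui / hi if hi > 0 else 0.0
        return hui, hui * u + 0.5 * G * hi * hi  # mass flux, momentum flux
    h_new, hu_new = h[:], hu[:]
    for i in range(n):
        l, r = (i - 1) % n, (i + 1) % n
        f_l, g_l = flux(h[l], hu[l])
        f_r, g_r = flux(h[r], hu[r])
        h_new[i] = 0.5 * (h[l] + h[r]) - dt / (2 * dx) * (f_r - f_l)
        hu_new[i] = 0.5 * (hu[l] + hu[r]) - dt / (2 * dx) * (g_r - g_l)
    return h_new, hu_new

# Small "dam break" hump of still water; the scheme conserves total volume
h = [1.0] * 20
for i in range(8, 12):
    h[i] = 2.0
hu = [0.0] * 20
h1, hu1 = shallow_water_step(h, hu, dx=1.0, dt=0.05)
print(abs(sum(h1) - sum(h)) < 1e-12)  # True: mass conserved
```

The production model applies the same conservative update in two dimensions across millions of cells, which is where the teraflop-scale HPC resources mentioned above come in.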

Keywords: flood, modeling, HPC, FOSS

Procedia PDF Downloads 81
860 Enzymatic Repair Prior To DNA Barcoding, Aspirations, and Restraints

Authors: Maxime Merheb, Rachel Matar

Abstract:

The retrieval of ancient DNA sequences, which in turn permits whole-genome sequencing from fossils, has improved extraordinarily in recent years, thanks to sequencing technology and other methodological advances. Even so, the quest for ancient DNA is still obstructed by the damage inflicted on DNA, which accumulates after the death of a living organism. This damage falls into three main categories: (i) physical abnormalities, such as strand breaks, which lead to the presence of short DNA fragments; (ii) modified bases (mainly cytosine deamination), which cause errors in the sequence due to the incorporation of a false nucleotide during DNA amplification; and (iii) DNA modifications referred to as blocking lesions, which halt PCR extension and in turn affect the amplification and sequencing process. The issues arising from breakage and coding errors have been significantly reduced in recent years: fast sequencing of short DNA fragments has been empowered by high-throughput sequencing platforms, and most coding errors have been found to be the consequence of cytosine deamination, which can easily be removed from the DNA by enzymatic treatment. The methodology to repair DNA sequences is still in development; it can basically be explained as the process of reintroducing cytosine rather than uracil. This technique is thus restricted to amplified DNA molecules. Eliminating every type of damage (particularly lesions that block PCR) is a process still awaiting complete repair methodologies, so DNA detection right after extraction is highly needed. Before pouring resources into extensive, unreasonable, and uncertain repair techniques, it is vital to distinguish between two possible hypotheses: (i) the DNA is nonexistent and cannot be amplified to begin with, and is therefore completely unrepairable; or (ii) the DNA is refractory to PCR and is worth repairing and amplifying. Hence, it is extremely important to develop a non-enzymatic technique to detect the most degraded DNA.

Keywords: ancient DNA, DNA barcoding, enzymatic repair, PCR

Procedia PDF Downloads 394