Search results for: hybrid learning
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8740

3040 Cost-Effective Mechatronic Gaming Device for Post-Stroke Hand Rehabilitation

Authors: A. Raj Kumar, S. Bilaloglu

Abstract:

Stroke is a leading cause of adult disability worldwide. We depend on our hands for our activities of daily living (ADL). Although many patients regain the ability to walk, they continue to experience long-term hand motor impairments. As the number of individuals with young stroke is increasing, there is a critical need for effective approaches to the rehabilitation of hand function post-stroke. Motor relearning for dexterity requires task-specific kinesthetic, tactile, and visual feedback. However, when a stroke results in both sensory and motor impairment, it becomes difficult to ascertain when and what type of sensory substitutions can facilitate motor relearning. In an ideal situation, real-time task-specific data on the ability to learn and data-driven feedback to assist such learning will greatly assist rehabilitation for dexterity. We have found that kinesthetic and tactile information from the unaffected hand can help patients re-learn the use of optimal fingertip forces during a grasp and lift task. Measurement of fingertip grip force (GF), load force (LF), their corresponding rates (GFR and LFR), and other metrics can be used to gauge the impairment level and progress during learning. Currently, ATI mini force-torque sensors are used in research settings to measure and compute the LF, GF, and their rates while grasping objects of different weights and textures. Use of the ATI sensor is cost-prohibitive for deployment in clinical or at-home rehabilitation. A cost-effective mechatronic device was developed to quantify GF, LF, and their rates for stroke rehabilitation purposes using off-the-shelf components such as load cells, flexi-force sensors, and an Arduino UNO microcontroller. A salient feature of the device is its integration with an interactive gaming environment to render a highly engaging user experience. This paper elaborates on the integration of kinesthetic and tactile sensing through computation of LF, GF, and their corresponding rates in real time, information processing, and interactive interfacing through augmented reality for visual feedback.
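
As a rough illustration of the kind of real-time computation described above, the following Python sketch estimates GF, LF, and their rates from periodically sampled force readings; the sampling rate, the simulated signals, and the summary metrics are hypothetical and are not taken from the authors' device.

```python
import numpy as np

def force_summary(samples: np.ndarray, dt: float) -> dict:
    """samples: array of shape (n, 2) with columns [GF, LF] in newtons;
    dt: sampling interval in seconds (e.g., 0.01 for 100 Hz)."""
    gf, lf = samples[:, 0], samples[:, 1]
    gfr = np.gradient(gf, dt)   # grip force rate, N/s
    lfr = np.gradient(lf, dt)   # load force rate, N/s
    return {"peak_GF": gf.max(), "peak_LF": lf.max(),
            "peak_GFR": gfr.max(), "peak_LFR": lfr.max()}

# Simulated 2-second grasp-and-lift trial sampled at 100 Hz
t = np.arange(0.0, 2.0, 0.01)
demo = np.column_stack([5 * (1 - np.exp(-3 * t)),   # grip force builds up
                        4 * (1 - np.exp(-2 * t))])  # load force builds up
print(force_summary(demo, dt=0.01))
```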

Keywords: feedback, gaming, kinesthetic, rehabilitation, tactile

Procedia PDF Downloads 240
3039 Locket Application

Authors: Farah Al-Fityani, Aljohara Alsowail, Shatha Bindawood, Heba Balrbeah

Abstract:

Locket is a popular app that lets users share spontaneous photos with a close circle of friends. The app offers a unique way to stay connected with loved ones by allowing users to see glimpses of their day through photos displayed on a widget on their home screen. This summary outlines the process of developing an app like Locket, highlighting the importance of user privacy and security. It also details the findings of a study on user engagement with the Locket app, revealing positive sentiment towards its features and concept but also identifying areas for improvement. Overall, the summary portrays Locket as a successful app that is changing the way people connect on social media.

Keywords: locket, app, machine learning, connect

Procedia PDF Downloads 46
3038 Teaching Linguistic Humour Research Theories: Egyptian Higher Education EFL Literature Classes

Authors: O. F. Elkommos

Abstract:

“Humour studies” is a relatively recent interdisciplinary research area. It interests researchers from the disciplines of psychology, sociology, medicine, nursing, workplace studies, and gender studies, among others, and certainly from teaching, language learning, linguistics, and literature. Linguistic theories of humour research are numerous, some of which are of interest to the present study. Although humour courses are now taught in universities around the world, they are not yet included in the Egyptian context. The purpose of the present study is two-fold: to review the state of the art and to show how linguistic theories of humour can be used as an art and craft of teaching and of learning in EFL literature classes. In the present study, linguistic theories of humour were applied to selected literary texts to interpret humour as an intrinsic artistic communicative competence challenge. Humour in the area of linguistics was seen as a fifth component of the communicative competence of the second language learner. In literature, it was studied as satire, irony, wit, or comedy. Linguistic theories of humour now describe its linguistic structure, mechanism, function, and linguistic deviance. The Semantic Script Theory of Humour (SSTH), the General Theory of Verbal Humor (GTVH), the Audience Based Theory of Humor (ABTH), their extensions and subcategories, as well as the pragmatic perspective, were employed in the analyses. This research analysed the linguistic semantic structure of humour, its mechanism, and how the audience reader (teacher or learner) becomes an interactive interpreter of the humour. This promotes humour competence together with linguistic, social, cultural, and discourse communicative competence. Studying humour as part of the literary texts and perceiving its function in the work also brings its positive associations into class for educational purposes. Humour is by default a provoking, laughter-generating device. Recognising, perceiving, and resolving incongruity is a cognitive mastery. This cognitive process involves a humour experience that lightens up the classroom and the mind, and it establishes connections necessary for the learning process. In this context, the study examined selected narratives to exemplify the application of the theories. It is, therefore, recommended that the theories be taught and applied to literary texts for a better understanding of the language. Students will then develop their language competence. Teachers in EFL/ESL classes will teach the theories, assist students in applying them to interpret texts, and in the process will also use humour themselves. This will ease students' acquisition of the second language, making the classroom an enjoyable, cheerful, self-assuring, and self-illuminating experience for both teachers and their students. It is further recommended that courses in humour research studies become an integral part of higher education curricula in Egypt.

Keywords: ABTH, deviance, disjuncture, episodic, GTVH, humour competence, humour comprehension, humour in the classroom, humour in the literary texts, humour research linguistic theories, incongruity-resolution, isotopy-disjunction, jab line, longer text joke, narrative story line (macro-micro), punch line, six knowledge resource, SSTH, stacks, strands, teaching linguistics, teaching literature, TEFL, TESL

Procedia PDF Downloads 302
3037 Crime Prevention with Artificial Intelligence

Authors: Mehrnoosh Abouzari, Shahrokh Sahraei

Abstract:

Today, with the increase in the quantity, quality, and variety of crimes, crime prevention faces a serious challenge: human resources alone, using traditional methods, will not be effective. One of the developments in the modern world is the presence of artificial intelligence in various fields, including criminal law. In fact, the use of artificial intelligence in criminal investigations and fighting crime is a necessity in today's world. The use of artificial intelligence goes far beyond, and is even distinct from, other technologies in the struggle against crime; moreover, its application in criminal science goes beyond prevention and extends to the prediction of crime. Crime prevention, considered in terms of the three factors of the crime, the offender, and the victim, works by changing the conditions of these factors: on the assumption that the offender acts rationally, it increases the cost and risk of crime so that the offender desists from delinquency, makes the victim aware of self-care and of the possibility of being exposed to danger, or makes it more difficult to commit crimes. Artificial intelligence in the field of combating crime and social damage and dangers acts like an all-seeing eye: regardless of time and place, it looks ahead and predicts the occurrence of a possible crime, thus helping to prevent crimes. The purpose of this article is to collect and analyze the studies conducted on the use of artificial intelligence in predicting and preventing crime. How capable is this technology of predicting crime and preventing it? The results show that the artificial intelligence technologies in use are capable of predicting and preventing crime and can find patterns in large data sets in a much more efficient way than humans. In crime prediction and prevention, the term artificial intelligence can be used to refer to the increasing use of technologies that apply algorithms to large sets of data to assist or replace police. In our discussion, artificial intelligence is used for predicting and preventing crime, including predicting the time and place of future criminal activities, effectively identifying patterns, and accurately predicting future behavior through data mining, machine learning, deep learning, data analysis, and neural networks. Because the knowledge of criminologists can provide insight into risk factors for criminal behavior, among other issues, computer scientists can match this knowledge with the datasets that artificial intelligence systems use.
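
The abstract does not give an implementation, but a minimal, purely illustrative sketch of the pattern-finding idea described above (binning historical incidents into spatial cells and training a classifier to flag cells likely to record an incident in the next period) might look as follows; the grid size, the synthetic incident counts, and the feature window are all assumptions, not data from any cited study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_cells, n_weeks = 100, 52
# Synthetic weekly incident counts per spatial cell (stand-in for real records)
counts = rng.poisson(lam=rng.uniform(0.1, 2.0, size=(n_cells, 1)),
                     size=(n_cells, n_weeks))

# Features: counts in the previous 4 weeks; label: any incident in the next week
X, y = [], []
for w in range(4, n_weeks - 1):
    X.append(counts[:, w - 4:w])
    y.append((counts[:, w + 1] > 0).astype(int))
X = np.vstack(X)
y = np.concatenate(y)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training accuracy:", model.score(X, y))
```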

Keywords: artificial intelligence, criminology, crime, prevention, prediction

Procedia PDF Downloads 75
3036 Development of New Localized Surface Plasmon Resonance Interfaces Based on ITO Au NPs/ Polymer for Nickel Detection

Authors: F. Z. Tighilt, N. Belhaneche-Bensemra, S. Belhousse, S. Sam, K. Lasmi, N. Gabouze

Abstract:

Recently, gold nanoparticles (Au NPs) have become an active multidisciplinary research topic. First, Au thin films fabricated from alkylthiol-functionalized Au NPs were found to have vapor-sensitive conductivities; they were hence widely investigated as electrical chemiresistors for sensing different vapor analytes and even organic molecules in aqueous solutions. Second, Au thin films were demonstrated to exhibit localized surface plasmon resonances (LSPR), so that highly ordered 2D Au superlattices showed strong collective LSPR bands due to the near-field coupling of adjacent nanoparticles and were employed to detect biomolecular binding. Particularly when alkylthiol ligands were replaced by thiol-terminated polymers, the resulting polymer-modified Au NPs could be readily assembled into 2D nanostructures on solid substrates. Monolayers of polystyrene-coated Au NPs showed typical dipolar near-field interparticle plasmon coupling of LSPR. Such polymer-modified Au nanoparticle films have the advantage that the polymer thickness can be feasibly controlled by changing the polymer molecular weight. In this article, the effect of tin-doped indium oxide (ITO) coatings on the plasmonic properties of ITO interfaces modified with gold nanostructures (Au NSs) is investigated. The interest in developing ITO overlayers is manifold. The presence of a conducting ITO overlayer creates an LSPR-active interface, which can simultaneously serve as a working electrode in an electrochemical setup. The surface of ITO/Au NPs contains hydroxyl groups that can be used to link functional groups to the interface. Here, the covalent linking of nickel/Au NSs/ITO hybrid LSPR platforms will be presented.

Keywords: conducting polymer, metal nanoparticles (NPs), LSPR, poly (3-(pyrrolyl)–carboxylic acid), polypyrrole

Procedia PDF Downloads 268
3035 3D Receiver Operator Characteristic Histogram

Authors: Xiaoli Zhang, Xiongfei Li, Yuncong Feng

Abstract:

ROC curves, a widely used evaluation tool in the machine learning field, depict the tradeoff between the true positive rate and the false positive rate. However, they are blamed for ignoring some vital information in the evaluation process, such as the amount of information about the target that each instance carries and the predicted score given by each classification model to each instance. Hence, in this paper, a new classification performance evaluation method is proposed by extending Receiver Operator Characteristic (ROC) curves to 3D space, which is denoted as the 3D ROC Histogram. In the histogram, the
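
For reference, the conventional 2D ROC computation that the 3D ROC Histogram extends can be sketched as follows; the labels and scores below are synthetic, and the paper's proposed third dimension is not reproduced here.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=500)
# Hypothetical classifier scores: positives shifted towards higher values
y_score = rng.normal(loc=y_true * 0.8, scale=1.0)

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC =", auc(fpr, tpr))

# The predicted scores themselves, which standard ROC analysis discards,
# can still be inspected, e.g. as a histogram of scores:
hist, bin_edges = np.histogram(y_score, bins=20)
```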

Keywords: classification, performance evaluation, receiver operating characteristic histogram, hardness prediction

Procedia PDF Downloads 314
3034 Impact of Covid-19 on Digital Transformation

Authors: Tebogo Sethibe, Jabulile Mabuza

Abstract:

The COVID-19 pandemic has been commonly referred to as a ‘black swan event’; it has changed the world, from how people live, learn, work, and socialise. It is believed that the pandemic has fast-tracked the adoption of technology in many organisations to ensure business continuity and business sustainability; broadly speaking, the pandemic has fast-tracked digital transformation (DT) in different organisations. This paper aims to study the impact of the COVID-19 pandemic on DT in organisations in South Africa by focusing on the changes in IT capabilities in the DT framework. The research design is qualitative. The data were collected through semi-structured interviews with information communication technology (ICT) leaders representing different organisations in South Africa and were analysed using the thematic analysis process. The results from the study show that, in terms of ICT in the organisation, the pandemic had a direct and positive impact on ICT strategy and ICT operations. In terms of IT capability transformation, the pandemic resulted in the optimisation and expansion of existing IT capabilities in the organisation and the building of new IT capabilities to meet emerging business needs. In terms of the focus of activities during the pandemic, there seems to be a split in organisations between a primary focus on ‘digital IT’ and on ‘traditional IT’. Overall, the findings of the study show that the pandemic had a positive and significant impact on DT in organisations. However, a definitive conclusion on this would require expanding the scope of the research to all the components of a comprehensive DT framework. This study is significant because it is one of the first studies to investigate the impact of the COVID-19 pandemic on organisations, on ICT in the organisation, on IT capability transformation and, more broadly, on DT. The findings from the study show that, in response to the pandemic, there is a need for: (i) agility in organisations; (ii) organisations to execute on their existing strategy; (iii) the future-proofing of IT capabilities; (iv) the adoption of a hybrid working model; and (v) organisations to take risks and embrace new ideas.

Keywords: digital transformation, COVID-19, bimodal-IT, digital transformation framework

Procedia PDF Downloads 178
3033 Reading Literacy, Storytelling and Cognitive Learning: an Effective Connection in Sustainability Education

Authors: Rosa Tiziana Bruno

Abstract:

The connection between education and sustainability has been posited to be beneficial for realizing social development compatible with environmental protection. However, an educational paradigm based on the transmission of information or on the fear of a catastrophe might not favor the acquisition of an eco-identity. To build a sustainable world, it is necessary to "become people" in harmony with other human beings, aware of belonging to the same human community that is part of the natural world. This can only be achieved within an authentic educating community, and the most effective tools for building educating communities are reading literacy and storytelling. This paper reports on an action research project carried out in this direction, in agreement with the sociology department of the University of Salerno, which involved four hundred children and their teachers in a path based on the combination of reading literacy, storytelling, autobiographical writing, and outdoor education. The goal of the research was to create an authentic educational community within the school, capable of encouraging the acquisition of an eco-identity by the pupils, that is, personal and relational growth in the full realization of the Self, in harmony with the social and natural environment, with a view to an authentic education for sustainability. To ensure reasonable validity and reliability of the findings, the inquiry started with participant observation, and a process of triangulation was used, including semi-structured interviews, socio-semiotic analysis of conversation, and time budgets. Basically, multiple independent sources of data were used to answer the questions. Observing the phenomenon through multiple "windows" helped in comparing data through a variety of lenses. All teachers had the experience of implementing a socio-didactic strategy called "Fiabadiario", and they had the possibility to use it with approaches that fit their students. The data collected come from the very students and teachers who are engaged with this strategy. The educational path tested during the research has produced sustainable relationships and conflict resolution within the school system and between school and families, creating an authentic and sustainable learning community.

Keywords: educating community, education for sustainability, literature in education, social relations

Procedia PDF Downloads 122
3032 Effects of Intracerebroventricular Injection of Ghrelin and Aerobic Exercise on Passive Avoidance Memory and Anxiety in Adult Male Wistar Rats

Authors: Mohaya Farzin, Parvin Babaei, Mohammad Rostampour

Abstract:

Ghrelin plays a considerable role in important neurological effects related to food intake and energy homeostasis. Regular physical activity has been found to yield significant improvements in cognitive functions in various behavioral situations. Anxiety is one of the main concerns of the modern world, affecting millions of individuals' health. There are contradictory results regarding ghrelin's effects on anxiety-like behavior, and the plasma level of this peptide is increased during physical activity. Here we aimed to evaluate the coincident effects of exogenous ghrelin and aerobic exercise on anxiety-like behavior and passive avoidance memory in Wistar rats. Forty-five male Wistar rats (250 ± 20 g) were divided into 9 groups (n=5), receiving intra-hippocampal injections of 3.0 nmol ghrelin and performing aerobic exercise training for 8 weeks. Control groups received the same volume of saline and diazepam as negative and positive controls, respectively. Learning and memory were estimated using a shuttle box apparatus, and anxiety-like behavior was recorded by an elevated plus-maze test (EPM). Data were analyzed by ANOVA, and p<0.05 was considered significant. Our findings showed that the combined effect of ghrelin and aerobic exercise improves the acquisition, consolidation, and retrieval of passive avoidance memory in Wistar rats. Furthermore, the ghrelin-receiving group spent less time in the open arms and made fewer open-arm entries compared with the control group (p<0.05), whereas the exercising Wistar rats spent more time in the open-arm zone in comparison with the control group (p<0.05). The exercise + ghrelin administration resulted in reduced anxiety (p<0.05). The results of this study demonstrate that aerobic exercise contributes to an increase in the endogenous production of ghrelin, and physical activity alleviates anxiety-related behaviors induced by intra-hippocampal injection of ghrelin. In general, exercise and ghrelin can reduce anxiety and improve memory.

Keywords: anxiety, ghrelin, aerobic exercise, learning, passive avoidance memory

Procedia PDF Downloads 120
3031 Evaluating Gender Sensitivity and Policy: Case Study of an EFL Textbook in Armenia

Authors: Ani Kojoyan

Abstract:

Linguistic studies have been investigating the connection between gender and linguistic development since the 1970s. Scholars claim that gender differences in first and second language learning are socially constructed. Recent studies on language learning and gender reveal that second language acquisition is also a social phenomenon directly influencing one's gender identity. Those responsible for designing language learning and teaching materials should be encouraged to understand the importance of gender sensitivity and to address it accurately in textbooks. Writing or compiling a textbook is not an easy task; it requires strong academic abilities, patience, and experience. For a long period of time, Armenia has been involved in the compilation of a number of foreign language textbooks. However, there have been very few discussions or evaluations of those textbooks which would allow specialists to theorize that practice. The present paper focuses on the analysis of gender sensitivity issues and policy aspects involved in an EFL textbook. For the research, the following material has been considered: “A Basic English Grammar: Morphology”, first printed in 2011. The selection of the material is not accidental. First, the mentioned textbook has been widely used in university teaching over the years. Secondly, in Armenia “A Basic English Grammar: Morphology” has been considered one of the most successful English grammar textbooks in a university teaching environment and has served as a source-book for other authors to compile and design their textbooks. The present paper aims to find out whether an EFL textbook is gendered in the Armenian teaching environment and whether the textbook compilers are aware of gendered messages while compiling educational materials. It also aims at investigating students' attitudes toward the gendered messages in those materials. Finally, it aims at increasing gender sensitivity among textbook compilers and educators in various educational settings. For this study, qualitative and quantitative research methods of analysis have been applied: the quantitative one in terms of carrying out surveys among students (45 university students, 18-25 age group), and the qualitative one by discourse analysis of the material and by conducting in-depth and semi-structured interviews with the Armenian compilers of the textbook (interviews with 3 authors). The study is based on passive and active observations and teaching experience in a university classroom environment in 2014-2015 and 2015-2016. The findings suggest that the discussed and analyzed teaching materials (145 extracts and examples) include traditional examples of intensive use of language and role-modelling: men are mostly portrayed as active, progressive, and aggressive, whereas women are often depicted as passive and weak. These models often serve as a ‘reliable basis’ for reinforcing the traditional roles that are projected onto female and male students. The survey results also show that such materials contribute directly to shaping learners' social attitudes and expectations around issues of gender. The applied techniques and discussed issues can be generalized and applied to other foreign language textbook compilation processes, since those principles, regardless of the language, are mostly the same.

Keywords: EFL textbooks, gender policy, gender sensitivity, qualitative and quantitative research methods

Procedia PDF Downloads 195
3030 A Case Study Comparing the Effect of Computer Assisted Task-Based Language Teaching and Computer-Assisted Form Focused Language Instruction on Language Production of Students Learning Arabic as a Foreign Language

Authors: Hanan K. Hassanein

Abstract:

Task-based language teaching (TBLT) and focus-on-form instruction (FFI) methods have been proven to improve the quality and quantity of immediate language production. However, studies that compare the effectiveness of language production under TBLT versus FFI are very few, with inconsistent results. Moreover, teaching Arabic using TBLT is a new field, with little research investigating its application inside classrooms. Furthermore, to the best knowledge of the researcher, there are no prior studies that compared teaching Arabic as a foreign language in a classroom setting using computer-assisted task-based language teaching (CATBLT) with computer-assisted form-focused language instruction (CAFFI). Accordingly, the focus of this presentation is to display CATBLT and CAFFI tools for teaching Arabic as a foreign language as well as to demonstrate an experimental study that aims to identify whether or not CATBLT is the more effective instruction method. The effectiveness will be determined by comparing CATBLT and CAFFI in terms of the accuracy, lexical complexity, and fluency of the language produced by students. The participants of the study are 20 students enrolled in two intermediate-level Arabic as a foreign language classes. The experiment will take place over the course of 7 days. Based on a study conducted by Abdurrahman Arslanyilmaz for teaching Turkish as a second language, an in-house computer-assisted tool for TBLT and another one for FFI will be designed for the experiment. The experimental group will be instructed using the in-house CATBLT tool, and the control group will be taught through the in-house CAFFI tool. The data to be analyzed are the dialogues produced by students in both the experimental and control groups when completing a task or communicating in conversational activities. The dialogues of both groups will be analyzed to understand the effect of the type of instruction (CATBLT or CAFFI) on accuracy, lexical complexity, and fluency. Thus, the study aims to demonstrate whether or not there is an instruction method that affects the language produced by students learning Arabic as a foreign language more positively than the other.

Keywords: computer assisted language teaching, foreign language teaching, form-focused instruction, task based language teaching

Procedia PDF Downloads 252
3029 Influence of Reinforcement Stiffness on the Performance of Back-to-Back Reinforced Earth Wall upon Rainwater Infiltration

Authors: Gopika Rajagopal, Sudheesh Thiyyakkandi

Abstract:

Back-to-back reinforced earth (RE) walls are extensively used these days as bridge abutments and highway ramps, owing to their cost efficiency and ease of construction. High-quality select fill is the most suitable backfill material due to its excellent engineering properties and constructability. However, industries are often compelled to use low-quality, locally available soil because of its ample availability on site, and several failure cases of such walls have been reported, especially subsequent to rainfall events. The stiffness of reinforcement is one of the major factors affecting the performance of RE walls. The present study focused on analyzing, through finite element modelling, the effect of reinforcement stiffness on the performance of complete select fill, complete marginal fill, and hybrid-fill (i.e., combination of select and marginal fills) back-to-back RE walls, immediately after construction and upon rainwater infiltration. A constant width-to-height (W/H) ratio of 3 and a height (H) of 6 m were considered for the numerical analysis, and the stiffness of the reinforcement layers was varied from 500 kN/m to 10000 kN/m. Results showed that reinforcement stiffness had a noticeable influence on the response of the RE wall, subsequent to construction as well as to rainwater infiltration. Facing displacement was found to decrease, and maximum reinforcement tension and factor of safety were observed to increase, with increasing reinforcement stiffness. However, beyond a stiffness of 5000 kN/m, no significant reduction in facing displacement was observed. The behavior of the fully marginal-fill wall considered in this study was found to be reasonable even after rainwater infiltration when high-stiffness reinforcement layers were used.

Keywords: back-to-back reinforced earth wall, finite element modelling, rainwater infiltration, reinforcement stiffness

Procedia PDF Downloads 155
3028 Learning from Flood: A Case Study of a Frequently Flooded Village in Hubei, China

Authors: Da Kuang

Abstract:

Resilience is a hotly debated topic in many research fields (e.g., engineering, ecology, society, psychology). In flood management studies, we are experiencing a paradigm shift from flood resistance to flood resilience. Flood resilience refers to the ability to tolerate flooding through adaptation or transformation. It is increasingly argued that a city, as a social-ecological system, has the ability to learn from experience and adapt to floods rather than simply resist them. This research aims to investigate what kinds of adaptation knowledge a frequently flooded village has learned from past experience, and the advantages and limitations of that knowledge in coping with floods. The study area, Xinnongcun village, located to the west of Wuhan city, is a linear village that has repeatedly suffered from both flash floods and drainage floods during the past 30 years. We made a field trip to the site in June 2017 and conducted semi-structured interviews with local residents. Our research summarizes two types of adaptation knowledge that people learned from past floods. Firstly, at the village scale, a collective urban form has developed that helps people live through both the flood and dry seasons. All houses and front yards were elevated about 2 m above the road, and all the front yards in the village are linked with no barriers between them. During floods, people walk to their neighbors through the linked yards and travel by boat along the lower road to reach places outside the village. Secondly, at the individual scale, local people have learned tacit knowledge of flood preparedness and emergency response. Regarding the advantages and limitations, this adaptation knowledge effectively helps people live with floods and reduces the chances of injury. However, it cannot reduce local farmers' losses on their agricultural land. After a flood, it is impossible for local people to recover to the pre-disaster state, as floods emerging during June and July result in no harvest. Therefore, we argue that learning from past flood experience can increase people's adaptive capacity. However, once adaptive capacity can no longer reduce people's losses, a transformation to a better regime is required.

Keywords: adaptation, flood resilience, tacit knowledge, transformation

Procedia PDF Downloads 334
3027 Tokyo Skyscrapers: Technologically Advanced Structures in Seismic Areas

Authors: J. Szolomicki, H. Golasz-Szolomicka

Abstract:

An architectural and structural analysis of selected high-rise buildings in Tokyo is presented in this paper. The capital of Japan is the most densely populated city in the world and, moreover, is located in one of the most active seismic zones. The combination of these factors has resulted in the creation of sophisticated designs and innovative engineering solutions, especially in the field of design and construction of high-rise buildings. Foreign architectural studios (such as Jean Nouvel, Kohn Pedersen Fox Associates, and Skidmore, Owings & Merrill), which specialize in the design of skyscrapers, played a major role in the development of technological ideas and architectural forms for such extraordinary engineering structures. Among the projects completed by them, there are examples of high-rise buildings that set precedents for future development. An essential aspect which influences the design of high-rise buildings is the necessity to take into consideration their dynamic response to earthquakes and to counteract wind vortices. The need to control the motions of these buildings, induced by earthquake and wind forces, led to the development of various methods and devices for dissipating the energy which occurs during such phenomena. Currently, Japan is a global leader in seismic technologies that safeguard high-rise structures against seismic influence. Due to these achievements, the most modern skyscrapers in Tokyo are able to withstand earthquakes with a magnitude of over seven on the Richter scale. The damping devices applied are either passive, which do not require an additional power supply, or active, which suppress the reaction with the input of extra energy. In recent years, hybrid dampers have also been used, with an additional active element to improve the efficiency of passive damping.

Keywords: core structures, damping system, high-rise building, seismic zone

Procedia PDF Downloads 175
3026 Analysis of Atomic Models in High School Physics Textbooks

Authors: Meng-Fei Cheng, Wei Fneg

Abstract:

New Taiwan high school standards emphasize employing scientific models and modeling practices in physics learning. However, to our knowledge, few studies address how scientific models and modeling are approached in current science teaching, and they do not examine the views of scientific models portrayed in the textbooks. To explore the views of scientific models and modeling in textbooks, this study investigated the atomic unit in different textbook versions as an example and provided suggestions for the modeling curriculum. This study adopted a quantitative analysis of qualitative data in the atomic units of four mainstream versions of Taiwan high school physics textbooks. The models were further analyzed using five dimensions of the views of scientific models (nature of models, multiple models, purpose of the models, testing models, and changing models); each dimension had three levels (low, medium, high). Descriptive statistics were employed to compare the frequency with which the five dimensions of the views of scientific models were described in the atomic unit, to understand the emphasis of the views, and to compare the frequency of use of the eight scientific models, to identify the atomic model used most often in the textbooks. Descriptive statistics were further utilized to investigate the average levels of the five dimensions of the views of scientific models, to examine whether the textbooks' views were close to the scientific view. The average levels of the five dimensions for the eight atomic models were also compared to examine whether the views of the eight atomic models were close to the scientific views. The results revealed the following three major findings from the atomic unit. (1) Among the five dimensions of the views of scientific models, the most portrayed dimension was the ‘purpose of models,’ and the least portrayed dimension was ‘multiple models.’ The most diverse view was the ‘purpose of models,’ and the most sophisticated scientific view was the ‘nature of models.’ The least sophisticated scientific view was ‘multiple models.’ (2) Among the eight atomic models, the most mentioned model was the atomic nucleus model, and the least mentioned model was the three states of matter. (3) Among the correlations between the five dimensions, the dimension of ‘testing models’ was highly related to the dimension of ‘changing models.’ In short, this study examined the views of scientific models based on the atomic units of physics textbooks to identify the emphasized and disregarded views in the textbooks. The findings suggest how future textbooks and curricula can provide a thorough view of scientific models to enhance students' model-based learning.

Keywords: atomic models, textbooks, science education, scientific model

Procedia PDF Downloads 158
3025 Content-Aware Image Augmentation for Medical Imaging Applications

Authors: Filip Rusak, Yulia Arzhaeva, Dadong Wang

Abstract:

Machine learning-based Computer-Aided Diagnosis (CAD) is gaining much popularity in medical imaging and diagnostic radiology. However, it requires a large amount of high-quality, labeled training image data. The training images may come from different sources and be acquired from different radiography machines produced by different manufacturers, or be digital or digitized copies of film radiographs, with various sizes as well as different pixel intensity distributions. In this paper, a content-aware image augmentation method is presented to deal with these variations. The results of the proposed method have been validated graphically by plotting the removed and added seams of pixels on original images. Two different chest X-ray (CXR) datasets are used in the experiments. The CXRs in the datasets differ in size; some are digital CXRs while the others are digitized from analog CXR films. With the proposed content-aware augmentation method, the Seam Carving algorithm is employed to resize CXRs and the corresponding labels in the form of image masks, followed by histogram matching used to normalize the pixel intensities of digital radiographs based on the pixel intensity values of digitized radiographs. We implemented the algorithms, resized the well-known Montgomery dataset to the size of the most frequently used Japanese Society of Radiological Technology (JSRT) dataset, and normalized our digital CXRs for testing. This work resulted in a unified off-the-shelf CXR dataset composed of radiographs included in both the Montgomery and JSRT datasets. The experimental results show that even though the amount of augmentation is large, our algorithm can preserve the important information in lung fields, local structures, and the global visual effect adequately. The proposed method can be used to augment training and testing image datasets so that the trained machine learning model can be used to process CXRs from various sources, and it can potentially be used broadly in any medical imaging application.
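
A minimal sketch of the two operations named above, seam carving and histogram matching, is given below for 2D grayscale images; it is illustrative only and is not the authors' implementation (for instance, it removes vertical seams only and ignores the label masks), and the CXR variable names in the usage comments are placeholders.

```python
import numpy as np
from skimage import exposure, filters

def remove_vertical_seam(img):
    """Remove one minimum-energy vertical seam from a 2D grayscale image."""
    energy = filters.sobel(img)                     # gradient-magnitude energy map
    h, w = img.shape
    cost = energy.copy()                            # cumulative seam cost (dynamic programming)
    for i in range(1, h):
        left = np.r_[np.inf, cost[i - 1, :-1]]
        up = cost[i - 1]
        right = np.r_[cost[i - 1, 1:], np.inf]
        cost[i] += np.minimum(np.minimum(left, up), right)
    # Backtrack the cheapest seam from bottom row to top row
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), seam] = False
    return img[mask].reshape(h, w - 1)

def carve_to_width(img, target_w):
    while img.shape[1] > target_w:
        img = remove_vertical_seam(img)
    return img

# Usage (hypothetical arrays standing in for CXRs): carve a digital CXR down to
# a target width, then match its intensity histogram to a digitized reference film.
# resized = carve_to_width(digital_cxr, target_w=2048)
# normalized = exposure.match_histograms(resized, digitized_reference_cxr)
```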

Keywords: computer-aided diagnosis, image augmentation, lung segmentation, medical imaging, seam carving

Procedia PDF Downloads 223
3024 Integrating Efficient Anammox with Enhanced Biological Phosphorus Removal Process Through Flocs Management for Sustainable Ultra-deep Nutrients Removal from Municipal Wastewater

Authors: Qiongpeng Dan, Xiyao Li, Qiong Zhang, Yongzhen Peng

Abstract:

The removal of nutrients from wastewater is of great significance for global wastewater recycling and sustainable reuse. Traditional nitrogen and phosphorus removal processes are very dependent on the input of aeration and carbon sources, which makes it difficult to meet the low-carbon goals of energy saving and emission reduction. This study reports a proof-of-concept demonstration of integrating anammox and enhanced biological phosphorus removal (EBPR) by flocs management in a single-stage hybrid bioreactor (biofilms and flocs) for simultaneous nitrogen and phosphorus removal (SNPR). Excellent removal efficiencies of nitrogen (97.7±1.3%) and phosphorus (97.4±0.7%) were obtained in the treatment of low C/N ratio (3.0±0.5) municipal wastewater. Interestingly, with the loss of flocs, anammox bacteria (Ca. Brocadia) were highly enriched in the biofilms, with relative and absolute abundances reaching up to 12.5% and 8.3×10^10 copies/g dry sludge, respectively. The anammox contribution to nitrogen removal also rose from 32.6±9.8% to 53.4±4.2%. Endogenous denitrification by flocs was proven to be the main contributor to both nitrite and nitrate reduction, and floc loss significantly promoted nitrite flow towards anammox, facilitating AnAOB enrichment. Moreover, controlling the flocs' solids retention time at around 8 days could maintain a low poly-phosphorus level of 0.02±0.001 mg P/mg VSS in the flocs, effectively addressing the additional phosphorus removal burden imposed by the enrichment of phosphorus-accumulating organisms in biofilms. This study provides an update on developing a simple and feasible strategy for integrating anammox and EBPR for SNPR in mainstream municipal wastewater.

Keywords: anammox process, enhanced biological phosphorus removal, municipal wastewater, sustainable nutrients removal

Procedia PDF Downloads 51
3023 Numerical Studies for Standard Bi-Conjugate Gradient Stabilized Method and the Parallel Variants for Solving Linear Equations

Authors: Kuniyoshi Abe

Abstract:

The bi-conjugate gradient (Bi-CG) method is a well-known method for solving linear equations Ax = b for x, where A is a given n-by-n matrix and b is a given n-vector. Typically, the dimension of the linear equation is high and the matrix is sparse. A number of hybrid Bi-CG methods, such as conjugate gradient squared (CGS), Bi-CG stabilized (Bi-CGSTAB), BiCGStab2, and BiCGstab(l), have been developed to improve the convergence of Bi-CG. Bi-CGSTAB has been most often used for efficiently solving the linear equation, but convergence behavior with a long stagnation phase has been observed. In such cases, it is important to have Bi-CG coefficients that are as accurate as possible, and a stabilization strategy, which stabilizes the computation of the Bi-CG coefficients, has been proposed. It may avoid stagnation and lead to faster computation. Motivated by the large number of processors in present petascale high-performance computing hardware, the scalability of Krylov subspace methods on parallel computers has recently become increasingly prominent. The main bottleneck for efficient parallelization is the inner products, which require a global reduction. The resulting global synchronization phases cause communication overhead on parallel computers. Parallel variants of Krylov subspace methods which reduce the number of global communication phases and hide the communication latency have been proposed. However, the numerical stability, and specifically the convergence speed, of the parallel variants of Bi-CGSTAB may become worse than that of the standard Bi-CGSTAB. In this paper, therefore, we compare the convergence speed of the standard Bi-CGSTAB and the parallel variants by numerical experiments and show that the convergence speed of the standard Bi-CGSTAB is faster than that of the parallel variants. Moreover, we propose a stabilization strategy for the parallel variants.
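
For reference, a minimal sketch of the standard (unpreconditioned) Bi-CGSTAB iteration taken here as the baseline is shown below; the inner products marked in the comments are the global reductions that the parallel variants seek to reduce or hide, and the stabilization strategy for the Bi-CG coefficients is not reproduced.

```python
import numpy as np

def bicgstab(A, b, x0=None, tol=1e-10, max_iter=1000):
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    r_hat = r.copy()                      # shadow residual, kept fixed
    rho_old = alpha = omega = 1.0
    v = p = np.zeros(n)
    for k in range(max_iter):
        rho = r_hat @ r                   # inner product -> global reduction
        beta = (rho / rho_old) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / (r_hat @ v)         # inner product -> global reduction
        s = r - alpha * v
        t = A @ s
        omega = (t @ s) / (t @ t)         # stabilization parameter (two more reductions)
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        rho_old = rho
    return x, max_iter

# Example on a small tridiagonal system
A = np.diag([4.0] * 50) + np.diag([-1.0] * 49, 1) + np.diag([-1.0] * 49, -1)
b = np.ones(50)
x, iters = bicgstab(A, b)
print(iters, np.linalg.norm(A @ x - b))
```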

Keywords: bi-conjugate gradient stabilized method, convergence speed, Krylov subspace methods, linear equations, parallel variant

Procedia PDF Downloads 164
3022 Bench-scale Evaluation of Alternative-to-Chlorination Disinfection Technologies for the Treatment of the Maltese Tap-water

Authors: Georgios Psakis, Imren Rahbay, David Spiteri, Jeanice Mallia, Martin Polidano, Vasilis P. Valdramidis

Abstract:

The absence of surface water and progressive groundwater quality deterioration have rapidly exacerbated water scarcity, making the Mediterranean island of Malta one of the most water-stressed countries in Europe. Water scarcity challenges have been addressed by reverse osmosis desalination of seawater, 60% of which is blended with groundwater to form the current potable tap-water supply. Chlorination has been the adopted method of water disinfection prior to distribution. However, with the Maltese consumer chlorine sensory threshold being as low as 0.34 ppm, the presence of chlorine residuals and chlorination by-products in the distributed tap-water impacts negatively on its organoleptic attributes, deterring the public from consuming it. As part of the PURILMA initiative, and with the aim of minimizing the impact of chlorine residuals on the quality of the distributed water, UV-C and hydrosonication have been identified as cost- and energy-effective decontamination alternatives, paving the way for more sustainable water management. Bench-scale assessment of the decontamination efficiency of UV-C (254 nm) revealed 4.7-Log10 inactivation for both Escherichia coli and Enterococcus faecalis at 36 mJ/cm2. At fluence rates above 200 mJ/cm2, there was a systematic 2-Log10 difference between the reductions exhibited by E. coli and E. faecalis, suggesting that UV-C disinfection was more effective against E. coli. Hybrid treatment schemes involving hydrosonication (at 9.5 and 12.5 dm3/min flow rates with 1-5 MPa maximum pressure) and UV-C showed at least 1.1-fold greater bactericidal activity relative to the individual UV-C treatments. The observed inactivation appeared to stem from additive effects of the combined treatments, with hydrosonication-generated reactive oxygen species enhancing the biocidal activity of UV-C.
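
The log-reduction figures quoted above follow the standard definition, reduction = log10(N0/N), where N0 and N are the viable counts before and after treatment; the short sketch below illustrates the arithmetic with hypothetical counts, not data from the study.

```python
import math

def log10_reduction(n0: float, n: float) -> float:
    """Log10 reduction from initial count n0 to post-treatment count n (e.g., CFU/mL)."""
    return math.log10(n0 / n)

n0 = 1.0e7          # hypothetical initial count, CFU/mL
n = n0 / 10 ** 4.7  # count remaining after a 4.7-log treatment
print(round(log10_reduction(n0, n), 2))  # -> 4.7
```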

Keywords: disinfection, groundwater, hydrosonication, UV-C

Procedia PDF Downloads 172
3021 An Organocatalytic Construction of Vicinal Tetrasubstituted Stereocenters via Mannich Reaction of 2-Substituted Benzofuran-3-One with Isatin-Derived Ketimine

Authors: Koilpitchai Sivamuthuraman, Venkitasamy Kesavan

Abstract:

The 3-substituted 3-amino-2-oxindole skeleton bearing adjacent tetrasubstituted stereogenic centers is of great importance because these heterocyclic motifs possess a wide range of pharmacological activities. The catalytic asymmetric construction of multi-functionalised heterocyclic compounds with adjacent tetrasubstituted stereocenters is one of the most difficult tasks in organic synthesis. To date, the most straightforward methodologies have been developed for the synthesis of chiral 3-substituted 3-amino-2-oxindoles through the addition of carbon nucleophiles to isatin-derived ketimines. However, only a few successful examples have been described for the assembly of vicinal tetrasubstituted stereocenters using isatin-derived ketimines as electrophiles. On the other hand, 2,2-disubstituted benzofuran-3(2H)-ones and related frameworks, characterized by a quaternary stereogenic center at the C2 position, are present in quite a number of natural products and bioactive molecules. Despite the intensive efforts devoted to the construction of 2,2-disubstituted benzofuran-3(2H)-ones, only a few asymmetric methods, such as organocatalytic Michael addition and enantioselective halogenation, have been reported so far. Owing to the biological importance of oxindoles and benzofuran-3-ones, the synthesis of a hybrid molecule containing vicinal tetrasubstituted stereocenters through asymmetric organocatalysis is proposed here. The addition of a 2-substituted benzofuran-3-one (1a) to an isatin-derived ketimine (2a) using a bifunctional organocatalyst (catalyst IV or V) leads to chiral heterocyclic compounds containing both the 3-amino-2-oxindole and benzofuran-3-one motifs and bearing vicinal quaternary stereocenters, in good yields and with excellent enantioselectivity. The present study extends the scope of the catalytic asymmetric Mannich reaction with isatin-derived ketimines, providing a new class of amino oxindole derivatives bearing a benzofuran-3-one.

Keywords: asymmetric synthesis, benzofuran-3-one, isatin-derived ketimines, quaternary stereocenters

Procedia PDF Downloads 191
3020 Improving the Competency of Undergraduate Nursing Students in Addressing a Timely Public Health Issue

Authors: Tsu-Yin Wu, Jenni Hoffman, Lydia McMurrows, Sarah Lally

Abstract:

Recent events such as the Flint Water Crisis and elevated lead levels in Detroit public school water have highlighted a specific public health disparity and shown the need for better education of healthcare providers on lead exposure. Identifying children and pregnant women at high risk for lead poisoning and ensuring that lead testing is completed is critical. The purpose of this study is to explore the impact of an educational intervention on knowledge and confidence levels among nursing students enrolled in the prelicensure Bachelor of Science in Nursing (BSN) and Registered Nurse to BSN (R2B) programs. The study used both quantitative and qualitative research methods to assess the impact of multi-modal pedagogy on knowledge and confidence in lead screening and prevention among prelicensure and R2B nursing students. The students received lead poisoning and prevention content in addition to completing an e-learning module developed by the Pediatric Environmental Health Specialty Units. A total of 115 students completed the pre- and post-test instrument, which consisted of demographic, lead knowledge, and confidence items. Although total knowledge, the three dimensions of lead poisoning, and confidence increased from pre- to post-test for both groups, the difference in the increase between prelicensure and R2B students was not statistically significant. Thematic analysis of the qualitative data showed five themes from participants' learning experiences: lead exposure, signs and symptoms of lead poisoning, screening and diagnosis, prevention, and policy and statewide issues. The study is limited by a small sample and by participants recalling some correct answers from the pretest and thus scoring higher on the post-test. The results contribute to the scarce literature examining a critical public health concern regarding lead exposure and prevention education for nursing students. Incorporating such content into the nursing curriculum is essential to ensuring that such public health disparities are mitigated.

Keywords: lead poisoning, emerging public health issue, community health, nursing education

Procedia PDF Downloads 197
3019 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status

Authors: Rosa Figueroa, Christopher Flores

Abstract:

Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data that contains documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative to feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag of Words approach. The score selected to represent the occurrence of the tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second one used unigrams (single words), the third one used bigrams (two-word sequences), and the last experiment used a combination of unigrams and bigrams to extract features for classification. To test the effectiveness of the extracted feature set for the four experiments, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram-based feature extraction. These results were confirmed by using the remaining 80% of the dataset, where SW performed the best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by the bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally the unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
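
A minimal sketch of the Smith-Waterman local alignment score, applied here to word tokens for illustration, is shown below; the scoring parameters (match, mismatch, gap) and the example documents are illustrative and are not those used in the study.

```python
import numpy as np

def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local alignment score between sequences a and b."""
    H = np.zeros((len(a) + 1, len(b) + 1))
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores are floored at zero
            H[i, j] = max(0, diag, H[i - 1, j] + gap, H[i, j - 1] + gap)
    return H.max()

# Works on word tokens as well as characters
doc1 = "patient has morbid obesity and diabetes".split()
doc2 = "history of morbid obesity without diabetes".split()
print(smith_waterman_score(doc1, doc2))
```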

Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm

Procedia PDF Downloads 297
3018 Data and Model-based Metamodels for Prediction of Performance of Extended Hollo-Bolt Connections

Authors: M. Cabrera, W. Tizani, J. Ninic, F. Wang

Abstract:

Open-section beam to concrete-filled tubular column structures have been increasingly utilized in construction over the past few decades due to their enhanced structural performance, as well as their economic and architectural advantages. However, the use of this configuration in construction is limited due to the difficulties in connecting the structural members, as there is no access to the inner part of the tube to install standard bolts. Blind-bolted systems are a relatively new approach to overcoming this limitation, as they only require access to one side of the tubular section to tighten the bolt. The performance of these connections in concrete-filled steel tubular sections remains uncharacterized due to the complex interactions between the concrete, the bolt, and the steel section. In recent years, research in structural performance has moved to a more sophisticated and efficient approach consisting of machine learning algorithms used to generate metamodels. This method reduces the need to develop complex and computationally expensive finite element models, optimizing the search for desirable design variables. Metamodels generated by a data fusion approach use numerical and experimental results, combining multiple models to capture the dependency between the simulation design variables and the connection performance, learning the relations between different design parameters and predicting a given output. Fully characterizing this connection will transform high-rise and multistorey construction by means of the introduction of design guidance for moment-resisting blind-bolted connections, which is currently unavailable. This paper presents a review of the steps taken to develop metamodels generated by means of artificial neural network algorithms which predict the connection stress and stiffness based on the design parameters when using Extended Hollo-Bolt blind bolts. It also provides consideration of the failure modes and mechanisms that contribute to the deformability, as well as the feasibility of achieving blind-bolted rigid connections when using the blind fastener.
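
As a hedged illustration of what such a metamodel can look like, the sketch below trains a small neural network regressor on synthetic data mapping hypothetical connection design parameters to a stiffness-like output; the parameter names, ranges, and target function are assumptions and do not come from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
# Hypothetical design parameters: bolt diameter, tube thickness,
# concrete strength, bolt gauge (units and ranges are illustrative)
X = rng.uniform([16, 6, 30, 80], [24, 12, 60, 160], size=(200, 4))
# Synthetic stiffness-like response with noise, standing in for FE/experimental data
y = 50 * X[:, 0] + 30 * X[:, 1] + 5 * X[:, 2] - 2 * X[:, 3] + rng.normal(0, 20, 200)

metamodel = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
)
metamodel.fit(X, y)
print(metamodel.predict([[20, 10, 40, 100]]))  # prediction for a new design point
```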

Keywords: blind-bolted connections, concrete-filled tubular structures, finite element analysis, metamodeling

Procedia PDF Downloads 158
3017 Recognizing Human Actions by Multi-Layer Growing Grid Architecture

Authors: Z. Gharaee

Abstract:

Recognizing actions performed by others is important in our daily lives, since it is necessary for communicating with others in a proper way. We perceive an action by observing the kinematics of the motions involved in the performance, and we use our experience and concepts to make a correct recognition of the actions. Although building action concepts is a life-long process, we are very efficient in applying our learned concepts when analyzing motions and recognizing actions. Experiments on subjects observing actions performed by an actor show that an action is recognized after only about two hundred milliseconds of observation. In this study, a hierarchical action recognition architecture is proposed using growing grid layers. The first-layer growing grid receives the pre-processed data of consecutive 3D postures of joint positions and applies some heuristics during the growth phase to allocate areas of the map by inserting new neurons. As a result of training the first-layer growing grid, action pattern vectors are generated by connecting the elicited activations of the learned map. The ordered vector representation layer receives the action pattern vectors and creates time-invariant vectors of key elicited activations. The time-invariant vectors are sent to the second-layer growing grid for categorization. This grid creates the clusters representing the actions. Finally, a one-layer neural network trained by the delta rule labels the action categories in the last layer. System performance has been evaluated in an experiment with the publicly available MSR-Action3D dataset. The actions are performed using different parts of the human body: Hand Clap, Two Hands Wave, Side Boxing, Bend, Forward Kick, Side Kick, Jogging, Tennis Serve, Golf Swing, Pick Up and Throw. The growing grid architecture was trained using several random selections of the generalization test data fed to the system, over on average 100 epochs for each training of the first-layer growing grid and around 75 epochs for each training of the second-layer growing grid. The average generalization test accuracy is 92.6%. A comparative analysis between the performance of the growing grid architecture and a self-organizing map (SOM) architecture, in terms of accuracy and learning speed, shows that the growing grid architecture is superior to the SOM architecture in the action recognition task. The SOM architecture learns the same dataset of actions in around 150 epochs for each training of the first-layer SOM, while it takes 1200 epochs for each training of the second-layer SOM, and it achieves an average recognition accuracy of 90% on the generalization test data. In summary, using the growing grid network preserves the fundamental features of SOMs, such as the topographic organization of neurons, lateral interactions, the ability to learn without supervision, and the representation of a high-dimensional input space in lower-dimensional maps. The architecture also benefits from an automatic size-setting mechanism, resulting in higher flexibility and robustness. Moreover, by utilizing growing grids, the system automatically obtains prior knowledge of the input space during the growth phase and applies this information to expand the map by inserting new neurons wherever there is high representational demand.
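
The core self-organizing update shared by SOMs and growing grids can be sketched as follows; the grid size, input dimensionality, learning rate, and neighbourhood width are illustrative, and the growth phase (inserting new neurons where representational demand is high) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 10, 10, 60            # e.g., 20 joints x 3D positions per posture
weights = rng.normal(size=(grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

def train_step(x, weights, lr=0.1, sigma=2.0):
    dists = np.linalg.norm(weights - x, axis=-1)           # distance of every unit to the input
    bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best-matching unit
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    h = np.exp(-grid_dist**2 / (2 * sigma**2))             # neighbourhood function on the grid
    weights += lr * h[..., None] * (x - weights)           # pull BMU and neighbours towards input
    return bmu

for _ in range(1000):                                      # one pass over synthetic posture vectors
    train_step(rng.normal(size=dim), weights)
```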

Keywords: action recognition, growing grid, hierarchical architecture, neural networks, system performance

Procedia PDF Downloads 157
3016 Crack Growth Life Prediction of a Fighter Aircraft Wing Splice Joint Under Spectrum Loading Using Random Forest Regression and Artificial Neural Networks with Hyperparameter Optimization

Authors: Zafer Yüce, Paşa Yayla, Alev Taşkın

Abstract:

Numerous analytical methods exist to estimate the crack growth life of a component, and soft computing methods are increasingly used for fatigue life prediction. Their ability to model complex relationships and to handle large amounts of data motivates researchers and industry professionals to employ them for challenging problems. This study focuses on soft computing methods, in particular random forest regressors and artificial neural networks combined with hyperparameter optimization algorithms such as grid search and random grid search, to estimate the crack growth life of an aircraft wing splice joint under variable amplitude loading. The TensorFlow and Scikit-learn libraries of Python are used to build the machine learning models. The material considered in this work is 7050-T7451 aluminum, which is commonly preferred for structural elements in the aerospace industry; regarding the crack type, a corner crack is used. A finite element model of the joint is built to calculate fastener loads and stresses on the structure. After the finite element results are validated against analytical calculations, they are fed into the AFGROW software to calculate analytical crack growth lives. Based on the Fighter Aircraft Loading Standard for Fatigue (FALSTAFF), 90 unique fatigue loading spectra are developed for various load levels, and these spectra are then used as inputs to the artificial neural network and random forest regression models for predicting crack growth life. Finally, the crack growth life predictions of the machine learning models are compared with the analytical calculations. According to the findings, a good correlation is observed between the analytical and predicted crack growth lives.
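A minimal sketch of the kind of model and hyperparameter search described above, using Scikit-learn's random forest regressor with grid search and randomized search. The feature matrix and target values below are synthetic placeholders, not the study's spectrum data, and the parameter ranges are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV, train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.random((90, 12))                      # e.g. 90 spectra described by 12 load-level features
y = 1e5 * (1.0 + X @ rng.random(12))          # placeholder crack growth lives (cycles)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

param_grid = {
    "n_estimators": [100, 300, 500],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 2, 4],
}

# exhaustive grid search over the parameter grid
grid = GridSearchCV(RandomForestRegressor(random_state=0), param_grid, cv=5)
grid.fit(X_train, y_train)

# randomized search over the same space (fewer fits, sampled combinations)
rand = RandomizedSearchCV(RandomForestRegressor(random_state=0), param_grid,
                          n_iter=10, cv=5, random_state=0)
rand.fit(X_train, y_train)

print("grid search best params:", grid.best_params_)
print("test R^2:", r2_score(y_test, grid.best_estimator_.predict(X_test)))
```

In practice the target values would be the AFGROW-computed crack growth lives, and a TensorFlow neural network would be tuned over an analogous parameter space.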

Keywords: aircraft, fatigue, joint, life, optimization, prediction

Procedia PDF Downloads 175
3015 An Investigation into the Views of Gifted Children on the Effects of Computer and Information Technologies on Their Lives and Education

Authors: Ahmet Kurnaz, Eyup Yurt, Ümit Çiftci

Abstract:

In this study, an attempt was made to reveal the place and effects of information technologies in the lives and education of gifted children, based on the views of the gifted children themselves. To this end, the effects of information technologies on their general skills, technology use, academic and social skills, and cooperative and personal skills were investigated. These skills were examined in relation to whether the gifted children had their own computers and an internet connection at home, how often they used the internet, the average time they spent at the computer, how often they played computer games, and their use of social media. The study was conducted using the screening model with a quantitative approach. The sample consisted of 129 gifted students attending grades 5-12 in 12 provinces in different regions of Turkey; 64 of the participants were female and 65 were male. The research data were collected using the Use of Computer and Information Technologies (UCIT) questionnaire, which was developed by the researchers and given its final form after expert review. As a result of the study, it was found that the use of computer and information technologies improved the foreign language speaking skills of gifted students, enabled them to get to know and understand different cultures, and supported them while they study. At the end of the study, the following results were obtained: gifted students have positive views about using computer and communication technologies. Their views on UCIT differ according to internet use, but do not differ according to computer ownership, city of residence, grade level, having internet at home, daily and weekly internet usage durations, playing computer and internet games, or having Facebook and Twitter accounts. UCIT contributes to the development of gifted students' vocabulary, allows them to know and understand different cultures, and develops their foreign language speaking skills; gifted students do not give up the computer when doing their homework and improve their reading, listening, comprehension, and writing skills in a foreign language. Gifted children want a transition to the use of tablets in education. They think UCIT facilitates doing their homework and contributes to learning more information in a shorter time, and they would like to use computer-assisted instruction programs in their courses. They think they will be more successful in the future if their computer skills are good. However, gifted students prefer a teacher to instruction delivered by computers, and they stated that learning could be carried out from home without going to school.

Keywords: gifted, using computer, communication technology, information technologies

Procedia PDF Downloads 390
3014 Optimization of Bills Assignment to Different Skill-Levels of Data Entry Operators in a Business Process Outsourcing Industry

Authors: M. S. Maglasang, S. O. Palacio, L. P. Ogdoc

Abstract:

Business Process Outsourcing has been one of the fastest growing and emerging industries in the Philippines today. Unlike most contact service centers, more popularly known as "call centers", the BPO industry's primary outsourced service here is performing audits of the global clients' logistics. As a service industry, manpower is considered the most important yet the most expensive resource in the company. Because of this, there is a need to maximize human resources so that people are utilized effectively and efficiently. The main purpose of the study is to optimize the current manpower resources through the effective distribution and assignment of different types of bills to the different skill levels of data entry operators. The assignment model parameters include the average observed time matrix gathered through a time study, which incorporates the learning curve concept. Subsequently, a simulation model was built to replicate the arrival rate of demand, which includes the different batches and types of bills per day. Next, a mathematical linear programming model was formulated; its objective is to minimize the direct labor cost per bill by allocating the different types of bills to the different skill levels of operators (a sketch of such a model is given below). Finally, a hypothesis test was done to validate the model, comparing the actual and simulated results. The analysis of results revealed that there is low utilization of effective capacity because of the failure to determine the product mix, skill mix, and simulated demand as model parameters. Moreover, failure to consider the effects of the learning curve leads to overestimation of labor needs. From the current 107 operators, the proposed model gives a result of 79 operators. This results in an increase in the utilization of effective capacity to 14.94%. It is recommended that the excess 28 operators be reallocated to other areas of the department. Finally, a manpower capacity planning model is also recommended to support management's decisions on what to do when the current capacity reaches its limit under the expected increasing demand.
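A minimal sketch of the kind of linear program described above: bill types are assigned to operator skill levels so that direct labor cost is minimized, subject to demand and capacity constraints. The costs, processing times, capacities, and demands below are made-up illustrative numbers, and the use of SciPy's linprog is an assumption, not the solver used in the study.

```python
import numpy as np
from scipy.optimize import linprog

skill_levels = 3            # e.g. junior / intermediate / senior operators
bill_types = 4

# cost (currency units) of one bill of type j processed by skill level i
cost = np.array([[1.2, 1.5, 2.0, 2.4],
                 [1.0, 1.2, 1.6, 1.9],
                 [0.9, 1.0, 1.3, 1.5]])

# minutes per bill (from an observed-time matrix) and capacity in minutes per skill level
time_per_bill = np.array([[6, 8, 10, 12],
                          [5, 7, 9, 10],
                          [4, 6, 8, 9]])
capacity = np.array([4800, 4800, 2400])
demand = np.array([300, 250, 200, 150])     # bills of each type arriving per day

n = skill_levels * bill_types               # decision variables x[i, j], flattened row-major
c = cost.ravel()

# capacity constraints: sum_j time[i, j] * x[i, j] <= capacity[i]
A_ub = np.zeros((skill_levels, n))
for i in range(skill_levels):
    A_ub[i, i * bill_types:(i + 1) * bill_types] = time_per_bill[i]
b_ub = capacity

# demand constraints: sum_i x[i, j] == demand[j]
A_eq = np.zeros((bill_types, n))
for j in range(bill_types):
    A_eq[j, j::bill_types] = 1.0
b_eq = demand

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("total daily labor cost:", round(res.fun, 2))
print(res.x.reshape(skill_levels, bill_types).round(1))
```

The actual study additionally ties the observed-time matrix to the learning curve and feeds simulated daily demand into the model; those refinements are omitted here.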

Keywords: optimization modelling, linear programming, simulation, time and motion study, capacity planning

Procedia PDF Downloads 518
3013 A Trail of Decoding a Classical Riddle: An Analysis of Russian Military Strategy

Authors: Karin Megheșan, Alexandra Popescu, Teodora Dobre

Abstract:

In the past few years, the Russian Federation has become a central point on the security agenda of the most important international actors, due to the renewed aggressiveness of its foreign policy. Vladimir Putin, the current president of the Russian Federation, has proven that Russia can and is willing to become the powerful actor that it used to be during the Cold War. Russia's new behavior on the international scene shows that it has not only expansionist intentions (expansionist not only in terms of territory but also of ideology), but also the necessary resources to build an empire that may have the power to counterbalance the influence of the United States and stop the expansion of the North Atlantic Treaty Organization, within Russia's multipolar understanding of the international order. In order to do this, it is necessary to follow a well-established plan or policy. Thus, the aim of the paper is to discuss how the foreign policy of the Russian Federation has evolved under the influence of the military and security strategies of the Russian nation, and to briefly examine some of the factors that shape Russian foreign policy and behavior, in order to reshape a Russian (Soviet) profile so far considered antiquated. Our approach is an argument in favor of analyzing recent developments as embedded in the course of history. In this context, the paper includes analytical reflections on Russian foreign policy and the latest strategic documents (security strategy and military doctrine) adopted by the Putin administration, with the purpose of highlighting the main direction of action followed by all these documents together. The paper concludes that the military component is to be found in all these strategic documents, as well as at the core of the Russian national interest, which proves that Russia is still an adherent of the traditional realist paradigm, reshaped in a Russian theory of the multipolar world.

Keywords: hybrid warfare, military component, military doctrine, Russian foreign policy, security strategy

Procedia PDF Downloads 303
3012 A Detailed Computational Investigation into Copper Catalyzed Sonogashira Coupling Reaction

Authors: C. Rajalakshmi, Vibin Ipe Thomas

Abstract:

Sonogashira coupling reactions are widely employed in the synthesis of molecules of biological and pharmaceutical importance. Copper-catalyzed Sonogashira coupling reactions are gaining importance owing to the low cost and lower toxicity of copper compared with palladium catalysts. In the present work, a detailed computational study has been carried out on the Sonogashira coupling reaction between aryl halides and terminal alkynes catalyzed by a Copper(I) species with trans-1,2-diaminocyclohexane as the ligand. All calculations are performed at the Density Functional Theory (DFT) level, using the hybrid Becke3LYP functional. Cu and I atoms are described using an effective core potential (LANL2DZ) for the inner electrons and its associated double-ζ basis set for the outer electrons; for all other atoms, the 6-311+G* basis set is used. We have identified that the active catalyst species is a neutral, three-coordinate trans-1,2-diaminocyclohexane-ligated Cu(I) alkyne complex and found that oxidative addition and reductive elimination occur in a single step proceeding through one transition state. This is owing to the ease of reductive elimination involving coupling of Csp2-Csp carbon atoms and the lower stability of the Cu(III) intermediate, and it shows that the mechanism of copper-catalyzed Sonogashira coupling reactions is quite different from that of palladium-catalyzed ones. To gain further insights into the mechanism, substrates containing various functional groups are considered in our study to assess their effect on the feasibility of the reaction. We have also explored the effect of the ligand on the catalytic cycle of the coupling reaction. The theoretical results obtained are in good agreement with the experimental observations, which shows the relevance of a combined theoretical and experimental approach for rationally improving cross-coupling reactions.
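A minimal sketch of how a level of theory like the one described above (B3LYP with the LANL2DZ effective core potential and basis on Cu and I, and 6-311+G* on lighter atoms) can be set up. The system here is a toy CuI diatomic, not the catalyst complex studied, and the use of PySCF is an assumption made for illustration, not the software of the study.

```python
from pyscf import gto, dft

mol = gto.M(
    atom="Cu 0 0 0; I 0 0 2.34",            # illustrative bond length in Angstrom
    basis={"Cu": "lanl2dz", "I": "lanl2dz",
           "default": "6-311+g*"},           # lighter atoms (none in this toy case)
    ecp={"Cu": "lanl2dz", "I": "lanl2dz"},   # ECP replacing the inner electrons of Cu and I
    charge=0,
    spin=0,                                  # closed-shell singlet
)

mf = dft.RKS(mol)
mf.xc = "b3lyp"                              # hybrid Becke3LYP functional
energy = mf.kernel()                         # single-point SCF energy
print("B3LYP total energy (Hartree):", energy)
```

A full mechanistic study would add geometry optimizations, transition-state searches, and frequency calculations on the actual ligated Cu(I) alkyne complex, which are beyond this sketch.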

Keywords: copper catalysed, density functional theory, reaction mechanism, Sonogashira coupling

Procedia PDF Downloads 116
3011 Site-Specific Delivery of Hybrid Upconversion Nanoparticles for Photo-Activated Multimodal Therapies of Glioblastoma

Authors: Yuan-Chung Tsai, Masao Kamimura, Kohei Soga, Hsin-Cheng Chiu

Abstract:

In order to enhance the photodynamic/photothermal therapeutic efficacy against glioblastoma, functionalized upconversion nanoparticles capable of converting deep-tissue-penetrating near-infrared light into visible wavelengths for activating photochemical reactions were developed. The drug-loaded nanoparticles (NPs) were obtained from the self-assembly, in the aqueous phase, of oleic acid-coated upconversion nanoparticles together with maleimide-conjugated poly(ethylene glycol)-cholesterol (Mal-PEG-Chol) as the NP stabilizer and the hydrophobic photosensitizers IR-780 (for photothermal therapy, PTT) and mTHPC (for photodynamic therapy, PDT). Both IR-780 and mTHPC were loaded into the hydrophobic domains within the NPs via hydrophobic association. The peptide targeting ligand angiopep-2 was further conjugated to the maleimide groups at the end of the PEG adducts on the NP surfaces, enabling affinity coupling with the low-density lipoprotein receptor-related protein-1 of tumor endothelial cells and malignant astrocytes. The drug-loaded NPs, ca. 80 nm in diameter, exhibit good colloidal stability under physiological conditions. The in vitro data demonstrate the successful targeted delivery of the drug-loaded NPs toward ALTS1C1 cells (murine astrocytoma cells) and the pronounced cytotoxicity elicited by the combined effect of PDT and PTT. The in vivo results show promising orthotopic brain tumor targeting by the drug-loaded NPs and sound efficacy for dual-modality treatment of brain tumors. This work shows great potential for improving the photodynamic/photothermal therapeutic efficacy against brain cancer.

Keywords: drug delivery, orthotopic brain tumor, photodynamic/photothermal therapies, upconversion nanoparticles

Procedia PDF Downloads 195