Search results for: hardy cross networks accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9693

6813 Infrastructure Development – Stages in Development

Authors: Seppo Sirkemaa

Abstract:

Information systems infrastructure is the basis of business systems and processes in the company. It should be a reliable platform for business processes and activities, but it must also be flexible enough to accommodate changing business needs. Developing an infrastructure that is robust, reliable, and flexible is a challenge. Understanding technological capabilities and business needs is a key element in the development of successful information systems infrastructure.

Keywords: development, information technology, networks, technology

Procedia PDF Downloads 112
6812 Vibro-Tactile Equalizer for Musical Energy-Valence Categorization

Authors: Dhanya Nair, Nicholas Mirchandani

Abstract:

Musical haptic systems can enhance a listener's musical experience while providing an alternative platform for the hearing impaired to experience music. Current music tactile technologies focus on representing tactile metronomes to synchronize performers or encoding musical notes into distinguishable (albeit distracting) tactile patterns. There is growing interest in the development of musical haptic systems to augment the auditory experience, although the haptic-music relationship is still not well understood. This paper presents a tactile music interface that provides vibrations to multiple fingertips in synchronicity with auditory music. Like an audio equalizer, different frequency bands are filtered out, and the power in each frequency band is computed and converted to a corresponding vibrational strength. These vibrations are felt on different fingertips, each corresponding to a different frequency band. Songs with music from different spectrums, as classified by their energy and valence, were used to test the effectiveness of the system and to understand the relationship between music and tactile sensations. Three participants were trained on one song categorized as sad (low energy and low valence score) and one song categorized as happy (high energy and high valence score). They were trained both with and without auditory feedback (listening to the song while experiencing the tactile music on their fingertips, and then experiencing the vibrations alone without the music). The participants were then tested on three songs from both categories, without any auditory feedback, and were asked to classify the tactile vibrations they felt into either category. The participants were blinded to the songs being tested and were not given any feedback on the accuracy of their classifications. These participants were able to classify the music with 100% accuracy. Although the songs tested were at two opposite ends of the spectrum (sad/happy), the preliminary results show the potential of utilizing a vibrotactile equalizer, like the one presented, for augmenting the musical experience while furthering the current understanding of the music-tactile relationship.
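The band-splitting scheme described above can be illustrated with a short signal-processing sketch. This is not the authors' implementation; it is a minimal illustration, with the band edges and the linear mapping from band power to vibration strength assumed for demonstration.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def band_powers_to_vibrations(audio, fs, bands, max_drive=1.0):
    """Split audio into frequency bands and map each band's RMS power
    to a vibration drive level in [0, max_drive], one per fingertip."""
    drives = []
    for low, high in bands:
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, audio)
        drives.append(np.sqrt(np.mean(band ** 2)))  # RMS power of the band
    drives = np.asarray(drives)
    peak = drives.max()
    return max_drive * drives / peak if peak > 0 else drives

# Example: a synthetic two-tone signal analysed over four fingertip bands.
fs = 44100
t = np.arange(0, 1.0, 1 / fs)
audio = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 1760 * t)
bands = [(60, 250), (250, 1000), (1000, 4000), (4000, 12000)]  # assumed edges
print(band_powers_to_vibrations(audio, fs, bands))
```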

Keywords: haptic music relationship, tactile equalizer, tactile music, vibrations and mood

Procedia PDF Downloads 169
6811 Nowcasting Indonesian Economy

Authors: Ferry Kurniawan

Abstract:

In this paper, we nowcast quarterly output growth in Indonesia by exploiting higher frequency data (monthly indicators) using a mixed-frequency factor model that combines quarterly and monthly data. Nowcasting quarterly GDP is particularly relevant for the central bank of Indonesia, which sets the policy rate at its monthly Board of Governors Meeting; an important step in that process is the assessment of the current state of the economy. Thus, an accurate and up-to-date quarterly GDP nowcast, refreshed every time new monthly information becomes available, would clearly be of interest to the central bank of Indonesia, as the initial assessment of the current state of the economy (including the nowcast) feeds into longer-term forecasts. We consider a small-scale mixed-frequency factor model to produce nowcasts. In particular, we specify variables as year-on-year growth rates, so the relation between quarterly and monthly data is expressed in year-on-year growth rates. To assess the performance of the model, we compare the nowcasts with two other approaches: an autoregressive model, which is often difficult to beat when forecasting output growth, and Mixed Data Sampling (MIDAS) regression. Both the mixed-frequency factor model and MIDAS nowcasts are produced by exploiting the same set of monthly indicators, so we can compare the nowcast performance of the two approaches directly. To preview the results, we find that by exploiting monthly indicators with the mixed-frequency factor model and MIDAS regression, we improve nowcast accuracy over a benchmark simple autoregressive model that uses only quarterly data. However, it is not clear whether the MIDAS or the mixed-frequency factor model is better: neither set of nowcasts encompasses the other, suggesting that both are valuable in nowcasting GDP but neither is sufficient. By combining the two individual nowcasts, we find that the nowcast combination not only increases accuracy relative to the individual nowcasts but also lowers the risk of the worst performance of the individual nowcasts.
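The combination step can be sketched as follows. The weighting rule is an assumption for illustration (inverse mean-squared-error weights, a common choice); the abstract does not specify the combination scheme actually used.

```python
import numpy as np

def combine_nowcasts(factor_nc, midas_nc, outturns):
    """Combine two nowcast series with weights inversely proportional to
    each model's historical mean squared error (an assumed scheme)."""
    mse_factor = np.mean((factor_nc - outturns) ** 2)
    mse_midas = np.mean((midas_nc - outturns) ** 2)
    w = (1 / mse_factor) / (1 / mse_factor + 1 / mse_midas)
    return w * factor_nc + (1 - w) * midas_nc, w

# Toy example with hypothetical year-on-year GDP growth rates (%).
outturns = np.array([5.0, 5.2, 4.9, 5.1])
factor_nc = np.array([5.1, 5.0, 4.8, 5.2])   # mixed-frequency factor model
midas_nc = np.array([4.8, 5.4, 5.1, 5.0])    # MIDAS regression
combined, w = combine_nowcasts(factor_nc, midas_nc, outturns)
print(f"weight on factor model: {w:.2f}", combined)
```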

Keywords: nowcasting, mixed-frequency data, factor model, nowcasts combination

Procedia PDF Downloads 327
6810 Effects of Earthquake Induced Debris to Pedestrian and Community Street Network Resilience

Authors: Al-Amin, Huanjun Jiang, Anayat Ali

Abstract:

Reinforced concrete (RC) frames, especially ordinary RC frames, are prone to structural failure and collapse during seismic events, generating large volumes of debris that obstruct adjacent areas, including streets. These blocked areas severely impede post-earthquake resilience. This study uses finite element (FEM) simulation to investigate the amount of debris generated by the seismic collapse of an ordinary reinforced concrete moment frame building and its effects on the adjacent pedestrian and road network. A three-story ordinary reinforced concrete frame building, primarily designed for gravity loads and earthquake resistance, was selected for analysis. Sixteen different ground motions were applied and scaled up until total collapse of the tested building, to evaluate the failure mode under various seismic events. Four collapse directions were identified through the analysis, namely aligned (positive and negative) and skewed (positive and negative), with aligned collapse being more prevalent than skewed cases. The amount and distribution of debris around the collapsed building were assessed to investigate the interaction between collapsed buildings and adjacent street networks. An interaction was established between a building that collapsed in an aligned direction and the adjacent pedestrian walkway and narrow street located in an unplanned old city. The FEM model was validated against an existing shaking table test. The presented results can be used to simulate the interdependency between the debris generated by the collapse of seismically vulnerable buildings and the resilience of street networks. These findings provide insights for better disaster planning and resilient infrastructure development in earthquake-prone regions.

Keywords: building collapse, earthquake-induced debris, ORC moment resisting frame, street network

Procedia PDF Downloads 82
6809 The Relationship between Representational Conflicts, Generalization, and Encoding Requirements in an Instance Memory Network

Authors: Mathew Wakefield, Matthew Mitchell, Lisa Wise, Christopher McCarthy

Abstract:

The properties of memory representations in artificial neural networks have cognitive implications. Distributed representations that encode instances as a pattern of activity across layers of nodes afford memory compression and enforce the selection of a single point in instance space. These encoding schemes also appear to distort the representational space and to trade away the ability to validate that input information is within the bounds of past experience. In contrast, a localist representation, which encodes some meaningful information in individual nodes of a network layer, affords less memory compression while retaining the integrity of the representational space. This allows the validity of an input to be determined. The validity (or familiarity) of the input, together with the capacity of a localist representation for multiple instance selections, affords a memory sampling approach that dynamically balances the bias-variance trade-off. When the input is familiar, bias may be high by referring only to the most similar instances in memory. When the input is less familiar, variance can be increased by referring to more instances that capture a broader range of features. Using this approach in a localist instance memory network, an experiment demonstrates a relationship between representational conflict, generalization performance, and memorization demand. Relatively small sampling ranges produce the best performance on a classic machine learning dataset of visual objects. Combining memory validity with conflict detection produces a reliable confidence judgement that can separate responses with high and low error rates. Confidence can also be used to signal the need for supervisory input. Using this judgement, the need for supervised learning as well as memory encoding can be substantially reduced with only a trivial detriment to classification performance.
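A minimal sketch of the memory-sampling idea follows, assuming cosine similarity and a hand-tuned familiarity threshold (neither is specified in the abstract): when the most similar stored instance is very close, only the nearest instances vote; otherwise the sampling range widens.

```python
import numpy as np

def classify_with_sampling(memory, labels, x, familiar_thresh=0.9,
                           k_narrow=3, k_wide=15):
    """Localist instance memory: narrow the sampling range for familiar
    inputs (high bias), widen it for unfamiliar ones (high variance)."""
    mem = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    q = x / np.linalg.norm(x)
    sims = mem @ q                        # cosine similarity to every instance
    validity = sims.max()                 # familiarity of the input
    k = k_narrow if validity >= familiar_thresh else k_wide
    top = np.argsort(sims)[-k:]           # the k most similar instances vote
    votes = labels[top]
    counts = np.bincount(votes)
    pred = counts.argmax()
    conflict = 1.0 - counts[pred] / k     # disagreement among sampled instances
    confidence = validity * (1.0 - conflict)
    return pred, confidence

# Toy usage with random instances and two classes.
rng = np.random.default_rng(0)
memory = rng.normal(size=(100, 32))
labels = rng.integers(0, 2, size=100)
print(classify_with_sampling(memory, labels, rng.normal(size=32)))
```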

Keywords: artificial neural networks, representation, memory, conflict monitoring, confidence

Procedia PDF Downloads 124
6808 A Cross-Sectional Study on Management of Common Mental Disorders Among Patients Living with HIV/AIDS Attending Antiretroviral Treatment (ART) Clinic in Hoima Regional Referral Hospital Uganda

Authors: Agodo Mugenyi Herbert

Abstract:

Background: A high prevalence of both HIV infection and mental disorders exists in Sub-Saharan Africa; however, there is little integration of care for mental health disorders among HIV-infected individuals. The study aimed at determining the management of common mental disorders among HIV/AIDS clients attending the antiretroviral (ART) clinic in Hoima Regional Referral Hospital. Significance of the study: The information generated by this study should help mental health advocates, the Ministry of Health, and civil society organizations in HIV programming to advocate for enhanced mental health care for PLWHA. The results will be used in policy development and in lobbying for the integration of mental health care into HIV/AIDS care. Methods: This study applied a cross-sectional design. It involved data collection from clients with HIV/AIDS attending the ART clinic in Hoima Regional Referral Hospital at one specific point in time, and aimed at providing data on the entire population under study. Data were collected at the ART clinic of Hoima Regional Referral Hospital and analyzed using SPSS version 24. Results: 66 HIV/AIDS clients and 10 health workers at the ART clinic fully completed the study. The overall prevalence of at least one form of mental disorder was 83%. The majority of health care practitioners did not use pharmacological, psychological, or social interventions to manage such disorders. Conclusion: These results suggest that a significant proportion of HIV-infected patients experience psychological difficulty for which they do not receive treatment. Recommendations: Current care practices for patients with HIV/AIDS should be broadened to include treatment services that identify and manage common mental disorders.

Keywords: common mental disorders, mental health, mental illness, and severe mental illness

Procedia PDF Downloads 68
6807 A Numerical Model for Simulation of Blood Flow in Vascular Networks

Authors: Houman Tamaddon, Mehrdad Behnia, Masud Behnia

Abstract:

An accurate study of blood flow requires an accurate vascular pattern and the geometrical properties of the organ of interest. Due to the complexity of vascular networks and poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for the reconstruction of a full vascular tree from available morphometric data. Our method consists of implementing morphometric data on those parts of the vascular tree that are smaller than the resolution of medical imaging methods. This technique reconstructs the entire arterial tree down to the capillaries. Vessels greater than 2 mm are obtained from direct volume and surface analysis using contrast-enhanced computed tomography (CT). Vessels smaller than 2 mm are reconstructed from available morphometric and distensibility data and rearranged by applying Murray's law. Implementing morphometric data to reconstruct the branching pattern, while applying Murray's law at every vessel bifurcation, leads to an accurate vascular tree reconstruction. The reconstruction algorithm generates the full arterial tree topography down to the first capillary bifurcation. The geometry of each order of the vascular tree is generated separately to minimize construction and simulation time. The node-to-node connectivity, along with the diameter and length of every vessel segment, is established, and order numbers are assigned according to the diameter-defined Strahler system. During the simulation, we used the averaged flow rate for each order to predict the pressure drop; once the pressure drop is predicted, the flow rate is corrected to match the computed pressure drop for each vessel. The final results for three cardiac cycles are presented and compared to clinical data.
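Murray's law, applied at every bifurcation in the reconstruction, states that the cube of the parent diameter equals the sum of the cubes of the daughter diameters. A minimal sketch, with the asymmetry ratio assumed for illustration:

```python
def murray_daughters(parent_d, asymmetry=1.0):
    """Return daughter diameters (d1, d2) satisfying Murray's law,
    parent_d**3 = d1**3 + d2**3, with d1 = asymmetry * d2."""
    d2 = parent_d / (asymmetry ** 3 + 1.0) ** (1.0 / 3.0)
    return asymmetry * d2, d2

# Walk an assumed symmetric tree down from a 2 mm vessel, the imaging
# resolution cut-off mentioned above.
d = 2.0  # mm
for generation in range(5):
    d1, d2 = murray_daughters(d)
    print(f"generation {generation}: parent {d:.3f} mm -> {d1:.3f}, {d2:.3f} mm")
    d = d1
```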

Keywords: blood flow, morphometric data, vascular tree, Strahler ordering system

Procedia PDF Downloads 267
6806 The Growth of E-Commerce and Online Dispute Resolution in Developing Nations: An Analysis

Authors: Robin V. Cupido

Abstract:

Online dispute resolution has been identified in many countries as a viable alternative for resolving conflicts which have arisen in the so-called digital age. This system of dispute resolution is developing alongside the Internet, and as new types of transactions are made possible by our increased connectivity, new ways of resolving disputes must be explored. Developed nations, such as the United States of America and the European Union, have been involved in creating these online dispute resolution mechanisms from the outset, and currently have sophisticated systems in place to deal with conflicts arising in a number of different fields, such as e-commerce, domain name disputes, labour disputes and conflicts arising from family law. Specifically, in the field of e-commerce, the Internet's borderless nature has served to promote cross-border trade and has created a global marketplace. Participation in this marketplace boosts a country's economy, as new markets become available and consumers can transact from anywhere in the world. It would be especially advantageous for developing nations to be part of this global marketplace, as it could stimulate much-needed investment in these nations and encourage international co-operation and trade. However, for these types of transactions to proliferate, an effective system for resolving the inevitable disputes arising from such an increase in e-commerce is needed. Online dispute resolution scholarship and practice are flourishing in developed nations, and it is clear that the gap between developed and developing nations is widening in this regard. The potential for implementing online dispute resolution in developing countries has been discussed, but a number of obstacles have thus far prevented its continued development. This paper aims to evaluate the various political, infrastructural and socio-economic challenges faced in developing nations, and to question how these have impacted the acceptance and development of online dispute resolution, the scholarship and training of online dispute resolution practitioners and, ultimately, developing nations' readiness to participate in cross-border e-commerce.

Keywords: developing countries, feasibility, online dispute resolution, progress

Procedia PDF Downloads 301
6805 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, in the end, the outcome of treatment for every single patient. Therefore, international recommendations strongly advise setting up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are run on a daily, weekly, monthly, or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated phantom CIRS 062QA and a QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software that enables fast and simple evaluation of CT QA parameters using the phantom provided with the simulator. On the other hand, the recommendations contain additional tests, which were done with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated in the study were the following: CT number accuracy, field uniformity, the complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy within +/- 5 HU of the value at commissioning; field uniformity within +/- 10 HU in selected ROIs; the complete CT-to-ED curve for each tube voltage must match the curve obtained at commissioning, with deviations of not more than 5%; spatial and contrast resolution tests must match the results obtained at commissioning, otherwise the machine requires service; the image noise test result must fall within 20% of the baseline value; slice thickness must meet manufacturer specifications; and patient table stability under longitudinal transfer of the loaded table must not show more than 2 mm of vertical deviation. Conclusion: The implemented QA tests gave an overall basic understanding of the CT simulator's functionality and its clinical effectiveness in radiation treatment planning. The legal requirement on the clinic is to set up its own QA programme with minimum testing, but it remains the user's decision whether additional testing, as recommended by international organizations, will be implemented to improve the overall quality of the radiation treatment planning procedure, since the quality of the CT images used for planning influences the delineation of the tumor, the calculation accuracy of the treatment planning system, and, finally, the delivery of radiation treatment to the patient.
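The tolerance table above translates directly into an automated check. A minimal sketch follows, with illustrative measured values; the limits mirror the abstract (±5 HU CT number, ±10 HU uniformity, 5% CT-to-ED deviation, 20% noise, 2 mm table deviation), while the baseline numbers are assumptions.

```python
# Baseline values recorded at commissioning (illustrative numbers).
BASELINE = {"ct_number_water": 0.0, "noise": 5.0}

def qc_checks(measured):
    """Return a pass/fail report for the CT simulator QC tests, using the
    tolerance limits stated in the institutional programme."""
    report = {}
    report["ct_number"] = abs(
        measured["ct_number_water"] - BASELINE["ct_number_water"]) <= 5.0   # HU
    report["uniformity"] = all(abs(roi) <= 10.0
                               for roi in measured["uniformity_rois"])      # HU
    report["ct_to_ed"] = all(abs(dev) <= 0.05
                             for dev in measured["ct_to_ed_rel_dev"])       # 5%
    report["noise"] = (abs(measured["noise"] - BASELINE["noise"])
                       / BASELINE["noise"]) <= 0.20
    report["table"] = measured["table_vertical_dev_mm"] <= 2.0
    return report

print(qc_checks({"ct_number_water": 2.1,
                 "uniformity_rois": [3.0, -4.5, 6.2],
                 "ct_to_ed_rel_dev": [0.01, -0.03],
                 "noise": 5.6,
                 "table_vertical_dev_mm": 1.2}))
```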

Keywords: CT simulator, radiotherapy, quality control, QA programme

Procedia PDF Downloads 526
6804 Neural Reshaping: The Plasticity of Human Brain and Artificial Intelligence in the Learning Process

Authors: Seyed-Ali Sadegh-Zadeh, Mahboobe Bahrami, Sahar Ahmadi, Seyed-Yaser Mousavi, Hamed Atashbar, Amir M. Hajiyavand

Abstract:

This paper presents an investigation into the concept of neural reshaping, which is crucial for achieving strong artificial intelligence through the development of AI algorithms with very high plasticity. By examining the plasticity of both human and artificial neural networks, the study uncovers groundbreaking insights into how these systems adapt to new experiences and situations, ultimately highlighting the potential for creating advanced AI systems that closely mimic human intelligence. The uniqueness of this paper lies in its comprehensive analysis of the neural reshaping process in both human and artificial intelligence systems. This comparative approach enables a deeper understanding of the fundamental principles of neural plasticity, thus shedding light on the limitations and untapped potential of both human and AI learning capabilities. By emphasizing the importance of neural reshaping in the quest for strong AI, the study underscores the need for developing AI algorithms with exceptional adaptability and plasticity. The paper's findings have significant implications for the future of AI research and development. By identifying the core principles of neural reshaping, this research can guide the design of next-generation AI technologies that can enhance human and artificial intelligence alike. These advancements will be instrumental in creating a new era of AI systems with unparalleled capabilities, paving the way for improved decision-making, problem-solving, and overall cognitive performance. In conclusion, this paper makes a substantial contribution by investigating the concept of neural reshaping and its importance for achieving strong AI. Through its in-depth exploration of neural plasticity in both human and artificial neural networks, the study unveils vital insights that can inform the development of innovative AI technologies with high adaptability and potential for enhancing human and AI capabilities alike.

Keywords: neural plasticity, brain adaptation, artificial intelligence, learning, cognitive reshaping

Procedia PDF Downloads 48
6803 Geometry of the Right Ventricular Outflow Tract - Clinical Significance in Electrocardiological Procedures

Authors: Marcin Jakiel, Maria Kurek, Karolina Gutkowska, Sylwia Sanakiewicz, Dominika Stolarczyk, Jakub Batko, Rafał Jakiel, Mateusz K. Hołda

Abstract:

The geometry of the RVOT is extremely complicated. It is an irregular block with an ellipsoidal cross-section whose dimensions decrease toward the pulmonary valve, measuring 33.82 (IQR 30.51-39.36), 28.82 (IQR 26.11-32.22), and 27.95 ± 4.11 mm in width, and 33.41 ± 6.14, 26.99 ± 4.41, and 26.91 ± 4.00 mm in depth, in the basal, middle, and subpulmonary parts, respectively. In a sagittal section view, the RVOT heads upward and slightly backward. Its anterior perimeter has an average length of 41.96 mm and inclines to the transverse plane at an angle of 50.77° (IQR 46.53°-58.70°). In the posterior region, the RVOT is shorter (18.17 mm) and flexes anteriorly. Therefore, the slope of the upper part of the rear wall to the transverse plane is an acute angle (open toward the rear) of 44.58° (IQR 37.30°-51.25°), while in the lower part it is close to a right angle, at 94.30° ± 15.44°. In addition, the thickness of the RVOT wall in the diastolic phase, measured at the posterior perimeter at the base, at mid-length, and subpulmonary, is 3.80 ± 0.88 mm, 3.56 ± 0.73 mm, and 3.56 ± 0.65 mm, respectively. In frontal cross-section, the RVOT rises on the interventricular septum, which makes it possible to distinguish septal and supraseptal parts on its left periphery. The angles (with vertices facing right) of inclination of these parts to the transverse plane are 75.5° (IQR 66.44°-81.11°) and 107.01° (IQR 99.09°-115.23°), respectively, which allows us to conclude that the direction of the RVOT long axis changes from left to right. The above analysis shows that there is no single RVOT axis: two axes can be distinguished, with the one for the upper RVOT directed more backward and leftward. The aforementioned forward deflection of the posterior wall and the RVOT's elevation over the interventricular septum suggest that access to the subpulmonary region may be difficult. It should be emphasized that this area is often the target for ablation of ventricular arrhythmias. The small thickness of the RVOT posterior wall, combined with its difficult geometry, may favor its perforation into the pericardium or ascending aorta.

Keywords: angle, geometry, operation access, position, RVOT, shape

Procedia PDF Downloads 105
6802 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, the use of Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the current developing technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this presents a limitation for current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language, and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is modeled as an operator mapping an input from the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, where q represents the alphanumeric and the language it comes from. These inputs and outputs, along with the internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e., subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened, before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully determine both the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
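The corrector pipeline (center, reduce with the Kaiser rule, whiten, then separate errors with hyperplanes) can be sketched as below. This follows the abstract's outline only loosely: a single Fisher-style linear discriminant stands in for the pairwise clustering step, and all data are synthetic placeholders.

```python
import numpy as np

def fit_corrector(S, Y):
    """Fit a linear error detector on measurements: S holds the correct
    predictions, Y the known errors. Steps: center, Kaiser-rule PCA,
    whiten, then a Fisher discriminant as a stand-in for the pairwise
    cluster hyperplanes described in the abstract."""
    mu = S.mean(axis=0)
    Sc, Yc = S - mu, Y - mu
    vals, vecs = np.linalg.eigh(np.cov(Sc.T))
    keep = vals > vals.mean()                # Kaiser rule (covariance variant)
    P = vecs[:, keep] / np.sqrt(vals[keep])  # project and whiten in one matrix
    Sw, Yw = Sc @ P, Yc @ P
    w = np.linalg.pinv(np.cov(Sw.T) + np.cov(Yw.T)) @ (Yw.mean(0) - Sw.mean(0))
    b = 0.5 * (w @ Yw.mean(0) + w @ Sw.mean(0))
    return mu, P, w, b

def flags_error(x, corrector):
    mu, P, w, b = corrector
    return float((x - mu) @ P @ w) > b       # beyond the hyperplane: report error

rng = np.random.default_rng(1)
S = rng.normal(0.0, 1.0, size=(500, 20))     # correct predictions M
Y = rng.normal(2.0, 1.0, size=(40, 20))      # incorrect predictions Y
corr = fit_corrector(S, Y)
print(flags_error(Y[0], corr), flags_error(S[0], corr))
```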

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 96
6801 The Development of Explicit Pragmatic Knowledge: An Exploratory Study

Authors: Aisha Siddiqa

Abstract:

The knowledge of pragmatic practices in a particular language is considered key to effective communication. Unlike one's native language, where this knowledge is acquired spontaneously, more conscious attention is required to learn second language pragmatics. Traditional foreign language (FL) classrooms generally focus on the acquisition of vocabulary and lexico-grammatical structures, neglecting pragmatic functions that are essential for effective communication in the multilingual networks of the modern world. In terms of effective communication, of particular importance is knowledge of what is perceived as polite or impolite in a given language, an aspect of pragmatics which is not perceived as obligatory but is nonetheless indispensable for successful intercultural communication and integration. While learning a second language, the acquisition of politeness assumes more prominence, as politeness norms and practices vary across languages and cultures. Therefore, along with focusing on the 'use' of politeness strategies, it is crucial to examine the 'acquisition' and 'acquisitional development' of politeness strategies by second language learners, particularly by lower proficiency learners, as politeness norms are usually the focus at lower levels. Hence, there is an obvious need for a study that not only investigates the acquisition of pragmatics by young FL learners using innovative multiple methods, but also identifies the potential causes of the gaps in their development. The present research employs a cross-sectional design to explore the acquisition of politeness by young English as a foreign language (EFL) learners in France, at three levels of secondary school learning. The methodology involves two phases. In the first phase, a cartoon oral production task (COPT) is used to elicit samples of requests from young EFL learners in French schools. These data are then supplemented by a) role plays, b) an analysis of textbooks, and c) video recordings of classroom activities. This mixed-methods approach allows us to explore the repertoire of politeness strategies the learners possess and to delve deeper into the opportunities available in classrooms to learn politeness strategies in requests. The paper will provide the results of the analysis of COPT data for 250 learners at three different stages of English as a foreign language development. Data analysis is based on the categorization of requests developed in the CCSARP project. The preliminary analysis of the COPT data shows substantial evidence of pragmalinguistic development across all levels, but the developmental process seems to gain momentum in the second half of the secondary school period compared to the early school years. However, there is very little evidence of sociopragmatic development. The study aims to document current classroom practices in France by looking at the development of young EFL learners' politeness strategies across three levels of secondary school.

Keywords: acquisition, English, France, interlanguage pragmatics, politeness

Procedia PDF Downloads 418
6800 A Reflective Investigation on the Course Design and Coaching Strategy for Creating a Trans-Disciplinary Leaning Environment

Authors: Min-Feng Hsieh

Abstract:

Nowadays, we face a highly competitive environment in which the conditions for survival have become more critical than ever before. The challenges we confront can no longer be dealt with through a single system of knowledge. The abilities we urgently need to acquire are those that lead us to cross the boundaries between different disciplines and take us to a neutral ground that gathers and integrates the powers and intelligence surrounding us. This paper discusses how a trans-disciplinary design course organized by the College of Design at Chaoyang University responds to this modern challenge. By orchestrating an experimental course format and by developing a series of coaching strategies, a trans-disciplinary learning environment has been created and put into practice, in which students selected from five different departments, including Architecture, Interior Design, Visual Design, Industrial Design, and Landscape and Urban Design, are encouraged to think outside their familiar knowledge pool and to learn with and from each other. In the course of implementing this program, parallel research was conducted following the theory and principles of Action Research, a methodology that provides the course organizer with emergent, responsive, action-oriented, participative, and critically reflective insights for immediate changes and amendments, in order to improve the teaching and learning experience. The conclusion points out how the learning and teaching experience of this trans-disciplinary design studio offers observations that help us reflect on the constraints and divisions caused by a subject-based curriculum. A series of concepts for course design and teaching strategies developed and implemented in this trans-disciplinary course are introduced as a way to promote learners' self-motivated, collaborative, cross-disciplinary, and student-centered learning skills. The outcome of this experimental course exemplifies an alternative approach that could be adopted in pursuit of a remedy for the problematic issues of current educational practice.

Keywords: course design, coaching strategy, subject-based curriculum, trans-disciplinary

Procedia PDF Downloads 198
6799 Collaboration During Planning and Reviewing in Writing: Effects on L2 Writing

Authors: Amal Sellami, Ahlem Ammar

Abstract:

Writing is acknowledged to be a cognitively demanding and complex task. Indeed, the writing process is composed of three iterative sub-processes, namely planning, translating (writing), and reviewing. Not only do second or foreign language learners need to write according to this process, but they also need to respect the norms and rules of language and writing in the text to be produced. Accordingly, researchers have suggested approaching writing as a collaborative task in order to alleviate its complexity. Consequently, collaboration has been implemented during the whole writing process or only during planning or reviewing. Researchers report that implementing collaboration during the whole process might be demanding in terms of time in comparison to individual writing tasks. Consequently, because of time constraints, teachers may avoid it. For this reason, it might be pedagogically more realistic to limit collaboration to one of the writing sub-processes (i.e., planning or reviewing). However, previous research implementing collaboration in planning or reviewing is limited and fails to explore the effects of these conditions on the written text. Consequently, the present study examines the effects of collaboration in planning and collaboration in reviewing on the written text. To reach this objective, quantitative as well as qualitative methods are deployed to examine the written texts holistically and in terms of fluency, complexity, and accuracy. Participants of the study include 4 pairs in each group (n=8). They participated in two experimental conditions: (1) collaborative planning followed by individual writing and individual reviewing, and (2) individual planning followed by individual writing and collaborative reviewing. The comparative findings indicate that while collaborative planning resulted in better overall text quality (specifically, better content and organization ratings), better fluency, better complexity, and fewer lexical errors, collaborative reviewing produced better accuracy and fewer syntactic and mechanical errors. The discussion of the findings suggests the need for more comparative research to further explore the effects of collaboration in planning and in reviewing. Pedagogical implications of the current study include advising teachers to choose between implementing collaboration in planning or in reviewing depending on their students' needs and what they need to improve.

Keywords: collaboration, writing, collaborative planning, collaborative reviewing

Procedia PDF Downloads 93
6798 Investigation of a Natural Convection Heat Sink for LEDs Based on Micro Heat Pipe Array-Rectangular Channel

Authors: Wei Wang, Yaohua Zhao, Yanhua Diao

Abstract:

The exponential growth of the lighting industry has rendered traditional thermal technologies inadequate for addressing the thermal management challenges inherent to high-power light-emitting diode (LED) technology. To enhance the thermal management of LEDs, this study proposes a heat sink configuration that integrates a micro heat pipe array (MHPA), based on phase change technology, with rectangular channels. The thermal performance of the heat sink was evaluated experimentally. The results showed that at input powers of 100 W, 150 W, and 200 W, the temperature of the LED substrate was 47.64 °C, 56.78 °C, and 69.06 °C, respectively, while the maximum temperature difference across the MHPA in the vertical direction was 0.32 °C, 0.30 °C, and 0.30 °C, respectively. The results demonstrate that the heat sink not only effectively dissipates the heat generated by the LEDs but also exhibits excellent temperature uniformity. Building on the experimental measurements, a corresponding numerical model was developed as part of this study. Following model validation, the effect of the heat sink's structural parameters on its heat dissipation efficacy was examined using response surface methodology (RSM). The rectangular channel width, channel height, channel length, number of channel cross-sections, and channel cross-section spacing were selected as input parameters, while the LED substrate temperature and the total mass of the heat sink were taken as the response variables. The responses were then subjected to an analysis of variance (ANOVA), which yielded a regression model predicting the responses from the input variables. This offers some direction for the design of the heat sink.
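The response-surface step can be sketched as an ordinary least-squares fit of a full quadratic model in the five channel parameters. All data below are synthetic placeholders; the abstract reports only three operating points, so this illustrates the method rather than reproducing the study's model.

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_design_matrix(X):
    """Full second-order RSM model: intercept, linear, interaction, and
    squared terms for each input column."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(d), 2)]
    return np.column_stack(cols)

# Columns: width, height, length, number of cross-sections, spacing
# (ranges and units assumed for illustration).
rng = np.random.default_rng(2)
X = rng.uniform([2, 5, 50, 2, 5], [6, 20, 200, 10, 30], size=(40, 5))
substrate_temp = 40 + 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0, 0.5, 40)

A = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(A, substrate_temp, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((substrate_temp - pred) ** 2) \
       / np.sum((substrate_temp - substrate_temp.mean()) ** 2)
print(f"R^2 of the fitted response surface: {r2:.3f}")
```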

Keywords: light-emitting diodes, heat transfer, heat pipe, natural convection, response surface methodology

Procedia PDF Downloads 25
6797 Maximum Power and Bone Variables in Young Adult Men

Authors: Anthony Khawaja, Jacques Prioux, Ghassan Maalouf, Rawad El Hage

Abstract:

The regular practice of physical activities characterized by significant mechanical stresses stimulates bone formation and improves bone mineral density (BMD) at the most solicited sites. The purpose of this study was to explore the relationships between maximum power and bone variables in a group of young adult men. Identifying new determinants of BMD, bone mineral content (BMC), and hip geometric indices in young adult men would allow screening and early management of future cases of osteopenia and osteoporosis. Fifty-three young adult men (18-35 yr) voluntarily participated in this study. Weight and height were measured, and body mass index was calculated. Body composition, BMC, and BMD were determined for each individual by dual-energy X-ray absorptiometry (DXA; GE Healthcare, Madison, WI) at the whole body (WB), lumbar spine (L1-L4), total hip (TH), and femoral neck (FN). FN cross-sectional area (CSA), strength index (SI), buckling ratio (BR), FN section modulus (Z), cross-sectional moment of inertia (CSMI), and L1-L4 TBS were also evaluated by DXA. The vertical jump was evaluated using a field test (Sargent test), from which two main parameters were retained: vertical jump performance (cm) and power (W). The subjects performed three jumps with 2 minutes of recovery between jumps, and the highest vertical jump was selected. Maximum power (Pmax, in watts) was calculated. Maximum power was positively correlated with WB BMD (r = 0.41; p < 0.01), WB BMC (r = 0.65; p < 0.001), L1-L4 BMC (r = 0.54; p < 0.001), FN BMC (r = 0.35; p < 0.01), TH BMC (r = 0.50; p < 0.001), CSMI (r = 0.50; p < 0.001), and CSA (r = 0.33; p < 0.05). Vertical jump was positively correlated with WB BMC (r = 0.31; p < 0.05), L1-L4 BMC (r = 0.40; p < 0.01), and CSMI (r = 0.29; p < 0.05). The current study suggests that maximum power is a positive determinant of BMD, BMC, and hip geometric indices in young adult men. It also shows that maximum power is a stronger positive determinant of bone variables than vertical jump in this population. Implementing strategies to increase maximum power in young adult men may be useful for preventing osteoporotic fractures later in life.
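The abstract does not state which prediction equation was used to derive maximum power from the Sargent jump; a common choice, shown here purely as an assumption, is the Sayers equation.

```python
def sayers_peak_power(jump_height_cm, body_mass_kg):
    """Sayers et al. (1999) regression estimate of peak anaerobic power (W)
    from jump height and body mass. One common option; the study's actual
    equation is not specified in the abstract."""
    return 60.7 * jump_height_cm + 45.3 * body_mass_kg - 2055.0

# Hypothetical participant: 75 kg, best of three jumps 45 cm.
print(f"P_max = {sayers_peak_power(45.0, 75.0):.0f} W")
```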

Keywords: bone variables, maximum power, osteopenia, osteoporosis, vertical jump, young adult men

Procedia PDF Downloads 175
6796 A Comparison of Outcomes of Endoscopic Retrograde Cholangiopancreatography vs. Percutaneous Transhepatic Biliary Drainage in the Management of Obstructive Jaundice from Hepatobiliary Tuberculosis: The Philippine General Hospital Experience

Authors: Margaret Elaine J. Villamayor, Lobert A. Padua, Neil S. Bacaltos, Virgilio P. Bañez

Abstract:

Significance: This study aimed to determine the prevalence of hepatobiliary tuberculosis (HBTB) with biliary obstruction and to compare the outcomes of ERCP versus PTBD in these patients. Methodology: This is a cross-sectional study involving patients from PGH who underwent biliary drainage for HBTB from January 2009 to June 2014. HBTB was defined as having evidence of TB (culture, smear, PCR, histology) or a clinical diagnosis based on the triad of jaundice, fever, and calcifications on imaging, with other causes of jaundice excluded. The primary outcome was successful drainage; secondary outcomes were mean hospital stay and complications. Simple logistic regression was used to identify factors associated with success of drainage, a z-test for two proportions to compare outcomes of ERCP versus PTBD, and a t-test to compare mean hospital stay post-procedure. Results: Of 441 patients who underwent ERCP or PTBD, 19 fulfilled the inclusion criteria; 11 underwent ERCP, while 8 had PTBD. There were more successful cases with PTBD than with ERCP, but the difference was not statistically significant (p-value 0.3615). Factors such as age, gender, location and nature of obstruction, vices, coexisting pulmonary or other extrapulmonary TB, and presence of portal hypertension did not affect success rates in these patients. The PTBD group had a longer mean hospital stay, but this was not significant (p-value 0.1880). No complications were reported in either group. Conclusion: HBTB accounts for 4.3% of the patients undergoing biliary drainage in PGH. Both ERCP and PTBD are equally safe and effective in the management of biliary obstruction from HBTB.
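The z-test for two proportions used to compare drainage success can be reproduced in a few lines. The success counts below are illustrative placeholders, since the abstract reports only the group sizes and the p-value.

```python
from math import sqrt, erf

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for equality of two proportions (pooled variance)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return z, p_value

# Hypothetical success counts for ERCP (n=11) versus PTBD (n=8).
print(two_proportion_z_test(7, 11, 7, 8))
```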

Keywords: cross-sectional, hepatobiliary tuberculosis, obstructive jaundice, endoscopic retrograde cholangiopancreatography, percutaneous transhepatic biliary drainage

Procedia PDF Downloads 441
6795 Industrial Prototype for Hydrogen Separation and Purification: Graphene Based-Materials Application

Authors: Juan Alfredo Guevara Carrio, Swamy Toolahalli Thipperudra, Riddhi Naik Dharmeshbhai, Sergio Graniero Echeverrigaray, Jose Vitorio Emiliano, Antonio Helio Castro

Abstract:

In order to advance the hydrogen economy, several industrial sectors can potentially benefit from the trillions in post-coronavirus stimulus spending. Blending hydrogen into natural gas pipeline networks has been proposed as a means of delivering it during the early market development phase, using separation and purification technologies downstream to extract the pure H₂ close to the point of end-use. This first step has been identified around the world as an opportunity to use existing infrastructure for immediate decarbonisation pathways. Among current technologies used to extract hydrogen from mixtures in pipelines or liquid carriers, membrane separation can achieve the highest selectivity. The most efficient approach for the separation of H₂ from other substances by membranes comes from research on 2D layered materials, owing to their exceptional physical and chemical properties. Graphene-based membranes, with pore sizes distributed in the nanometer and angstrom range, have shown fundamental and economic advantages over other materials. Combining them with the structure of ceramic and geopolymeric materials enabled the synthesis of nanocomposites and the fabrication of membranes with long-term stability and robustness over a relevant range of physical and chemical conditions. Versatile separation modules have been developed for hydrogen separation, whose adaptability allows their integration into industrial prototypes for applications in heavy transport, steel, and cement production, as well as in small installations at end-user stations of pipeline networks. The developed membranes and prototypes are a practical contribution to the technological challenge of supplying pure H₂ for the mentioned industries as well as for hydrogen-based fuel cells.

Keywords: graphene nano-composite membranes, hydrogen separation and purification, separation modules, industrial prototype

Procedia PDF Downloads 156
6794 Work Related and Psychosocial Risk Factors for Musculoskeletal Disorders among Workers in an Automated flexible Assembly Line in India

Authors: Rohin Rameswarapu, Sameer Valsangkar

Abstract:

Background: Globally, musculoskeletal disorders (MSDs) are the largest single cause of work-related illness, accounting for over 33% of all newly reported occupational illnesses. Risk factors for MSD need to be delineated to suggest means for amelioration. Material and methods: In this cross-sectional study, the prevalence of MSDs among workers in an electrical company assembly line and the socio-demographic and job characteristics associated with MSD were obtained through a semi-structured questionnaire. A quantitative assessment of the physical risk factors was made with the Rapid Upper Limb Assessment (RULA) tool, and psychosocial risk factors were measured on a Likert scale. Statistical analysis was conducted using Epi Info software, and descriptive and inferential statistics, including chi-square and unpaired t-tests, were obtained. Results: A total of 263 workers consented and participated in the study, of whom 200 (76%) suffered from MSD. Most workers were aged between 18 and 27 years, and the majority were women (198 of 263, 75.2%). A chi-square test was significant for an association between male gender and MSD, with a p-value of 0.007. Among the MSD-positive group, 4 (2%) had a grand score of 5, 10 (5%) had a grand score of 6, and 186 (93%) had a grand score of 7 on RULA. There were significant differences between the non-MSD and MSD groups on five of the seven psychosocial domains, namely job demand, job monotony, co-worker support, decision control, and family and environment. Discussion: The current cross-sectional study demonstrates a high prevalence of MSD among assembly line workers, with inherent physical and psychosocial risk factors, and recommends that, beyond physical risk factors, psychosocial risk factors also be addressed through proper ergonomic means, as this is essential to the well-being of employees.

Keywords: musculoskeletal disorders, India, occupational health, Rapid Upper Limb Assessment (RULA)

Procedia PDF Downloads 344
6793 Regularizing Software for Aerosol Particles

Authors: Christine Böckmann, Julia Rosemann

Abstract:

We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. Single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of truncated singular value decomposition as the regularization method. This method was adapted to the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique, since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task, because very small measurement errors will most often be hugely amplified during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration, where the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error for non- and weakly absorbing particles with real parts 1.5 and 1.6 in all modes, the accuracy limit of +/- 0.03 is achieved. In sum, 70% of all cases stay below +/- 0.03, which is sufficient for climate change studies.
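The truncated-SVD building block of the hybrid regularization can be sketched in a few lines; the truncation level plays the role of one of the regularization parameters and is chosen here by hand on a toy problem.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Regularized solution of A x = b by truncated singular value
    decomposition: keep only the k largest singular values, so that noise
    amplified by the small singular values is suppressed."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]                 # truncate the spectrum at k
    return Vt.T @ (s_inv * (U.T @ b))

# Mildly ill-posed toy problem with 1% noise.
rng = np.random.default_rng(3)
A = np.vander(np.linspace(0, 1, 40), 12, increasing=True)  # ill-conditioned
x_true = rng.normal(size=12)
b = A @ x_true + 0.01 * rng.normal(size=40)
for k in (4, 8, 12):
    err = np.linalg.norm(tsvd_solve(A, b, k) - x_true) / np.linalg.norm(x_true)
    print(f"k={k:2d}: relative error {err:.3f}")
```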

Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization

Procedia PDF Downloads 339
6792 Extraction of Forest Plantation Resources in Selected Forest of San Manuel, Pangasinan, Philippines Using LiDAR Data for Forest Status Assessment

Authors: Mark Joseph Quinto, Roan Beronilla, Guiller Damian, Eliza Camaso, Ronaldo Alberto

Abstract:

Forest inventories are essential to assess the composition, structure, and distribution of forest vegetation, which can serve as baseline information for management decisions. Classical forest inventory is labor-intensive, time-consuming, and sometimes even dangerous. The use of Light Detection and Ranging (LiDAR) in forest inventory can overcome these restrictions. This study was conducted to determine the possibility of using LiDAR-derived data to extract high-accuracy forest biophysical parameters and as a non-destructive method for forest status analysis of San Manuel, Pangasinan. Forest resource extraction was carried out using LAStools, GIS, ENVI, and .bat scripts with the available LiDAR data. The process includes the generation of derivatives such as the Digital Terrain Model (DTM), Canopy Height Model (CHM), and Canopy Cover Model (CCM) in .bat scripts, followed by the generation of 17 composite bands used to extract forest cover classes in ENVI 4.8 and GIS software. Diameter at Breast Height (DBH), Above-Ground Biomass (AGB), and Carbon Stock (CS) were estimated for each classified forest cover, and tree count extraction was carried out using GIS. Subsequently, field validation was conducted for accuracy assessment. Results showed that the forest of San Manuel has 73% forest cover, which is much higher than the 10% canopy cover requirement. Of the extracted canopy heights, 80% range from 12 m to 17 m. The CS of the three forest covers based on AGB were: 20819.59 kg per 20x20 m plot for closed broadleaf, 8609.82 kg per 20x20 m plot for broadleaf plantation, and 15545.57 kg per 20x20 m plot for open broadleaf. The average tree count for the forest plantation was 413 trees/ha. As such, the forest of San Manuel has high percent forest cover and high CS.
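The canopy height model at the core of the workflow is simply the difference between the first-return surface and the bare-earth terrain. A minimal sketch on synthetic rasters (the actual processing ran .bat scripts over LAS tiles; the 5 m canopy threshold below is an assumption):

```python
import numpy as np

def canopy_height_model(dsm, dtm):
    """CHM = digital surface model - digital terrain model, clipped at zero
    so that ground returns and small artifacts do not go negative."""
    return np.clip(dsm - dtm, 0.0, None)

def percent_forest_cover(chm, height_threshold_m=5.0):
    """Percentage of cells whose canopy exceeds a height threshold."""
    return 100.0 * np.mean(chm >= height_threshold_m)

# Synthetic 100 x 100 m tile at 1 m resolution.
rng = np.random.default_rng(4)
dtm = rng.uniform(200, 210, size=(100, 100))        # terrain elevation (m)
dsm = dtm + rng.gamma(2.0, 4.0, size=(100, 100))    # surface = terrain + canopy
chm = canopy_height_model(dsm, dtm)
print(f"forest cover: {percent_forest_cover(chm):.1f}%")
```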

Keywords: carbon stock, forest inventory, LiDAR, tree count

Procedia PDF Downloads 379
6791 Using Soil Texture Field Observations as Ordinal Qualitative Variables for Digital Soil Mapping

Authors: Anne C. Richer-De-Forges, Dominique Arrouays, Songchao Chen, Mercedes Roman Dobarco

Abstract:

Most digital soil mapping (DSM) products rely on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTF), in which calibration data come from soil analyses performed in labs. However, many other observations (often qualitative, nominal, or ordinal) could be used as proxies for lab measurements or as input data for ML or PTF predictions. DSM and ML are briefly described with some examples taken from the literature. Then, we explore the potential of an ordinal qualitative variable, the hand-feel soil texture (HFST), which estimates the mineral particle-size distribution (PSD), i.e., the percentages of clay (0-2 µm), silt (2-50 µm), and sand (50-2000 µm), in 15 classes. The PSD can also be measured in the lab (LAST) to determine the exact proportions of these particle sizes. However, due to cost constraints, HFST observations are much more numerous and spatially dense than LAST. Soil texture (ST) is a very important soil parameter to map, as it controls many soil properties and functions. Therefore, an essential question arises: is it possible to use HFST as a proxy for LAST in the calibration and/or validation of DSM predictions of ST? To answer this question, the first step is to compare HFST with LAST on a representative set where both types of information are available. This comparison was made on ca. 17,400 samples representative of a French region (34,000 km²). The accuracy of HFST was assessed, and each HFST class was characterized by a probability distribution function (PDF) of its LAST values. This allows HFST observations to be randomly replaced by LAST values drawn in accordance with the previously calculated PDFs, and results in a very large increase in the observations available for the calibration or validation of PTF and ML predictions. Some preliminary results are shown. First, the comparison between HFST classes and LAST analyses showed that accuracies could be considered very good when compared to other studies. The causes of some inconsistencies were explored, and most of them were well explained by other soil characteristics. We then show some examples applying these relationships and the increased data volume to several issues related to DSM. The first issue is: do the established PDFs enable HFST class observations to improve LAST soil texture prediction? For this objective, we replaced all topsoil HFST observations by values drawn from the PDFs (100 replicates). Results were promising for the PTF we tested (a PTF predicting soil water holding capacity). For the question related to the ML prediction of LAST soil texture over the region, we did the same kind of replacement, but implemented a 10-fold cross-validation using points where we had LAST values. We obtained only preliminary results, but they were rather promising. We then show another example illustrating the potential of using HFST as validation data. As HFST observations are very numerous in many countries, these promising results pave the way to an important improvement of DSM products in all the countries of the world.
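The replacement step can be sketched as follows: each HFST class carries an empirical distribution of LAST values, and a random draw from that distribution stands in for the missing lab measurement. The class names and distributions below are illustrative, not the study's actual PDFs.

```python
import numpy as np

rng = np.random.default_rng(5)

# Empirical LAST values (% clay) observed within each HFST class - here
# illustrative normals; in practice these distributions are built from
# the ~17,400 paired samples.
last_by_class = {
    "sandy loam": rng.normal(12, 3, 500),
    "loam":       rng.normal(20, 4, 500),
    "clay loam":  rng.normal(32, 5, 500),
}

def hfst_to_last(hfst_observations, n_replicates=100):
    """Replace each hand-feel class label by random draws from that class's
    empirical LAST distribution, one column per replicate."""
    draws = np.empty((len(hfst_observations), n_replicates))
    for i, cls in enumerate(hfst_observations):
        draws[i] = rng.choice(last_by_class[cls], size=n_replicates)
    return draws

replicates = hfst_to_last(["loam", "clay loam", "sandy loam"])
print(replicates.mean(axis=1))   # per-site mean over the 100 replicates
```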

Keywords: digital soil mapping, improvement of digital soil mapping predictions, potential of using hand-feel soil texture, soil texture prediction

Procedia PDF Downloads 215
6790 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System

Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee

Abstract:

This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's questions. Analysis of the question, searching for relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web; the value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage-ranking step, using a model trained on 500K queries from the MS MARCO dataset, to extract the most relevant passages and thereby shorten the lengthy documents. A QA system then extracts answers from the shortened documents based on the query and returns the top 3 answers. In evaluating such systems, accuracy is judged by the exact match between predicted answers and gold answers. Automatic evaluation methods, however, fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date, so correct answers predicted by the system are often judged incorrect by automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in 2016. Any such dataset proves inadequate for questions that have time-varying answers. For illustration, consider the query "Where will the next Olympics be?" The gold answer given in the GNQ dataset is "Tokyo". Since the dataset was collected in 2016, and the next Olympics after 2016 were the 2020 Tokyo Games, that answer was correct at the time. But if the same question is asked in 2022, the answer is "Paris, 2024". Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually passed to human evaluators for further validation, which is expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system; a minimal sketch of the idea follows below. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test set comprising 100 QA pairs. This test data was extracted automatically, using an analysis-based approach, from 10K QA pairs of the GNQ dataset. The results obtained are encouraging: the proposed technique appears capable of developing into a useful scheme for gathering precise, reliable, and specific information in real time. Subsequent experiments will be directed towards establishing the efficacy of the system for a larger set of time-dependent QA pairs.
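
As one possible reading of such a timestamp-based metric (ours, not the authors' implementation; the validity windows below are assumptions for the example), a prediction can count as correct if any of the top-n answers matches a gold answer whose validity window contains the evaluation time:

```python
from datetime import datetime

def time_aware_match(top_n_answers, gold_answers, now=None):
    """gold_answers: list of (answer_text, valid_from, valid_to) tuples.
    A prediction is correct if any of the top-n answers matches a gold
    answer that is valid at the evaluation timestamp 'now'."""
    now = now or datetime.now()
    valid = {a.lower() for a, start, end in gold_answers if start <= now <= end}
    return any(pred.lower() in valid for pred in top_n_answers)

# "Where will the next Olympics be?" evaluated in mid-2022:
gold = [("tokyo", datetime(2016, 1, 1), datetime(2021, 8, 8)),
        ("paris", datetime(2021, 8, 9), datetime(2024, 8, 11))]
print(time_aware_match(["Paris", "London"], gold, now=datetime(2022, 6, 1)))  # True
```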

Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation

Procedia PDF Downloads 99
6789 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals

Authors: Christine F. Boos, Fernando M. Azevedo

Abstract:

Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma, and brain death; locating damaged areas of the brain after head injury, stroke, and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science, and its diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of the diagnosis. This analysis can also help define the type of epileptic syndrome, determine the epileptiform zone, assist in the planning of drug treatment, and provide additional information about the feasibility of surgical intervention. For diagnosis confirmation, the analysis is made on long-term EEG recordings at least 24 hours long, acquired with a minimum of 24 electrodes, in which neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Since an EEG screen usually displays 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term recording. Analyzing thousands of EEG screens in search of patterns that last at most 200 ms is a very time-consuming, complex, and exhausting task. Because of this, over the years several studies have proposed automated methodologies to facilitate the identification of epileptiform discharges, and a large number of them used neural networks for pattern classification. One difference between these methodologies is the type of input stimulus presented to the network, i.e., how the EEG signal is introduced to it. Five types of input stimuli are commonly found in the literature: the raw EEG signal, morphological descriptors (parameters related to the signal's morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms, and Wavelet Transform features; a sketch of how three of these are computed is given below. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks implemented with each of them. The performance using the raw signal varied between 43 and 84% efficiency. The results of the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73 and 77%, respectively. The efficiency of Wavelet Transform features varied between 57 and 81%, while the morphological descriptors presented efficiency values between 62 and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
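
The sketch below is our illustration, not the authors' code; the sampling rate, window size, and wavelet choice are assumed values. It shows how the FFT spectrum, an STFT spectrogram, and wavelet features could be computed from one 10-second EEG epoch.

```python
import numpy as np
from scipy.signal import stft
import pywt  # PyWavelets package

fs = 256                          # assumed sampling rate in Hz
epoch = np.random.randn(fs * 10)  # stand-in for one 10-second EEG screen

# FFT spectrum: magnitude of the one-sided Fourier transform
fft_spectrum = np.abs(np.fft.rfft(epoch))

# STFT spectrogram: time-frequency magnitudes over short sliding windows
_, _, Zxx = stft(epoch, fs=fs, nperseg=128)
stft_spectrogram = np.abs(Zxx)

# Wavelet features: per-level energies of a discrete wavelet decomposition
coeffs = pywt.wavedec(epoch, "db4", level=5)
wavelet_features = np.array([np.sum(c ** 2) for c in coeffs])

print(fft_spectrum.shape, stft_spectrogram.shape, wavelet_features.shape)
```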

Keywords: artificial neural network, electroencephalogram signal, pattern recognition, signal processing

Procedia PDF Downloads 525
6788 Teaching Translation in Brazilian Universities: A Study about the Possible Impacts of Translators’ Comments on the Cyberspace about Translator Education

Authors: Erica Lima

Abstract:

The objective of this paper is to discuss relevant points about teaching translation in Brazilian universities and the possible impacts of blogs and social networks on translator education today. It analyzes the curricula of Brazilian translation courses, contrasting them with information obtained from two social networking groups of great visibility in the area concerning the characteristics essential to becoming a successful professional. The research therefore takes as its main corpus a few undergraduate translation programs' syllabuses, as well as postings in social network groups that specifically share professional opinions on whether a translator needs a degree in translation to practice the profession. To a certain extent, such comments and their responses propagate discourses which influence how aspiring translators and recent graduates come to view themselves and their undergraduate courses. The postings also show that many professionals do not hold a clear position on translator education: while dismissing it, they also encourage "free" (non-degree) courses. Cyberspace thus constitutes, on the one hand, a place where people mobilize in defense of similar ideas; on the other hand, it embodies a place of tension and conflict, given the many participants and, as in any other situation of interlocution, the disagreements that may arise. From the postings, aspects related to professionalism were analyzed (including discussions about regulation of the profession), as well as questions about the classic dichotomies: theory/practice, art/technique, self-education/academic training. As a partial result, a common interest in the valorization of the profession can be noted, although there is no consensus on the characteristics essential to being a good translator. It was also possible to observe that the set of socially constructed representations in the groups reflects the worldwide situation of translation courses (especially in some European countries and in the United States), which does not accurately reflect the Brazilian idiosyncrasies of the area.

Keywords: cyberspace, teaching translation, translator education, university

Procedia PDF Downloads 385
6787 Investigation of the Corroded Steel Beam

Authors: Hesamaddin Khoshnoodi, Ahmad Rahbar Ranji

Abstract:

Corrosion in steel structures is one of the most important issues to consider in design and construction. Corrosion reduces the cross-section and load capacity of an element and leads to costly damage to structures. In this paper, corrosion has been modeled for elements under bending (moment) stresses, and the steel beam has been modeled using the ABAQUS finite element software. The analysis indicates that the displacement of the studied composite steel girder bridge may increase as the section corrodes; the toy calculation below illustrates why.
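
As a back-of-the-envelope illustration (assumed dimensions and load, not the paper's ABAQUS model), a uniform corrosion loss shrinks the cross-section and reduces the second moment of area I, and the midspan deflection of a simply supported beam under a central point load grows as 1/I, following delta = P L^3 / (48 E I):

```python
def rect_I(b: float, h: float) -> float:
    """Second moment of area of a b x h rectangle about its centroid (m^4)."""
    return b * h ** 3 / 12.0

E = 210e9            # Young's modulus of steel, Pa
P, L = 50e3, 6.0     # central point load (N) and span (m), assumed
b, h = 0.10, 0.30    # intact section width and depth (m), assumed
loss = 0.004         # 4 mm of corrosion loss on every exposed face, assumed

for label, bb, hh in [("intact", b, h), ("corroded", b - 2 * loss, h - 2 * loss)]:
    I = rect_I(bb, hh)
    delta = P * L ** 3 / (48 * E * I)  # midspan deflection of the beam
    print(f"{label}: I = {I:.3e} m^4, midspan deflection = {delta * 1e3:.2f} mm")
```

With these numbers the deflection rises from about 4.8 mm to about 5.6 mm, showing the direction of the effect the finite element study quantifies in full detail.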

Keywords: ABAQUS, corrosion, deformation, steel beam

Procedia PDF Downloads 344
6786 Magnetic Chloromethylated Polymer Nanocomposite for Selective Pollutant Removal

Authors: Fabio T. Costa, Sergio E. Moya, Marcelo H. Sousa

Abstract:

Nanocomposites designed by embedding magnetic nanoparticles into a polymeric matrix stand out as ideal magneto-responsive hybrid sorbents for the removal of pollutants in environmental applications. Covalent coupling is often desired for the immobilization of species on these nanocomposites, in order to keep them permanently bound, without desorbing or leaching over time. Moreover, unwanted adsorbates can be separated by successive washing/magnetic-separation cycles, and the adsorbate covalently bound to the nanocomposite surface can be recovered through detaching/cleavage protocols. In this work, we therefore describe the preparation and characterization of highly magnetizable chloromethylated polystyrene-based nanocomposite beads for selective covalent coupling in environmental applications. For synthesis optimization, acid-resistant core-shell maghemite (γ-Fe₂O₃) nanoparticles were coated with oleate molecules and incorporated directly into the organic medium during a suspension polymerization process. The cross-linking agent ethylene glycol dimethacrylate (EGDMA) was co-polymerized with 4-vinylbenzyl chloride (VBC) to increase the resistance of the microbeads against leaching. Characterization by XRD, ICP-OES, TGA, and optical, SEM, and TEM microscopy confirmed a magnetic composite consisting of ~500 nm cross-linked polymeric microspheres embedding ~8 nm γ-Fe₂O₃ nanoparticles. This nanocomposite showed a large room-temperature magnetization (~24 emu/g), owing to its high maghemite content (~45 wt%), and resistance against leaching even in acidic media. Moreover, the presence of surface chloromethyl groups, probed by FTIR and XPS spectroscopy and confirmed by an amination test, enables the selective adsorption of molecules through covalent coupling and use in molecular separations, as shown by the selective removal of 4-aminobenzoic acid from a mixture with benzoic acid.

Keywords: nanocomposite, magnetic nanoparticle, covalent separation, pollutant removal

Procedia PDF Downloads 106
6785 Impact of PV Distributed Generation on Loop Distribution Network at Saudi Electricity Company Substation in Riyadh City

Authors: Mohammed Alruwaili

Abstract:

Nowadays, renewable energy resources play an important role in replacing traditional resources such as fossil fuels, with solar energy being integrated alongside conventional generation. Concerns about the environment have led to an intensive search for renewable energy sources. The rapid growth of distributed energy resources is expected to prompt increasing interest in integrated distribution networks in the Kingdom of Saudi Arabia over the next few years, especially after the adoption of new laws and regulations in this regard. Photovoltaic energy is one of the promising renewable energy sources that has grown rapidly worldwide in the past few years and can be used to produce electrical energy through the photovoltaic process. The main objective of this research is to study the impact of PV in distribution networks based on real data. The research combines a site survey with computer simulation in the well-known software ETAB, which models the electrical distribution lines together with variable inputs such as solar radiation levels and field-study data representing the prevailing conditions in Diriyah, Riyadh region, Saudi Arabia. In addition, the impact of adding distributed generation units (DGs) to the distribution network, including solar photovoltaic (PV) units of different power capacities, is studied and assessed; a toy calculation of the loss-reduction mechanism is sketched below. The simulations achieved a reduction in the power loss of the loop distribution network of more than 69% compared with the current condition. The studied network contains 78 buses. It is hoped that this research will enhance the efficiency, performance, quality, and reliability of the distribution networks in Riyadh City through improvements in power loss and voltage profile. The simulation results show that the applied method can illustrate the positive impact of PV on the loop distribution network.
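
The following toy two-bus example (our illustration with assumed values, not the paper's ETAB model) shows the mechanism behind the loss reduction: PV generation near the load lowers the current the feeder must carry, and line losses fall with the square of that current (P_loss = I^2 R).

```python
V = 13.8e3        # feeder voltage in volts (assumed)
R = 1.2           # feeder resistance in ohms (assumed)
LOAD_KW = 800.0   # load at the far bus in kW (assumed)

def feeder_loss_kw(load_kw: float, pv_kw: float) -> float:
    """I^2*R loss of the feeder when local PV supplies part of the load."""
    p_from_grid = max(load_kw - pv_kw, 0.0) * 1e3  # W drawn through the feeder
    current = p_from_grid / V                      # line current in A (unity pf)
    return current ** 2 * R / 1e3                  # loss in kW

base = feeder_loss_kw(LOAD_KW, 0.0)
with_pv = feeder_loss_kw(LOAD_KW, 400.0)
print(f"loss without PV: {base:.2f} kW, with 400 kW of PV: {with_pv:.2f} kW")
print(f"reduction: {100 * (base - with_pv) / base:.0f}%")
```

Halving the grid-supplied power cuts the I^2 R loss by roughly 75% in this toy case; the full 78-bus ETAB study quantifies the effect on the real loop network.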

Keywords: renewable energy, smart grid, efficiency, distribution network

Procedia PDF Downloads 135
6784 Involvement of Community Pharmacists in Public Health Services in Asir Region, Saudi Arabia: A Cross-Sectional Study

Authors: Mona Almanasef, Dalia Almaghaslah, Geetha Kandasamy, Rajalakshimi Vasudevan, Sadia Batool

Abstract:

Background: Community pharmacists are among the most accessible healthcare practitioners worldwide, and their services are used by a large proportion of the population. Expanding the roles of community pharmacists could help reduce pressure on general practice and other areas of the health service. This research aimed to evaluate the contribution of community pharmacists to the provision of public health services and to investigate the perceived barriers to providing these services in Saudi Arabia. Materials and Methods: This study followed a cross-sectional design using an online, anonymous, self-administered questionnaire. The study took place in the Asir region, Saudi Arabia, between September 2019 and February 2020. A convenience sampling strategy was used to select and recruit the study participants. The questionnaire was adapted from previous research and comprised three sections: demographics, involvement in public health services, and barriers to practicing public health roles. Results: The total number of respondents was 193. The proportion of respondents who reported being "very involved" or "involved" in each service was 61.7% for weight management, 60.6% for sexual health, 57.5% for healthy eating, 53.4% for physical activity promotion, 51.3% for dental health, 46.1% for smoking cessation, 39.4% for diabetes screening, 35.7% for hypertension screening, 31.1% for alcohol dependence and drug misuse counseling, 30.6% for dyslipidaemia screening, and 21.8% for vaccination and immunization. Most of the barriers examined were rated as having low relevance to the provision of public health services. Conclusion: The findings suggest that community pharmacists in the Asir region have varying levels of involvement in public health roles. Further research is needed to understand the barriers to the provision of public health services and the strategies that would enhance the public health role of community pharmacists in Saudi Arabia.

Keywords: community pharmacist, public health, Asir region, Saudi Arabia

Procedia PDF Downloads 94