Search results for: input dealers

381 An Overview of the Wind and Wave Climate in the Romanian Nearshore

Authors: Liliana Rusu

Abstract:

The goal of the proposed work is to provide a more comprehensive picture of the wind and wave climate in the Romanian nearshore, using results provided by numerical models. The Romanian coastal environment is located on the western side of the Black Sea, the most energetic part of the sea and an area with heavy maritime traffic and various offshore operations. Information about the wind and wave climate in Romanian waters is mainly based on observations at the Gloria drilling platform (70 km from the coast). As regards the waves, the measurements of the wave characteristics are not very accurate due to the method used, and they are available only for a limited period. For this reason, wave simulations that cover large temporal and spatial scales are an option for better describing the wave climate. To assess the wind climate in the target area over the period 1992–2016, data provided by the NCEP-CFSR (U.S. National Centers for Environmental Prediction - Climate Forecast System Reanalysis), consisting of wind fields at 10 m above sea level, are used. The high spatial and temporal resolution of the wind fields is sufficient to represent the wind variability over the area. For the same 25-year period as considered for the wind climate, this study characterizes the wave climate from a wave hindcast data set that uses NCEP-CFSR winds as input for a SWAN (Simulating WAves Nearshore) based model system. The wave simulation results, obtained with a two-level modelling scale, have been validated against both in situ measurements and remotely sensed data. The second level of the system, with a higher resolution in geographical space (0.02°×0.02°), is focused on the Romanian coastal environment. The main wave parameters simulated at this level are used to analyse the wave climate. The spatial distributions of wind speed, wind direction and mean significant wave height have been computed as the average over the total data set. The results show that the target area presents a generally moderate wave climate that is affected by the storm events that develop in the Black Sea basin. Both the wind and the wave climate present high seasonal variability. All results are presented as maps that help identify the most dangerous areas. A local analysis has also been carried out at some key locations corresponding to highly sensitive areas, for example the main Romanian harbours.
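
A minimal sketch of the kind of post-processing described above, averaging a hindcast time series of significant wave height into climatological maps, assuming the hindcast has already been exported as NumPy arrays indexed (time, lat, lon); the file names and array layout are hypothetical, not the study's actual data pipeline:

```python
import numpy as np

# Hypothetical hindcast: hs[t, i, j] = significant wave height (m) at time
# step t on a 0.02 deg x 0.02 deg grid over the Romanian nearshore.
hs = np.load("hs_hindcast_1992_2016.npy")   # shape (n_times, n_lat, n_lon)
months = np.load("hindcast_months.npy")     # shape (n_times,), values 1..12

# Long-term mean significant wave height (the kind of map shown in the study).
hs_mean = hs.mean(axis=0)

# Seasonal variability: winter (DJF) vs summer (JJA) means.
winter = np.isin(months, [12, 1, 2])
summer = np.isin(months, [6, 7, 8])
hs_winter = hs[winter].mean(axis=0)
hs_summer = hs[summer].mean(axis=0)

print("max long-term mean Hs:", hs_mean.max(), "m")
print("largest winter/summer contrast:", (hs_winter - hs_summer).max(), "m")
```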

Keywords: numerical simulations, Romanian nearshore, waves, wind

Procedia PDF Downloads 344
380 Numerical Study of Piled Raft Foundation Under Vertical Static and Seismic Loads

Authors: Hamid Oumer Seid

Abstract:

A piled raft foundation (PRF) is a union of pile and raft working together, through soil-pile, pile-raft, soil-raft and pile-pile interaction, to provide adequate bearing capacity and controlled settlement. A uniform pile positioning is commonly used in PRFs; however, there is ample room for optimization through parametric study under vertical load, leading to a safer and more economical foundation. Addis Ababa lies in seismic zone 3, with a peak ground acceleration (PGA) above the threshold of damage, which makes it vital to investigate the performance of PRFs under seismic load considering dynamic kinematic soil-structure interaction (SSI). The study area is located in Addis Ababa around Mexico (Commercial Bank) and Kirkos (Nib, Zemen and United Bank), from which the input parameters (pile length, pile diameter, pile spacing, raft area, raft thickness and load) are taken. A finite difference-based numerical software, FLAC3D V6, was used for the analysis. The Kobe (1995) and Northridge (1994) earthquakes were selected, and deconvolution analysis was done. A close load sharing between pile and raft was achieved at a spacing of 7D with different pile lengths and diameters. The maximum settlement reduction achieved is 9% for a pile of 2 m diameter when increasing the length from 10 m to 20 m, which shows that pile length is not effective in reducing settlement. The installation of piles results in an increase in the negative bending moment of the raft compared with an unpiled raft. Hence, the optimized design depends on pile spacing and the raft edge length, while pile length and diameter are not significant parameters. An optimized piled raft configuration (A_G/A_R = 0.25 at the center, with piles provided around the edge) reduced the pile number by 40% and the differential settlement by 95%. The dynamic analysis shows that the acceleration plot at the top of the piled raft has a PGA of 0.25 m/s² and 0.63 m/s² for the Northridge (1994) and Kobe (1995) earthquakes, respectively, due to attenuation of the seismic waves. Pile head displacement (a maximum of 2 mm, under the allowable limit) is affected by the PGA rather than by the duration of the earthquake. End-bearing and friction PRFs performed similarly under the two earthquakes except for their vertical settlement considering SSI. Hence, the PRF has shown adequate resistance to seismic loads.
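
The "load sharing" discussed above is commonly summarised by the piled-raft coefficient, the fraction of the total vertical load carried by the pile group. A minimal sketch of that bookkeeping, assuming axial pile head forces and the raft contact load have already been extracted from the numerical results; the numbers below are placeholders, not the study's data:

```python
def piled_raft_coefficient(pile_loads_kN, raft_load_kN):
    """Fraction of the total vertical load carried by the pile group."""
    pile_total = sum(pile_loads_kN)
    return pile_total / (pile_total + raft_load_kN)

# Placeholder values: a "close load sharing" would give a coefficient near 0.5.
pile_loads = [850.0, 820.0, 790.0, 840.0]   # axial head forces from the model, kN
raft_load = 3250.0                           # load carried by raft contact, kN
alpha_pr = piled_raft_coefficient(pile_loads, raft_load)
print(f"piled-raft coefficient: {alpha_pr:.2f}")
```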

Keywords: FLAC3D V6, earthquake, optimized piled raft foundation, pile head displacement

Procedia PDF Downloads 26
379 Semantic-Based Collaborative Filtering to Improve Visitor Cold Start in Recommender Systems

Authors: Baba Mbaye

Abstract:

In collaborative filtering recommendation systems, a user receives suggested items based on the opinions and evaluations of a community of users. This type of recommendation system uses only the information (ratings in numerical values) contained in a usage matrix as input data. This matrix can be constructed from users' behaviors or by inviting users to declare their opinions on the items they know. The cold start problem leads to very poor performance for new users. It is a phenomenon that occurs at the beginning of use, in the situation where the system lacks the data needed to make recommendations. There are three types of cold start problem: cold start for a new item, for a new system, and for a new user. In this article, we are interested in the cold start for a new user. When the system welcomes a new user, the profile exists but does not yet have enough data, and its affinities with other user profiles are still unknown. This leads to recommendations that are not adapted to the profile of the new user. In this paper, we propose an approach that improves cold start by using the notions of similarity and semantic proximity between user profiles during cold start. We use the available cold-metadata (metadata extracted from the new user's data), which is useful for positioning the new user within a community. The aim is to look for similarities and semantic proximities with the old and current user profiles in the system. Proximity is represented by close concepts considered to belong to the same group, while similarity groups together elements that appear alike; similarity and proximity are two close but distinct notions. This leads us to a construction of similarity based on: a) concepts (properties, terms, instances) independent of the ontology structure and b) the joint representation of two concepts (relations, presence of terms in a document, simultaneous presence of the authorities). We propose an ontology, OIVCSRS (Ontology of Improvement Visitor Cold Start in Recommender Systems), in order to structure the terms and concepts representing the meaning of an information field, whether through the metadata of a namespace or the elements of a knowledge domain. This approach allows us to automatically attach the new user to a user community, partially compensate for the data not initially provided, and ultimately associate a better first profile with the cold-start user. Thus, the aim of this paper is to propose an approach to improving cold start using semantic technologies.
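
A minimal sketch of the community-attachment step described above, assuming each profile has already been encoded as a numeric metadata vector; the encoding and all names below are hypothetical, and the paper's actual method relies on the OIVCSRS ontology rather than plain cosine similarity:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def attach_to_community(new_user_vec, community_centroids):
    """Return the community whose centroid is most similar to the new user."""
    scores = {name: cosine(new_user_vec, c) for name, c in community_centroids.items()}
    return max(scores, key=scores.get), scores

# Hypothetical metadata vectors (e.g., one dimension per ontology concept).
centroids = {
    "sci-fi readers": np.array([0.9, 0.1, 0.3]),
    "cookbook fans": np.array([0.1, 0.8, 0.2]),
}
best, scores = attach_to_community(np.array([0.7, 0.2, 0.4]), centroids)
print(best, scores)
```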

Keywords: visitor cold start, recommender systems, collaborative filtering, semantic filtering

Procedia PDF Downloads 218
378 Reasons to Redesign: Teacher Education for a Brighter Tomorrow

Authors: Deborah L. Smith

Abstract:

To review our program and determine the best redesign options, department members gathered feedback and input through focus groups, analysis of data, and a review of the current research, to ensure that the proposed changes were not based solely on the state's new professional standards. In designing course assignments and assessments, we listened to a variety of constituents, including students, other institutions of higher learning, MDE webinars, host teachers, literacy clinic personnel, and other disciplinary experts. As a result, we are designing a program that includes a greater variety of field experiences for growth. We have determined ways to improve our program by connecting academic disciplinary knowledge, educational psychology, and community building both inside and outside the classroom through professional learning communities. The state's release of new professional standards led our department members to question what is working and what needs improvement in our program. One aspect of our program that continues to be supported by research and data analysis is the role of supervised field experiences with meaningful feedback, and we seek to expand in this area. Other data indicate that we have strengths in modeling a variety of approaches such as cooperative learning, discussions, literacy strategies, and workshops. In the new program, field assignments will be connected to multiple courses, and efforts to scaffold student learning toward evidence-based practices will be continuous. Despite running a program that meets multiple sets of standards, there are areas of need that we directly address in our redesign proposal. Technology is ever-changing, so improving digital skills is inevitably a focus. In addition, scaffolding procedures for English Language Learners (ELL) and other students who struggle are imperative. Diversity, equity, and inclusion (DEI) has been an integral part of our curriculum, but the research indicates that more self-reflection and a deeper understanding of culturally relevant practices would help the program improve. Connections with professional learning communities will be expanded, as will leadership components, so that teacher candidates understand their role in changing the face of education. A pilot program will run in the 2022/23 academic year, and additional data will be collected each semester through evaluations and continued program review.

Keywords: DEI, field experiences, program redesign, teacher preparation

Procedia PDF Downloads 169
377 Software User Experience Enhancement through User-Centered Design and Co-design Approach

Authors: Shan Wang, Fahad Alhathal, Hari Subramanian

Abstract:

User-centered design skills play an important role in crafting a positive and intuitive user experience for software applications. Embracing a user-centric design approach involves understanding the needs, preferences, and behaviors of the end users throughout the design process. This mindset not only enhances the usability of the software but also fosters a deeper connection between the digital product and its users. This paper describes a six-month knowledge exchange collaboration project, conducted in 2023 between an academic institution and an industry partner in the UK, which aimed to improve the user experience of a digital platform used as a knowledge management tool: to understand users' preferences for features, identify sources of frustration, and pinpoint areas for enhancement. The research implemented user-centered design through one of its most effective methods, co-design workshops for testing user onboarding experiences, which involve the active participation of users in the design process. More specifically, in January 2023, we organized eight co-design workshops with a diverse group of 11 individuals. Throughout these workshops, we accumulated a total of 11 hours of qualitative data in both video and audio formats. Subsequently, we conducted an analysis of user journeys, distilling common issues and potential areas for improvement into three insights. This analysis was pivotal in guiding the knowledge management software team in prioritizing feature enhancements and design improvements. Employing a user-centered design thinking process, we developed a series of graphic design solutions in collaboration with the software management tool company. These solutions targeted the refinement of onboarding user experiences, workplace interfaces, and interaction design, and some of them were translated into tangible interfaces for the knowledge management tool. By actively involving users in the design process and valuing their input, developers can create products that are not only functional but also resonate with end users, ultimately leading to greater success in the competitive software landscape. In conclusion, this paper contributes insights into designing onboarding user experiences for software within a co-design approach and presents key theories on leveraging the user-centered design process in software design to enhance overall user experiences.

Keywords: user experience design, user-centered design, co-design approach, knowledge management tool

Procedia PDF Downloads 9
376 Pragmatic Development of Chinese Sentence Final Particles via Computer-Mediated Communication

Authors: Qiong Li

Abstract:

This study investigated under which conditions computer-mediated communication (CMC) can promote pragmatic development. The focal features were four Chinese sentence final particles (SFPs): a, ya, ba, and ne. They occur frequently in Chinese and function as mitigators to soften the tone of speech. However, L2 acquisition of SFPs is difficult, suggesting the necessity of additional exposure to, or explicit instruction on, Chinese SFPs. This study follows this line and explores two research questions: (1) Is CMC combined with data-driven instruction more effective than CMC alone in promoting L2 Chinese learners' SFP use? (2) How does L2 Chinese learners' SFP use change over time, compared with the production of native Chinese speakers? The study involved 19 intermediate-level learners of Chinese enrolled at a private American university. They were randomly assigned to two groups: (1) the control group (N = 10), which was exposed to SFPs through CMC alone, and (2) the treatment group (N = 9), which was exposed to SFPs via CMC and data-driven instruction. Learners interacted with native speakers on given topics through text-based CMC over Skype. Both groups went through six 30-minute CMC sessions on a weekly basis, with a one-week interval after the first two CMC sessions and a two-week interval after the second two CMC sessions (nine weeks in total). The treatment group additionally received data-driven instruction after the first two sessions. Data analysis focused on three indices: token frequency, type frequency, and acceptability of SFP use. Token frequency was operationalized as the raw occurrence of SFPs per clause. Type frequency was the range of SFPs used. Acceptability was rated by two native speakers using a rating rubric. The results showed that the treatment group made noticeable progress over time on all three indices, with SFP production approximating the native-like level. In contrast, the control group improved only slightly on token frequency, and only certain SFPs (a and ya) reached native-like use. Potential explanations for the group differences are discussed in two respects: the properties of Chinese SFPs, and the roles of CMC and data-driven instruction. Though CMC provided the learners with opportunities to notice and observe SFP use, SFPs, as features of low saliency, were not easily noticed in the input. The data-driven instruction given to the treatment group directed the learners' attention to these particles, which facilitated development.
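
A minimal sketch of the two frequency indices defined above, computed over clause-segmented CMC transcripts; the clause segmentation and the sample data are hypothetical:

```python
SFPS = {"a", "ya", "ba", "ne"}

def sfp_indices(clauses):
    """clauses: list of clauses, each a list of romanized tokens."""
    tokens = [t for clause in clauses for t in clause if t in SFPS]
    token_freq = len(tokens) / len(clauses)   # raw SFP occurrences per clause
    type_freq = len(set(tokens))              # range of distinct SFPs used
    return token_freq, type_freq

sample = [["ni", "chi", "fan", "le", "ma"],
          ["hao", "a"],
          ["women", "zou", "ba"]]
print(sfp_indices(sample))  # -> (0.666..., 2)
```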

Keywords: computer-mediated communication, data-driven instruction, pragmatic development, second language Chinese, sentence final particles

Procedia PDF Downloads 418
375 Prosodic Realization of Focus in the Public Speeches Delivered by Spanish Learners of English and English Native Speakers

Authors: Raúl Jiménez Vilches

Abstract:

Native (L1) speakers can prosodically mark one part of an utterance and make it more relevant than the rest of the constituents. Conversely, non-native (L2) speakers encounter problems when it comes to marking information structure prosodically in English. In fact, the L2 speaker's choice of prosodic realization of focus is often unclear and obscures the intended pragmatic meaning and the communicative value in general. This paper reports some of the findings obtained in an L2 prosodic training course for Spanish learners of English within the context of public speaking. More specifically, it analyses the effects of the course experiment on the non-native production of the tonic syllable to mark focus and compares it with public speeches delivered by native English speakers. The experimental training was delivered over eighteen input sessions (1,440 minutes of total time), all of which took place in the classroom. In particular, the first part of the course provided explicit instruction on the recognition and production of the tonic syllable and on how the tonic syllable is used to express focus. The non-native and native oral presentations were acoustically analyzed using the Praat software for speech analysis (7,356 words in total). The investigation adopted mixed and embedded methodologies. Quantitative information is needed when measuring the phonetic realization of focus acoustically; qualitative data such as questionnaires, interviews, and observations were also used to interpret the quantitative data. The embedded experimental design was implemented through the analysis of the public speeches before and after the intervention. Results indicate that, even after the L2 prosodic training course, Spanish learners of English still show major inconsistencies in marking focus effectively. Although there was occasional improvement regarding the choice of location and word class, Spanish learners were, in general, far from achieving results similar to those obtained by the English native speakers in the two types of focus. The prosodic realization of focus seems to be one of the hardest areas of the English prosodic system for Spanish learners to master. A funded research project is in the process of moving the present classroom-based experiment to an online environment (a mobile app) and determining whether focus usage can be taught more effectively through CAPT (Computer-Assisted Pronunciation Training) tools.
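
A minimal sketch of the kind of acoustic measurement such a study relies on, extracting the mean F0 of a candidate tonic syllable with the praat-parselmouth Python wrapper for Praat; the file name and the syllable boundaries are placeholders, not the study's materials:

```python
import numpy as np
import parselmouth  # pip install praat-parselmouth

snd = parselmouth.Sound("speech_sample.wav")   # placeholder recording
pitch = snd.to_pitch()                         # Praat's default pitch tracking

# Hypothetical boundaries (seconds) of the syllable labelled as tonic.
t0, t1 = 1.32, 1.58
times = pitch.xs()
f0 = pitch.selected_array["frequency"]         # 0.0 where unvoiced
mask = (times >= t0) & (times <= t1) & (f0 > 0)

print(f"mean F0 over tonic syllable: {f0[mask].mean():.1f} Hz")
print(f"F0 excursion: {f0[mask].max() - f0[mask].min():.1f} Hz")
```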

Keywords: focus, prosody, public speaking, Spanish learners of English

Procedia PDF Downloads 99
374 Development of an Instrument for Measurement of Thermal Conductivity and Thermal Diffusivity of Tropical Fruit Juice

Authors: T. Ewetumo, K. D. Adedayo, Festus Ben

Abstract:

Knowledge of the thermal properties of foods is of fundamental importance in the food industry for the design of processing equipment. However, for tropical fruit juice there is very little information in the literature, seriously hampering processing procedures. This research work describes the development of an instrument for automated measurement of the thermal conductivity and thermal diffusivity of tropical fruit juice, using a transient thermal probe technique based on the line heat source principle. The system consists of two thermocouple sensors, a constant current source, a heater, a thermocouple amplifier, a microcontroller, a microSD card shield and an intelligent liquid crystal display. A fixed distance of 6.50 mm was maintained between the two probes. When heat is applied, the temperature rise at the heater probe is measured at intervals of 4 s for 240 s. The measuring element conforms as closely as possible to an infinite line source of heat in an infinite fluid. Under these conditions, thermal conductivity and thermal diffusivity are measured simultaneously: thermal conductivity is determined from the slope of a plot of the temperature rise of the heating element against the logarithm of time, while thermal diffusivity is determined from the time the sample takes to attain a peak temperature and the time duration over a fixed diffusivity distance. A constant current source was designed to apply a power input of 16.33 W/m to the probe throughout the experiment. The thermal probe was interfaced with a digital display and data logger using an application program written in C++. The instrument was calibrated by determining the thermal properties of distilled water; error due to convection was avoided by adding 1.5% agar to the water. The instrument has been used to measure the thermal properties of banana, orange and watermelon. Thermal conductivity values of 0.593, 0.598 and 0.586 W/(m·°C), and thermal diffusivity values of 1.053 × 10⁻⁷, 1.086 × 10⁻⁷ and 0.959 × 10⁻⁷ m²/s, were obtained for banana, orange and watermelon, respectively. Measured values were stored on a microSD card. The instrument performed very well, measuring the thermal conductivity and thermal diffusivity of the tropical fruit juice samples with statistical analysis (ANOVA) showing no significant difference (p > 0.05) between the literature standards and the averages estimated with the developed instrument.
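
For the line heat source method, the late-time temperature rise follows ΔT ≈ (q / 4πk)·ln(t) + C, so the conductivity k falls out of the slope of ΔT against ln(t). A minimal sketch of that fit, assuming the logged time-temperature pairs are at hand; the data below are synthetic placeholders:

```python
import numpy as np

Q = 16.33  # heater power per unit length, W/m (value quoted in the abstract)

def conductivity_from_log(times_s, temps_c):
    """Fit dT vs ln(t); for a line heat source, k = q / (4*pi*slope)."""
    slope, _ = np.polyfit(np.log(times_s), temps_c, 1)
    return Q / (4.0 * np.pi * slope)

# Placeholder log: samples every 4 s, discarding the early transient.
t = np.arange(40, 244, 4.0)
dT = 2.19 * np.log(t) + 0.5   # synthetic data for a k ~ 0.593 W/(m.degC) fluid
print(f"k = {conductivity_from_log(t, dT):.3f} W/(m.degC)")
```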

Keywords: thermal conductivity, thermal diffusivity, tropical fruit juice, diffusion equation

Procedia PDF Downloads 357
373 Simulation of Optimum Sculling Angle for Adaptive Rowing

Authors: Pornthep Rachnavy

Abstract:

The purpose of this paper is twofold. First, we believe that there is a significant relationship between sculling angle and sculling style in adaptive rowing. Second, we introduce a methodology, namely simulation, for identifying effective technique in adaptive rowing. In our study we simulate the arms-only single scull of adaptive rowing. The fastest way of rowing over 1000 meters was investigated by studying the sculling angle using simulation modeling. A simulation model of a rowing system was developed using the Matlab software package, based on equations of motion that include many of the variables that move the boat, such as oar length, blade velocity and sculling style. The boat speed, power and energy consumption of the system were computed, and the simulation model can also predict the force acting on the boat. The optimum sculling angle was determined by computer simulation. Inputs to the model are the sculling style of each rower and the sculling angle; the output of the model is the boat velocity over 1000 meters. The present study suggests that an optimum sculling angle exists and depends on the sculling style. The optimum angles for blade entry and release, with respect to the perpendicular through the pin, are -57.00 and 22.00 degrees for the first style, -57.00 and 22.00 degrees for the second style, -51.57 and 28.65 degrees for the third style, and -45.84 and 34.38 degrees for the fourth style. A theoretical simulation for rowing has thus been developed and presented, and the results suggest that it may be advantageous for rowers to select the sculling angles appropriate to their sculling styles. The findings of this paper can be summarized in three points: 1. There is an optimum sculling angle in the arms-only single scull of adaptive rowing. 2. The optimum sculling angles depend on the sculling styles. 3. Computer simulation of rowing can identify opportunities for improving rowing performance by utilizing the kinematic description of rowing. The freedom to explore alternatives in speed, thrust and timing with the computer simulation provides the coach with a tool for systematic assessment of rowing technique. In addition, the ability to use the computer to examine the very complex movements that occur during rowing will help both the rower and the coach to conceptualize components of the movement that may previously have been unclear or even undefined.
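
A minimal sketch of this kind of angle scan, using a deliberately simplified stroke model (propulsive force proportional to the axial component of the blade force, quadratic hull drag, Euler integration); all coefficients are placeholders and this is not the paper's Matlab model:

```python
import numpy as np

def time_over_1000m(entry_deg, release_deg, f_blade=300.0, mass=90.0,
                    drag=3.0, stroke_rate=0.6, dt=0.05):
    """Crude arms-only scull: integrate velocity until 1000 m is covered."""
    mean_angle = np.deg2rad((abs(entry_deg) + abs(release_deg)) / 2.0)
    drive_force = f_blade * np.cos(mean_angle)   # axial component only
    x, v, t = 0.0, 0.0, 0.0
    while x < 1000.0:
        # Drive phase occupies 80% of each stroke cycle (placeholder timing).
        driving = drive_force if (t % (1.0 / stroke_rate)) < 0.8 / stroke_rate else 0.0
        a = (driving - drag * v * v) / mass
        v = max(v + a * dt, 0.0)
        x += v * dt
        t += dt
    return t

for entry, release in [(-57.0, 22.0), (-51.57, 28.65), (-45.84, 34.38)]:
    print(entry, release, f"{time_over_1000m(entry, release):.1f} s")
```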

Keywords: simulation, sculling, adaptive, rowing

Procedia PDF Downloads 465
372 Applying the Underwriting Technique to Analyze and Mitigate the Credit Risks in Construction Project Management

Authors: Hai Chien Pham, Thi Phuong Anh Vo, Chansik Park

Abstract:

Risk management in construction projects is important to ensure the feasibility of the projects, and financial risks are of most concern because construction projects always run on a credit basis. Credit risks therefore require unique and technical tools to be well managed. The underwriting technique for credit risks, in its most basic sense, refers to the process of evaluating the risks and the potential exposure to losses. Risk analysis and underwriting are routinely applied by the banks and financial institutions that support construction projects when required. Recently, construction organizations, especially contractors, have recognized a significant increase in credit risks, which causes negative impacts on project performance and on the profit of construction firms. Despite the successful application of underwriting in banks and financial institutions for many years, few contractors apply this technique to analyze and mitigate the credit risks of their potential owners before signing contracts with them for delivering their services. Contractors have thus taken on credit risks during project implementation which might not have materialized had owners' bankruptcy and/or protracted default been anticipated. In this regard, this study proposes a model using the underwriting technique for contractors to analyze and assess the credit risks of their owners before making final decisions on potential construction contracts. A contractor's underwriters are able to analyze and evaluate subjects such as the owner, country, sector, payment terms, financial figures and related concerns of the credit limit requests in detail, based on reliable information sources, and then input them into the proposed model to obtain an Overall Assessment Score (OAS). The OAS serves as a benchmark for decision makers to grant the proper limits for the project. The proposed underwriting model was validated with 30 subjects in the Asia-Pacific region over 5 years: their OAS outputs were compared with their actual performance in order to evaluate the potential of the underwriting model for analyzing and assessing credit risks. The results revealed that underwriting can be a powerful method to assist contractors in making precise decisions. The contribution of this research is to allow contractors to develop their own credit risk management models for proactively preventing the credit risks of construction projects and to continuously improve and enhance the performance of this function during project implementation.
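
A minimal sketch of how an Overall Assessment Score of this kind can be assembled as a weighted sum of factor ratings; the factor names follow the abstract, but the weights, rating scale and decision threshold are invented for illustration, not the paper's calibrated model:

```python
# Each factor rated 1 (worst) to 5 (best) by the contractor's underwriter.
WEIGHTS = {            # hypothetical weights summing to 1.0
    "owner": 0.30,
    "country": 0.15,
    "sector": 0.10,
    "payment_terms": 0.20,
    "financial_figures": 0.25,
}

def overall_assessment_score(ratings):
    return sum(WEIGHTS[f] * ratings[f] for f in WEIGHTS)

request = {"owner": 4, "country": 3, "sector": 4,
           "payment_terms": 2, "financial_figures": 3}
oas = overall_assessment_score(request)
print(f"OAS = {oas:.2f} ->", "grant limit" if oas >= 3.0 else "decline / reduce limit")
```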

Keywords: underwriting technique, credit risk, risk management, construction project

Procedia PDF Downloads 208
371 A Geo DataBase to Investigate the Maximum Distance Error in Quality of Life Studies

Authors: Paolino Di Felice

Abstract:

The background and significance of this study come from papers already published in the literature, which measured the impact of public services (e.g., hospitals, schools, ...) on citizens' needs satisfaction (one of the dimensions of QOL studies) by calculating the distance between the place where citizens live and the location of the services on the territory. Those studies assume that a citizen's dwelling coincides with the centroid of the polygon that bounds the administrative district, within the city, to which they belong. Such an assumption "introduces a maximum measurement error equal to the greatest distance between the centroid and the border of the administrative district." The case study reported in this abstract investigates the implications of adopting such an approach at geographical scales greater than the urban one, namely at the three levels of nesting of the Italian administrative units: the (20) regions, the (110) provinces, and the 8,094 municipalities. To carry out this study, it had to be decided: a) how to store the huge amount of (spatial and descriptive) input data and b) how to process them. The latter aspect involves: b.1) the design of algorithms to investigate the geometry of the boundaries of the Italian administrative units; b.2) their coding in a programming language; b.3) their execution and, eventually, b.4) archiving of the results on permanent storage. The IT solution we implemented is centered around a (PostgreSQL/PostGIS) Geo DataBase structured in terms of three tables that fit the hierarchy of nesting of the Italian administrative units: municipality(id, name, provinceId, istatCode, regionId, geometry), province(id, name, regionId, geometry), region(id, name, geometry). The adoption of DBMS technology allows us to implement steps "a)" and "b)" easily. In particular, step "b)" is simplified dramatically by calling spatial operators and spatial built-in User Defined Functions within SQL queries against the Geo DB. The major findings of our experiments can be summarized as follows. The approximation that, on average, results from assimilating the residence of citizens to the centroid of the administrative unit of reference is of a few kilometres (4.9 km) at the municipality level, while it becomes conspicuous at the other two levels (28.9 km and 36.1 km, respectively). Therefore, studies such as those mentioned above can be extended up to the municipal level without affecting the correctness of the interpretation of the results, but not further. The IT framework implemented to carry out the experiments can be replicated for studies referring to the territory of other countries all over the world.
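
A minimal sketch of the kind of query this Geo DB makes possible, ranking municipalities by the maximum centroid-to-border distance using built-in PostGIS operators against the municipality table described above; the connection parameters are placeholders, and the distances come back in the units of the geometries' spatial reference system:

```python
import psycopg2  # PostgreSQL driver

SQL = """
SELECT name,
       ST_MaxDistance(ST_Centroid(geometry), ST_Boundary(geometry)) AS max_err
FROM municipality
ORDER BY max_err DESC
LIMIT 10;
"""

conn = psycopg2.connect(dbname="qol_geodb", user="analyst")  # placeholder credentials
with conn, conn.cursor() as cur:
    cur.execute(SQL)
    for name, max_err in cur.fetchall():
        print(f"{name}: {max_err:.1f} (units of the geometry's SRS)")
```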

Keywords: quality of life, distance measurement error, Italian administrative units, spatial database

Procedia PDF Downloads 371
370 Beliefs, Practices and Identity about Bilingualism: Korean-Australian Immigrant Parents and Family Language Policies

Authors: Eun Kyong Park

Abstract:

This study explores the relationships between immigrant parents' beliefs about bilingualism, family literacy practices, and their children's identity development in Sydney, Australia. The project examines how these parents' ideological beliefs and knowledge relate to their provision of family literacy practices and their management of the environment for their bilingual children, drawing on family language policy (FLP). It is a follow-up to the author's prior thesis, which presented Korean immigrant mothers' beliefs and decision-making in support of their children's bilingualism, and it includes fathers' perspectives within the participating families as a whole by foregrounding their perceptions of bilingual and identity development. It adopts a qualitative approach with twelve immigrant mothers and fathers living in a Korean-Australian community whose children attend one of the community's Korean language programs. This time, it includes introspective and self-evocative auto-ethnographic data. The initial data set, collected in the first part of the study, demonstrated that the mothers provided rich, diverse, and specific family literacy activities for their children and selected specific practices to facilitate their children's bilingual development at home. The second part of the data has been collected over a three-month period: 1) a focus group interview with the mothers; 2) a brief self-report by the fathers; 3) the researcher's reflective diary. To analyze these multiple data sources, thematic analysis and coding were used to reveal the parents' ideologies surrounding bilingualism and bilingual identities. The study highlights the complexity of language and literacy practices in the family domain as they interrelate with sociocultural factors. The project makes an original contribution to the field of bilingualism and FLP, and a methodological contribution by introducing auto-ethnographic input on this community's lived practices. It will empower Korean-Australian immigrant families and other multilingual communities to reflect on their beliefs and practices for their emerging bilingual children. It will also enable educators and policymakers to access authentic information about how bilingualism is practiced within these immigrant families in multiple ways, and help build culturally appropriate partnerships between home and the school community.

Keywords: bilingualism, beliefs, identity, family language policy, Korean immigrant parents in Australia

Procedia PDF Downloads 136
369 Effects of a Head Mounted Display Adaptation on Reaching Behaviour: Implications for a Therapeutic Approach in Unilateral Neglect

Authors: Taku Numao, Kazu Amimoto, Tomoko Shimada, Kyohei Ichikawa

Abstract:

Background: Unilateral spatial neglect (USN) is a common syndrome following damage to one hemisphere of the brain (usually the right side), in which a patient fails to report or respond to stimulation from the contralesional side. These symptoms are not due to primary sensory or motor deficits but instead reflect an inability to process input from that side of the environment. Prism adaptation (PA) is a therapeutic treatment for USN wherein a patient's visual field is artificially shifted laterally, resulting in a sensory-motor adaptation. However, patients with USN also tend to perceive a left-leaning subjective vertical in the frontal plane. Traditional PA cannot be used to correct a tilt in the subjective vertical, because a prism can only displace, not rotate, the surroundings; this can, however, be accomplished using a head mounted display (HMD) and a web camera. Therefore, this study investigated whether an HMD system could be used to correct the spatial perception of USN patients in the frontal as well as the horizontal plane. We recruited healthy subjects in order to collect data for the refinement of USN patient therapy. Methods: Eight healthy subjects sat on a chair wearing an HMD (Oculus Rift DK2) with a web camera (Ovrvision) whose image was displayed with a 10-degree leftward rotation and a 10-degree counter-clockwise rotation in the frontal plane. Subjects attempted to point a finger at one of four targets, assigned randomly, a total of 48 times. Before and after the intervention, each subject's body-centre judgment (BCJ) was tested by asking them to point a finger at a touch panel straight in front of their xiphisternum, 10 times, sight unseen. Results: The intervention caused the location pointed to during the BCJ to shift 35 ± 17 mm (mean ± SD) leftward in the horizontal plane and 46 ± 29 mm downward in the frontal plane. The results in both planes were significant by paired t-test (p < .01). Conclusions: The results in the horizontal plane are consistent with those observed following PA. Furthermore, the HMD and web camera were able to elicit adaptation effects in three dimensions, in both the horizontal and frontal planes. Future work will focus on applying this method to patients with and without USN and on investigating whether subject posture is also affected by the HMD system.
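
A minimal sketch of the pre/post comparison reported above, running a paired t-test on per-subject BCJ pointing coordinates; the arrays are placeholder data, not the study's measurements:

```python
import numpy as np
from scipy import stats

# Placeholder per-subject BCJ means (mm), horizontal axis, before/after adaptation.
pre  = np.array([  2.0,  -1.5,   0.8,   3.1,  -0.4,   1.2,   2.5,  -2.0])
post = np.array([-33.0, -36.5, -34.2, -30.9, -35.4, -33.8, -31.5, -37.0])

shift = post - pre
t, p = stats.ttest_rel(pre, post)
print(f"mean shift = {shift.mean():.1f} +/- {shift.std(ddof=1):.1f} mm")
print(f"paired t-test: t = {t:.2f}, p = {p:.4f}")
```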

Keywords: head mounted display, posture, prism adaptation, unilateral spatial neglect

Procedia PDF Downloads 280
368 Effect of Human Resources Accounting on Financial Performance of Banks in Nigeria

Authors: Oti Ibiam, Alexanda O. Kalu

Abstract:

Human resource accounting is the process of identifying and measuring data about human resources and communicating this information to interested parties so that they can make meaningful investment decisions. In recent times, firms' focus has shifted to human resource accounting in order to ensure efficiency and effectiveness in their operations. This study focused on the effect of human resource accounting on the financial performance of banks in Nigeria. The problem that led to the study revolves around the current trend whereby Nigerian banks do not efficiently account for the input of human resources in their annual statements: instead of capitalizing human resources in their statement of financial position, they expense them in their income statement, thereby reducing their profit after tax. The broad objective of this study is to determine the extent to which human resource accounting affects the financial performance and value of Nigerian banks. The study is considered significant because there are still, universally, grey areas to be sorted out on the subject of human resource accounting. To achieve the study objectives, the researcher gathered data from sixteen commercial banks. Data were collected from both primary and secondary sources using an ex-post facto research design, and were then tabulated and analyzed using multiple regression analysis. The result for hypothesis one revealed a significant relationship between capitalized human resource cost and post-capitalization profit before tax of banks in Nigeria. The finding for hypothesis two revealed that the association between capitalized human resource cost and post-capitalization net worth of banks in Nigeria is significant. The finding for hypothesis three revealed a significant difference between pre- and post-capitalization profit before tax of banks in Nigeria. The study concludes that human resource accounting positively influenced the financial performance of banks in Nigeria within the period under study. It is recommended that standards be set for human resource identification and measurement in the banking sector, and that the management of commercial banks in Nigeria develop a proper appreciation of human resource accounting; this will enable managers to take the right decisions regarding investment in human resources. The study also recommends that policies for enhancing the post-capitalization profit before tax of banks in Nigeria pay close attention to capitalized human resource cost, net worth and total assets, as these variables significantly influenced the post-capitalization profit before tax of the studied banks. The limitations of the study centre on the limited number of years and companies adopted for the study.
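
A minimal sketch of the multiple regression step described above, regressing post-capitalization profit before tax on the explanatory variables named in the abstract, using the statsmodels library; the figures are synthetic placeholders, not the banks' data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 16  # one observation per sampled bank (illustrative only)

# Synthetic regressors (in billions of naira, say).
hr_cost = rng.uniform(1, 10, n)        # capitalized human resource cost
net_worth = rng.uniform(20, 200, n)
total_assets = rng.uniform(100, 900, n)
profit = 0.8 * hr_cost + 0.05 * net_worth + 0.01 * total_assets + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([hr_cost, net_worth, total_assets]))
model = sm.OLS(profit, X).fit()
print(model.summary(xname=["const", "hr_cost", "net_worth", "total_assets"]))
```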

Keywords: capitalization, human resources cost, profit before tax, net worth

Procedia PDF Downloads 150
367 Predicting Daily Patient Hospital Visits Using Machine Learning

Authors: Shreya Goyal

Abstract:

The study aims to build user-friendly software to understand patient arrival patterns and compute the number of patients likely to visit a particular health facility over a given period using a machine learning algorithm. The underlying algorithm used in this study is the Support Vector Machine (SVM). Accurate prediction of patient arrivals allows hospitals to operate more effectively, providing timely and efficient care while optimizing resources and improving the patient experience. It allows for better allocation of staff, equipment, and other resources: if a surge in patients is projected, additional staff or resources can be allocated to handle the influx, preventing bottlenecks or delays in care. Understanding patient arrival patterns can also help streamline processes to minimize waiting times and ensure timely access to care for patients in need. Another big advantage of this software is adherence to strict data protection regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States, since the hospital does not have to share the data with any third party or upload it to the cloud: the software reads data locally from the machine. The data need to be arranged in a particular format, and the software is then able to read the data and provide meaningful output. Using software that operates locally facilitates compliance with these regulations by minimizing data exposure. Keeping patient data within the hospital's local systems reduces the risk of unauthorized access and of breaches associated with transmitting data over networks or storing it on external servers, helping to maintain the confidentiality and integrity of sensitive patient information. Historical patient data are used in this study. The input variables used to train the model include patient age, time of day, day of the week, seasonal variations, and local events. The algorithm uses supervised learning to optimize its objective function; since the SVM training problem is convex, the optimum it finds is the global minimum. A strength of this study is the transfer function used to calculate the number of patients. The model has an output accuracy of over 95%. The method proposed in this study could be used for better planning of personnel and medical resources.
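
A minimal sketch of a support-vector model for this forecasting task, using scikit-learn's SVR with the input variables named above; the feature encoding and the training rows are hypothetical, not the study's data or software:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical training rows: [hour_of_day, day_of_week, month, local_event_flag]
X = np.array([[9, 0, 1, 0], [14, 2, 1, 0], [11, 5, 7, 1], [16, 6, 12, 0]])
y = np.array([42, 55, 88, 61])   # daily visit counts (placeholder)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X, y)

# Forecast visits for a Monday 10:00 in March with no local event.
print(model.predict(np.array([[10, 0, 3, 0]])))
```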

Keywords: machine learning, SVM, HIPAA, data

Procedia PDF Downloads 65
366 Avoidance and Selectivity in the Acquisition of Arabic as a Second/Foreign Language

Authors: Abeer Heider

Abstract:

This paper explores and classifies the different kinds of avoidance that students commonly exhibit in the acquisition of Arabic as a second/foreign language, and suggests specific strategies to help students lessen their avoidance tendencies in the hope of streamlining the learning process. Students most commonly use avoidance strategies in grammar and word choice. These different types of strategies have different implications and naturally require different approaches. Thus the question remains as to the most effective way to help students improve their Arabic, and how teachers can efficiently utilize these techniques. It is hoped that this research will contribute to understanding the role of avoidance in the field of second language acquisition in general, and as a type of input. Some researchers also note that similarity between L1 and L2 may be problematic, since the learner may doubt that such similarity indeed exists and consequently avoid the identical constructions or elements (Jordens, 1977; Kellermann, 1977, 1978, 1986). In an effort to resolve this issue, a case study is being conducted. The present case study attempts to provide a broader analysis of what is acquired than is usually the case, analyzing the learners' accomplishments in terms of the three-part framework of the components of communicative competence suggested by Michael Canale: grammatical competence, sociolinguistic competence and discourse competence. The subjects of this study are 15 advanced-level students who came to study Arabic at Qatar University of Cairo; they were at intermediate level in Arabic when they first arrived in Qatar. The study used a discourse-analytic method to examine how the first language affects students' production and output in the second language, and how and when students use avoidance in their learning. It is being conducted through Fall 2015 by analyzing audio recordings made throughout the entire semester (around 30 clips). The students are using supplementary listening and speaking materials, and the group will be tested at the end of the term to assess any measurable difference between the techniques. Questionnaires will be administered to teachers and students before and after the semester to assess any change in attitude toward avoidance and selectivity methods; responses to these questionnaires are analyzed and discussed to assess the relative merits of the aforementioned avoidance and selectivity strategies. Implications and recommendations for teacher training are proposed.

Keywords: the second language acquisition, learning languages, selectivity, avoidance

Procedia PDF Downloads 277
365 Proposal of a Rectenna Built by Using Paper as a Dielectric Substrate for Electromagnetic Energy Harvesting

Authors: Ursula D. C. Resende, Yan G. Santos, Lucas M. de O. Andrade

Abstract:

The recent and fast development of the internet, wireless and telecommunication technologies, and low-power electronic devices has led to an impressive amount of electromagnetic energy being available in the environment and to the expansion of smart application technologies. These applications have been used in Internet of Things devices and in 4G and 5G solutions, whose main feature is the use of wireless sensors. Although these sensors are low-power loads, their use imposes huge challenges in terms of an efficient and reliable power supply that avoids the traditional battery. Radio frequency based energy harvesting technology is especially suitable for powering wireless sensors by using a rectenna, since it can be completely integrated into the structure hosting the distributed sensors, reducing cost, maintenance and environmental impact. A rectenna is a device composed of an antenna and a rectifier circuit. The antenna's function is to collect as much radio frequency radiation as possible and transfer it to the rectifier, a nonlinear circuit that converts the very low input radio frequency energy into direct current voltage. In this work, a set of rectennas mounted on a paper substrate, which can be used for the inner coating of buildings while simultaneously harvesting electromagnetic energy from the environment, is proposed. Each individual rectenna is composed of a 2.45 GHz patch antenna and a voltage doubler rectifier circuit built on the same paper substrate. The antenna contains a rectangular radiator element and a microstrip transmission line that were designed and optimized using CST simulation software in order to obtain S11 values below -10 dB at 2.45 GHz. In order to increase the amount of harvested power, eight individual rectennas incorporating metamaterial cells were connected in parallel, forming a system denominated the Electromagnetic Wall (EW). To evaluate the EW's performance, it was positioned at a variable distance from an internet router and fed a 27 kΩ resistive load. The results showed that when more than one rectenna is connected in parallel, a power level sufficient to feed very low consumption sensors can be achieved. The 0.12 m² EW proposed in this work was able to harvest 0.6 mW from the environment. It was also observed that the use of metamaterial structures provides an expressive growth in the amount of electromagnetic energy harvested, which increased from 0.2 mW to 0.6 mW.
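
A minimal sketch of the standard first-pass design equations (transmission-line model) for a rectangular microstrip patch at 2.45 GHz; the substrate permittivity and thickness are placeholders, since the paper substrate's actual properties are not given here, and in practice the dimensions would then be optimized in CST as described above:

```python
import math

c = 3.0e8     # speed of light, m/s
f = 2.45e9    # design frequency, Hz
eps_r = 3.2   # placeholder relative permittivity of the paper substrate
h = 0.5e-3    # placeholder substrate thickness, m

# Patch width for efficient radiation.
W = c / (2 * f) * math.sqrt(2 / (eps_r + 1))

# Effective permittivity and length extension due to fringing fields.
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / ((eps_eff - 0.258) * (W / h + 0.8))
L = c / (2 * f * math.sqrt(eps_eff)) - 2 * dL

print(f"W = {W*1000:.2f} mm, L = {L*1000:.2f} mm (starting point before CST optimization)")
```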

Keywords: electromagnetic energy harvesting, metamaterial, rectenna, rectifier circuit

Procedia PDF Downloads 167
364 Hydrogeochemical Investigation of Lead-Zinc Deposits in Oshiri and Ishiagu Areas, South Eastern Nigeria

Authors: Christian Ogubuchi Ede, Moses Oghenenyoreme Eyankware

Abstract:

This study assessed the concentrations of heavy metals (HMs) in soil, rock, mine dump piles, and water from the Oshiri and Ishiagu areas of Ebonyi State. Investigations of the mobile fraction also evaluated the geochemical condition of the different HMs, using a UV spectrophotometer for mineralized and unmineralized rocks, dumps, and soil, while AAS was used to determine the geochemistry of the water system. The analysis revealed very high Cd pollution, mostly in the Ishiagu (Ihetutu and Amaonye) active mine zones, with subordinate enrichments of Pb, Cu, As, and Zn in Amagu and Umungbala. Oshiri recorded moderate to high contamination of Cd and Mn, with clearly high anthropogenic input. Observation showed that most of the contamination was severe near the mine workings relative to the control, and decreased with increasing distance from the mine vicinity. The potential heavy metal risk of the environments was evaluated using risk indices such as the enrichment factor, the index of geoaccumulation, the contamination factor, and the effect range median. Cadmium and Zn showed moderate to extreme contamination according to the geoaccumulation index (Igeo), while Pb, Cd, and As indicated moderate to strong pollution according to the effect range median. When compared with the allowable limits and standards, the concentrations of the metals ranked in the following order: Cd>Zn>Pb>As>Cu>Ni (rocks), Cd>As>Pb>Zn>Cu>Ni (soil) and Cd>Zn>As>Pb>Cu (mine dump piles). High concentrations of Zn and As were recorded mostly in mine ponds and salt line/drain channels along active mine zones; the threat is heightened during the rainy period, when these metals settle into river courses, exposing inhabitants who depend on them for domestic uses to full-scale contamination. Pb and Cu with moderate pollution were recorded in surface/stream water sources, as their mobility was relatively low. Results from the Ishiagu crushed rock sites and the Fedeco metallurgical and auto workshop, where groundwater contamination was seen infiltrating some of the well points, gave values four times higher than the allowable limits. Some of these metal concentrations, according to WHO (2015), pose adverse effects to the soil and the human community if left unmitigated.
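
A minimal sketch of two of the pollution indices named above, the geoaccumulation index Igeo = log2(Cn / (1.5·Bn)) and the contamination factor CF = Cn/Bn, where Cn is the measured concentration and Bn the geochemical background; the example values are placeholders, not the study's data:

```python
import math

def igeo(c_sample, c_background):
    """Index of geoaccumulation; the factor 1.5 corrects for background fluctuations."""
    return math.log2(c_sample / (1.5 * c_background))

def contamination_factor(c_sample, c_background):
    return c_sample / c_background

# Placeholder concentrations (mg/kg): measured Cd in soil vs. background.
cd_measured, cd_background = 4.8, 0.3
print(f"Igeo = {igeo(cd_measured, cd_background):.2f}")   # Igeo > 5 would be 'extremely contaminated'
print(f"CF   = {contamination_factor(cd_measured, cd_background):.1f}")
```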

Keywords: water, geo-accumulation, heavy metals, mine, Nigeria

Procedia PDF Downloads 170
363 Study of Polychlorinated Dibenzo-P-Dioxins and Dibenzofurans Dispersion in the Environment of a Municipal Solid Waste Incinerator

Authors: Gómez R. Marta, Martín M. Jesús María

Abstract:

The general aim of this paper is to identify the areas of highest concentration of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) around an incinerator through the use of dispersion models. Atmospheric dispersion models are useful tools for estimating, and helping to prevent, the impact of emissions from a particular source on air quality. These models take into account the different factors that influence air pollution: source characteristics, the topography of the receiving environment and the weather conditions, in order to predict pollutant concentrations. After their emission into the atmosphere, PCDD/Fs are deposited on water or land, near to or far from the emission source depending on the size of the associated particles and on the climatology; in this way, they are transferred and mobilized through environmental compartments. The modelling of PCDD/Fs was carried out with the following tools: the ADMS atmospheric dispersion modelling software and Surfer. ADMS is a Gaussian plume dispersion model used to model the air quality impact of industrial facilities, and Surfer is a surface-mapping program used to represent the dispersion of pollutants on a map. For the modelling of emissions, the ADMS software mainly requires the following input parameters: characterization of the emission sources (source type, height, diameter, temperature of the release, flow rate, etc.) and meteorological and topographical data (coordinate system). The study area was set at 5 km around the incinerator, and the population centre nearest to the PCDD/F emission source is approximately 2.5 km away. Data were collected during one year (2013) on both the PCDD/F emissions of the incinerator and the meteorology of the study area. The study was carried out over the averaging periods that the legislation establishes, that is to say, the output parameters take the current legislation into account. Once all the data required by the ADMS software, described previously, were entered, the modelling was carried out in order to represent the spatial distribution of PCDD/F concentrations and the areas they affect. In general, the dispersion plume follows the direction of the predominant winds (southwest and northeast). Total levels of PCDD/Fs usually found in air samples range from <2 pg/m³ in remote rural areas, through 2-15 pg/m³ in urban areas, to 15-200 pg/m³ in areas near important sources, such as an incinerator. The dispersion maps show that the maximum concentrations are of the order of 10⁻⁸ ng/m³, well below the values considered typical for areas close to an incinerator, as in this case.
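
A minimal sketch of the Gaussian plume equation underlying models of this family (ADMS itself is a proprietary, considerably more sophisticated implementation); the dispersion coefficients below are crude power-law placeholders, not ADMS's parameterization:

```python
import math

def plume_concentration(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Steady-state Gaussian plume with ground reflection.
    Q: emission rate (g/s), u: wind speed (m/s), H: effective stack height (m),
    x: downwind, y: crosswind, z: receptor height (m)."""
    sigma_y = a * x ** 0.9    # placeholder dispersion coefficients
    sigma_z = b * x ** 0.85
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Example: receptor 2.5 km downwind at ground level, 30 m stack, 4 m/s wind.
print(plume_concentration(Q=1e-9, u=4.0, x=2500.0, y=0.0, z=0.0, H=30.0), "g/m^3")
```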

Keywords: atmospheric dispersion, dioxin, furan, incinerator

Procedia PDF Downloads 217
362 Ionometallurgy for Recycling Silver in Silicon Solar Panel

Authors: Emmanuel Billy

Abstract:

This work is part of the CABRISS project (an H2020 project), which aims at developing innovative, cost-effective methods for the extraction of materials from the different sources of PV waste: Si-based panels, thin-film panels and water-diluted Si slurries. Aluminum, silicon, indium and silver will in particular be extracted from these wastes in order to constitute a materials feedstock that can later be used in a closed-loop process. The extraction of metals from silicon solar cells is often an energy-intensive process. It requires either smelting, leaching at elevated temperature, or the use of large quantities of strong acids or bases that require energy to produce. The energy input equates to a significant cost and an associated CO2 footprint, both of which it would be desirable to reduce; there is thus a need to develop more energy-efficient and environmentally compatible processes. 'Ionometallurgy' could offer such a set of environmentally benign processes for metallurgy. This work demonstrates that ionic liquids provide one such method, since they can be used to dissolve and recover silver; the overall process combines leaching, recovery and the possibility of re-using the solution in a closed loop. This study aims to evaluate and compare different ionic liquids for leaching and recovering silver. An electrochemical analysis is first carried out to define the best system for Ag dissolution, and the effects of temperature, concentration and oxidizing agent are evaluated by this approach. Further, a comparative study between the conventional approaches (nitric acid, thiourea) and the ionic liquids (Cu and Al) focuses on leaching efficiency. Particular attention has been paid to the selection of the ionic liquids: electrolytes composed of chelating anions (Cl⁻, Br⁻, I⁻) are used to facilitate the lixiviation and to avoid the solubility problems of metallic species and of classical additional ligands. This approach reduces the cost of the process and facilitates the re-use of the leaching medium. To define the most suitable ionic liquids, electrochemical experiments were carried out to evaluate the oxidation potential of the silver included in the crystalline solar cells. Chemical dissolution of metals from crystalline solar cells was then performed for the most promising ionic liquids and, after the chemical dissolution, electrodeposition was performed to recover the silver in metallic form.

Keywords: electrodeposition, ionometallurgy, leaching, recycling, silver

Procedia PDF Downloads 247
361 Robust Batch Process Scheduling in Pharmaceutical Industries: A Case Study

Authors: Tommaso Adamo, Gianpaolo Ghiani, Antonio Domenico Grieco, Emanuela Guerriero

Abstract:

Batch production plants give rise to a wide range of scheduling problems. In pharmaceutical industries, a batch process is usually described by a recipe consisting of an ordering of tasks to produce the desired product. In this research work we focused on pharmaceutical production processes requiring the culture of a microorganism population (e.g., bacteria, yeasts or antibiotic-producing organisms). Several sources of uncertainty may influence the yield of the culture processes, including (i) low performance and quality of the cultured microorganism population or (ii) microbial contamination. For these reasons, robustness is a valuable property in this application context: a robust schedule will not collapse immediately when a culture of microorganisms has to be thrown away due to microbial contamination. Indeed, a robust schedule should change locally and in small proportions, and the overall performance measure (e.g., makespan, lateness) should change little, if at all. In this research work we formulated a constraint programming optimization (COP) model for the robust planning of antibiotics production. We developed a discrete-time model with a multi-criteria objective, ordering the different criteria and performing a lexicographic optimization. A feasible solution of the proposed COP model is a schedule of a given set of tasks onto the available resources. The schedule has to satisfy task precedence constraints, resource capacity constraints and time constraints; in particular, the time constraints model task due dates and resource availability time windows. To improve schedule robustness, we modeled the concept of (a, b) super-solutions, where a and b are input parameters of the COP model. An (a, b) super-solution is one in which, if a variables (e.g., the completion times of a culture tasks) lose their values (i.e., the cultures are contaminated), the solution can be repaired by assigning new values to these variables (e.g., the completion times of backup culture tasks) while changing at most b other variables (i.e., delaying the completion of at most b other tasks). The efficiency and applicability of the proposed model are demonstrated by solving instances taken from Sanofi Aventis, a French pharmaceutical company. Computational results showed that the determined super-solutions are near-optimal.
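
A minimal sketch, not the authors' COP model, showing the flavor of such constraint-based scheduling with Google OR-Tools CP-SAT: three recipe tasks on one shared bioreactor, precedence and no-overlap constraints, makespan minimized; the task names and durations are invented:

```python
from ortools.sat.python import cp_model

durations = {"seed_culture": 3, "main_culture": 6, "harvest": 2}  # days, placeholder
model = cp_model.CpModel()
horizon = sum(durations.values())

starts, ends, intervals = {}, {}, {}
for name, d in durations.items():
    starts[name] = model.NewIntVar(0, horizon, f"s_{name}")
    ends[name] = model.NewIntVar(0, horizon, f"e_{name}")
    intervals[name] = model.NewIntervalVar(starts[name], d, ends[name], f"i_{name}")

# Recipe ordering and a single shared bioreactor.
model.Add(ends["seed_culture"] <= starts["main_culture"])
model.Add(ends["main_culture"] <= starts["harvest"])
model.AddNoOverlap(intervals.values())

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends.values())
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    for name in durations:
        print(name, solver.Value(starts[name]), "->", solver.Value(ends[name]))
```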

Keywords: constraint programming, super-solutions, robust scheduling, batch process, pharmaceutical industries

Procedia PDF Downloads 618
360 FEM and Experimental Modal Analysis of Computer Mount

Authors: Vishwajit Ghatge, David Looper

Abstract:

Over the last few decades, oilfield service rolling equipment has significantly increased in weight, primarily because of emissions regulations, which require larger and heavier engines, larger cooling systems and, in some cases, emissions after-treatment systems. Larger engines cause more vibration and shock loads, leading to failure of electronics and control systems. If the vibrating frequency of the engine matches the system frequency, high resonance is observed in structural parts and mounts. One such existing automated control equipment system, comprising wire rope mounts used for mounting computers, was designed approximately 12 years ago. It includes an industrial-grade computer to control the system operation. The original computer had a smaller, lighter enclosure; a few years later, a newer computer version was introduced that was 10 lbm heavier. Some failures of internal computer parts have been documented for cases in which the old mounts were used. Because of the added weight, there is a possibility of the two brackets impacting each other under off-road conditions, which causes a high shock input to the computer parts. This added failure mode requires validating the existing mount design for the new, heavier computer. This paper discusses the modal finite element method (FEM) analysis and experimental modal analysis conducted to study the effects of vibration on the wire rope mounts and the computer. The existing mount was modeled in ANSYS software, and the resulting mode shapes and frequencies were obtained. The experimental modal analysis was conducted, and the actual frequency responses were observed and recorded. The results clearly revealed that at the resonance frequency the brackets were colliding and potentially damaging computer parts. To solve this issue, spring mounts of different stiffness were modeled in ANSYS software, and the resonant frequency was determined. Increasing the stiffness of the system shifted the resonant frequency away from the frequency window in which the engine showed heavy vibration. After multiple iterations in ANSYS software, the stiffness of the spring mount was finalized and again experimentally validated.
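
The underlying design check is the single-degree-of-freedom resonance estimate f_n = sqrt(k/m)/(2*pi). The sketch below walks through it with invented masses, stiffnesses and an assumed engine excitation band; none of these values come from the study.

```python
import math

def natural_frequency_hz(k_n_per_m, mass_kg):
    """Undamped natural frequency of a single-DOF mass on elastic mounts."""
    return math.sqrt(k_n_per_m / mass_kg) / (2.0 * math.pi)

mass = 18.0 + 4.5                 # kg: assumed old computer plus ~10 lbm increase
engine_band = (20.0, 35.0)        # Hz: assumed heavy-vibration window

for k in (1.0e5, 5.5e5, 2.0e6):   # candidate total mount stiffnesses, N/m
    f_n = natural_frequency_hz(k, mass)
    inside = engine_band[0] <= f_n <= engine_band[1]
    verdict = "inside engine band: avoid" if inside else "outside band"
    print(f"k = {k:9.0f} N/m -> f_n = {f_n:4.1f} Hz ({verdict})")
```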

Keywords: experimental modal analysis, FEM Modal Analysis, frequency, modal analysis, resonance, vibration

Procedia PDF Downloads 321
359 Internationalization of Higher Education in Malaysia-Rationale for Global Citizens

Authors: Irma Wani Othman

Abstract:

The internationalization of higher education in Malaysia mainly focuses on implementing a strategic, comprehensive and integrated engagement of a range of stakeholders in order to highlight the visibility of Malaysia as a hub of academic excellence. The concept of 'global citizenship' is used as a two-pronged strategy of aggressive marketing by universities, which includes (i) the involvement of academic expatriates in stimulating international higher education activities and (ii) an increase in international student enrollment capacity for the enculturation of science and the development of a first-class mentality. In this respect, aspirations for a transnational social movement grounded in global citizenship status seek to establish the identity of a university community without borders (borderless universities), regardless of skin colour, and thus to rationalize and liberalize universal principles of life and the cultural traditions of a nation. The education system, earlier guided by the spirit of nationalism, is now progressing under globalization, forming a system of higher education that is relevant and responsive to the needs of the time. However, debates arise when global citizenship is said to threaten the university's ultimate autonomy in determining the direction of its academic affairs and the governance of its human resources. Stemming from this debate, this study aims to explore how the 'global citizenship' experience of academic expatriates and international students shapes the university's strategic needs and interests in line with the transition of contemporary higher education. The objective of this study is to examine the acculturation experience of global citizens within a transnational higher education system and to suggest policy and governance of IHE that refers directly to the experience of the global citizen. This study offers a detailed understanding of how university communities assess their expatriation experience, thus becoming useful information for learning and transforming education. The findings also open an advanced perspective on the international mobility of human resources and the implications for the implementation of the policy of internationalization of higher education. The contribution of this study is expected to give new input and thus shift the focus of the contextual literature on the internationalization of the education system, away from the income-generating purposes of a university and towards a greater understanding of the subjective experience of utilizing international human resources, thereby contributing to the prominent transnational character of higher education.

Keywords: internationalization, global citizens, Malaysia higher education, academic expatriate, international students

Procedia PDF Downloads 313
358 Electronic Six-Minute Walk Test (E-6MWT): Less Manpower, Higher Efficiency, and Better Data Management

Authors: C. M. Choi, H. C. Tsang, W. K. Fong, Y. K. Cheng, T. K. Chui, L. Y. Chan, K. W. Lee, C. K. Yuen, P. W. Lau, Y. L. To, K. C. Chow

Abstract:

The six-minute walk test (6MWT) is a sub-maximal exercise test to assess the aerobic capacity and exercise tolerance of patients with chronic respiratory disease and heart failure. It has been proven to be a reliable and valid tool and is commonly used in clinical situations. The traditional 6MWT is labour-intensive and time-consuming, especially for patients who require assistance in ambulation and oxygen use. When performing the test with these patients, one member of staff assists the patient in walking (with or without aids) while another manually records the patient's oxygen saturation, heart rate and walking distance at every minute and/or carries the oxygen cylinder at the same time. The physiotherapist then has to document the test results in the bed notes in detail. With the electronic 6MWT (E-6MWT), patients wear a wireless oximeter that transfers data to a tablet PC via Bluetooth. Oxygen saturation, heart rate and distance are recorded and displayed in real time, with no manual recording needed. The tablet generates a comprehensive report which can be attached directly to the patient's bed notes for documentation, and the data can also be saved for later patient follow-up. This study was carried out in North District Hospital. Patients who followed commands and required 6MWT assessment were included and assigned to the study or control group. In the study group, patients adopted the E-6MWT, while those in the control group adopted the traditional 6MWT; the manpower and time consumed were recorded. Physiotherapists also completed a questionnaire about the use of the E-6MWT. In total, 12 subjects (study=6; control=6) were recruited during 11-12/2017. The average number of staff required and time consumed in the traditional 6MWT were 1.67 and 949.33 seconds respectively, while in the E-6MWT the figures were 1.00 and 630.00 seconds. Compared to the traditional 6MWT, the E-6MWT thus required about 40% less manpower and 34% less time (the arithmetic is sketched below). Physiotherapists (n=7) found the E-6MWT convenient to use (mean=5.14; satisfied to very satisfied), requiring less manpower and time to complete the test (mean=4.71; rather satisfied to satisfied), and offering better data management (mean=5.86; satisfied to very satisfied), and recommended it for clinical use (mean=5.29; satisfied to very satisfied). The E-6MWT requires less manpower input, with higher efficiency and better data management, and is welcomed by clinical frontline staff.
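
A minimal sketch of the arithmetic, with an invented per-minute walk record standing in for the Bluetooth stream; only the staff and time averages are taken from the study.

```python
spo2_pct = [97, 96, 95, 94, 94, 93]        # oxygen saturation, one value per minute
hr_bpm   = [82, 95, 104, 110, 113, 116]    # heart rate per minute
dist_m   = [62, 126, 188, 248, 305, 360]   # cumulative walking distance

print(f"Total distance : {dist_m[-1]} m")
print(f"Lowest SpO2    : {min(spo2_pct)} %")
print(f"Peak heart rate: {max(hr_bpm)} bpm")

# Reductions implied by the reported averages:
for label, traditional, electronic in [("staff", 1.67, 1.00),
                                       ("time (s)", 949.33, 630.00)]:
    cut = 100.0 * (traditional - electronic) / traditional
    print(f"{label}: {cut:.1f}% less with E-6MWT")   # ~40.1% and ~33.6%
```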

Keywords: electronic, physiotherapy, six-minute walk test, 6MWT

Procedia PDF Downloads 154
357 Experimental Evaluation of Foundation Settlement Mitigations in Liquefiable Soils using Press-in Sheet Piling Technique: 1-g Shake Table Tests

Authors: Md. Kausar Alam, Ramin Motamed

Abstract:

The damaging effects of liquefaction-induced ground movements have been frequently observed in past earthquakes, such as the 2010-2011 Canterbury Earthquake Sequence (CES) in New Zealand and the 2011 Tohoku earthquake in Japan. To reduce the consequences of soil liquefaction at shallow depths, various ground improvement techniques have been utilized in engineering practice, among which this research focuses on experimentally evaluating the press-in sheet piling technique. The press-in sheet pile technique eliminates the vibration, hammering and noise pollution associated with dynamic sheet pile installation methods. Unfortunately, there are few experimental studies of the press-in sheet piling technique for liquefaction mitigation using 1g shake table tests in which all the controlling mechanisms of liquefaction-induced foundation settlement, including sand ejecta, can be realistically reproduced. In this study, a series of moderate-scale 1g shake table experiments were conducted at the University of Nevada, Reno, to evaluate the performance of this technique in liquefiable soil layers. First, a 1/5-scale model was developed based on a recent UC San Diego shake table experiment; the scaled model has a relative density of 50% for the top crust, 40% for the intermediate liquefiable layer, and 85% for the bottom dense layer. Second, a shallow foundation is seated atop the unsaturated sandy soil crust. Third, in a series of tests, a sheet pile with variable embedment depth is inserted into the liquefiable soil surrounding the shallow foundation using the press-in technique. The scaled models are subjected to harmonic input motions with amplitude and dominant frequency properly scaled based on the large-scale shake table test. This study assesses the performance of the press-in sheet piling technique in terms of reductions in the foundation movements (settlement and tilt) and in the generated excess pore water pressures. In addition, the paper discusses the cost-effectiveness and carbon footprint of the studied mitigation measures.
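
One standard way such tests are evaluated is through the excess pore pressure ratio ru. The sketch below assumes a transducer depth, soil unit weight and peak readings purely for illustration; it is not the study's instrumentation or data.

```python
GAMMA_W = 9.81                    # kN/m3, unit weight of water

def ru(excess_pwp_kpa, sigma_v0_eff_kpa):
    """Excess pore pressure ratio; values near 1.0 indicate liquefaction."""
    return excess_pwp_kpa / sigma_v0_eff_kpa

depth = 1.0                       # m, assumed transducer depth
gamma_sat = 18.0                  # kN/m3, assumed saturated unit weight
sigma_v0_eff = (gamma_sat - GAMMA_W) * depth   # ~8.2 kPa effective stress

peak_excess_pwp = {"free field": 7.9,          # kPa, invented peak readings
                   "beside sheet-pile wall": 4.1}
for location, du in peak_excess_pwp.items():
    print(f"{location:24s} ru = {ru(du, sigma_v0_eff):.2f}")
```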

Keywords: excess pore water pressure, foundation settlement, press-in sheet pile, soil liquefaction

Procedia PDF Downloads 97
356 Cooperation of Unmanned Vehicles for Accomplishing Missions

Authors: Ahmet Ozcan, Onder Alparslan, Anil Sezgin, Omer Cetin

Abstract:

The use of unmanned systems for different purposes has become very popular over the past decade, and expectations from these systems have increased correspondingly. However, meeting the demands of a mission is often not possible with a single unmanned vehicle, so it is necessary to use multiple autonomous vehicles with different abilities together in coordination. Using several vehicles of the same type together as a swarm helps, in particular, to satisfy the time constraints of missions effectively; in other words, it allows the workload to be shared among a number of homogeneous platforms. Besides, many kinds of problems require the different capabilities of heterogeneous platforms to be used together cooperatively to achieve successful results, in which case cooperative working brings additional problems beyond those of homogeneous clusters. In the scenario presented as an example problem, an autonomous ground vehicle that lacks position information is expected to perform point-to-point navigation without losing its way in a previously unknown labyrinth. Furthermore, the ground vehicle is equipped with very limited sensors, such as ultrasonic sensors that can detect obstacles. It is very hard for the ground vehicle to plan or complete the mission by itself without getting lost in the unknown labyrinth. Thus, in order to assist the ground vehicle, an autonomous air drone is used to solve the problem cooperatively. The drone also has limited sensors, such as a downward-looking camera and an IMU, and it likewise cannot compute its global position. In this context, the aim is to solve the problem effectively without taking additional support or input from outside, benefiting only from the capabilities of the two autonomous vehicles. To manage point-to-point navigation in a previously unknown labyrinth, the platforms have to work together in a coordinated way. In this paper, the cooperative work of heterogeneous unmanned systems is handled in an applied sample scenario, and it is shown how an autonomous ground vehicle and an autonomous flying platform can work together in harmony to take advantage of platform-specific capabilities. The difficulties of using multiple heterogeneous autonomous platforms in a mission are put forward, and successful solutions are defined and implemented for problems such as spatially distributed task planning, simultaneous coordinated motion, effective communication, and sensor fusion.
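
One plausible realization of the cooperation, sketched below under assumptions not stated in the paper: the drone's camera view is reduced to an occupancy grid, a breadth-first search finds a route, and the resulting waypoints are relayed to the ground vehicle.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on a 0/1 occupancy grid (1 = wall)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                 # walk back through predecessors
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                          # labyrinth has no route

labyrinth = [[0, 1, 0, 0],               # invented grid from the drone's camera
             [0, 1, 0, 1],
             [0, 0, 0, 1],
             [1, 1, 0, 0]]
waypoints = bfs_path(labyrinth, start=(0, 0), goal=(3, 3))
print(waypoints)                         # relayed to the UGV as point-to-point commands
```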

Keywords: unmanned systems, heterogeneous autonomous vehicles, coordination, task planning

Procedia PDF Downloads 128
355 Thermodynamic Analyses of Information Dissipation along the Passive Dendritic Trees and Active Action Potential

Authors: Bahar Hazal Yalçınkaya, Bayram Yılmaz, Mustafa Özilgen

Abstract:

Brain information transmission in the neuronal network occurs in the form of electrical signals. The neural network transmits information between neurons, or between neurons and target cells, by moving charged particles in a voltage field; a fraction of the energy utilized in this process is dissipated via entropy generation. Exergy loss and entropy generation models demonstrate the inefficiencies of communication along the dendritic trees. In this study, neurons of four different animals were analyzed with a one-dimensional cable model with N=6 identical dendritic trees and M=3 orders of symmetrical branching. Each branch bifurcates symmetrically in accordance with the 3/2 power law into infinitely long cylinders under the usual core conductor assumptions, where membrane potential is conserved in the core conductor at all branching points. In the model, exergy loss and entropy generation rates are calculated for each branch of equivalent cylinders of electrotonic length (L) ranging from 0.1 to 1.5, for four different dendritic branches: the input branch (BI), the sister branch (BS) and two cousin branches (BC-1 & BC-2). Thermodynamic analysis with data from two different cat motoneuron studies shows that in both experiments nearly the same amount of exergy is lost while nearly the same amount of entropy is generated. The guinea pig vagal motoneuron loses twofold more exergy than the cat models, and the squid's exergy loss and entropy generation are nearly tenfold those of the guinea pig vagal motoneuron model. The thermodynamic analysis shows that the energy dissipated in the dendritic trees, the exergy loss and the entropy generation are directly proportional to the electrotonic length. Entropy generation and exergy loss show variability not only between vertebrates and invertebrates but also within the same class. In addition, the Na+ ion load of a single action potential, the metabolic energy utilization and their thermodynamic aspects were computed for the squid giant axon and a mammalian motoneuron model. Energy is supplied to neurons in the form of adenosine triphosphate (ATP); exergy destruction and entropy generation upon ATP hydrolysis are calculated. ATP utilization, exergy destruction and entropy generation differ in each model depending on the variations in ion transport along the channels.
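
Two ingredients of the model can be sketched directly: Rall's 3/2 power law at a bifurcation and the electrotonic length L of a branch. The membrane and axoplasm resistivities below are generic textbook-scale values, not the study's parameters.

```python
import math

def rall_ratio(d_parent_um, d_children_um):
    """Equals 1.0 when the parent and children satisfy Rall's 3/2 law."""
    return sum(d ** 1.5 for d in d_children_um) / d_parent_um ** 1.5

def electrotonic_length(length_cm, diam_cm, R_m=2000.0, R_i=70.0):
    """L = l / lambda, with lambda = sqrt(R_m * d / (4 * R_i));
    R_m in ohm*cm^2 (membrane), R_i in ohm*cm (axoplasm)."""
    lam = math.sqrt(R_m * diam_cm / (4.0 * R_i))
    return length_cm / lam

print(rall_ratio(2.0, [1.26, 1.26]))      # ~1.0: a symmetric 3/2 bifurcation
print(electrotonic_length(0.05, 4e-4))    # 0.5 mm long, 4 um wide branch: L ~ 0.9
```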

Keywords: ATP utilization, entropy generation, exergy loss, neuronal information transmittance

Procedia PDF Downloads 393
354 KPI and Tool for the Evaluation of Competency in Warehouse Management for Furniture Business

Authors: Kritchakhris Na-Wattanaprasert

Abstract:

The objective of this research is to design and develop a prototype key performance indicator (KPI) system suitable for warehouse management, based on a case study and user requirements. We designed a prototype KPI system for the warehouse of a furniture business using the following methodology: identify the scope of the research and study related papers; gather the necessary data and user requirements; develop key performance indicators based on the Balanced Scorecard; design the program and database for the key performance indicators; code the program and set up the database relationships; and finally test and debug each module. The study uses the Balanced Scorecard (BSC) for selecting and grouping key performance indicators. Microsoft SQL Server 2010 is used to create the system database, and Microsoft Visual C# 2010 is chosen as the graphical user interface development tool. The system consists of six main menus: login, main data, financial perspective, customer perspective, internal perspective, and learning and growth perspective. Each menu consists of key performance indicator forms, and each form contains a data import section, a data input section, a data search-and-edit section, and a report section. The system generates outputs in five main reports: the KPI detail report, KPI summary report, KPI graph report, benchmarking summary report and benchmarking graph report; the user selects the report conditions and time period. Testing of the developed system shows that it is one way of judging the extent to which warehouse objectives have been achieved, and that it helps the warehouse functions proceed more efficiently. The system can be adjusted appropriately to be useful for other industries. To increase the usefulness of the key performance indicator system, the recommendations for further development are as follows: the warehouse should periodically review the target values and set more suitable targets as conditions fluctuate in the future, and it should likewise periodically review the key performance indicators themselves, to increase competitiveness and take advantage of new opportunities.
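
A minimal sketch of the BSC grouping and target scoring that the KPI forms perform; the four perspectives follow the menu structure above, but the KPI names and numbers are invented.

```python
kpis = [
    # (perspective, name, actual, target, higher_is_better)
    ("financial", "inventory holding cost (THB/m3)", 420.0, 400.0, False),
    ("customer",  "order fill rate (%)",              96.5,  98.0, True),
    ("internal",  "picking accuracy (%)",             99.2,  99.5, True),
    ("learning",  "staff trained on WMS (%)",         80.0,  75.0, True),
]

for perspective, name, actual, target, higher_better in kpis:
    achieved = actual >= target if higher_better else actual <= target
    pct = 100.0 * (actual / target if higher_better else target / actual)
    flag = "OK " if achieved else "GAP"
    print(f"[{flag}] {perspective:9s} {name:33s} {pct:6.1f}% of target")
```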

Keywords: key performance indicator, warehouse management, warehouse operation, logistics management

Procedia PDF Downloads 431
353 Modelling Distress Sale in Agriculture: Evidence from Maharashtra, India

Authors: Disha Bhanot, Vinish Kathuria

Abstract:

This study focuses on the issue of distress sale in the horticulture sector in India, which faces unique challenges given the perishable nature of horticulture crops, seasonal production and the paucity of post-harvest produce management links. Distress sale, from a farmer's perspective, may be defined as the urgent sale of normal or distressed goods at deeply discounted prices (well below the cost of production), usually characterized by unfavorable conditions for the seller (farmer). The small and marginal farmers, often engaged in subsistence farming, stand to lose substantially if they receive lower prices than expected (expectations typically framed in relation to the cost of production). Distress sale maximizes the price uncertainty of the produce, leading to substantial income loss; and with increasing input costs of farming, the high variability in harvest price severely affects farmers' profit margins and thereby their survival. The objective of this study is to model the occurrence of distress sale by tomato cultivators in the Indian state of Maharashtra, against the background of differential access to a set of factors such as capital, irrigation facilities, warehousing, storage and processing facilities, and institutional arrangements for procurement. Data is being collected through a primary survey of over 200 farmers in key tomato-growing areas of Maharashtra, seeking information on the above factors in addition to the cost of cultivation, selling price, time gap between harvesting and selling, the role of middlemen in selling, and other socio-economic variables. Farmers selling their produce far below the cost of production would indicate an occurrence of distress sale. The occurrence of distress sale would then be modeled as a function of farm, household and institutional characteristics. A Heckman two-stage model would be applied to estimate the probability of a farmer falling into distress sale, as well as to ascertain how the extent of distress sale varies with the presence or absence of various factors. The findings of the study would recommend suitable interventions and promote strategies that help farmers better manage price uncertainties, avoid distress sale and increase profit margins, with direct implications for poverty.
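
A sketch of the planned Heckman two-step procedure on synthetic data (a probit selection equation, then OLS with the inverse Mills ratio added); the covariate names and coefficients are invented stand-ins for the survey variables.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
z = sm.add_constant(rng.normal(size=(n, 2)))     # selection covariates (access factors)
x = sm.add_constant(rng.normal(size=(n, 1)))     # outcome covariates
e_sel = rng.normal(size=n)
e_out = 0.6 * e_sel + rng.normal(size=n)         # correlated errors drive selection bias
distress = (z @ np.array([-0.3, 0.8, -0.5]) + e_sel > 0).astype(int)
price_gap = x @ np.array([1.0, 0.7]) + e_out     # observed only when distress == 1

# Step 1: probit for the probability of a distress sale.
probit = sm.Probit(distress, z).fit(disp=False)
xb = z @ probit.params
mills = norm.pdf(xb) / norm.cdf(xb)              # inverse Mills ratio

# Step 2: OLS on the selected sample, adding the Mills ratio to
# correct for selection into distress sale.
sel = distress == 1
ols = sm.OLS(price_gap[sel], np.column_stack([x[sel], mills[sel]])).fit()
print(ols.params)                                # last coefficient: selection correction
```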

Keywords: distress sale, horticulture, income loss, India, price uncertainty

Procedia PDF Downloads 243
352 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning

Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga

Abstract:

Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (e.g., the Clean Air Act of London and the Soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industries, and these produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and these two are notoriously produced by processes of combustion at high temperatures (e.g., car engines or thermal power stations); the same holds for industrial plants. What has to be investigated, and this is the topic of this paper, is whether or not there really is a correlation between noise pollution and air pollution (considering NO₂) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise app will be installed on an Android phone. The smartphone will be positioned inside a waterproof box so that it can stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone and calibrated to collect the most accurate data possible. For air pollution measurements, the AirMonitor device will be used: an Arduino board to which the sensors and all the other components are plugged. After assembly, the sensors will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period covering both weekdays and weekend days, making it possible to see how the situation changes during the week. The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained from the sensors. To do so, the data will be normalized to a scale that goes up to 100% and shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help in choosing the right mitigation solutions to apply in the analysis area, because it may make it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper describes in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, and the noise and pollution mapping and analysis.
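
The planned comparison (rescale both series to a shared 0-100% axis, then check for co-movement) can be sketched as follows; the hourly readings are invented stand-ins for the coupled OpeNoise/AirMonitor logs. Note that the Pearson correlation is unchanged by min-max rescaling, which matters only for plotting both series on one axis.

```python
import numpy as np

noise_dba = np.array([58, 63, 71, 69, 75, 66, 61, 73])   # hourly Leq, dB(A)
no2_ugm3  = np.array([24, 31, 48, 45, 55, 38, 27, 51])   # hourly NO2, ug/m3

def to_percent(x):
    """Min-max rescaling so both series share a 0-100 % axis."""
    return 100.0 * (x - x.min()) / (x.max() - x.min())

noise_pct, no2_pct = to_percent(noise_dba), to_percent(no2_ugm3)
print(np.round(noise_pct), np.round(no2_pct))             # shared-axis series
print(f"Pearson r = {np.corrcoef(noise_dba, no2_ugm3)[0, 1]:.2f}")
```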

Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter

Procedia PDF Downloads 212