Search results for: perceived exit performance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14696

8006 Improving Fluid Catalytic Cracking Unit Performance through Low Cost Debottlenecking

Authors: Saidulu Gadari, Manoj Kumar Yadav, V. K. Satheesh, Debasis Bhattacharyya, S. S. V. Ramakumar, Subhajit Sarkar

Abstract:

Most Fluid Catalytic Cracking Units (FCCUs) are big profit makers and are hence always operated against several constraints. The FCCU is the primary source in a refinery for the production of gasoline, light olefins as petrochemical feedstocks, feedstock for alkylate and oxygenates, LPG, etc. Increasing unit capacity and improving product yields as well as qualities such as gasoline RON have a dramatic impact on refinery economics. FCCUs are often debottlenecked significantly beyond their original design capacities. Depending upon the unit configuration, operating conditions, and feedstock quality, the FCC unit can have a variety of bottlenecks. While some debottlenecking options are aimed at increasing the feed rate, improving the conversion, etc., others are aimed at improving the reliability of the equipment or the overall unit. Apart from investment cost, the other factors generally considered while evaluating debottlenecking options are shutdown days, faster payback, risk on investment, etc. Low-cost solutions such as replacement of feed injectors, the air distributor, steam distributors, the spent catalyst distributor, or an efficient cyclone system are the preferred way of upgrading an FCCU; they also have a shorter lead time from idea inception to implementation. This paper discusses various bottlenecks generally encountered in FCCUs and presents a case study on the performance improvement of one of the FCCUs in IndianOil through implementation of a cost-effective technical solution, including the use of improved internals in the Reactor-Regeneration (R-R) section. After implementation, the regenerator air rate, the gas superficial velocity in the regenerator and the cyclone velocities were reduced by about 10%, and the CLO yield was improved from 10 to 6 wt%. By ensuring proper pressure balance and optimum immersion of the cyclone dipleg in the standpipe, the frequent formation of perforations in the regenerator cyclones could be addressed, which in turn improved the unit on-stream factor.
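The roughly 10% velocity reduction quoted above follows directly from the continuity relation U = Q/A for a fixed vessel cross-section. The short sketch below illustrates that arithmetic with hypothetical flow and diameter values (assumptions for illustration only, not figures from the paper).

```python
# Superficial velocity U = Q / A (plug-flow approximation).
# All values below are illustrative placeholders, not plant data.
import math

def superficial_velocity(q_m3_per_s: float, diameter_m: float) -> float:
    """Gas superficial velocity (m/s) in a cylindrical vessel."""
    area = math.pi * diameter_m ** 2 / 4.0
    return q_m3_per_s / area

q_before = 50.0           # regenerator air at operating conditions, m3/s (assumed)
q_after = 0.9 * q_before  # ~10% reduction in regenerator air
d_regen = 9.0             # regenerator inside diameter, m (assumed)

u_before = superficial_velocity(q_before, d_regen)
u_after = superficial_velocity(q_after, d_regen)
print(f"U before: {u_before:.2f} m/s, after: {u_after:.2f} m/s "
      f"({100 * (1 - u_after / u_before):.0f}% lower)")
```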

Keywords: FCC, low-cost, revamp, debottleneck, internals, distributors, cyclone, dipleg

Procedia PDF Downloads 215
8005 Wireless FPGA-Based Motion Controller Design by Implementing 3-Axis Linear Trajectory

Authors: Kiana Zeighami, Morteza Ozlati Moghadam

Abstract:

Designing a high-accuracy, high-precision motion controller is one of the important issues in today's industry. There are effective solutions available in the industry, but the real-time performance, smoothness and accuracy of the movement can be further improved. This paper discusses a complete solution to carry out the movement of three stepper motors in three dimensions. The objective is to provide a method to design a fully integrated System-on-Chip (SOC)-based motion controller to reduce the cost and complexity of production by incorporating a Field Programmable Gate Array (FPGA) into the design. In the proposed method, the FPGA receives its commands from a host computer via wireless internet communication and calculates the motion trajectory for three axes. A profile generator module is designed to realize the interpolation algorithm by translating the position data to real-time pulses. This paper discusses an approach to implement the linear interpolation algorithm, since it is one of the fundamentals of robots' movements and is highly applicable in motion control industries. Along with the full profile trajectory, the triangular drive is implemented to eliminate error at small distances. To integrate the parallelism and real-time performance of the FPGA with the power of a Central Processing Unit (CPU) in executing complex and sequential algorithms, the NIOS II soft-core processor was added to the design. This paper presents different operating modes, such as absolute positioning, relative positioning, reset and velocity modes, to fulfill the user requirements. The proposed approach was evaluated by designing a custom-made FPGA board along with a mechanical structure. As a result, a precise and smooth movement of the stepper motors was observed, which proved the effectiveness of this approach.
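As a rough illustration of the linear interpolation performed by the profile generator described above, the sketch below uses a DDA-style accumulator to emit per-axis step pulses so that all three axes finish a straight-line move simultaneously. The pulse representation and the step counts are assumptions for illustration, not the authors' FPGA implementation.

```python
# Minimal DDA-style 3-axis linear interpolation sketch.
# Each call to interpolate() yields a (step_x, step_y, step_z) tuple of 0/1 pulses.
from typing import Iterator, Tuple

def interpolate(dx: int, dy: int, dz: int) -> Iterator[Tuple[int, int, int]]:
    """Generate step pulses so the three axes trace a straight line in step space."""
    steps = max(abs(dx), abs(dy), abs(dz))  # the dominant axis sets the tick count
    if steps == 0:
        return
    acc = [0, 0, 0]
    deltas = [abs(dx), abs(dy), abs(dz)]
    for _ in range(steps):
        pulses = []
        for axis in range(3):
            acc[axis] += deltas[axis]
            if acc[axis] >= steps:        # accumulator overflow -> emit one step
                acc[axis] -= steps
                pulses.append(1)
            else:
                pulses.append(0)
        yield tuple(pulses)

# Example: move 100 steps in X, 40 in Y, 10 in Z (direction signs omitted here).
total = [0, 0, 0]
for px, py, pz in interpolate(100, 40, 10):
    total[0] += px; total[1] += py; total[2] += pz
print(total)  # -> [100, 40, 10]
```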

Keywords: 3-axis linear interpolation, FPGA, motion controller, micro-stepping

Procedia PDF Downloads 208
8004 PLO-AIM: Potential-Based Lane Organization in Autonomous Intersection Management

Authors: Berk Ecer, Ebru Akcapinar Sezer

Abstract:

Traditional management models of intersections, such as no-light intersections or signalized intersections, are not the most effective way of passing vehicles through an intersection if the vehicles are intelligent. To this end, Dresner and Stone proposed a new intersection control model called Autonomous Intersection Management (AIM). In their AIM simulation, they examined the problem from a multi-agent perspective, demonstrating that intelligent intersection control can be made more efficient than existing control mechanisms. In this study, autonomous intersection management has been investigated. We extended their work and added a potential-based lane organization layer. In order to distribute vehicles evenly across lanes, this layer triggers vehicles to analyze nearby lanes, and they change lanes if another lane offers an advantage. We can observe this behavior in real life, where drivers change lanes based on intuition; the basic intuition for selecting the correct lane is to choose a less crowded lane in order to reduce delay. We model that behavior without any change in the AIM workflow, as illustrated in the sketch after this abstract. Experiment results show that intersection performance is directly connected with the distribution of vehicles across the lanes of the roads entering the intersection. We see the advantage of handling lane management with a potential-based approach in performance metrics such as average intersection delay and average travel time. Therefore, lane management and intersection management are problems that need to be handled together. This study shows that the lane through which vehicles enter the intersection is an effective parameter for intersection management; our study draws attention to this parameter and suggests a solution for it. We observed that regulating the inputs to AIM, namely the vehicles in each lane, is an effective contribution to intersection management. The PLO-AIM model outperforms AIM in evaluation metrics such as average intersection delay and average travel time for reasonable traffic rates, between 600 and 1300 vehicles/hour per lane. The proposed model reduced the average travel time by 0.2% - 17.3% and the average intersection delay by 1.6% - 17.1% for the 4-lane and 6-lane scenarios.
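A minimal sketch of the lane-organization idea described above: an approaching vehicle evaluates a "potential" for its current lane and its neighbours and switches only when a neighbour offers a clear advantage. The potential function used here (simply the queue length ahead) and the switching threshold are assumptions for illustration; the paper's actual potential formulation is not reproduced.

```python
# Toy potential-based lane selection: a vehicle changes lane only if a
# neighbouring lane has a lower "potential" (here: vehicles queued ahead).
from typing import List

def lane_potential(queue_lengths: List[int], lane: int) -> float:
    # Placeholder potential: just the number of queued vehicles in that lane.
    return float(queue_lengths[lane])

def choose_lane(queue_lengths: List[int], current: int, threshold: float = 1.0) -> int:
    """Return the lane the vehicle should use next (current lane or a neighbour)."""
    best = current
    best_potential = lane_potential(queue_lengths, current)
    for neighbour in (current - 1, current + 1):
        if 0 <= neighbour < len(queue_lengths):
            p = lane_potential(queue_lengths, neighbour)
            # Switch only when the neighbour is clearly less crowded.
            if best_potential - p >= threshold:
                best, best_potential = neighbour, p
    return best

print(choose_lane([7, 2, 5], current=0))  # -> 1 (the middle lane is far less crowded)
```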

Keywords: AIM project, autonomous intersection management, lane organization, potential-based approach

Procedia PDF Downloads 139
8003 Plant Layout Analysis by Computer Simulation for Electronic Manufacturing Service Plant

Authors: D. Visuwan, B. Phruksaphanrat

Abstract:

In this research, computer simulation is used for Electronic Manufacturing Service (EMS) plant layout analysis. The current layout of this manufacturing plant is a process layout, which is not suitable due to the nature of an EMS that has a high-volume and high-variety environment. Moreover, quick response and high flexibility are also needed. Then, a cellular manufacturing layout design was determined for the selected group of products. Systematic layout planning (SLP) was used to analyse and design the possible cellular layouts for the factory. The cellular layout was selected based on the main criteria of the plant. Computer simulation was used to analyse and compare the performance of the proposed cellular layout and the current layout. It is found that the proposed cellular layout can generate better performance than the current layout.

Keywords: layout, electronic manufacturing service plant, computer simulation, cellular manufacturing system

Procedia PDF Downloads 308
8002 Phenolic Content and Antioxidant Potential of Selected Nigerian Herbs and Spices: A Justification for Consumption and Use in the Food Industry

Authors: Amarachi Delight Onyemachi, Gregory Ikechukwu Onwuka

Abstract:

The growing consumer trend for natural ingredients and functional foods with health benefits, together with the perceived risk of carcinogenesis associated with synthetic antioxidants, has forced food manufacturers to look for alternatives for producing healthy and safe food. Herbs and spices are cheap, natural and harmless sources of antioxidants which can delay and prevent lipid oxidation of food products while also conferring unique organoleptic properties and health benefits. The Nigerian climate has been proven to be conducive to the production of spices and herbs, and the country is blessed bountifully with a wide range of them. Five selected Nigerian herbs and spices, Piper guieense, Xylopia aethopica, Gongronema latifolium and Ocimum gratissimum, were evaluated for their ability to act as radical scavengers. The spices were extracted with 80% ethanol and evaluated using total phenolic content (TPC), DPPH (1,1-diphenyl-2-picrylhydrazyl radical), ABTS (2,2'-azinobis(3-ethylbenzthiazoline-6-sulfonic acid)), total antioxidant capacity (TAC) and reducing power (RP) assays. The TPC ranged from 5.33 µg GAE/mg (in Gongronema latifolium) to 15.55 µg GAE/mg (in Ocimum gratissimum). The DPPH and ABTS scavenging activities of the extracts ranged from 0.23-0.36 IC50 mg/ml and 2.32-7.25 Trolox equivalent %, respectively. The TAC and RP of the extracts ranged from 6.73-10.64 µg AAE/mg and 3.52-10.19 µg AAE/mg. The percentage yield of the extracts ranged from as low as 9.94% in Gongronema latifolium to as high as 23.85% in Xylopia aethopica. A very strong positive relationship existed between the total antioxidant capacity and the total phenolic content of the tested herbs and spices (R2=0.96). All of the extracts exhibited strong antioxidant activity to different extents; the highest antioxidant activity was found in Ocimum gratissimum and the least in Gongronema latifolium. However, Gongronema latifolium possessed the highest total antioxidant capacity. These data confirm the appreciable antioxidant potential and high phenolic content of Nigerian herbs and spices, thereby providing justification for their use in dishes and functional foods, for the prevention of cellular damage caused by free radicals, and as natural antioxidants in the food industry for the prevention of lipid oxidation in food products. However, to utilize these natural antioxidants in food products, further analysis and studies of their behaviour in food systems at varying temperatures, pH conditions and ionic concentrations should be carried out to displace the use of synthetic antioxidants such as BHT and BHA.

Keywords: antioxidant, free radicals, herbs, phenolic, spices

Procedia PDF Downloads 256
8001 Measuring the Biomechanical Effects of Worker Skill Level and Joystick Crane Speed on Forestry Harvesting Performance Using a Simulator

Authors: Victoria L. Chester, Usha Kuruganti

Abstract:

The forest industry is a major economic sector of Canada and also one of the most dangerous industries for workers. The use of mechanized mobile forestry harvesting machines has successfully reduced the incidence of injuries related to manual labor in forest workers. However, these machines have also created additional concerns, including a steep machine-operation learning curve, increased workday length, repetitive strain injury, cognitive load, physical and mental fatigue, and increased postural loads due to sitting in a confined space. It is critical to obtain objective performance data for employers to develop appropriate work practices for this industry; however, ergonomic field studies of this industry are lacking, mainly due to the difficulty of obtaining comprehensive data while operators are cutting trees in the woods. The purpose of this study was to establish a measurement and experimental protocol to examine the effects of worker skill level and movement training speed (joystick crane speed) on harvesting performance using a forestry simulator. A custom wrist angle measurement device was developed as part of the study to monitor Euler angles during operation of the simulator. The device consisted of two accelerometers, a Bluetooth module, three 3V coin cells, a microcontroller, a voltage regulator and application software. Harvesting performance and crane data were provided by the simulator software and included tree-to-frame collisions, crane-to-tree collisions, boom tip distance, number of trees cut, etc. A pilot study of 3 operators with various skill levels was conducted to identify factors that distinguish highly skilled operators from novice or intermediate operators. Dependent variables such as reaction time, math skill, past work experience, training movement speed (e.g., joystick control speed), harvesting experience level, muscle activity, and wrist biomechanics were measured and analyzed. A 10-channel wireless surface EMG system was used to monitor the amplitude and mean frequency of 10 upper extremity muscles pre- and post-performance on the forestry harvesting simulator. The results of the pilot study showed inconsistent changes in median frequency pre- and post-operation, but there was an increase in the activity of the flexor carpi radialis, anterior deltoid and upper trapezius of both arms. The wrist sensor results indicated that wrist supination and pronation occurred more than flexion and extension, with radial-ulnar rotation demonstrating the least movement. Overall, wrist angular motion increased as the crane speed increased from slow to fast. Further data collection is needed and will help industry partners determine the factors that separate operator skill levels, identify optimal training speeds, and determine the length of training required to bring new operators to an efficient skill level. In addition to effective employment training programs, results of this work will be used for selective employee recruitment strategies to improve employee retention after training. Further, improved training procedures and knowledge of the physical and mental demands on workers will lead to highly trained and efficient personnel, reduced risk of injury, and optimal work protocols.
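Because the custom wrist device estimates orientation from accelerometer readings, the standard static-tilt computation is sketched below: roll and pitch are recovered from the measured gravity vector. The axis convention, and the fact that yaw is ignored (a single accelerometer cannot observe it), are assumptions for illustration rather than details taken from the paper.

```python
# Static tilt estimate from a 3-axis accelerometer reading (ax, ay, az) in g.
# Roll/pitch follow from the direction of gravity; yaw is unobservable here.
import math

def roll_pitch_deg(ax: float, ay: float, az: float) -> tuple:
    roll = math.atan2(ay, az)                           # rotation about the x-axis
    pitch = math.atan2(-ax, math.sqrt(ay**2 + az**2))   # rotation about the y-axis
    return math.degrees(roll), math.degrees(pitch)

# Example: sensor lying flat (gravity on +z) vs. tilted forward.
print(roll_pitch_deg(0.0, 0.0, 1.0))    # ~ (0.0, 0.0)
print(roll_pitch_deg(-0.5, 0.0, 0.87))  # pitch of roughly +30 degrees
```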

Keywords: EMG, forestry, human factors, wrist biomechanics

Procedia PDF Downloads 147
8000 Understanding the Productivity Effect on Industrial Management: The Portuguese Wood Furniture Industry Case Study

Authors: Jonas A. R. H. Lima, Maria Antonia Carravilla

Abstract:

As productivity concepts are widely related to industrial savings, it is becoming particularly important, in an increasingly competitive world, to really understand how productivity can be used well in industrial management techniques. Nowadays, consumers are no longer willing to pay for mistakes and inefficiencies. Therefore, one way for companies to stay competitive is to control and increase their productivity. This study aims to define the productivity concept clearly, understand how a company can affect productivity, and, if possible, identify the relation between the identified productivity factors. This will help managers by clarifying the main issues behind productivity concepts and proposing a methodology to measure, control and increase productivity. The main questions to be answered are: what is the importance of productivity for the Portuguese Wood Furniture Industry? Is it possible to control productivity internally, or is it a phenomenon external to companies, hard or even impossible to control? How can productivity performance be understood, controlled and adjusted? How can productivity become a main asset for maximizing the use of the available resources? This essay follows a constructive approach mostly based on the research questions mentioned above. For that, a literature review is being done to find the main conceptual frameworks and empirical studies that already exist and, by doing so, highlight any gaps or conflicting research to be addressed in this work. We expect to build theoretical explanations and test theoretical predictions from participants' understandings and their own experiences, by elaborating field surveys and interviews, in order to select adjusted productivity indicators and analyze the evolution of productivity according to the adjustments of other variables. The intention is to conduct exploratory work that can simultaneously clarify productivity concepts and objectives and define frameworks. This investigation intends to migrate from merely academic concepts to the day-to-day operational reality of companies in the Portuguese Wood Furniture Industry, highlighting the increased importance of productivity within modern engineering and industrial management. The ambition is to clarify, systemize and develop a management tool that may not only control but positively influence the way resources are used.

Keywords: industrial management, motivation, productivity, performance indicators, reward management, wood furniture industry

Procedia PDF Downloads 229
7999 Sparse Representation Based Spatiotemporal Fusion Employing Additional Image Pairs to Improve Dictionary Training

Authors: Dacheng Li, Bo Huang, Qinjin Han, Ming Li

Abstract:

Remotely sensed imagery with high spatial and temporal resolution, which is hard to acquire with the current land observation satellites, has been considered a key factor for monitoring environmental changes over both global and local scales. On the basis of the limited high spatial-resolution observations, a challenging line of study called spatiotemporal fusion has been developed for generating high spatiotemporal-resolution images by employing auxiliary low spatial-resolution data with high-frequency observations. However, a majority of spatiotemporal fusion approaches suffer from restrictive assumptions, empirical but unstable parameters, low accuracy or inefficient performance. Although the spatiotemporal fusion methodology based on sparse representation theory has advantages in capturing reflectance changes, stability and execution efficiency (even more efficient when overcomplete dictionaries have been pre-trained), the retrieval of a high-accuracy dictionary and its effect on fusion results are still pending issues. In this paper, we employ additional image pairs (here each image-pair includes a Landsat Operational Land Imager and a Moderate Resolution Imaging Spectroradiometer acquisition covering a partial area of Baotou, China) only in the coupled dictionary training process based on the K-SVD (K-means Singular Value Decomposition) algorithm, and attempt to improve the fusion results of two existing sparse representation based fusion models (utilizing one and two available image-pairs, respectively). The results show that more eligible image pairs are probably related to a more accurate overcomplete dictionary, which generally indicates a better image representation and thus contributes to effective fusion performance, provided that the added image-pair has seasonal aspects and image spatial structure features similar to the original image-pair. It is, therefore, reasonable to construct a multi-dictionary training pattern for generating a series of high spatial resolution images based on limited acquisitions.
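To make the dictionary-training step concrete, the sketch below learns an overcomplete dictionary from image patches using scikit-learn's mini-batch dictionary learner as a stand-in for K-SVD (the dictionary-update rule differs, but the sparse-representation objective is the same). The patch size, dictionary size and sparsity level are illustrative assumptions, and the coupled low/high-resolution training used in the paper is not reproduced.

```python
# Dictionary training on image patches: a stand-in for the K-SVD step,
# using scikit-learn's mini-batch dictionary learner. All sizes are illustrative.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.RandomState(0)
image = rng.rand(128, 128)                       # placeholder for a reflectance band

patches = extract_patches_2d(image, (7, 7), max_patches=2000, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)               # remove patch means, as is common

dico = MiniBatchDictionaryLearning(
    n_components=128,                            # overcomplete: 128 atoms for 49-dim patches
    transform_algorithm="omp",
    transform_n_nonzero_coefs=5,                 # sparsity level per patch
    random_state=0,
)
codes = dico.fit(X).transform(X)                 # sparse codes w.r.t. the learned dictionary
reconstruction = codes @ dico.components_
print("mean reconstruction error:", np.mean((X - reconstruction) ** 2))
```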

Keywords: spatiotemporal fusion, sparse representation, K-SVD algorithm, dictionary learning

Procedia PDF Downloads 261
7998 Effect of Different Factors on Temperature Profile and Performance of an Air Bubbling Fluidized Bed Gasifier for Rice Husk Gasification

Authors: Dharminder Singh, Sanjeev Yadav, Pravakar Mohanty

Abstract:

In this work, a study of the temperature profile in a pilot scale air bubbling fluidized bed (ABFB) gasifier for rice husk gasification was carried out. The effects of different factors such as multiple cyclones, the gas cooling system, ventilate gas pipe length, and catalyst on the temperature profile were examined. The ABFB gasifier used in this study had two sections, a bed section and a freeboard section. River sand was used as the bed material, air as the gasification agent, and conventional charcoal as the start-up heating medium. Temperatures at different points in both sections of the ABFB gasifier were recorded at different ER (equivalence ratio) values; the ER was changed by changing the biomass (rice husk) feed rate while keeping the air flow rate constant during long-duration gasifier operation. The ABFB configuration with a double cyclone, a gas cooling system and a short ventilate gas pipe was found to be the optimal gasifier design, giving the temperature profile required for high gasification performance in long-duration operation. This optimal design was tested with different ER values, and it was found that an ER of 0.33 was the most favourable for long-duration operation (8 hr continuous operation), giving the highest carbon conversion efficiency. At the optimal ER of 0.33, the bed temperature was stable at 700 °C, the above-bed temperature was 628.63 °C, the bottom of the freeboard was at 600 °C, the top of the freeboard at 517.5 °C, the gas temperature at 195 °C, and the flame temperature at 676 °C. Temperatures at all points showed fluctuations of 10 - 20 °C. The effect of a catalyst, i.e., dolomite (20% with the sand bed), on the temperature profile was also examined; at the optimal ER of 0.33, the bed temperature increased to 795 °C, the above-bed temperature decreased to 523 °C, the bottom of the freeboard decreased to 548 °C, the top of the freeboard decreased to 475 °C, the gas temperature decreased to 220 °C, and the flame temperature increased to 703 °C. The increase in bed temperature leads to a higher flame temperature due to the presence of more hydrocarbons generated from increased tar cracking at higher temperature. It was also found that the use of dolomite with the sand bed eliminated agglomeration in the reactor at such a high bed temperature (795 °C).
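The ER values discussed above are the ratio of the air actually supplied per kilogram of rice husk to the stoichiometric air requirement. The sketch below illustrates the calculation, assuming a stoichiometric requirement of about 4.6 kg air per kg rice husk (a typical literature value, not a figure reported in this paper).

```python
# Equivalence ratio (ER) = actual air-to-fuel ratio / stoichiometric air-to-fuel ratio.
# The stoichiometric requirement below is an assumed literature-typical value.
STOICH_AIR_PER_KG_HUSK = 4.6   # kg air per kg rice husk (assumed)

def equivalence_ratio(air_kg_per_h: float, husk_kg_per_h: float) -> float:
    actual_afr = air_kg_per_h / husk_kg_per_h
    return actual_afr / STOICH_AIR_PER_KG_HUSK

# Keeping the air flow constant and changing the feed rate changes ER, as in the study.
air_flow = 60.0  # kg/h (assumed)
for feed in (35.0, 40.0, 45.0):  # kg/h rice husk (assumed)
    print(f"feed {feed:.0f} kg/h -> ER = {equivalence_ratio(air_flow, feed):.2f}")
```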

Keywords: air bubbling fluidized bed gasifier, bed temperature, charcoal heating, dolomite, flame temperature, rice husk

Procedia PDF Downloads 278
7997 A Hebbian Neural Network Model of the Stroop Effect

Authors: Vadim Kulikov

Abstract:

The classical Stroop effect is the phenomenon that it takes more time to name the ink color of a printed word if the word denotes a conflicting color than if it denotes the same color. Over the last 80 years, there have been many variations of the experiment revealing various mechanisms behind semantic, attentional, behavioral and perceptual processing. The Stroop task is known to exhibit asymmetry: reading the words out loud is hardly dependent on the ink color, but naming the ink color is significantly influenced by incongruent words. This asymmetry is reversed if, instead of naming the color, one has to point at a corresponding color patch. Other debated aspects are the notion of automaticity and how much of the effect is due to semantic interference and how much to response-stage interference. Is automaticity a continuous or an all-or-none phenomenon? There are many models and theories in the literature tackling these questions, which will be discussed in the presentation. None of them, however, seems to capture all the findings at once. A computational model is proposed which is based on the philosophical idea developed by the author that the mind operates as a collection of different information processing modalities, such as different sensory and descriptive modalities, which produce emergent phenomena through mutual interaction and coherence. This is the framework theory, where 'framework' attempts to generalize the concepts of modality, perspective and 'point of view'. The architecture of this computational model consists of blocks of neurons, each block corresponding to one framework. In the simplest case there are four: visual color processing, text reading, speech production and attention selection modalities. In experiments where button pressing or pointing is required, a corresponding block is added. In the beginning, the weights of the neural connections are mostly set to zero. The network is trained using Hebbian learning to establish connections (corresponding to 'coherence' in framework theory) between these different modalities. The amount of data fed into the network is supposed to mimic the amount of practice a human encounters; in particular, it is assumed that converting written text into spoken words is a more practiced skill than converting visually perceived colors to spoken color names. After the training, the network performs the Stroop task. The RTs are measured in a canonical way, as these are continuous time recurrent neural networks (CTRNN). The above-described aspects of the Stroop phenomenon, along with many others, are replicated. The model is similar to some existing connectionist models but, as will be discussed in the presentation, has many advantages: it predicts more data, and the architecture is simpler and biologically more plausible.
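The coherence links between modality blocks are trained with a Hebbian rule; a minimal sketch of such a rule is shown below, with an Oja-style decay term added only to keep the weights bounded (an assumption for illustration, not necessarily the author's exact rule).

```python
# Minimal Hebbian weight update between two blocks of units.
# Oja's normalization term is added here only to keep weights bounded;
# the model's exact learning rule may differ.
import numpy as np

def hebbian_step(w: np.ndarray, pre: np.ndarray, post: np.ndarray, lr: float = 0.01) -> np.ndarray:
    """w[i, j] connects pre-unit j to post-unit i; dw ~ post * pre."""
    dw = lr * (np.outer(post, pre) - post[:, None] ** 2 * w)  # Hebb term + Oja decay
    return w + dw

rng = np.random.default_rng(0)
w = np.zeros((4, 6))                       # e.g. 6 "text" units driving 4 "speech" units
for _ in range(1000):
    pre = rng.random(6)                    # activity in the text-reading block
    post = w @ pre + 0.1 * rng.random(4)   # activity in the speech block
    w = hebbian_step(w, pre, post)
print(np.round(w, 2))
```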

Keywords: connectionism, Hebbian learning, artificial neural networks, philosophy of mind, Stroop

Procedia PDF Downloads 267
7996 Colorful Ethnoreligious Map of Iraq and the Current Situation of Minorities in the Country

Authors: Meszár Tárik

Abstract:

The aim of the study is to introduce the minority groups living in Iraq and to shed light on their current situation. The Middle East is a rather heterogeneous region in ethnic terms. It includes many ethnic, national, religious, linguistic, or ethnoreligious groups. The relationship between majority and minority is the main cause of various conflicts in the region. It seems that most of the post-Ottoman states have not yet developed a unified national identity capable of integrating their multi-ethnic societies. The issue of minorities living in the Middle East is highly politicized and controversial, as the various Arab states consider the treatment of minorities to be their internal affair, do not recognize discrimination, or even deny the existence of any kind of minorities on their territory. This attitude of the Middle Eastern states may also be due to the fact that the minority issue can be abused and can serve as a reference point for the intervention policies of Western countries at any time. Methodologically, the challenges of these groups are perceived through the manifestos of prominent individuals and organizations belonging to minorities. The basic aim is to present the minorities' own history in dealing with the issue. The study also introduces the different ethnic and religious minorities in Iraq and analyzes their situation during the operation of the terrorist organization "Islamic State" and in its aftermath. It is clear that the situation of these communities deteriorated significantly with the advance of ISIS, but it is also clear that even after the expulsion of the militant group, we cannot necessarily report an improvement in this area, especially in terms of the ability of minorities to assert their interests and their physical security. The emergence of armed militias involved in the expulsion of ISIS sometimes has extremely negative effects on them. Until the interests of non-Muslims are adequately represented at the local level and in the legislature, most experts and advocates believe that little will change in their situation. When conflicts flare, many Iraqi citizens leave Iraq, but because of the poor public security situation (threats from terrorist organizations, interventions by other countries), emigration causes serious problems not only outside the country's borders but also within the country. Another ominous implication for minorities is that their communities are very slow, if they return at all, to go back to their homes after fleeing their settlements. An important finding of the study is that this phenomenon is changing the face of traditional Iraqi settlements and threatens to plunge groups that have lived there for thousands of years into the abyss of history. Therefore, we not only present the current situation of minorities living in Iraq but also discuss their future possibilities.

Keywords: Middle East, Iraq, Islamic State, minorities

Procedia PDF Downloads 88
7995 A Parallel Implementation of k-Means in MATLAB

Authors: Dimitris Varsamis, Christos Talagkozis, Alkiviadis Tsimpiris, Paris Mastorocostas

Abstract:

The aim of this work is the parallel implementation of k-means in MATLAB, in order to reduce the execution time. Specifically, a new MATLAB function for the serial k-means algorithm is developed, which meets all the requirements for conversion to a MATLAB function with parallel computations. Additionally, two different variants for the definition of initial values are presented. In the sequel, the parallel approach is presented. Finally, performance tests of the computation times with respect to the number of features and classes are illustrated.
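The parallel structure exploited here, distributing the distance/assignment step across workers while the centroid update remains serial, can be sketched outside MATLAB as well. The Python analogue below (not the authors' MATLAB code) uses a process pool for the assignment step; the data sizes are arbitrary.

```python
# Python analogue of a parallel k-means: the assignment step (distance
# computation) is split across worker processes; the centroid update is serial.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def assign_chunk(args):
    chunk, centroids = args
    d = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def parallel_kmeans(X, k, n_iter=20, workers=4, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    chunks = np.array_split(X, workers)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(n_iter):
            labels = np.concatenate(
                list(pool.map(assign_chunk, [(c, centroids) for c in chunks]))
            )
            # Serial centroid update (cheap compared with the assignment step).
            centroids = np.array([
                X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                for j in range(k)
            ])
    return labels, centroids

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(10_000, 8))
    labels, centroids = parallel_kmeans(X, k=5)
    print(centroids.shape)  # (5, 8)
```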

Keywords: K-means algorithm, clustering, parallel computations, Matlab

Procedia PDF Downloads 385
7994 Building the Professional Readiness of Graduates from Day One: An Empirical Approach to Curriculum Continuous Improvement

Authors: Fiona Wahr, Sitalakshmi Venkatraman

Abstract:

Industry employers require new graduates to bring with them a range of knowledge, skills and abilities which means these new employees can immediately make valuable work contributions. These will be a combination of discipline and professional knowledge, skills and abilities which give graduates the technical capabilities to solve practical problems whilst interacting with a range of stakeholders. Underpinning the development of this discipline and professional knowledge, skills and abilities are "enabling" knowledge, skills and abilities which assist students to engage in learning. These are academic and learning skills which provide a common starting point for students entering the course and form the foundation for the fully developed graduate knowledge, skills and abilities. This paper reports on a project created to introduce and strengthen these enabling skills in the first semester of a Bachelor of Information Technology degree at an Australian polytechnic. The project uses an action research approach in the context of ongoing continuous improvement of the course to enhance the overall learning experience, learning sequencing, graduate outcomes, and most importantly, in the first semester, student engagement and retention. The focus is on implementing the new curriculum in the first semester subjects of the course with the aim of developing the "enabling" learning skills, such as literacy, research and numeracy based knowledge, skills and abilities (KSAs). The approach used for the introduction and embedding of these KSAs, as both enablers of learning and as underpinnings of graduate attribute development, is presented. Building on previous publications which reported different aspects of this longitudinal study, this paper recaps the rationale for the curriculum redevelopment and then presents the quantitative findings on entering students' reading literacy and numeracy knowledge and skills, as well as their perceived research ability. The paper presents the methodology and findings for this stage of the research. Overall, the cohort exhibits mixed KSA levels in these areas, with a relatively low aggregated score. In addition, the paper describes the considerations for adjusting the design and delivery of the new subjects with a targeted learning experience, in response to the feedback gained through continuous monitoring. Such a strategy is aimed at accommodating the changing learning needs of the students and serves to support them towards achieving the enabling learning goals starting from day one of their higher education studies.

Keywords: enabling skills, student retention, embedded learning support, continuous improvement

Procedia PDF Downloads 248
7993 Trip Reduction in Turbo Machinery

Authors: Pranay Mathur, Carlo Michelassi, Simi Karatha, Gilda Pedoto

Abstract:

Industrial plant uptime is of topmost importance for reliable, profitable and sustainable operation. Trips and failed starts have a major impact on plant reliability, and all plant operators focus on the efforts required to minimise trips and failed starts. The performance of these CTQs is measured with two metrics, MTBT (mean time between trips) and SR (starting reliability). These metrics help to identify the top failure modes and the units that need more effort to improve plant reliability. The Baker Hughes trip reduction program is structured to reduce these unwanted trips through: 1. Real-time machine operational parameters available remotely, capturing the signature of malfunctions including the related boundary conditions. 2. A real-time alerting system based on analytics, available remotely. 3. Remote access to trip logs and alarms from the control system to identify the cause of events. 4. Continuous support to field engineers by remotely connecting them with subject matter experts. 5. Live tracking of key CTQs. 6. Benchmarking against the fleet. 7. Breaking down the cause of failure to component level. 8. Investigating top contributors and identifying design and operational root causes. 9. Implementing corrective and preventive actions. 10. Assessing the effectiveness of implemented solutions using reliability growth models. 11. Developing analytics for predictive maintenance. With this approach, the Baker Hughes team is able to support customers in achieving their reliability key performance indicators for monitored units, with huge cost savings for plant operators. This presentation explains the approach and provides successful case studies, in particular where 12 LNG and pipeline operators with about 140 gas compression line-ups have adopted these techniques and significantly reduced the number of trips and improved MTBT.
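The two fleet metrics named above can be computed directly from unit event logs, as sketched below; the definitions are simplified (for example, which hours count as operating hours varies by program) and the example numbers are invented.

```python
# Illustrative computation of the two fleet metrics mentioned above.
# Definitions are simplified; real programs may count hours and events differently.

def mtbt(operating_hours: float, trips: int) -> float:
    """Mean time between trips, in operating hours per trip."""
    return operating_hours / trips if trips else float("inf")

def starting_reliability(successful_starts: int, attempted_starts: int) -> float:
    """Fraction of start attempts that succeeded."""
    return successful_starts / attempted_starts if attempted_starts else 1.0

# Invented example log for one gas-compression line-up over a year:
hours, trips = 7800.0, 3
starts_ok, starts_tried = 46, 48
print(f"MTBT = {mtbt(hours, trips):.0f} h, "
      f"SR = {starting_reliability(starts_ok, starts_tried):.1%}")
```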

Keywords: reliability, availability, sustainability, digital infrastructure, weibull, effectiveness, automation, trips, fail start

Procedia PDF Downloads 76
7992 Investigation of Contact Pressure Distribution at Expanded Polystyrene Geofoam Interfaces Using Tactile Sensors

Authors: Chen Liu, Dawit Negussey

Abstract:

EPS (expanded polystyrene) geofoam, a lightweight material used in geotechnical applications, is made of pre-expanded resin beads that form fused cellular micro-structures. The strength and deformation properties of geofoam blocks are determined by unconfined compression of small test samples between rigid loading plates. Applied loads are presumed to be supported uniformly over the entire mating end areas. Predictions of field performance on the basis of such laboratory tests widely over-estimate actual post-construction settlements and exaggerate predictions of long-term creep deformations. This investigation examined the development of contact pressures at a large number of discrete points at low and large strain levels for different densities of geofoam. The development of pressure patterns for fine and coarse interface material textures, as well as for molding-skin and hot-wire-cut geofoam surfaces, was examined. The lab testing showed that I-Scan tactile sensors are useful for detailed observation of contact pressures at a large number of discrete points simultaneously. At a low strain level (1%), the lower density EPS block presents lower variations in localized stress distribution compared to higher density EPS. At a high strain level (10%), the dense geofoam reached the sensor cut-off limit. The imprint and pressure patterns for different interface textures can be distinguished with tactile sensing. The pressure sensing system can be used in many fields with real-time pressure detection. The research findings provide a better understanding of EPS geofoam behavior for the improvement of design methods and performance prediction of critical infrastructure, and are anticipated to guide future improvements in the design and rapid construction of critical transportation infrastructure with geofoam in geotechnical applications.

Keywords: geofoam, pressure distribution, tactile pressure sensors, interface

Procedia PDF Downloads 173
7991 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions

Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez

Abstract:

In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor, proposed by Korzeniowski and Widmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and distinct source-microphone distances. The evaluation dataset comprises The Beatles and Queen datasets. These datasets are sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (0 s, i.e., anechoic, and approximately 1, 2, and 3 s), as well as four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will dramatically decrease under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios with variations of recording distance, and it has also been assessed under reverberant conditions with variations of recording distance. LNCC showed performance as high as the state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected, compared with classical triangular filters, thus compensating for the music signal degradation and improving the accuracy of the chord recognition system.
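The LNQT filters replace mel-spaced triangles with quarter-tone spacing, i.e. centre frequencies a factor of 2^(1/24) apart. A minimal sketch of building such a triangular filterbank is shown below; the reference frequency, frequency range, sampling rate and FFT size are assumptions for illustration, not parameters from the paper.

```python
# Quarter-tone triangular filterbank sketch: centre frequencies spaced by 2**(1/24).
# Reference frequency, frequency range and FFT size are illustrative choices.
import numpy as np

def quarter_tone_centers(f_min=65.4, f_max=2000.0):
    """Centre frequencies a quarter tone apart, starting from f_min (Hz)."""
    n = int(np.floor(24 * np.log2(f_max / f_min)))
    return f_min * 2.0 ** (np.arange(n + 1) / 24.0)

def triangular_filterbank(centers, sr=44100, n_fft=4096):
    freqs = np.linspace(0.0, sr / 2.0, n_fft // 2 + 1)
    fb = np.zeros((len(centers) - 2, len(freqs)))
    for i in range(1, len(centers) - 1):
        lo, c, hi = centers[i - 1], centers[i], centers[i + 1]
        rising = (freqs - lo) / (c - lo)
        falling = (hi - freqs) / (hi - c)
        fb[i - 1] = np.clip(np.minimum(rising, falling), 0.0, None)
    return fb  # shape: (n_filters, n_fft // 2 + 1)

fb = triangular_filterbank(quarter_tone_centers())
print(fb.shape)
```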

Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval

Procedia PDF Downloads 232
7990 Constructivist Grounded Theory of Intercultural Learning

Authors: Vaida Jurgile

Abstract:

Intercultural learning is one of the approaches taken to understand the cultural diversity of the modern world and to accept changes in cultural identity and otherness and the expression of tolerance. During intercultural learning, students develop their abilities to interact and communicate with their group members. These abilities help them to understand social and cultural differences, to form their identity, and to give meaning to intercultural learning. Intercultural education recognizes that a true understanding of the differences and similarities of another culture is necessary in order to lay the foundations for working together with others, which contributes to the promotion of intercultural dialogue, appreciation of diversity, and cultural exchange. Therefore, it is important to examine the concept of intercultural learning as revealed through students' learning experiences, and to understand how this learning takes place and what significance this phenomenon has in higher education. At a scientific level, intercultural learning should be explored in order to uncover the influence of cultural identity, i.e., intercultural learning should be seen in a local context. This experience would provide an opportunity to learn from various everyday intercultural learning situations. Intercultural learning can be not only a form of learning but also a tool for building understanding between people of different cultures. The research object of the study is the process of intercultural learning. The aim of the dissertation is to develop a grounded theory of the process of learning in an intercultural study environment, revealing students' learning experiences. The research strategy chosen in this study is constructivist grounded theory (GT). GT is an inductive method that seeks to form a theory by applying the systematic collection, synthesis, analysis, and conceptualization of data. The targeted data collection was based on the analysis of data provided by previous research participants, which revealed the need for further research participants. Only students with at least half a year of study experience, i.e., who had completed at least one semester of intercultural studies, were purposefully selected for the research. Snowball sampling was used to select students. 18 interviews were conducted with students representing 3 different fields of science (social sciences, humanities, and technology sciences). In the process of intercultural learning, language expresses and embodies cultural reality and a person's cultural identity. It is through language that individual experiences are expressed and the world in which Others exist is perceived. Increased emphasis is placed on the fact that language conveys certain "signs" of communication and perception with cultural value, enabling the students to identify the Self and the Other. Language becomes an important tool in the process of intercultural communication because it is only through language that learners can communicate, exchange information, and understand each other. Thus, in the process of intercultural learning, language either promotes interpersonal relationships with foreign students or leads to mutual rejection.

Keywords: intercultural learning, grounded theory, students, other

Procedia PDF Downloads 67
7989 Methods for Enhancing Ensemble Learning or Improving Classifiers of This Technique in the Analysis and Classification of Brain Signals

Authors: Seyed Mehdi Ghezi, Hesam Hasanpoor

Abstract:

This scientific article explores enhancement methods for ensemble learning with the aim of improving the performance of classifiers in the analysis and classification of brain signals. The research approach in this field consists of two main parts, each with its own strengths and weaknesses. The choice of approach depends on the specific research question and available resources. By combining these approaches and leveraging their respective strengths, researchers can enhance the accuracy and reliability of classification results, consequently advancing our understanding of the brain and its functions. The first approach focuses on utilizing machine learning methods to identify the best features among the vast array of features present in brain signals. The selection of features varies depending on the research objective, and different techniques have been employed for this purpose. For instance, the genetic algorithm has been used in some studies to identify the best features, while optimization methods have been utilized in others to identify the most influential features. Additionally, machine learning techniques have been applied to determine the influential electrodes in classification. Ensemble learning plays a crucial role in identifying the best features that contribute to learning, thereby improving the overall results. The second approach concentrates on designing and implementing methods for selecting the best classifier or utilizing meta-classifiers to enhance the final results in ensemble learning. In a different section of the research, a single classifier is used instead of multiple classifiers, employing different sets of features to improve the results. The article provides an in-depth examination of each technique, highlighting their advantages and limitations. By integrating these techniques, researchers can enhance the performance of classifiers in the analysis and classification of brain signals. This advancement in ensemble learning methodologies contributes to a better understanding of the brain and its functions, ultimately leading to improved accuracy and reliability in brain signal analysis and classification.
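As a concrete, generic instance of the ensemble idea discussed above, the sketch below combines three heterogeneous classifiers with soft voting on a synthetic stand-in for extracted brain-signal features. The feature matrix, the choice of base classifiers and the voting scheme are assumptions for illustration, not the specific methods surveyed in the article.

```python
# Generic soft-voting ensemble over a synthetic stand-in for EEG feature vectors.
# Classifier choices and data are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=30, n_informative=10, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across base classifiers
)
print("ensemble CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean().round(3))
```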

Keywords: ensemble learning, brain signals, classification, feature selection, machine learning, genetic algorithm, optimization methods, influential features, influential electrodes, meta-classifiers

Procedia PDF Downloads 75
7988 The Interaction of Lay Judges and Professional Judges in French, German and British Labour Courts

Authors: Susan Corby, Pete Burgess, Armin Hoeland, Helene Michel, Laurent Willemez

Abstract:

In German 1st instance labour courts, lay judges always sit with a professional judge, and in British and French 1st instance labour courts, lay judges sometimes sit with a professional judge. The lay judges' main contribution is their workplace knowledge, but they act in a juridical setting where legal norms prevail. Accordingly, the research question is: does the professional judge dominate the lay judges? The research, funded by the Hans-Böckler-Stiftung, is based on over 200 qualitative interviews conducted in France, Germany and Great Britain in 2016-17 with lay and professional judges. Each interview lasted an hour on average, was audio-recorded, transcribed and then analysed using MaxQDA. Status theories, which argue that external sources of (perceived) status are imported into the court, and complementary notions of informational advantage suggest professional judges might exercise domination and control. Furthermore, previous empirical research on British and German labour courts, now some 30 years old, found that professional judges dominated. More recent research on lay judges and professional judges in criminal courts also found professional judge domination. Our findings, however, are more nuanced and distinguish between the hearing and deliberations, and also between the attitudes of judges in the three countries. First, in Germany and Great Britain the professional judge has specialist knowledge and expertise in labour law. In contrast, French professional judges do not study employment law and may only seldom adjudicate on employment law cases. Second, although the professional judge chairs and controls the hearing when he/she sits with lay judges in all three countries, exceptionally in Great Britain lay judges have some latent power as they have to take notes systematically due to the lack of recording technology. Such notes can be material if a party complains of bias, or if there is an appeal. Third, as to labour court deliberations: in France, the professional judge alone determines the outcome of the case, but only if the lay judges have been unable to agree at a previous hearing, which only occurs in 20% of cases. In Great Britain and Germany, although the two lay judges and the professional judge have equal votes, the contribution of British lay judges' workplace knowledge is less important than that of their German counterparts. British lay judges essentially only sit on discrimination cases where the law, the purview of the professional judge, is complex. They do not sit routinely on unfair dismissal cases where workplace practices are often a key factor in the decision. Also, British professional judges are less reliant on their lay judges than German professional judges. Whereas the latter are career judges, the former only become professional judges after having had several years' experience in the law, and many know, albeit indirectly through their clients, about a wide range of workplace practices. In conclusion, whether the professional judge dominates lay judges in labour courts varies by country, although this is mediated by the attitudes of the judges involved.

Keywords: cross-national comparisons, labour courts, professional judges, lay judges

Procedia PDF Downloads 292
7987 Enhancing the Flotation of Fine and Ultrafine Pyrite Particles Using Electrolytically Generated Bubbles

Authors: Bogale Tadesse, Krutik Parikh, Ndagha Mkandawire, Boris Albijanic, Nimal Subasinghe

Abstract:

It is well established that the floatability and selectivity of mineral particles are highly dependent on particle size. Generally, a particle size of 10 micron is considered the critical size below which both flotation selectivity and recovery decline sharply. It is widely accepted that the majority of ultrafine particles, including highly liberated valuable minerals, will be lost to tailings during a conventional flotation process. This is highly undesirable, particularly in the processing of finely disseminated complex and refractory ores where fine grinding is required in order to liberate the valuable minerals. In addition, the continuing decline in ore grades worldwide necessitates intensive processing of low grade mineral deposits. Recent advances in comminution allow the economic grinding of particles down to 10 micron sizes to enhance the probability of liberating locked minerals from low grade ores. Thus, it is timely that the flotation of fine and ultrafine particles is improved in order to reduce the amount of valuable minerals lost as slimes. It is believed that the use of fine bubbles in flotation increases the bubble-particle collision efficiency and hence the flotation performance. Electroflotation, where bubbles are generated by the electrolytic breakdown of water to produce oxygen and hydrogen gases, leads to the formation of extremely finely dispersed gas bubbles with dimensions varying from 5 to 95 micron. The sizes of bubbles generated by this method are significantly smaller than those found in conventional flotation (> 600 micron). In this study, microbubbles generated by electrolysis of water were injected into a bench-top flotation cell to assess the performance of electroflotation in enhancing the flotation of fine and ultrafine pyrite particles of sizes ranging from 5 to 53 micron. The design of the cell and the results from the optimization of process variables such as current density, pH, percent solids and particle size will be presented at this conference.

Keywords: electroflotation, fine bubbles, pyrite, ultrafine particles

Procedia PDF Downloads 336
7986 Comparison of Shell-Facemask Responses in American Football Helmets during NOCSAE Drop Tests

Authors: G. Alston Rush, Gus A. Rush III, M. F. Horstemeyer

Abstract:

This study compares the shell-facemask responses of four commonly used American football helmets under the National Operating Committee on Standards for Athletic Equipment (NOCSAE) drop impact test method, to show that the test standard would more accurately simulate in-use conditions if modified to include the facemask. In our study, the need for a more rigorous systematic approach to football helmet testing procedures is emphasized by comparing the Head Injury Criterion (HIC), the Gadd Severity Index (SI), and peak acceleration values for different helmets at different locations on the helmet under modified NOCSAE standard drop tower tests. Drop tests were performed on the Rawlings Quantum Plus, Riddell 360, Schutt Ion 4D, and Xenith X2 helmets at eight impact locations, impact velocities of 5.46 and 4.88 meters per second, and helmet configurations with and without facemasks. Analysis of the NOCSAE drop test results reveals significant differences (p < 0.05) when facemasks were attached to the helmets, as compared to the NOCSAE standard configuration without facemasks. The boundary conditions of the facemask attachment can produce up to a 50% decrease (p < 0.001) in helmet performance with respect to peak acceleration. While, in general, all helmets with facemasks gave greater HIC, SI, and acceleration values than helmets without facemasks, significant helmet-dependent variations were observed across impact locations and impact velocities. The variations between helmet responses could be attributed to the unique design features of each helmet tested, which include different liners, chin strap attachments, and faceguard attachment systems. In summary, these comparative drop test results revealed that the current NOCSAE standard test methods should be improved by attaching facemasks to the helmets during testing. The modified NOCSAE football helmet standard test gives a more accurate representation of a helmet's performance and its ability to mitigate on-field impacts.
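For reference, the two injury metrics compared in this study can be computed from a sampled resultant head-acceleration trace as sketched below. The 15 ms HIC window and the synthetic half-sine pulse are assumptions for illustration, not details of the NOCSAE procedure.

```python
# Gadd Severity Index and Head Injury Criterion from a sampled acceleration trace.
# accel_g is the resultant head acceleration in g; dt is the sample interval in s.
import numpy as np

def gadd_si(accel_g, dt):
    """SI = integral of a(t)**2.5 dt over the pulse (rectangle-rule sum)."""
    return float(np.sum(accel_g ** 2.5) * dt)

def hic(accel_g, dt, max_window=0.015):
    """HIC = max over windows (t1, t2) of (t2 - t1) * [average accel]**2.5."""
    cum = np.concatenate(([0.0], np.cumsum(accel_g) * dt))  # running integral of a(t)
    n = len(accel_g)
    max_steps = max(1, int(round(max_window / dt)))
    best = 0.0
    for i in range(n):
        for j in range(i + 1, min(n, i + max_steps) + 1):
            duration = (j - i) * dt
            avg = (cum[j] - cum[i]) / duration
            best = max(best, duration * avg ** 2.5)
    return best

# Synthetic 10 ms half-sine pulse peaking at 150 g (illustrative only).
dt = 1e-4
t = np.arange(0.0, 0.010, dt)
pulse = 150.0 * np.sin(np.pi * t / 0.010)
print(f"SI  = {gadd_si(pulse, dt):.0f}")
print(f"HIC = {hic(pulse, dt):.0f}")
```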

Keywords: football helmet testing, gadd severity index, head injury criterion, mild traumatic brain injury

Procedia PDF Downloads 447
7985 Sizing of Drying Processes to Optimize Conservation of the Nuclear Power Plants on Stationary

Authors: Assabo Mohamed, Bile Mohamed, Ali Farah, Isman Souleiman, Olga Alos Ramos, Marie Cadet

Abstract:

The life of a nuclear power plant is regularly punctuated by short or long outages to carry out maintenance operations and/or nuclear fuel reloading. During these outage periods, it is essential to conserve all the secondary circuit equipment to avoid the initiation of corrosion. This circuit is one of the main components of a nuclear reactor. Indeed, proper conservation of equipment during shutdown of a nuclear unit improves circuit performance and considerably reduces the maintenance cost. This study is part of the optimization of the dry preservation of the equipment of the water station of the nuclear reactor. The main objective is to provide tools to guide the Electricity Production Nuclear Centre (EPNC) in order to achieve the criteria required by the chemical specifications for the conservation of equipment. A theoretical model of the drying of the water station exchangers is developed with the Engineering Equation Solver (EES) software. It is used to size the requirements and the air quality needed for dry conservation of the equipment. This model is based on the heat transfer and mass transfer governing the drying operation. A parametric study is conducted to determine the influence of the aerothermal factors taking part in the drying operation. The results show that the success of dry conservation of the equipment of the secondary circuit of the nuclear reactor depends strongly on the draining, the quality of the drying air, and the flow of air injected into the secondary circuit. Finally, a theoretical case study performed with EES highlights the importance of mastering the entire system in order to balance the air network and provide each exchanger with the optimum flow depending on its characteristics. From these results, recommendations to nuclear power plants can be formulated to optimize drying practices and achieve good performance in the conservation of water station equipment during shutdown.
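The sizing logic rests on a simple moisture mass balance: the dry-air flow needed to remove a given mass of residual water in a given time follows from the humidity-ratio difference between injected and exhausted air. The sketch below illustrates that balance with invented values; it is not the EES model developed in the paper.

```python
# Moisture mass balance for sizing drying air flow (illustrative values only):
# dry-air mass flow = water to remove / (time * (w_out - w_in)),
# where w is the humidity ratio in kg water per kg dry air.
def required_air_flow(water_kg: float, hours: float, w_in: float, w_out: float) -> float:
    """Dry-air mass flow (kg/h) needed to carry away the residual water."""
    if w_out <= w_in:
        raise ValueError("exhaust air must be more humid than injected air")
    return water_kg / (hours * (w_out - w_in))

# Invented case: 120 kg of residual water, a 48 h drying window,
# very dry injected air (w_in = 0.001) leaving at about 0.008 kg/kg.
print(f"{required_air_flow(120.0, 48.0, 0.001, 0.008):.0f} kg dry air per hour")
```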

Keywords: dry conservation, optimization, sizing, water station

Procedia PDF Downloads 262
7984 Prevalence of Common Mental Disorders and Its Correlation with Mental Toughness among Professional South African Rugby Players

Authors: H. B. Grobler, K. Du Plooy, P. Kruger, S. Ellis

Abstract:

Objectives: The primary objective of the study was to determine the common mental disorders (CMD) identified by professional South African rugby players and their correlation with the players' mental toughness, as a first step towards developing such a programme within a larger research project. Design: Survey research, within the theoretical perspective of field theory, was conducted, utilising an adaptation of an already existing mental health questionnaire. The aim was to obtain feedback from as many professional South African rugby players as possible in order to make certain generalizations and come to conclusions with regard to the current mental health experiences of these rugby players. Methods: Non-randomized sampling was used, combined with internet research in the form of the online completion of a questionnaire. A sample of 215 rugby players participated and completed the online questionnaire. Permission was obtained to make use of an existing questionnaire, previously used by the specific authors with retired professional rugby players. A section on mental toughness was added. Data were descriptively analysed by means of the SPSS software platform. Results: Results indicated that the most significant problem that the players are experiencing is a problem with alcohol (47.9%). Other problems that featured are distress (16.3%), sleep disturbances (7%), as well as anxiety and depression (4.2%). 4.7% of the players indicated that they smoke. 3.3% of the players perceive themselves as not being mentally tough. A positive correlation between mental toughness and sound sleep (0.262) was found, while negative correlations were found between mental toughness and the following: anxiety/depression (-0.401), anxiety/depression positive (-0.423), distress (-0.259) and common mental disorder problems in general (-0.220). Conclusions: Although the presence of CMD at first glance does not seem significantly high amongst all the players, it must be considered that if one player in a team experiences CMD, it will have an impact on his mental toughness and most likely on his performance, as well as on the performance of the whole team. It is therefore important to ensure mental health in the whole team by addressing individual CMD problems. A mental health support programme therefore needs to be implemented to the benefit of these players within the South African context.

Keywords: common mental disorders, mental toughness, professional athletes, rugby players

Procedia PDF Downloads 218
7983 Psychological Variables Predicting Academic Achievement in Argentinian Students: Scales Development and Recent Findings

Authors: Fernandez liporace, Mercedes Uriel Fabiana

Abstract:

Academic achievement in high school and college students is currently a matter of concern. National and international assessments show high schoolers to be low achievers, and local statistics indicate alarming dropout percentages at this educational level. Even so, 80% of those students intend to attend higher education. On the other hand, admission to Public National Universities is free and not selective by entrance examination. Though initial registrations are massive (307,894 students), only 50% of freshmen pass their first-year classes, and 23% achieve a degree. Low performance is a common problem. Hence, freshmen adaptation and adjustment, dropout, and low academic achievement arise as agenda topics. In addition, the transition between high school and college must be examined in depth in order to achieve an integrated and successful path from one educational stratum to the other. Psychology addresses the situation through two main research lines. The first concerns psychometric scales: designing and/or adapting tests and examining their technical properties and theoretical validity (e.g., academic motivation, learning strategies, learning styles, coping, perceived social support, parenting styles and parental consistency, paradoxical personality as correlated with creative skills, and psychopathological symptomatology). The second research line emphasizes relationships among the variables measured by those scales, addressing the formulation and testing of predictive models of academic achievement and establishing differences by sex, age, educational level (high school vs. college), and career. Pursuing these goals, several studies were carried out in recent years, reporting findings and producing assessment technology useful for detecting students academically at risk as well as good achievers. Multiple samples were analysed, totalling more than 3,500 participants (2,500 from college and 1,000 from high school), using descriptive, correlational, group-difference, and explanatory designs. A summary of the most relevant results is presented. Providing information to design specific interventions according to each learner's features and his/her educational environment emerges as a mid-term goal. Furthermore, that information might be helpful for adapting curricula by career, as well as for implementing special didactic strategies differentiated by sex and personal characteristics.
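The abstract does not specify the form of the predictive models in the second research line; the sketch below merely illustrates the general approach with a multiple linear regression on invented, standardized scale scores. The variable names, coding, and coefficients are hypothetical and are not the authors' data or model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical sketch of a predictive model of academic achievement from
# psychometric scale scores (invented data; not the authors' model).
rng = np.random.default_rng(1)
n = 300
motivation = rng.normal(0, 1, n)            # standardized scale score (assumed)
learning_strategies = rng.normal(0, 1, n)   # standardized scale score (assumed)
sex = rng.integers(0, 2, n)                 # dummy coding: 0 = male, 1 = female
level = rng.integers(0, 2, n)               # 0 = high school, 1 = college
gpa = 6 + 0.8 * motivation + 0.5 * learning_strategies + 0.3 * level + rng.normal(0, 1, n)

X = np.column_stack([motivation, learning_strategies, sex, level])
model = LinearRegression().fit(X, gpa)
print("R^2 =", round(model.score(X, gpa), 3))
print("Coefficients:", dict(zip(
    ["motivation", "learning_strategies", "sex", "level"],
    np.round(model.coef_, 3))))
```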

Keywords: academic achievement, higher education, high school, psychological assessment

Procedia PDF Downloads 369
7982 Clinical Applications of Amide Proton Transfer Magnetic Resonance Imaging: Detection of Brain Tumor Proliferative Activity

Authors: Fumihiro Ima, Shinichi Watanabe, Shingo Maeda, Haruna Imai, Hiroki Niimi

Abstract:

It is important to know the growth rate of a brain tumor before surgery because it influences treatment planning, including not only the surgical resection strategy but also adjuvant therapy after surgery. Amide proton transfer (APT) imaging is an emerging molecular magnetic resonance imaging (MRI) technique based on chemical exchange saturation transfer that does not require the administration of a contrast medium. The underlying assumption in APT imaging of tumors is that there is a close relationship between the proliferative activity of the tumor and mobile protein synthesis. We aimed to evaluate the diagnostic performance of APT imaging of pre- and post-treatment brain tumors. Ten patients with brain tumors underwent conventional and APT-weighted sequences on a 3.0 Tesla MRI scanner before clinical intervention. The maximum and minimum APT-weighted signals (APTWmax and APTWmin) in each solid tumor region were obtained and compared before and after clinical intervention. All surgical specimens were examined for histopathological diagnosis. Eight of the ten patients underwent adjuvant therapy after surgery. The histopathological diagnosis was glioma in 7 patients (WHO grade 2 in 2 patients, WHO grade 3 in 3 patients, and WHO grade 4 in 2 patients), meningioma WHO grade 1 in 2 patients, and primary lymphoma of the brain in 1 patient. High-grade gliomas showed significantly higher APTW signals than low-grade gliomas. APTWmax in one huge parasagittal meningioma infiltrating the skull bone was higher than that in glioma WHO grade 4, whereas APTWmax in another convexity meningioma was the same as that in glioma WHO grade 3. Diagnosis of primary lymphoma of the brain was possible with APT imaging before pathological confirmation. APTW signals in residual tumors decreased dramatically within one year after adjuvant therapy in all patients. APT imaging demonstrated excellent diagnostic performance for the planning of surgery and adjuvant therapy of brain tumors.
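The abstract does not describe the post-processing used to derive the APT-weighted signal. A common definition in the APT literature is the magnetization transfer ratio asymmetry at +/-3.5 ppm, sketched below on synthetic data; the array shapes, frequency offsets, and normalisation are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

# Minimal sketch of an APT-weighted (APTw) map computed as magnetization transfer
# ratio asymmetry at +/-3.5 ppm, a common convention in the APT literature.
# Shapes and offsets are illustrative only.

def aptw_map(z_spectrum, offsets_ppm, s0):
    """APTw = [S(-3.5 ppm) - S(+3.5 ppm)] / S0, per voxel.

    z_spectrum : array (n_offsets, nx, ny) of saturated images
    offsets_ppm: 1-D array of saturation frequency offsets in ppm
    s0         : unsaturated reference image of shape (nx, ny)
    """
    idx_neg = int(np.argmin(np.abs(offsets_ppm + 3.5)))
    idx_pos = int(np.argmin(np.abs(offsets_ppm - 3.5)))
    return (z_spectrum[idx_neg] - z_spectrum[idx_pos]) / np.clip(s0, 1e-6, None)

# Example with synthetic data
offsets = np.linspace(-6, 6, 25)                          # ppm
z = np.random.rand(offsets.size, 64, 64) * 0.5 + 0.4      # fake Z-spectrum images
s0 = np.ones((64, 64))
aptw = aptw_map(z, offsets, s0)
print("APTw range:", float(aptw.min()), "to", float(aptw.max()))
```

The APTWmax and APTWmin values reported above would then be the extrema of such a map within each solid tumor region of interest.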

Keywords: amides, magnetic resonance imaging, brain tumors, cell proliferation

Procedia PDF Downloads 139
7981 University Students' Perspectives on a Mindfulness-Based App for Weight, Weight Related Behaviors, and Stress: A Qualitative Focus Group Study

Authors: Lynnette Lyzwinski, Liam Caffery, Matthew Bambling, Sisira Edirippulige

Abstract:

Introduction: A novel way to deliver mindfulness interventions to populations at risk of weight gain and stress-related eating, in particular college students, is through mHealth. While there have been qualitative studies on mHealth for weight loss, there has not been a study on mHealth for weight loss using mindfulness that explored student perspectives on a student-centred mindfulness app and mindfulness-based text messages for eating and stress. Student perspective data will provide valuable information for creating a specific-purpose weight management app and mindfulness-based text messages (for the Mindfulness App study). Methods: A qualitative focus group study was undertaken at the St Lucia campus of the University of Queensland in March 2017. Students over the age of 18 were eligible to participate. Interviews were audiotaped and transcribed. One week following the focus group, students were sent sample mindfulness-based text messages based on their responses and provided written feedback via email. Data were analysed using NVivo software. Results: The key themes for a future mindfulness-based app were a simple design interface, a focus on education and practical tips, and real-life practical exercises; social media should be avoided. Key themes surrounding barriers included the perceived difficulty of mindfulness and a lack of proper guidance or knowledge. The mindfulness-based text messages were received positively. Key themes were creating messages with practical tips on how to be mindful and how to integrate mindful reflection on both one's body and environment while on campus; other themes included creating positive, inspirational messages. There was a lack of agreement on the ideal timing for the messages. Discussion: This is the first study to explore student perspectives on a mindfulness app and mindfulness-based text messages for stress and weight management, as a pre-trial study for the Mindfulness App trial for stress, lifestyle, and weight in students. It is important to maximize the potential facilitators of use and minimize the identified barriers when developing and designing a future mHealth mindfulness-based intervention tailored to the student consumer. Conclusion: Future mHealth studies may consider integrating mindfulness-based text messages in their interventions for weight and stress, as this is a novel feature that appears to be acceptable to participants. The results of this focus group provide the basis for developing content for a specific-purpose student app for weight management.

Keywords: mindfulness, college students, mHealth, weight loss

Procedia PDF Downloads 198
7980 The Effectiveness of Guest Lecturers with Disabilities in the Classroom

Authors: Afshin Gharib

Abstract:

Often, instructors prefer to bring a guest lecturer into class who can provide an “experiential” perspective on a particular topic. The assumption is that the personal experience brought into the classroom makes the material resonate more with students and that students prefer material taught from an experiential perspective. The question asked in the present study was whether a guest lecture from an “experiential” expert with a disability (e.g., a guest with cone-rod dystrophy lecturing on vision, or a dyslexic guest lecturing on the psychology of reading) would be more effective than the course instructor in capturing students' attention and conveying information in an Introduction to Psychology class. Students in two sections of Introduction to Psychology (N = 25 in each section) listened to guest lecturers with disabilities lecturing on a topic related to their disability, one in the area of Sensation and Perception (the guest lecturer is vision impaired) and one in the area of Language Development (the guest lecturer is dyslexic). The guest lecturers lectured on the same topic in both sections; however, each lecturer used their own experiences to highlight the topics covered in one section but not the other (counterbalanced between sections), providing students in one section with experiential testimony. Following each of the four lectures (two experiential, two non-experiential), students rated the lecture on several dimensions, including overall quality, level of engagement, and performance. In addition, students in both sections were tested on the same test items from the lecture material to ascertain the degree of learning and were given identical “pop” quizzes two weeks after the exam to measure retention. It was hypothesized that students would find the experiential lectures, in which lecturers spoke about their disabilities, more engaging, learn more from them, and retain the material longer. We found that students in fact preferred the course instructor to the guests, regardless of whether the guests included a discussion of their own disability in their lectures. Performance on the exam questions and the pop quiz items did not differ between “experiential” and “non-experiential” lectures, suggesting that guest lecturers who discuss their own disabilities are not more effective in conveying material and that students are not more likely to retain material delivered by “experiential” guests. In future research, we hope to explore the reasons for students' preference for their regular instructor over guest lecturers.

Keywords: guest lecturer, student perception, retention, experiential

Procedia PDF Downloads 18
7979 The Follower Robots Tested in Different Lighting Condition and Improved Capabilities

Authors: Sultan Muhammed Fatih Apaydin

Abstract:

In this study, two types of robot, a pioneer robot and a follower robot, were examined in order to improve the capabilities of tracking robots. The robots continuously track each other, and measurement of the following distance between them is essential for the improvements to be applied. The follower robot was made to follow the pioneer robot in line with the intended goals. The robots were tested on various grounds and in various environments in terms of performance, and the necessary improvements were implemented based on the measured results of these tests.
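The abstract does not detail the control law used to maintain the following distance. The sketch below illustrates one simple possibility, a proportional keep-distance loop, simulated in Python; the target distance, gain, and sensor model are assumptions, and the actual system described above used an Arduino with infra-red sensors.

```python
# Minimal sketch of a proportional "keep-distance" control loop for a follower robot.
# Sensor reading and motion are simulated here purely to illustrate the control idea.

TARGET_DISTANCE_CM = 20.0   # desired gap to the pioneer robot (assumed)
KP = 0.8                    # proportional gain (assumed)

def read_ir_distance_cm(true_gap_cm):
    """Stand-in for an IR range measurement (in practice: ADC reading + calibration)."""
    return true_gap_cm      # no sensor noise modelled in this sketch

def follower_speed(measured_gap_cm):
    """Speed up when the gap grows, slow down (or stop) when it shrinks."""
    error = measured_gap_cm - TARGET_DISTANCE_CM
    return max(0.0, KP * error)   # cm/s; never drive backwards in this simple sketch

# Simulate the pioneer moving away at a constant 5 cm/s for 10 seconds.
gap, dt = 30.0, 0.1
for step in range(100):
    v_follower = follower_speed(read_ir_distance_cm(gap))
    gap += (5.0 - v_follower) * dt
print(f"Gap after 10 s: {gap:.1f} cm (target {TARGET_DISTANCE_CM} cm)")
```

A pure proportional loop like this settles with a steady-state offset when the pioneer keeps moving, which is one reason distance measurements across different surfaces and environments matter for tuning the follower.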

Keywords: mobile robot, remote and autonomous control, infra-red sensors, arduino

Procedia PDF Downloads 566
7978 Application of Data Driven Based Models as Early Warning Tools of High Stream Flow Events and Floods

Authors: Mohammed Seyam, Faridah Othman, Ahmed El-Shafie

Abstract:

The early warning of high stream flow events (HSF) and floods is an important aspect of the management of surface water and river systems. This process can be performed using either process-based models or data-driven models such as artificial intelligence (AI) techniques. The main goal of this study is to develop an efficient AI-based model for predicting the real-time hourly stream flow (Q) and to apply it as an early warning tool for HSF and floods in the downstream area of the Selangor River basin, taken here as a paradigm of humid tropical rivers in Southeast Asia. The performance of the AI-based models has been improved through the integration of lag time (Lt) estimation in the modelling process. A total of 8,753 patterns of hourly Q, water level, and rainfall (RF) records representing a one-year period (2011) were utilized in the modelling process. Six hydrological scenarios were arranged through hypothetical cases of the input variables to investigate how changes in RF intensity at the upstream stations can lead to the formation of floods. The initial stream flow was changed for each scenario in order to cover a wide range of hydrological situations. The performance evaluation of the developed AI-based model shows that a high correlation coefficient (R) between the observed and predicted Q is achieved. The AI-based model has been successfully employed for early warning through the advance detection of hydrological conditions that could lead to the formation of floods and HSF, represented by three levels of severity (i.e., alert, warning, and danger). Based on the results of the scenarios, reaching the danger level in the downstream area required high RF intensity in at least two upstream areas. From the application results, it can be concluded that AI-based models are beneficial tools for local authorities for flood control and awareness.
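The abstract does not specify which AI technique, lag times, or warning thresholds were used. The sketch below illustrates the general workflow on synthetic data, assuming a multilayer perceptron, a 3-hour lag between forcing and response, and placeholder severity thresholds; none of these choices come from the study itself.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Illustrative data-driven hourly stream flow predictor with lagged inputs.
# The AI technique, lag, and thresholds below are assumptions, not the authors' setup.
rng = np.random.default_rng(0)
n = 8753                                                  # hourly records for one year
rain = rng.gamma(0.3, 2.0, n)                             # synthetic upstream rainfall [mm/h]
level = np.convolve(rain, np.ones(6) / 6, mode="same")    # synthetic water level proxy
q = 5 + 3 * np.roll(level, 3) + rng.normal(0, 0.2, n)     # synthetic stream flow [m3/s]

LAG = 3                                                   # assumed lag time [h]
X = np.column_stack([rain[:-LAG], level[:-LAG]])
y = q[LAG:]

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=2000, random_state=0)
model.fit(scaler.transform(X), y)

q_pred = model.predict(scaler.transform(X))
r = np.corrcoef(q_pred, y)[0, 1]
print(f"Correlation coefficient R = {r:.3f}")

# Map predictions to early-warning severity levels (thresholds are placeholders).
alert, warning, danger = 10.0, 15.0, 20.0
levels = np.digitize(q_pred, [alert, warning, danger])    # 0 = normal ... 3 = danger
print("Hours at danger level:", int((levels == 3).sum()))
```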

Keywords: floods, stream flow, hydrological modelling, hydrology, artificial intelligence

Procedia PDF Downloads 248
7977 Computational Modelling of Epoxy-Graphene Composite Adhesive towards the Development of Cryosorption Pump

Authors: Ravi Verma

Abstract:

A cryosorption pump is the best solution for achieving a clean, vibration-free ultra-high vacuum. Furthermore, the operation of a cryosorption pump is free from the influence of electric and magnetic fields. Due to these attributes, this pump is used in space simulation chambers to create ultra-high vacuum. The cryosorption pump comprises three parts: (a) a panel cooled by a cryogen or a cryocooler, (b) an adsorbent used to adsorb the gas molecules, and (c) an epoxy that holds the adsorbent and the panel together, thereby aiding heat transfer from the adsorbent to the panel. The performance of the cryosorption pump depends on the temperature of the adsorbent and hence on the thermal conductivity of the epoxy. We have therefore attempted to increase the thermal conductivity of the epoxy adhesive by mixing in nano-sized graphene filler particles. The thermal conductivity of the epoxy-graphene composite adhesive is measured with an indigenously developed experimental setup over the temperature range from 4.5 K to 7 K, which is generally the operating temperature range of a cryosorption pump for efficient pumping of hydrogen and helium gas. In this article, we present the experimental results for the epoxy-graphene composite adhesive in the temperature range from 4.5 K to 7 K. We also propose an analytical heat conduction model for the thermal conductivity of the composite, in which the filler particles, such as graphene, are randomly distributed in a base matrix of epoxy. The developed model considers the completely random spatial distribution of filler particles, described by a binomial distribution. The results obtained with the model have been compared with the experimental results as well as with other established models. The developed model is able to predict the thermal conductivity in both the isotropic and anisotropic regions over the required temperature range from 4.5 K to 7 K. Due to the non-empirical nature of the proposed model, it will be useful for predicting other properties of composite materials involving a filler in a base matrix. The present studies will aid the understanding of low-temperature heat transfer, which in turn will be useful for the development of a high-performance cryosorption pump.
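The authors' binomial-distribution-based conduction model is not detailed in the abstract. As an illustration of the general idea only, the sketch below weights the filler content of each conduction column by a binomial distribution, combines the layers within a column in series, and averages the columns in parallel; the conductivity values are hypothetical, not the measured data of the study.

```python
from math import comb

# Illustrative effective-conductivity estimate for a matrix with randomly placed
# filler, using a binomial distribution of filler layers per conduction column
# (series within a column, parallel across columns). Not the authors' exact model.

def k_effective(k_matrix, k_filler, phi, n_layers=20):
    """Binomial-weighted series/parallel estimate of composite conductivity.

    k_matrix, k_filler : conductivities of epoxy and graphene filler [W/m-K]
    phi                : filler volume fraction (probability a layer is filler)
    n_layers           : layers stacked along the heat-flow direction
    """
    k_eff = 0.0
    for k in range(n_layers + 1):                       # k filler layers in a column
        p = comb(n_layers, k) * phi**k * (1 - phi)**(n_layers - k)
        # series combination of k filler layers and (n_layers - k) matrix layers
        resistance = k / k_filler + (n_layers - k) / k_matrix
        k_eff += p * n_layers / resistance              # parallel (probability-weighted) average
    return k_eff

# Hypothetical low-temperature values (not measured data from the study).
k_epoxy_5K = 0.02        # W/m-K
k_graphene_5K = 50.0     # W/m-K
for phi in (0.0, 0.05, 0.10, 0.20):
    print(f"phi = {phi:.2f}  ->  k_eff = {k_effective(k_epoxy_5K, k_graphene_5K, phi):.4f} W/m-K")
```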

Keywords: composite adhesive, computational modelling, cryosorption pump, thermal conductivity

Procedia PDF Downloads 90