Search results for: real output
993 Subtitling in the Classroom: Combining Language Mediation, ICT and Audiovisual Material
Authors: Rossella Resi
Abstract:
This paper describes a project carried out in an Italian school with English-learning pupils, combining three didactic tools that are attested to be relevant for the success of a young learner's language curriculum: the use of technology, intralingual and interlingual mediation (according to the CEFR), and the cultural dimension. The aim of this project was to test a technological, hands-on translation activity like subtitling in a formal teaching context and to exploit its potential as a motivational tool for developing listening, writing, translation, and cross-cultural skills among language learners. The activities proposed involved the use of professional subtitling software called Aegisub and culture-specific films. The workshop was optional, so motivation was based entirely on the pleasure of engaging with a realistic subtitling program and on the challenge of meeting the constraints that a real-life work situation might involve. Twelve pupils aged between 16 and 18 attended the afternoon workshop. The workshop was organized in three parts: (i) an introduction, in which the learners were introduced to the concept and constraints of subtitling and provided with a few basic rules on spotting and segmentation. During this session, learners also had time to familiarize themselves with the main software features. (ii) The second part involved three subtitling activities in plenary or in groups. In the first activity, the learners experienced the technical dimensions of subtitling: they were provided with a short video segment together with its transcription to be segmented and time-spotted. The second activity also involved oral comprehension: learners had to understand and transcribe a video segment before subtitling it. The third activity embedded a translation task based on a provided transcription, including segmentation and spotting of subtitles. (iii) The workshop ended with a small final project, at which point learners were able to carry out a short subtitling assignment (transcription, translation, segmenting, and spotting) on their own with a similar video interview. The results of these assignments were above expectations, since the learners were highly motivated by the authentic and original nature of the assignment. The subtitled videos were evaluated and watched in the regular classroom together with other students who did not take part in the workshop.
Keywords: ICT, L2, language learning, language mediation, subtitling
Procedia PDF Downloads 416
992 Statistical Investigation Projects: A Way for Pre-Service Mathematics Teachers to Actively Solve a Campus Problem
Authors: Muhammet Şahal, Oğuz Köklü
Abstract:
As statistical thinking and problem-solving processes have become increasingly important, teachers need to be more rigorously prepared with statistical knowledge to teach their students effectively. This study examined pre-service mathematics teachers' development of statistical investigation projects using data and exploratory data analysis tools, following a design-based research perspective and the statistical investigation cycle. A total of 26 pre-service senior mathematics teachers from a public university in Turkiye participated in the study. They voluntarily formed groups of 3-4 members and worked on their statistical investigation projects for six weeks. The data sources were audio recordings of the pre-service teachers' group discussions while working on their projects in class, whole-class video recordings, and each group's weekly and final reports. As part of the study, we reviewed weekly reports, provided timely feedback specific to each group, and revised the following week's class work based on the groups' needs and progress on their projects. We used content analysis to analyze the groups' audio and classroom video recordings. The participants encountered several difficulties, which included formulating a meaningful statistical question in the early phase of the investigation, securing the most suitable data collection strategy, and deciding on the data analysis method appropriate for their statistical questions. The data collection and organization processes were challenging for some groups and revealed the importance of comprehensive planning. Overall, the pre-service senior mathematics teachers were able to work holistically on a statistical project that contained the formulation of a statistical question, planning, data collection, analysis, and reaching a conclusion, even though they faced challenges because of their lack of experience. The study suggests that pre-service senior mathematics teachers have the potential to apply statistical knowledge and techniques in a real-world context and could carry the project through with the support of the researchers. We provide implications for the statistical education of teachers and future research.
Keywords: design-based study, pre-service mathematics teachers, statistical investigation projects, statistical model
Procedia PDF Downloads 85
991 An Investigation of Peptide Functionalized Gold Nanoparticles on Colon Cancer Cells for Biomedical Application
Authors: Rolivhuwa Bishop Ramagoma, Lynn Cairncross, Saartjie Roux
Abstract:
According to the World Health Organization, colon cancer is among the most common cancers diagnosed in both men and women. Specifically, it is the second leading cause of cancer-related deaths, accounting for over 860,000 deaths worldwide in 2018. Currently, chemotherapy has become an essential component of most cancer treatments. Despite progress in cancer drug development in recent years, traditional chemotherapeutic drugs still have low selectivity for targeting tumour tissues and are frequently constrained by dose-limiting toxicity. The creation of nanoscale delivery vehicles capable of delivering treatment directly into cancer cells has recently caught the interest of researchers. Herein, the development of peptide-functionalized polyethylene glycol gold nanoparticles (peptide-PEG-AuNPs) as a cellular probe and delivery agent is described, with the broader aim of developing a specific diagnostic prototype and assessing its specificity not only against cell lines but against primary human cells as well. Gold nanoparticles (AuNPs) were synthesized and stabilized through chemical conjugation. The synthesized AuNPs were characterized, their stability in physiological solutions was assessed, and their cytotoxicity against colon carcinoma and non-carcinoma skin fibroblasts was also studied. Furthermore, genetic effects (through real-time polymerase chain reaction, RT-PCR), localization and uptake, and peptide specificity were also determined. In this study, the different peptide-AuNPs were found to have preferential toxicity at higher concentrations, as revealed by cell viability assays; moreover, all AuNPs showed excellent stability for over 3 months following the method of synthesis. The final peptide-PEG-AuNP conjugates showed good biocompatibility in the presence of high ionic solutions and biological media, as well as good cellular uptake. The formulation of a colon cancer-specific targeting peptide was successful; additionally, the genes/pathways affected by the treatments were determined through RT-PCR. The primary cell study is still ongoing, with promising results thus far.
Keywords: nanotechnology, cancer, diagnosis, therapeutics, gold nanoparticles
Procedia PDF Downloads 94
990 Renewable Energy and Environment: Design of a Decision Aided Tool for Sustainable Development
Authors: Mustapha Ouardouz, Mina Amharref, Abdessamed Bernoussi
Abstract:
For countries with limited energy resources, the future of energy lies in renewables (solar, wind, etc.). Renewable energies constitute a major component of the energy strategy, covering a substantial part of growing needs and contributing to environmental protection by replacing fossil fuels. Indeed, sustainable development involves the promotion of renewable energy and the preservation of the environment through clean energy technologies, to limit emissions of greenhouse gases and reduce the pressure exerted on forest cover. Studies of the impact of energy use on the environment and of farm-related risks are therefore necessary, and a global approach integrating all the sectors involved in such a project seems to be the best one. In this paper, we present an approach based on multi-criteria analysis and the realization of a pilot to achieve the development of an innovative geo-intelligent environmental platform. An implementation of this platform will collect, process, analyze, and manage environmental data in connection with the nature of the energy used in the studied region. As an application, we consider a region in the north of Morocco characterized by intense agricultural and industrial activities and using diverse renewable energies. The strategic goals of this platform are: decision support for better governance; improving the responsiveness of the public and private companies connected by providing them in real time with reliable data, together with possibilities for modeling and simulating energy scenarios; the identification of socio-technical solutions for introducing renewable energies and the estimation of the technically implementable potential through socio-economic analyses and the assessment of infrastructure for the region and the communities; and the preservation and enhancement of natural resources for better citizen governance through the democratization of access to environmental information. The tool will also perform simulations integrating the environmental impacts of natural disasters, particularly those linked to climate change; indeed, extreme events such as floods, droughts, and storms will no longer be rare and should therefore be integrated into such projects.
Keywords: renewable energies, decision aided tool, environment, simulation
Procedia PDF Downloads 459
989 Towards Learning Query Expansion
Authors: Ahlem Bouziri, Chiraz Latiri, Eric Gaussier
Abstract:
The steady growth in the size of textual document collections is a key driver of progress for modern information retrieval techniques, whose effectiveness and efficiency are constantly challenged. Given a user query, the number of retrieved documents can be overwhelmingly large, hampering their efficient exploitation by the user. In addition, retaining only relevant documents in a query answer is of paramount importance for effectively meeting the user's needs. In this situation, the query expansion technique offers an interesting solution for obtaining a complete answer while preserving the quality of retained documents. This relies mainly on an accurate choice of the terms added to an initial query. Interestingly enough, query expansion takes advantage of large text volumes by extracting statistical information about index term co-occurrences and using it to make user queries better fit the real information needs. In this respect, a promising track consists of applying data mining methods to extract dependencies between terms, namely a generic basis of association rules between terms. The key feature of our approach is a better trade-off between the size of the mining result and the conveyed knowledge. Thus, faced with the huge number of derived association rules, and in order to select the optimal combination of query terms from the generic basis, we propose to model the problem as a classification problem and solve it using a learning algorithm such as SVM or k-means. For this purpose, we first generate a training set using a genetic algorithm-based approach that explores the association rule space in order to find an optimal set of expansion terms, improving the MAP of the search results. The experiments were performed on the SDA 95 collection, a data collection for information retrieval. The results were better in terms of both MAP and NDCG. The main observation is that hybridizing text mining techniques and query expansion in an intelligent way allows us to incorporate the good features of all of them. As this is a preliminary attempt in this direction, there is large scope for enhancing the proposed method.
Keywords: supervised learning, classification, query expansion, association rules
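The abstract does not publish its feature set, so the following is only a hypothetical sketch of the selection step it describes: candidate expansion terms mined as association rules are scored by a trained SVM. The rule-confidence, support, and IDF features and all numeric values are invented placeholders.

```python
# Hypothetical sketch: ranking candidate query-expansion terms with an SVM,
# assuming candidates have been mined as association rules
# (antecedent = query term, consequent = candidate expansion term).
from sklearn.svm import SVC
import numpy as np

# Assumed features per candidate: rule confidence, rule support,
# and collection-level IDF of the candidate term.
X_train = np.array([
    [0.82, 0.15, 3.1],   # term that improved MAP in the GA-built training set
    [0.40, 0.02, 1.2],   # term that hurt MAP
    [0.75, 0.10, 2.8],
    [0.30, 0.05, 0.9],
])
y_train = np.array([1, 0, 1, 0])  # 1 = good expansion term, 0 = bad

clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

# Rank unseen candidate terms by the probability of being a good expansion.
X_candidates = np.array([[0.70, 0.08, 2.5], [0.35, 0.03, 1.0]])
scores = clf.predict_proba(X_candidates)[:, 1]
ranking = np.argsort(scores)[::-1]
print("candidate ranking:", ranking, "scores:", scores)
```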
Procedia PDF Downloads 325
988 Detection of Patient Roll-Over Using High-Sensitivity Pressure Sensors
Authors: Keita Nishio, Takashi Kaburagi, Yosuke Kurihara
Abstract:
Recent advances in medical technology have served to enhance average life expectancy. However, the total time for which patients are prescribed complete bed rest has also increased. Since patients required to maintain a constant lying posture are prone to pressure ulcers (bedsores), the development of a system to detect patient roll-over becomes imperative. For this purpose, extant studies have proposed the use of cameras, and favorable results have been reported. Continuous on-camera monitoring, however, tends to violate patient privacy. We have previously proposed an unconstrained bio-signal measurement system that can detect body motion during sleep without violating the patient's privacy. In this study, we therefore propose a roll-over detection method based on the data obtained from this bio-signal measurement system. Signals recorded by the sensor were assumed to comprise respiration, pulse, body-motion, and noise components. Compared with the respiration and pulse components, the body-motion component during roll-over generates large vibrations; thus, analysis of the body-motion component facilitates detection of a roll-over tendency. The large vibration associated with the roll-over motion has a strong effect on the root mean square (RMS) value of the time series of the body-motion component, calculated over short 10 s segments. After calculation, the RMS value of each segment was compared to a threshold value set in advance; if the RMS value in any segment exceeded the threshold, the corresponding data were considered to indicate the occurrence of a roll-over. In order to validate the proposed method, we conducted an experiment. A bi-directional microphone was adopted as a high-sensitivity pressure sensor and placed between the mattress and the bed frame. Recorded signals passed through an analog band-pass filter (BPF) operating over the 0.16-16 Hz bandwidth; the BPF allowed the respiration, pulse, and body-motion components to pass whilst removing the noise component. The output from the BPF was A/D converted at a sampling frequency of 100 Hz, and the measurement time was 480 seconds. The numbers of subjects and datasets were 5 and 10, respectively. Subjects lay on a mattress in the supine position. During data measurement, subjects, upon the investigator's instruction, were asked to roll over into four different positions: supine to left lateral, left lateral to prone, prone to right lateral, and right lateral to supine. The recorded data were divided into 48 segments at 10 s intervals, and the corresponding RMS value for each segment was calculated. The system was evaluated by the agreement between the investigator's instructions and the detected segments; as a result, an accuracy of 100% was achieved. While reviewing the time series of the recorded data, segments indicating roll-over tendencies were observed to demonstrate a large amplitude; however, clear differences between the decubitus position and the roll-over motion could not be confirmed. Extant research had the disadvantage of compromising patient privacy; the proposed method, by contrast, demonstrates precise detection of patient roll-over tendencies without violating it. As a future prospect, decubitus estimation before and after roll-over could be attempted; since clear differences between the decubitus position and the roll-over motion could not be confirmed in this paper, future studies could be based on the respiration and pulse components.
Keywords: bedsore, high-sensitivity pressure sensor, roll-over, unconstrained bio-signal measurement
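A minimal Python sketch of the segment-wise RMS thresholding described above. The 100 Hz sampling rate, 480 s recording, and 10 s segments come from the abstract; the threshold value and the already-isolated body-motion component are assumptions.

```python
# Sketch of the segment-wise RMS thresholding; THRESHOLD is an assumed value.
import numpy as np

FS = 100           # sampling frequency [Hz], per the abstract
SEG_LEN = 10 * FS  # 10 s segments (48 segments in a 480 s recording)
THRESHOLD = 0.5    # assumed threshold, set in advance per the paper

def detect_rollover(body_motion: np.ndarray) -> list:
    """Return indices of 10 s segments whose RMS exceeds the threshold."""
    n_segments = len(body_motion) // SEG_LEN
    rollover_segments = []
    for i in range(n_segments):
        segment = body_motion[i * SEG_LEN:(i + 1) * SEG_LEN]
        rms = np.sqrt(np.mean(segment ** 2))
        if rms > THRESHOLD:
            rollover_segments.append(i)
    return rollover_segments

# Synthetic example: a quiet signal with a large vibration in segment 20.
signal = 0.05 * np.random.randn(480 * FS)
signal[20 * SEG_LEN:21 * SEG_LEN] += np.random.randn(SEG_LEN)
print(detect_rollover(signal))  # expected: [20]
```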
Procedia PDF Downloads 121
987 An International Comparison of Global Financial Centers: Major Competitive Strategies
Authors: I. Hakki Eraslan, Birol Ozturk, Istemi Comlekci
Abstract:
This paper begins by defining what is meant by globalization in finance and by identifying the sources of value added in the internationally competitive financial services sector: origination, trading, and distribution of debt and equity capital market instruments and their derivatives; foreign exchange trading and securities brokerage; management of market risk and credit risk; loan syndication and structured bank financings; corporate finance and advisory services; and asset management. These activities are considered in terms of a value chain, one that ultimately gives rise to the real economic gains attributable to financial-center operations. The research presents the available evidence as to where the relevant value-added activities usually take place. It then examines the centrifugal and centripetal forces that determine the concentration or dispersal of value-added activity in financial intermediation, both interregionally and internationally. Next, the research assesses the factors which appear to underlie the locational pattern of international financial centers that has evolved. In preparing this paper, we also examined the current position and the main opportunities and challenges facing the world's major financial services sectors, and attempted to lay out a potential vision and strategies. We conducted extensive research, drawing on many internal research materials and publications. We also engaged closely with academia, industry practitioners, and regulators, and consulted market experts from major world financial centers. More than 60 in-depth consultative sessions were conducted in the past two years, which provided insightful suggestions and innovative ideas on how to further the financial industry's position as an international financial center. The paper concludes with the outlook for the future pattern of financial centers in the global competitive environment. The ideas and advice gathered are condensed into this paper, which recommends to strategic decision leaders a vision and a strategy for the financial services sector to move forward amid a highly competitive environment.
Keywords: financial centers, competitiveness, financial services industry, economics
Procedia PDF Downloads 404
986 Automatic Furrow Detection for Precision Agriculture
Authors: Manpreet Kaur, Cheol-Hong Min
Abstract:
The increasing advancement of robotics equipped with machine vision sensors applied to precision agriculture offers a promising solution to various problems on agricultural farms. An important issue for such machine vision systems is crop row and weed detection. This paper proposes an automatic furrow detection system based on real-time processing for identifying crop rows in maize fields in the presence of weeds. The vision system is designed to be installed on farming vehicles, that is, subjected to gyration, vibration, and other undesired movements. The images are captured in perspective and are affected by the above undesired effects. The goal is to identify crop rows for vehicle navigation, which includes weed removal, where weeds are identified as plants outside the crop rows. Image quality is affected by different lighting conditions and by gaps along the crop rows due to lack of germination or incorrect planting. The proposed image processing method consists of four different steps. First, image segmentation is based on an HSV (Hue, Saturation, Value) decision tree: the proposed algorithm uses the HSV color space to discriminate crops, weeds, and soil, and the region of interest is defined by filtering each of the HSV channels between maximum and minimum threshold values. Then, noise in the images is eliminated by means of a hybrid median filter. Next, mathematical morphological processing, i.e., erosion to remove smaller objects followed by dilation to gradually enlarge the boundaries of regions of foreground pixels, is applied, enhancing the image contrast. Finally, to accurately detect the position of the crop rows, the region of interest is defined by creating a binary mask; edge detection and the Hough transform are then applied to detect lines represented in polar coordinates, with furrow directions appearing as accumulations on the angle axis in the Hough space. The experimental results show that the method is effective.
Keywords: furrow detection, morphological, HSV, Hough transform
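The following OpenCV sketch mirrors the four-stage pipeline described above (HSV segmentation, median filtering, morphology, edge detection plus Hough transform). All threshold values and Hough parameters are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of the four-stage furrow-detection pipeline; thresholds
# and Hough parameters below are illustrative assumptions.
import cv2
import numpy as np

def detect_crop_rows(bgr_image: np.ndarray):
    # 1) HSV segmentation: keep green vegetation, discard soil.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))  # assumed green range

    # 2) Noise removal with a median filter.
    mask = cv2.medianBlur(mask, 5)

    # 3) Morphology: erosion removes small weeds, dilation restores row bodies.
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.dilate(cv2.erode(mask, kernel, iterations=1), kernel, iterations=2)

    # 4) Edges + Hough transform: lines returned in polar form (rho, theta);
    #    accumulations on the theta axis indicate furrow directions.
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)
    return lines  # None if no line is found

# Usage: lines = detect_crop_rows(cv2.imread("maize_field.png"))
```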
Procedia PDF Downloads 231
985 Management of the Experts in the Research Evaluation System of the University: Based on National Research University Higher School of Economics Example
Authors: Alena Nesterenko, Svetlana Petrikova
Abstract:
Research evaluation is one of the most important elements of self-regulation and development for researchers, as it is an impartial and independent assessment process. The method of expert evaluations, as a scientific instrument for solving complicated non-formalized problems, is firstly a scientifically sound way to conduct an assessment that ensures maximum effectiveness of work at every step, and secondly the use of quantitative methods for evaluation, assessment of expert opinion, and collective processing of the results. These two features distinguish the method of expert evaluations from the long-known expertise widespread in many areas of knowledge. Different typical problems require different types of expert evaluation methods. Several issues arise with these methods: expert selection, management of the assessment procedure, processing of the results, and remuneration of the experts. To address these issues, an online system was created with the primary purpose of developing a versatile application for many workgroups with matching approaches to scientific work management. The online documentation assessment and statistics system allows: (i) the realization, within one platform, of the independent activities of different workgroups (e.g., expert officers, managers); (ii) the establishment of different workspaces for the corresponding workgroups, where custom user databases can be created according to particular needs; (iii) the generation of the required output documents for each workgroup; (iv) the configuration of information gathering for each workgroup (forms of assessment, tests, inventories); (v) the creation and operation of personal databases of remote users; and (vi) the setup of automatic notification through e-mail. The next stage is the development of quantitative and qualitative criteria to form a database of experts. The inventory was designed so that experts may submit not only their personal data, place of work, and scientific degree, but also keywords reflecting their expertise, academic interests, ORCID, ResearcherID, SPIN-code RSCI, Scopus AuthorID, knowledge of languages, and primary scientific publications. For each project, competition assessments are processed in accordance with the ordering party's demands, in the form of appraised inventories, commentaries (50-250 characters), and an overall review (1,500 characters) in which the expert states the absence of a conflict of interest. Evaluation is conducted as follows: as applications are added to the database, the expert officer selects experts, generally two persons per application. Experts are selected according to the keywords; this method proved to work well, unlike the OECD classifier. In the last stage, the choice of experts is approved by the supervisor, and e-mails are sent to the experts inviting them to assess the project. An expert supervisor oversees the experts' report writing, ensuring that all formalities are in place (time frame, propriety, correspondence). If the difference between assessments exceeds four points, a third evaluation is appointed. When the expert finishes work on his expert opinion, the system shows a contract marked 'new'; managers process the contract, and the expert receives an e-mail stating that the contract has been formed and is ready to be signed. Once all formalities are concluded, the expert receives remuneration for his work. The specifics of the interaction of the expert officer with other experts will be presented in the report.
Keywords: expertise, management of research evaluation, method of expert evaluations, research evaluation
Procedia PDF Downloads 208
984 Factors Impacting Geostatistical Modeling Accuracy and Modeling Strategy of Fluvial Facies Models
Authors: Benbiao Song, Yan Gao, Zhuo Liu
Abstract:
Geostatistical modeling is the key technique for reservoir characterization, and the quality of geological models greatly influences the prediction of reservoir performance, but few studies have been done to quantify the factors impacting geostatistical reservoir modeling accuracy. In this study, 16 fluvial prototype models were established to represent different degrees of geological complexity, and 6 cases ranging from 16 to 361 wells were defined to reproduce all 16 prototype models by different methodologies, including SIS, object-based, and MPFS algorithms, accompanied by different constraint parameters. A modeling accuracy ratio was defined to quantify the influence of each factor, and ten realizations were averaged to represent each accuracy ratio under the same modeling conditions and parameter associations. In total, 5,760 simulations were run to quantify the relative contribution of each factor to simulation accuracy, and the results can be used as a strategy guide for facies modeling under similar conditions. It was found that data density, geological trend, and geological complexity have a great impact on modeling accuracy. Modeling accuracy may reach up to 90% when the channel sand width reaches 1.5 times the well spacing, under any condition, with the SIS and MPFS methods. When well density is low, the contribution of the geological trend may increase the modeling accuracy from 40% to 70%, while the use of a proper variogram may make only a very limited contribution for the SIS method. This implies that when well data are dense enough to cover simple geobodies, little effort is needed to construct an acceptable model; when geobodies are complex and the data are insufficient, it is better to construct a set of robust geological trends than to rely on a reliable variogram function. For the object-based method, modeling accuracy does not increase as markedly with data density as for the SIS method, but the models maintain a rational appearance when data density is low. The MPFS method shows a similar trend to the SIS method, but the use of a proper geological trend together with a rational variogram may yield better modeling accuracy than the MPFS method alone. This implies that the geological modeling strategy for a real reservoir case needs to be optimized through evaluation of the dataset, the geological complexity, the geological constraint information, and the modeling objective.
Keywords: fluvial facies, geostatistics, geological trend, modeling strategy, modeling accuracy, variogram
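The paper does not give a formula for its modeling accuracy ratio; the sketch below assumes it is the fraction of grid cells whose simulated facies matches the prototype model, averaged over the ten realizations mentioned in the abstract. This is an illustrative assumption only.

```python
# Illustrative sketch: "modeling accuracy ratio" is assumed here to be the
# cell-wise facies match fraction, averaged over realizations.
import numpy as np

def accuracy_ratio(prototype: np.ndarray, realizations: list) -> float:
    """Average cell-wise facies match over all realizations (assumed metric)."""
    matches = [np.mean(r == prototype) for r in realizations]
    return float(np.mean(matches))

# Toy example: a 100x100 prototype facies grid and 10 noisy realizations.
rng = np.random.default_rng(0)
prototype = (rng.random((100, 100)) < 0.3).astype(int)  # 30% channel sand
realizations = []
for _ in range(10):
    r = prototype.copy()
    flip = rng.random(prototype.shape) < 0.1  # 10% of cells mis-simulated
    r[flip] = 1 - r[flip]
    realizations.append(r)
print(f"accuracy ratio: {accuracy_ratio(prototype, realizations):.2f}")  # ~0.90
```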
Procedia PDF Downloads 264
983 Feasibility of Voluntary Deep Inspiration Breath-Hold Radiotherapy Technique Implementation without Deep Inspiration Breath-Hold-Assisting Device
Authors: Auwal Abubakar, Shazril Imran Shaukat, Noor Khairiah A. Karim, Mohammed Zakir Kassim, Gokula Kumar Appalanaido, Hafiz Mohd Zin
Abstract:
Background: Voluntary deep inspiration breath-hold radiotherapy (vDIBH-RT) is an effective cardiac dose reduction technique during left breast radiotherapy. This study aimed to assess the accuracy of the implementation of the vDIBH technique among left breast cancer patients without the use of a special device such as a surface-guided imaging system. Methods: The vDIBH-RT technique was implemented among thirteen (13) left breast cancer patients at the Advanced Medical and Dental Institute (AMDI), Universiti Sains Malaysia. Breath-hold monitoring was performed based on breath-hold skin marks and laser light congruence observed on zoomed CCTV images from the control console during each delivery. The initial setup was verified using cone beam computed tomography (CBCT) during breath-hold. Each field was delivered using multiple beam segments to keep the delivery time to 20 seconds, which patients can tolerate in breath-hold. The data were analysed using an in-house MATLAB algorithm, and the PTV margin was computed based on van Herk's margin recipe. Results: The setup errors analysed from CBCT show that the population systematic errors in the lateral (x), longitudinal (y), and vertical (z) axes were 2.28 mm, 3.35 mm, and 3.10 mm, respectively. Based on the CBCT image guidance, the planning target volume (PTV) margins that would be required for vDIBH-RT using the CCTV/laser monitoring technique are 7.77 mm, 10.85 mm, and 10.93 mm in the x, y, and z axes, respectively. Conclusion: It is feasible to safely implement vDIBH-RT among left breast cancer patients without special equipment. The breath-hold monitoring technique is cost-effective, radiation-free, easy to implement, and allows real-time breath-hold monitoring.
Keywords: vDIBH, cone beam computed tomography, radiotherapy, left breast cancer
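Assuming the standard form of van Herk's margin recipe, M = 2.5 * Sigma + 0.7 * sigma, the random errors implied by the reported systematic errors and PTV margins can be back-computed, as in this short sketch. The recipe form itself is the only assumption; Sigma and M are the values reported above.

```python
# Sketch: van Herk's margin recipe is commonly written M = 2.5*Sigma + 0.7*sigma.
# The abstract reports Sigma (systematic error) and M (PTV margin) per axis;
# assuming the standard recipe, the implied random errors sigma follow.
reported = {  # axis: (Sigma [mm], PTV margin M [mm])
    "x": (2.28, 7.77),
    "y": (3.35, 10.85),
    "z": (3.10, 10.93),
}
for axis, (Sigma, M) in reported.items():
    sigma = (M - 2.5 * Sigma) / 0.7  # implied random error [mm]
    print(f"{axis}: implied random error = {sigma:.2f} mm")
```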
Procedia PDF Downloads 57
982 Prediction of Formation Pressure Using Artificial Intelligence Techniques
Authors: Abdulmalek Ahmed
Abstract:
Formation pressure is a key parameter affecting the economics and efficiency of drilling operations. Knowing the pore pressure and the parameters that affect it helps to reduce the cost of the drilling process. Many empirical models reported in the literature calculate the formation pressure from different parameters: some use only drilling parameters to estimate pore pressure, while others predict the formation pressure from log data. All of these models require an assumed trend, normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict the formation pressure, and then with only one method, or at most two. The objective of this research is to predict the pore pressure from both drilling parameters and log data, namely: weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity, and delta sonic time. Real field data are used to predict the formation pressure with five different artificial intelligence (AI) methods: artificial neural networks (ANN), radial basis functions (RBF), fuzzy logic (FL), support vector machines (SVM), and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated the formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity, reflected in its estimation of pore pressure without the need for assumed trends, in contrast to other models, which require one of two different trends (normal or abnormal pressure). Moreover, comparing the AI tools with each other indicates that SVM has the advantage in pore pressure prediction due to its fast processing speed and high performance (a high correlation coefficient of 0.997 and a low average absolute percentage error of 0.14%). Finally, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (correlation coefficient of 0.998 and average absolute percentage error of 0.17%).
Keywords: artificial intelligence (AI), formation pressure, artificial neural networks (ANN), fuzzy logic (FL), support vector machines (SVM), functional networks (FN), radial basis functions (RBF)
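A hedged sketch of the ANN branch of the approach, using scikit-learn's MLPRegressor on the seven inputs named in the abstract. The data, network size, and scaling choices are illustrative placeholders, not the study's actual model.

```python
# Illustrative ANN sketch on the seven named inputs; the data and network
# architecture here are placeholders, not the study's field data or model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Columns: weight on bit, rotary speed, ROP, mud weight,
#          bulk density, porosity, delta sonic time.
rng = np.random.default_rng(1)
X = rng.random((200, 7))                 # placeholder field records
y = 8.5 + 4.0 * X[:, 3] + 2.0 * X[:, 6]  # placeholder pore-pressure target

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0),
)
model.fit(X[:150], y[:150])
pred = model.predict(X[150:])
r = np.corrcoef(pred, y[150:])[0, 1]                    # correlation coefficient
aape = np.mean(np.abs((pred - y[150:]) / y[150:])) * 100  # avg abs % error
print(f"R = {r:.3f}, AAPE = {aape:.2f}%")
```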
Procedia PDF Downloads 149
981 Spatio-Temporal Analysis of Land Use Change and Green Cover Index
Authors: Poonam Sharma, Ankur Srivastav
Abstract:
Cities are complex and dynamic systems that constitute a significant challenge to urban planning. The increasing size of built-up areas, owing to growing population pressure and economic growth, has led to massive land use/land cover change, resulting in the loss of natural habitat and thus reducing green cover in urban areas. Urban environmental quality is influenced by several aspects, including a city's geographical configuration, the scale and nature of the human activities occurring there, and the environmental impacts generated. Cities and their sustainability are often discussed together: cities stand confronted with numerous environmental concerns as the world becomes increasingly urbanized, and they are situated in the mesh of global networks in multiple senses. A rapidly transforming urban setting plays a crucial role in changing the green areas of natural habitats. This paper examines the pattern of urban growth and measures land use/land cover change in Gurgaon, Haryana, India, through the integration of geospatial techniques. Satellite images are used to measure the spatio-temporal changes that have occurred in land use and land cover, resulting in a new cityscape. The analysis shows that drastic changes in land use have occurred, with a massive rise in built-up areas and a decrease in green cover, making the sustainability of the city an important area of concern. The massive increase in built-up area has also influenced localised temperatures and heat concentration. To enhance the decision-making process in urban planning, a detailed and real-world depiction of these urban spaces is the need of the hour. Monitoring indicators of key processes in land use and economic development is essential for evaluating policy measures.
Keywords: cityscape, geospatial techniques, green cover index, urban environmental quality, urban planning
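The abstract does not state how the green cover index is computed from the satellite images; a standard choice is NDVI, NDVI = (NIR - Red) / (NIR + Red), with green cover taken as the share of pixels above a vegetation threshold. The sketch below illustrates that assumed computation; the 0.3 threshold and the toy bands are placeholders.

```python
# Assumed green-cover computation via NDVI; threshold and bands are placeholders.
import numpy as np

def green_cover_fraction(nir: np.ndarray, red: np.ndarray,
                         threshold: float = 0.3) -> float:
    ndvi = (nir - red) / (nir + red + 1e-9)  # avoid division by zero
    return float(np.mean(ndvi > threshold))

# Toy bands standing in for two acquisition dates of the same scene.
rng = np.random.default_rng(2)
nir_a, red_a = rng.random((512, 512)) + 0.5, rng.random((512, 512)) * 0.5
nir_b, red_b = rng.random((512, 512)) + 0.2, rng.random((512, 512)) * 0.8
print("green cover, date A:", green_cover_fraction(nir_a, red_a))
print("green cover, date B:", green_cover_fraction(nir_b, red_b))
```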
Procedia PDF Downloads 278
980 Engaging Students in Learning through Visual Demonstration Models in Engineering Education
Authors: Afsha Shaikh, Mohammed Azizur Rahman, Ibrahim Hassan, Mayur Pal
Abstract:
Student engagement in learning is directly affected by the learning resources available to students, such as videos showing applications of a concept or a practical demonstration. In the engineering discipline specifically, there are many challenging concepts that can be simplified when they are connected to real-world scenarios. For this study, the concept of heat exchangers was used, as it is part of multiple engineering disciplines. To make the learning experience enjoyable and impactful, 3-D printed heat exchanger models were created for students to use while working on in-class activities and assignments. Students were encouraged to use the 3-D printed heat exchanger models to enhance their understanding of the theoretical concepts associated with their applications. To assess the effectiveness of the method, feedback was gathered from undergraduate engineering students via an anonymous electronic survey. To make the feedback more realistic, unbiased, and genuine, students spent nearly two to three weeks using the models in their in-class assignments. The impact of these tools on their learning was assessed through their performance in ungraded assignments as well as their interactive discussions with peers. 'Having to apply the theory learned in class whilst discussing with peers on a class assignment creates a relaxed and stress-free learning environment in classrooms': this feedback came from more than half of the students who took the survey, who also found the 3-D models of heat exchangers very easy to use. Amongst the many ways to enhance learning and make students more engaged through interactive models, this study sheds light on the importance of physical tools that help create a lasting mental representation in the minds of students. Moreover, in this technologically enhanced era, the concept of augmented reality was considered in this research. The E-drawings application was recommended to enhance the spatial vision of engineering students, so that they can see multiple views of detailed 3-D models and cut through their different sides and angles to visualize them properly. E-drawings could be the next tool to implement in classrooms to enhance students' understanding of engineering concepts.
Keywords: student engagement, life-long learning, visual demonstration, 3-D printed models, engineering education
Procedia PDF Downloads 115
979 New Advanced Medical Software Technology Challenges and Evolution of the Regulatory Framework in Expert Software, Artificial Intelligence, and Machine Learning
Authors: Umamaheswari Shanmugam, Silvia Ronchi
Abstract:
Software, artificial intelligence, and machine learning can improve healthcare through innovative and advanced technologies that make use of the large amount and variety of data generated during healthcare services every day. One of the significant advantages of these new technologies is the ability to gain experience and knowledge from real-world use and to improve their performance continuously. Healthcare systems and institutions can benefit significantly, because the use of advanced technologies improves the efficiency and efficacy of healthcare. Software as a medical device is stand-alone software intended to be used for one or more specific medical purposes: the diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of a disease or other health conditions; replacing or modifying a part of a physiological or pathological process; or managing information received from in vitro specimens derived from the human body, without achieving its principal intended action by pharmacological, immunological, or metabolic means. Software qualified as a medical device must comply with the general safety and performance requirements applicable to medical devices; these requirements are necessary to ensure high performance and quality and to protect patients' safety. The evolution and continuous improvement of software used in healthcare must take into account the increase in regulatory requirements, which are becoming more complex in each market. The gap between these advanced technologies and the new regulations is the biggest challenge for medical device manufacturers. Regulatory requirements can be considered a market barrier, as they can delay or obstruct a device's approval, but they are necessary to ensure performance, quality, and safety. At the same time, they can be a business opportunity if the manufacturer can define an appropriate regulatory strategy in advance. This abstract provides an overview of the current regulatory framework, the evolution of international requirements, and the standards applicable to medical device software in potential markets all over the world.
Keywords: artificial intelligence, machine learning, SaMD, regulatory, clinical evaluation, classification, international requirements, MDR, 510k, PMA, IMDRF, cyber security, health care systems
Procedia PDF Downloads 88
978 Learning Curve Effect on Materials Procurement Schedule of Multiple Sister Ships
Authors: Vijaya Dixit, Aasheesh Dixit
Abstract:
The shipbuilding industry operates in an Engineer-Procure-Construct (EPC) context. The product mix of a shipyard comprises various types of ships, such as bulk carriers, tankers, barges, coast guard vessels, submarines, etc. Each order is unique, based on the type of ship and customized requirements, which are engineered into the product right from the design stage. Thus, to execute every new project, a shipyard needs to upgrade its production expertise. As a result, over the long run, holistic learning occurs across different types of projects, which contributes to the knowledge base of the shipyard. Simultaneously, in the short term, during execution of a project comprising multiple sister ships, repetition of similar tasks leads to learning at the activity level. This research aims to capture both kinds of learning in a shipyard and incorporate the learning curve effect into project scheduling and materials procurement to improve project performance. The extant literature supports the existence of such learning in organizations. In shipbuilding, there are sequences of similar activities which are expected to exhibit learning curve behavior, for example, the nearly identical structural sub-blocks which are successively fabricated, erected, and outfitted with piping and electrical systems. A learning curve representation can model not only a decrease in the mean completion time of an activity, but also a decrease in the uncertainty of the activity duration. Sister ships have similar material requirements, and the same supplier base supplies materials for all the sister ships within a project. On the one hand, this provides an opportunity to reduce transportation cost by batching the order quantities of multiple ships; on the other hand, it increases the inventory holding cost at the shipyard and the risk of obsolescence. Further, due to the learning curve effect, the production schedule of each subsequent ship gets compressed; thus, the material requirement schedule of every subsequent ship differs from that of its predecessor. As more and more ships are constructed, compressed production schedules increase the possibility of batching the orders of sister ships. This work aims at integrating materials management with the project scheduling of long-duration projects for the manufacturing of multiple sister ships. It incorporates the learning curve effect on progressively compressed material requirement schedules and addresses the above trade-off between transportation cost and inventory holding and shortage costs while satisfying the budget constraints of the various stages of the project. The activity durations and item lead times are not crisp and are available in the form of probabilistic distributions. A Stochastic Mixed Integer Programming (SMIP) model is formulated and solved using an evolutionary algorithm; its output provides the ordering dates and the degree of order batching for all types of items. A sensitivity analysis determines the threshold number of sister ships required in a project to leverage the advantage of the learning curve effect in materials management decisions. This analysis will help materials managers gain insight into when, and to what degree, it is beneficial to treat a multiple-ship project as an integrated one by batching the order quantities, and when, and to what degree, to practice distinct procurement for individual ships.
Keywords: learning curve, materials management, shipbuilding, sister ships
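A short sketch of the classic Wright learning-curve model that underlies the schedule compression described above: the duration of a repeated activity on the n-th sister ship is T_n = T_1 * n^b with b = log2(learning rate). The 90% learning rate and baseline duration are illustrative assumptions, not the paper's parameters.

```python
# Wright learning-curve sketch: T_n = T_1 * n**b, b = log2(learning rate).
# The 90% rate and 100-day baseline are illustrative assumptions.
import math

def activity_duration(t_first: float, unit: int, learning_rate: float = 0.90) -> float:
    b = math.log2(learning_rate)   # b < 0, so durations shrink with repetition
    return t_first * unit ** b

# Example: outfitting a structural sub-block takes 100 days on the first ship.
for ship in range(1, 5):
    print(f"ship {ship}: {activity_duration(100.0, ship):.1f} days")
# ship 1: 100.0, ship 2: 90.0, ship 3: 84.6, ship 4: 81.0 (approx.)
```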
Procedia PDF Downloads 502
977 Effectiveness of Research Promotion Organizations in Higher Education and Research (ESR)
Authors: Jonas Sanon
Abstract:
The valorization of research is becoming a transversal instrument linking different sectors (academic, public, and industrial). The practice of valorization seems to impact innovation techniques within companies, where we often find the implementation of industrial agreements for training through research (CIFRE), continuous training programs for employees, collaborations and partnerships around joint research, and R&D laboratories focused on the needs of companies to improve or develop more efficient innovations. Furthermore, many public initiatives to support innovation and technology transfer have been developed at the international, European, and national levels, with significant budget allocations. In this work, we therefore tried to analyze the way in which research transfer structures are evaluated within the Saclay ecosystem. The University Paris-Saclay is one of the leading French universities; it is made up of 10 university components and more than 275 laboratories, and it partners with the largest French research centers. This work focused mainly on how evaluations affect research transfer structures, how evaluations are conducted, and what the managers of research transfer structures think about assessments. From the interviews conducted, it appears that the evaluations do not have a significant impact on the qualitative aspect of research and innovation, but rather have a directive aspect, determining whether or not structures receive the financial resources to develop certain research work. Since themes are sometimes directed and influenced by the market, some researchers might try to focus their research and experimentation on themes that are not necessarily their areas of interest, simply to comply with thematic calls for proposals. The field studies also outline the primary indicators used to assess the effectiveness of valorization structures: "the number of start-ups generated, the license agreements signed, the structure's patent portfolio, and the innovative products developed from public research". Finally, after mapping the actors, it became clear that the ecosystem of the University of Paris-Saclay benefits from a richness allowing it to better valorize its research in relation to the three categories of actors it has (internal, external, and transversal), united and linked by relationships of proximity and sharing, and endowed with a real opportunity for open innovation.
Keywords: research valorization, technology transfer, innovation, evaluation, impacts and performances, innovation policy
Procedia PDF Downloads 74
976 Exergetic Optimization on Solid Oxide Fuel Cell Systems
Authors: George N. Prodromidis, Frank A. Coutelieris
Abstract:
Biogas can currently be considered an alternative option for electricity production, mainly due to its high energy content (as a hydrocarbon-rich source), its renewable status, and its relatively low utilization cost. Solid Oxide Fuel Cell (SOFC) stacks convert a fuel's chemical energy to electricity with high efficiency and offer significant advantages in fuel flexibility combined with lower emission rates, especially when utilizing biogas. Electricity production from biogas constitutes a composite problem which calls for an extensive parametric analysis over numerous dynamic variables. The main scope of the presented study is to propose a detailed thermodynamic model for the optimization of the operation of SOFC-based power plants, based on fundamental thermodynamics and energy and exergy balances. This model, named THERMAS (THERmodynamic MAthematical Simulation model), mathematically simulates each individual process during electricity production for different case studies that represent real-life operational conditions. THERMAS also offers the opportunity to choose from a great variety of values for each operational parameter individually, thus allowing for studies within unexplored and experimentally impossible operational ranges. Finally, THERMAS innovatively incorporates a specific criterion, derived from the extensive energy analysis, to identify the optimal scenario per simulated system in exergy terms. Accordingly, several dynamic parameters as well as several biogas mixture compositions have been taken into account, to cover all possible cases. Regarding the optimization process in terms of the innovative OPF (OPtimization Factor) presented here, this research study reveals that systems supplied with low-methane fuels can be comparable to those supplied with pure methane. To conclude, such an innovative simulation model points toward the optimal design of a SOFC-stack-based system, in the direction of the commercialization of systems utilizing biogas.
Keywords: biogas, exergy, efficiency, optimization
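The abstract does not disclose how the OPF criterion is defined; exergy-based criteria of this kind are conventionally built on the exergetic efficiency of the plant and the exergy destruction balance, shown here for reference only.

```latex
% Conventional exergetic efficiency and exergy destruction of a power system
% (the paper's OPF criterion itself is not disclosed in the abstract):
\varepsilon_{\mathrm{ex}} = \frac{\dot{W}_{\mathrm{el}}}{\dot{E}x_{\mathrm{fuel}}},
\qquad
\dot{E}x_{\mathrm{dest}} = \sum_{\mathrm{in}} \dot{E}x \;-\; \sum_{\mathrm{out}} \dot{E}x
```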
Procedia PDF Downloads 370
975 Unveiling the Potential of MoSe₂ for Toxic Gas Sensing: Insights from Density Functional Theory and Non-equilibrium Green’s Function Calculations
Authors: Si-Jie Ji, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang
Abstract:
With the rapid development of industrialization and urbanization, air pollution poses significant global environmental challenges, contributing to acid rain, global warming, and adverse health effects. It is therefore necessary to monitor the concentrations of toxic gases in the atmospheric environment in real time and to deploy cost-effective gas sensors capable of detecting their emissions. In this study, we systematically investigated the sensing capabilities of two-dimensional MoSe₂ for seven key environmental gases (NO, NO₂, CO, CO₂, SO₂, SO₃, and O₂) using density functional theory (DFT) and non-equilibrium Green's function (NEGF) calculations. We also investigated the impact of H₂O as an interfering gas. Our results indicate that the MoSe₂ monolayer is thermodynamically stable and exhibits strong gas-sensing capabilities. The calculated adsorption energies indicate that these gases can adsorb stably on MoSe₂, with SO₃ exhibiting the strongest adsorption energy (-0.63 eV). Electronic structure analysis, including the projected density of states (PDOS) and Bader charge analysis, demonstrates significant changes in the electronic properties of MoSe₂ upon gas adsorption, affecting its conductivity and sensing performance. We find that oxygen (O₂) adsorption notably influences the deformation of MoSe₂. To comprehensively understand the potential of MoSe₂ as a gas sensor, we used the NEGF method to assess the electronic transport properties of MoSe₂ under gas adsorption, evaluating current-voltage (I-V) and resistance-voltage (R-V) characteristics and transmission spectra to determine sensitivity, selectivity, and recovery time relative to pristine MoSe₂. Sensitivity, selectivity, and recovery time were analyzed at a bias voltage of 1.7 V, showing the excellent performance of MoSe₂ in detecting SO₃ among the studied gases. The pronounced changes in electronic transport behavior induced by SO₃ adsorption confirm MoSe₂'s strong potential as a high-performance gas-sensing material. Overall, this theoretical study provides new insights into the development of high-performance gas sensors, demonstrating the potential of MoSe₂ as a gas-sensing material, particularly for gases like SO₃.
Keywords: density functional theory, gas sensing, MoSe₂, non-equilibrium Green's function, SO
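The adsorption energies quoted above (e.g., -0.63 eV for SO₃) are conventionally computed from DFT total energies as follows; the sign convention, where more negative means stronger binding, is an assumption consistent with the reported values.

```latex
% Conventional adsorption-energy definition for a gas on a MoSe2 monolayer:
E_{\mathrm{ads}} = E_{\mathrm{MoSe_2+gas}} - E_{\mathrm{MoSe_2}} - E_{\mathrm{gas}}
```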
Procedia PDF Downloads 22
974 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks
Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi
Abstract:
Brain-computer interfaces are a growing research field producing many implementations that find use in different areas and serve both research and practical purposes. Despite the popularity of implementations using non-invasive neuroimaging methods, radical improvement of the channel bandwidth, and thus of the decoding accuracy, is only possible with invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, whose effective analysis requires machine learning methods able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that learn representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions, and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out during which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. The multichannel ECoG signals were then used to track the finger movement trajectory characterized by the accelerometer signal. This was carried out both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous, non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained using 1-second segments of ECoG data from the training dataset as input. To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After optimization of the hyperparameters and training, the deep learning model allowed reasonably accurate causal decoding of finger movement, with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach was able to achieve only 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than to proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that the combination of a minimally invasive neuroimaging technique such as ECoG with advanced machine learning approaches allows movement to be decoded with high accuracy. Such a setup provides the means to control devices with a large number of degrees of freedom, as well as for exploratory studies of the complex neural processes underlying movement execution.
Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex
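A minimal PyTorch sketch of the kind of decoder described above: a 1-D convolutional network mapping 1-second multichannel ECoG windows to an instantaneous kinematic value, evaluated with Pearson's r. The channel count, sampling rate, and layer sizes are illustrative assumptions, not the paper's architecture.

```python
# Illustrative 1-D CNN decoder; montage size, sampling rate and layer sizes
# are assumptions, not the study's actual architecture.
import torch
import torch.nn as nn

N_CHANNELS, FS = 32, 1000          # assumed ECoG montage and sampling rate

model = nn.Sequential(
    nn.Conv1d(N_CHANNELS, 16, kernel_size=64, stride=8),  # temporal features
    nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=16, stride=4),
    nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(1),              # scalar kinematic output per window
)

def pearson_r(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Decoding accuracy: Pearson r between decoder output and accelerometer."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / (a.norm() * b.norm())

x = torch.randn(8, N_CHANNELS, FS)  # batch of eight 1 s ECoG segments
y_true = torch.randn(8)             # accelerometer targets (placeholder)
y_pred = model(x).squeeze(-1)
loss = nn.functional.mse_loss(y_pred, y_true)
loss.backward()                     # one training step's gradients
print("r =", pearson_r(y_pred.detach(), y_true).item())
```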
Procedia PDF Downloads 177
973 The Challenges of Digital Crime Nowadays
Authors: Bendes Ákos
Abstract:
Digital evidence will be the most widely used type of evidence in the future. With the development of the modern world, more and more new types of crime have evolved and transformed; for this reason, it is extremely important to examine these types of crime in order to get a comprehensive picture of them, with which we can help the authorities' work. As early as 1865, with early technologies, people were able to forge a picture of a quality that is hard to recognize even today. With the help of today's technology, authorities receive a lot of false evidence. Officials are not able to process such a large amount of data, nor do they have the necessary technical knowledge to get a true picture of the authenticity of a given piece of evidence. The digital world has many dangers. Unfortunately, we live in an age where we must protect everything digitally: our phones, our computers, our cars, and all the smart devices present in our personal lives. This is not only a burden on us, since companies, state institutions, and public utilities are also forced to do so. The training of specialists and experts is essential so that the authorities can handle the incoming digital evidence at some level. When analyzing evidence, it is important to be able to examine it from the moment it is created. Establishing authenticity is a very important issue during official procedures. After the proper acquisition of the evidence, it is essential to store it safely and use it professionally; otherwise, it will not have sufficient probative value, and in case of doubt, the court will always decide in favor of the defendant. One of the most common problems in the world of digital data and evidence is doubt, which is why it is extremely important to examine the above-mentioned problems. The most effective way to counter digital crime is to prevent it, for which proper education and knowledge are essential. The aim is to present the dangers inherent in the digital world and the new types of digital crime. After comparing Hungarian investigative techniques with international practice, modernizing proposals will be given. A sufficiently stable yet flexible legislation is needed that can keep up with the rapid changes in the world and provide an appropriate framework rather than regulate after the fact. It is also important to be able to distinguish between digital and digitalized evidence, as their degrees of probative force differ greatly. The aim of the research is to promote effective international cooperation and uniform legal regulation in the world of digital crime.
Keywords: digital crime, digital law, cyber crime, international cooperation, new crimes, skepticism
Procedia PDF Downloads 63
972 Virtual Approach to Simulating Geotechnical Problems under Both Static and Dynamic Conditions
Authors: Varvara Roubtsova, Mohamed Chekired
Abstract:
Recent studies on the numerical simulation of geotechnical problems show the importance of considering the soil micro-structure. At this scale, soil is a discrete particle medium in which the particles can interact with each other and with water flow under external forces, structural loads, or natural events. This paper presents research conducted in a virtual laboratory named SiGran, developed at IREQ (Institut de recherche d'Hydro-Québec) for the purpose of investigating a broad range of problems encountered in geotechnics. Using the Discrete Element Method (DEM), SiGran simulates granular materials directly by applying Newton's laws to each particle. The water flow is simulated using the Marker and Cell (MAC) method to solve the full form of the Navier-Stokes equations for an incompressible viscous liquid. In this paper, examples of numerical simulations and their comparisons with real experiments have been selected to show the complexity of geotechnical research at the micro level. These examples describe transient flows into a porous medium, the interaction of particles in a viscous flow, the compaction of saturated and unsaturated soils, and the phenomenon of liquefaction under seismic load. They also provide an opportunity to present SiGran's capacity to compute the distribution and evolution of energy by type (particle kinetic energy, particle internal elastic energy, energy dissipated by friction or as a result of viscous interaction within the flow, and so on). This work also includes first attempts to apply micro-scale discrete results at a macro continuum level, where the Smoothed Particle Hydrodynamics (SPH) method was used to solve the system of governing equations, with the material behavior equation based on the results of simulations carried out at the micro level. The possibility of combining the three methods (DEM, MAC, and SPH) is discussed.
Keywords: discrete element method, marker and cell method, numerical simulation, multi-scale simulations, smoothed particle hydrodynamics
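A toy sketch of the DEM core idea used in SiGran: Newton's second law integrated per particle, here with a linear spring-dashpot contact force. The stiffness, damping, time step, and particle properties are illustrative assumptions, not SiGran's actual contact law or parameters.

```python
# Toy DEM sketch: explicit integration of Newton's second law per particle
# with an assumed linear spring-dashpot normal contact force.
import numpy as np

K, C, DT, MASS, RADIUS = 1e4, 5.0, 1e-5, 0.01, 0.005  # assumed parameters

def dem_step(pos: np.ndarray, vel: np.ndarray):
    """One explicit integration step for a set of spherical particles (2-D)."""
    n = len(pos)
    force = np.zeros_like(pos)
    force[:, 1] -= MASS * 9.81                       # gravity
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = 2 * RADIUS - dist
            if overlap > 0:                          # particles in contact
                normal = d / dist
                v_rel = np.dot(vel[j] - vel[i], normal)
                fn = (K * overlap - C * v_rel) * normal  # spring-dashpot
                force[i] -= fn
                force[j] += fn
    vel = vel + force / MASS * DT                    # Newton's second law
    pos = pos + vel * DT
    return pos, vel

# Two particles dropped slightly overlapping: contact pushes them apart.
pos = np.array([[0.0, 0.0], [0.008, 0.0]])
vel = np.zeros_like(pos)
for _ in range(1000):
    pos, vel = dem_step(pos, vel)
print(pos)
```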
Procedia PDF Downloads 302971 Affective Transparency in Compound Word Processing
Authors: Jordan Gallant
Abstract:
In the compound word processing literature, much attention has been paid to the relationship between a compound’s denotational meaning and that of its whole-word morphological constituents, a relationship referred to as ‘semantic transparency’. However, the parallel relationship between a compound’s connotation and that of its constituents has not been addressed at all. For instance, while a compound like ‘painkiller’ might be semantically transparent, it is not ‘affectively transparent’: both constituents have primarily negative connotations, while the whole compound has a positive one. This paper investigates the role of affective transparency in compound processing using two methodologies commonly employed in this field: a lexical decision task and a typing task. The critical stimuli were 112 English bi-constituent compounds that differed in the affective transparency of their constituents. Of these, 36 stimuli contained constituents with connotations similar to the compound (e.g., ‘dreamland’), 36 contained constituents with more positive connotations (e.g., ‘bedpan’), and 36 contained constituents with more negative connotations (e.g., ‘painkiller’). The connotations of the whole-word constituents and the compounds were operationalized via valence ratings taken from an off-line ratings database. In Experiment 1, compound stimuli and matched non-word controls were presented visually to participants, who were asked to indicate whether each was a real word of English. Response times and accuracy were recorded. In Experiment 2, participants typed compound stimuli presented to them visually. Individual keystroke response times and typing accuracy were recorded. The results of both experiments provided positive evidence that compound processing is influenced by affective transparency. In Experiment 1, compounds in which both constituents had more negative connotations than the compound itself were responded to significantly more slowly than compounds in which the constituents had similar or more positive connotations. Typed responses from Experiment 2 showed that inter-keystroke intervals at the morphological constituent boundary were significantly longer when the connotation of the head constituent was either more positive or more negative than that of the compound. The interpretation of this finding is discussed in the context of previous compound typing research. Taken together, these findings suggest that affective transparency plays a role in the recognition, storage, and production of English compound words. This study provides a promising first step in a new direction for research on compound words.Keywords: compound processing, semantic transparency, typed production, valence
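To make the three stimulus conditions concrete, here is a small Python sketch that assigns a compound to a condition by comparing the mean valence of its constituents with the valence of the whole word. The valence values and the 0.5-point threshold are invented for illustration; the study drew its ratings from an off-line database.

```python
# Toy classification of compounds into the three affective-transparency
# conditions described above. Valence values (1-9 scale) are hypothetical.
valence = {
    "dream": 7.0, "land": 6.0, "dreamland": 7.5,
    "bed": 6.5, "pan": 5.5, "bedpan": 3.0,
    "pain": 2.0, "killer": 2.5, "painkiller": 6.5,
}

def condition(c1, c2, compound, threshold=0.5):
    """Compare mean constituent valence with whole-compound valence."""
    diff = (valence[c1] + valence[c2]) / 2 - valence[compound]
    if abs(diff) <= threshold:
        return "similar connotation"
    return "constituents more positive" if diff > 0 else "constituents more negative"

for c1, c2, comp in [("dream", "land", "dreamland"),
                     ("bed", "pan", "bedpan"),
                     ("pain", "killer", "painkiller")]:
    print(comp, "->", condition(c1, c2, comp))
```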
Procedia PDF Downloads 127970 Spatial Assessment of Creek Habitats of Marine Fish Stock in Sindh Province
Authors: Syed Jamil H. Kazmi, Faiza Sarwar
Abstract:
The Indus delta in Sindh Province forms the largest creeks zone of Pakistan. The Sindh coast starts at the mouth of the Hab River and terminates at the Sir Creek area. In this paper, we consider the major creeks from the site of Bin Qasim Port in Karachi to the jetty of Keti Bunder in Thatta District. A general decline in the mangrove forest has been observed over the span of the last 25 years. Unprecedented human interventions have badly damaged the creek habitats, including haphazard urban development, industrial and sewage disposal, illegal cutting of mangrove forests, and a reduced and inconsistent freshwater flow, mainly from the Jhang and Indus rivers. These activities not only harm the creek habitats but have also affected the fish stock substantially. Fishing is the main livelihood of the coastal people, but under the above-mentioned threats the fishery is under enormous pressure, and catches have resulted in unchecked overexploitation of fish resources. This pressure becomes almost unbearable when combined with deleterious fishing methods, an uncontrolled fleet size, increasing trash and by-catch of juveniles, and illegal mesh sizes. Along with these anthropogenic interventions, the study area lies in the red zone for tropical cyclones and active seismicity, which cause floods and sea intrusion, damage the mangrove forests, and devastate the fish stock. In order to sustain the natural resources of the Indus creeks, this study was initiated with the support of FAO, WWF and NIO; its main purpose was to develop a geospatial dataset for fish stock assessment. The study was spread over a year (2013-14) on a monthly basis and mainly included a detailed fish stock survey, water analysis, and several other environmental analyses. The environmental analysis also included a habitat classification of the study area, carried out through remote sensing techniques over a 22-year time series (1992-2014). Furthermore, out of 252 species collected, fifteen species from the estuarine and marine groups were short-listed, and their weight, health and growth at each creek were measured in a GIS database through the SPSS system. Habitat suitability analysis was then conducted by deriving surface topography and aspect through different GIS techniques. The output variables were overlaid in the GIS system to measure creek productivity, yielding the following classes: extremely productive, highly productive, productive, moderately productive and less productive. This study demonstrates the use of geospatial tools for evaluating fisheries resources and mapping risk zones in creek habitats. It also shows that geospatial technologies are highly effective for identifying areas of high environmental risk in the Sindh creeks. The study clearly found that creeks with high rugosity are more productive than creeks with low rugosity. The study area has immense potential to boost the economy of Pakistan in terms of fish exports if geospatial techniques are implemented instead of conventional ones.Keywords: fish stock, geo-spatial, productivity analysis, risk
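Since rugosity is reported here as the strongest correlate of creek productivity, the following Python sketch shows one common way to estimate a rugosity index from a gridded elevation surface: the mean ratio of true facet area to flat cell area. The synthetic grid and cell size are placeholders for the rasters used in the study, and this is only one of several rugosity definitions in use.

```python
import numpy as np

# Rugosity as the mean surface-area ratio of a gridded elevation model.
# A value of 1.0 means perfectly flat; larger values mean rougher terrain.
rng = np.random.default_rng(1)
z = rng.normal(0.0, 0.5, (50, 50)).cumsum(axis=0)  # synthetic elevations [m]
cell = 10.0                                        # raster cell size [m]

def rugosity(z, cell):
    dzdy, dzdx = np.gradient(z, cell)              # slope components
    facet = np.sqrt(1.0 + dzdx**2 + dzdy**2)       # tilted area / flat area per cell
    return facet.mean()

print("rugosity index:", round(rugosity(z, cell), 3))
```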
Procedia PDF Downloads 245969 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for a Class- Unbalanced Data of Diabetes Risk Groups
Authors: Lily Ingsrisawang, Tasanee Nacharoen
Abstract:
Introduction: The problems of unbalanced data sets commonly appear in real-world applications. Due to unequal class distributions, many studies have found that the performance of existing classifiers tends to be biased towards the majority class. Nonparametric discriminant analysis based on k-nearest neighbors is one method that has been proposed for classifying unbalanced classes with good performance. Hence, the methods of discriminant analysis are of interest to us for investigating misclassification error rates on class-imbalanced data covering three diabetes risk groups. Objective: The purpose of this study was to compare the classification performance of parametric and nonparametric discriminant analysis in a three-class classification application to class-imbalanced data on diabetes risk groups. Methods: Data on 599 staff members from a health project in a government hospital in Bangkok were obtained for the classification problem. The staff were diagnosed into one of three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, with the variables diabetes risk group, age, gender, cholesterol, and BMI, were analyzed and bootstrapped to 50 and 100 samples of 599 observations each for additional estimation of the misclassification error rate. Each data set was examined for departures from multivariate normality and for equality of the covariance matrices of the three risk groups. Both the original data and the bootstrap samples showed non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors discriminant function were fitted over the 50 and 100 bootstrap samples and applied to the original data. In finding the optimal classification rule, the prior probabilities were set either to equal proportions (0.33:0.33:0.33) or to unequal proportions with three choices: (0.90:0.05:0.05), (0.80:0.10:0.10), or (0.70:0.15:0.15). Results: The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k = 3 or k = 4 and prior probabilities of {non-risk:risk:diabetic} set to {0.90:0.05:0.05} or {0.80:0.10:0.10} gave the smallest misclassification error rate. Conclusion: The k-nearest neighbors approach is suggested for classifying three-class-imbalanced data on diabetes risk groups.Keywords: error rate, bootstrap, diabetes risk groups, k-nearest neighbors
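A sketch of this model comparison in Python with scikit-learn, run on synthetic stand-in data with the study's 90:5:5 class split; the real study used age, gender, cholesterol and BMI. Note one simplification: scikit-learn's k-NN classifier has no prior-probability argument, so the priors are applied only to the two parametric models here.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the 599-case, three-class-imbalanced data set.
rng = np.random.default_rng(42)
n = 599
y = rng.choice([0, 1, 2], size=n, p=[0.90, 0.05, 0.05])
X = rng.normal(loc=y[:, None] * 1.5, scale=1.0, size=(n, 4))

priors = [0.90, 0.05, 0.05]  # one of the prior choices tested in the study
models = {
    "LDA": LinearDiscriminantAnalysis(priors=priors),
    "QDA": QuadraticDiscriminantAnalysis(priors=priors),
    "3-NN": KNeighborsClassifier(n_neighbors=3),
}

n_boot = 50
for name, model in models.items():
    errors = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # bootstrap resample
        oob = np.setdiff1d(np.arange(n), idx)     # out-of-bag cases as test set
        model.fit(X[idx], y[idx])
        errors.append((model.predict(X[oob]) != y[oob]).mean())
    print(f"{name}: mean misclassification rate = {np.mean(errors):.3f}")
```

Evaluating on the out-of-bag cases of each bootstrap resample gives an error estimate that is less optimistic than resubstitution on the training data.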
Procedia PDF Downloads 435968 The Effects of Advisor Status and Time Pressure on Decision-Making in a Luggage Screening Task
Authors: Rachel Goh, Alexander McNab, Brent Alsop, David O'Hare
Abstract:
In a busy airport, the decision whether to take passengers aside and search their luggage for dangerous items can have important consequences. If an officer fails to search and stop a bag containing a dangerous object, a life-threatening incident might occur. But stopping a bag unnecessarily means that the officer might lose time searching it and face an angry passenger. Passengers’ bags, moreover, are often cluttered with personal belongings of varying shapes and sizes. It can be difficult to determine what is dangerous and what is not, especially when decisions must be made quickly under busy flight schedules. Additionally, the decision to search bags is often made with input from the other officers on duty. This scenario raises several questions: 1) Past findings suggest that humans rely more on an automated aid when under time pressure in a visual search task, but does this translate to human-human reliance? 2) Are humans more likely to agree with another person if that person is assumed to be an expert rather than a novice in these ambiguous situations? In the present study, forty-one participants performed a simulated luggage-screening task. They were partnered with advisors of two different statuses (expert vs. novice) but of equal accuracy (90% correct). Participants made two choices on each trial: a first choice with no advisor input, and a second choice after advisor input. The second choice had to be made within either 2 seconds or 8 seconds; failure to do so resulted in a long time-out period. Under the 2-second time pressure, participants were more likely to abandon their own first choice and agree with the expert advisor, regardless of whether the expert was right or wrong, but especially when the expert suggested that the bag was safe. The findings indicate a tendency for people to assume less responsibility for their decisions and defer to their partner, especially when a quick decision is required. This over-reliance on others’ opinions might have negative consequences in real life, particularly when the judgments relied on are fallible human ones. More awareness is needed of how a stressful environment may influence reliance on others’ opinions, and better techniques are needed for making sound decisions under high stress and time pressure.Keywords: advisors, decision-making, time pressure, trust
Procedia PDF Downloads 173967 Relationship-Centred Care in Cross-Linguistic Medical Encounters
Authors: Nami Matsumoto
Abstract:
This study explores patients’ experiences of cross-linguistic medical encounters and their views of the language support received therein, with a particular focus on Japanese-English cases. The aim is to investigate, from a Japanese perspective and through comparison with that of English speakers, why a spouse is so frequently used as a communication mediator. The study conducts an empirical qualitative analysis of informants’ accounts. A total of 31 informants who had experienced Japanese-English cross-linguistic medical encounters were recruited in Australia and Japan for semi-structured in-depth interviews: 15 English speakers and 16 Japanese speakers. To gain further insight into the collected data, additional interviews were held with 4 Australian doctors who are familiar with using interpreters. The study was approved by the Australian National University Human Research Ethics Committee, and written consent to participate was obtained from all participants. The interviews lasted up to an hour or more. They were audio-recorded and subsequently transcribed by the author, who also translated the Japanese transcriptions into English. Analysis of the interview data found that patients value relationships in communication. In particular, Japanese informants with an English-speaking spouse value trust-based communication interventions by that spouse, regardless of the spouse’s language proficiency. In Australia, health care interpreters are required to abide by the national code of ethics for interpreters. The Code defines the interpreter’s role exclusively as language rendition and enshrines the tenets of accuracy, confidentiality and professional role boundaries. However, the analysis found that an interpreter who strictly complies with the Code sometimes fails to convey the real intentions of the patient and their doctor. The findings suggest that interpreters should not be detached from the context and should engage more with the needs of patients, which are not always communicated when an interpreter simply follows a professional code of ethics. The concept of relationship-centred care should be incorporated into the professional practice of health care interpreters.Keywords: health care, Japanese-English medical encounters, language barriers, trust
Procedia PDF Downloads 264966 A Single Cell Omics Experiments as Tool for Benchmarking Bioinformatics Oncology Data Analysis Tools
Authors: Maddalena Arigoni, Maria Luisa Ratto, Raffaele A. Calogero, Luca Alessandri
Abstract:
The presence of tumor heterogeneity, in which distinct cancer cells exhibit diverse morphological and phenotypic profiles, including gene expression, metabolism, and proliferation, poses challenges for molecular prognostic markers and for classifying patients for targeted therapies. Understanding the causes and progression of cancer requires research aimed at characterizing this heterogeneity, which evolving single-cell sequencing technologies can facilitate. However, analyzing single-cell data requires computational methods that often lack objective validation, so benchmarking datasets are needed to provide a controlled environment for validating bioinformatics tools in single-cell oncology. Generating such benchmarks is expensive, so the datasets used are typically sourced from publicly available experiments, which often lack comprehensive cell annotation; this limitation undermines their accuracy and effectiveness as benchmarking tools. To address this issue, we introduce benchmark omics experiments designed to evaluate how well bioinformatics tools depict the heterogeneity in single-cell tumor experiments. We conducted single-cell RNA sequencing on six lung cancer cell lines that display resistant clones upon treatment of EGFR-mutated tumors and are characterized by the driver genes ROS1, ALK, HER2, MET, KRAS, and BRAF. These driver genes are associated with downstream networks controlled by EGFR mutations, such as JAK-STAT, PI3K-AKT-mTOR, and MEK-ERK. The experiment also featured an EGFR-mutated cell line. Using the 10XGenomics platform with cellplex technology, we analyzed the seven cell lines together with a pseudo-immunological microenvironment consisting of PBMC cells labeled with the Biolegend TotalSeq™-B Human Universal Cocktail (CITEseq). This technology allowed each cell line to be labeled independently and enabled single-cell analysis of the pooled seven cell lines plus the pseudo-microenvironment. The data generated from these experiments are available as part of an online tool that allows users to define cell heterogeneity and produces count tables as output. The tool provides the cell-line derivation of each cell, as well as cell annotations for the pseudo-microenvironment based on the CITEseq data, curated by an experienced immunologist. Additionally, we created a range of pseudo-tumor tissues using different ratios of the aforementioned cells embedded in matrigel. These tissues were analyzed on the 10XGenomics (FFPE samples) and Curio Bioscience (fresh frozen samples) platforms for spatial transcriptomics, further expanding the scope of our benchmark experiments. These benchmark experiments provide a unique opportunity to evaluate the performance of bioinformatics tools for detecting and characterizing tumor heterogeneity at the single-cell level. Overall, they offer a controlled and standardized environment for assessing the accuracy and robustness of such tools, which can ultimately lead to more precise and effective cancer diagnosis and treatment.Keywords: single cell omics, benchmark, spatial transcriptomics, CITEseq
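The cell-line annotation that such independent labeling enables can be sketched as a simple tag-demultiplexing step: each cell is assigned to the multiplexing tag with the clearest count signal. The following Python toy is only a schematic of that idea; tag names, the margin threshold, and the simulated counts are all invented and do not reflect the released dataset or the 10XGenomics pipeline.

```python
import numpy as np

# Toy demultiplexing of pooled cell lines from multiplexing-tag counts:
# assign each cell to its strongest tag, flagging ambiguous cells.
tags = ["ROS1", "ALK", "HER2", "MET", "KRAS", "BRAF", "EGFR"]
rng = np.random.default_rng(7)
true_line = rng.integers(0, len(tags), size=1000)
counts = rng.poisson(5, (1000, len(tags)))                    # background noise
counts[np.arange(1000), true_line] += rng.poisson(80, 1000)   # true tag signal

# Rough per-cell normalisation before comparing tags.
clr = np.log1p(counts / np.exp(np.log1p(counts).mean(axis=1, keepdims=True)))
best = clr.argmax(axis=1)
second = np.sort(clr, axis=1)[:, -2]
confident = clr[np.arange(1000), best] - second > 1.0         # margin rule
labels = np.where(confident, best, -1)                        # -1 = unassigned

assigned = labels >= 0
print("assignment accuracy:", (labels[assigned] == true_line[assigned]).mean())
```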
Procedia PDF Downloads 117965 New Public Management at Public Administration in Bangladesh: An Exploratory Study
Authors: Biback Das
Abstract:
New Public Management (NPM) is a phenomenal tool used in public administration in different countries to enhance administrative capacity. Since the 1980s, NPM has focused primarily on modernizing the public sector. From the beginning, many developed countries such as the UK, New Zealand, Australia, and the USA applied it to modernize their administrations. By the 1990s, it had been applied in many developing countries as well. This study describes the real situation of NPM-based administration. The Government of Bangladesh has undertaken many projects to reform the public sector under NPM. Many development agencies, such as the UN, UNDP, World Bank, and Asian Development Bank, along with many developed countries, have also invested in and prescribed NPM-based reforms to restructure the public sector so that it can maximize efforts to provide better services. This study examines the many factors that affect public administration in Bangladesh and assesses its endeavor to adopt NPM. Although the government has taken initiatives to implement NPM-inspired reform, these have not been implemented effectively enough to bring about positive change in line with NPM objectives. The study mainly examines several initiatives in Bangladesh that show the influence of NPM, as well as the drawbacks that have limited their success. It also identifies the efforts of development agencies that provide funds to support NPM-based projects, along with the specific conditions they prescribe for obtaining funding. To establish effective public management and follow the NPM model, Bangladesh therefore needs an institutional framework, a sound rule of law, a proper structure, an effective civil service system, and appropriate checks and balances to restructure the public sector, with the help of donor agencies. The Government of Bangladesh has recently worked to enhance the capabilities of its public administration. Moreover, this study identifies how strategies are designed, programs formulated, and implementation carried out in various sectors such as education and health, and how setbacks can be reduced through smooth functioning. The paper also assesses the influence of projects such as PPP (Public-Private Partnership) that work with private organizations for smooth service delivery. Accordingly, the paper briefly reviews how NPM is applied in a global context, the initiatives taken, and their consequences in the Bangladesh context.Keywords: new public management, capacity building, conditionalities, service delivery, public-private-partnership
Procedia PDF Downloads 144964 The Effect of Durability and Pathogen Strains on the Wheat Induced Resistance against Zymoseptoria tritici as a Response to Paenibacillus sp. Strain B2
Authors: E. Samain, T. Aussenac, D. van Tuinen, S. Selim
Abstract:
Plant growth-promoting rhizobacteria are known as potential biofertilizers and inducers of plant resistance. The present work aims to study the durability of the resistance induced in wheat by seed inoculation with PB2 and its dependence on the Z. tritici strain. Internal and external root colonization was determined in vitro, seven days post inoculation, by measuring colony-forming units (CFU). In planta experiments were conducted under controlled conditions and included four wheat cultivars with different levels of resistance against Septoria leaf blotch (SLB) and four Z. tritici strains with high aggressiveness and resistance to fungicides. Plantlets were inoculated with PB2 at sowing and infected with Z. tritici at the three-leaf or tillering growth stage. The level of SLB infection was evaluated 17 days post inoculation using real-time quantitative polymerase chain reaction (PCR). The results showed that PB2 has a high potential for external root colonization (> 10⁶ CFU/g of root). Internal colonization, however, appears to be cultivar dependent: PB2 was not observed as an endophyte in one cultivar but reached a high level of internal colonization, with more than 10⁴ CFU/g of root, in the three others. Two wheat cultivars (susceptible and moderately resistant) were used to investigate PB2-induced resistance (PB2-IR). After the first infection with Z. tritici, the results showed that PB2-IR conferred high protection efficiency (40-90%) against SLB in both tested cultivars. Whereas PB2-IR was effective against all tested strains in the moderately resistant cultivar, it was higher in the susceptible cultivar (> 64%) but effective against only three of the four tested strains. Concerning the durability of PB2-IR, a significant, strain-dependent decrease (10-59%) was observed at the second infection timing in the moderately resistant cultivar. In contrast, the susceptible cultivar showed a stable and high level of protection (76-84%), again against three of the four tested strains; interestingly, the strain that overcame PB2-IR was not the same as at the first infection timing. To conclude, PB2 induces a high and durable resistance against Z. tritici. PB2-IR depends on the pathogen strain, the plant growth stage, and the genotype. These results may explain the loss of effectiveness of induced resistance under field conditions.Keywords: induced resistance, Paenibacillus sp. strain B2, wheat genotypes, Zymoseptoria tritici
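For reference, protection efficiencies like those quoted above are typically computed as the percentage reduction in disease (here, pathogen DNA quantified by qPCR) in treated plants relative to non-inoculated controls. A minimal Python sketch with made-up numbers:

```python
# Protection efficiency as percent reduction relative to untreated controls.
# The copy numbers below are invented for illustration only.
def protection_efficiency(disease_control, disease_treated):
    """Percent reduction in pathogen DNA (or disease severity) vs control."""
    return 100.0 * (disease_control - disease_treated) / disease_control

control = 8.4e5   # hypothetical Z. tritici DNA copies in control plants
treated = 1.9e5   # hypothetical copies in PB2-inoculated plants
print(f"protection efficiency: {protection_efficiency(control, treated):.0f}%")
```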
Procedia PDF Downloads 149