Search results for: k-means clustering based feature weighting
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 28587


25587 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning

Authors: Xingyu Gao, Qiang Wu

Abstract:

Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can help researchers gain a better understanding of the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper utilized the artificial intelligence patent database from the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models, namely Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression, and using early indicators of patents as features, the paper comprehensively predicted the impact of patents from three aspects: technical, social, and economic. These aspects include the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (Shapley Additive exPlanations) method was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance.
Specifically, patent novelty has the greatest impact on predicting the technical impact of patents and has a positive effect. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicting technical impact. In predicting the social impact of patents, the number of applicants is the most critical input variable, but it has a negative effect on social impact. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is the most important factor and has a positive effect. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study relies primarily on artificial intelligence patent data from the United States Patent and Trademark Office; future research could consider more comprehensive data sources, including AI patent data from a global perspective. While the study takes various factors into account, other important features may not have been considered. In the future, factors such as patent implementation and market applications may be considered, as they could affect the influence of patents.
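A minimal sketch of the SHAP idea the abstract relies on, using the special case where it is exact: for a linear model, the SHAP value of feature j on an instance is w_j * (x_j - mean(x_j)), so the attributions sum to the prediction minus the mean prediction. All data, weights, and feature names below are synthetic stand-ins, not the paper's dataset or LightGBM pipeline.

```python
import numpy as np

# Synthetic "patent" features with assumed positive effects on technical impact.
rng = np.random.default_rng(0)
n, names = 200, ["novelty", "owners", "backward_citations", "independent_claims"]
X = rng.normal(size=(n, len(names)))
w_true = np.array([0.8, 0.4, 0.3, 0.5])          # illustrative ground-truth weights
y = X @ w_true + rng.normal(scale=0.1, size=n)   # synthetic "technical impact"

# Fit multiple linear regression (one of the paper's six baseline models).
Xc = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(Xc, y, rcond=None)
intercept, w = coef[0], coef[1:]

# Exact SHAP attributions for the linear model: phi_ij = w_j * (x_ij - mean_j).
phi = w * (X - X.mean(axis=0))                   # shape (n, 4)
pred = intercept + X @ w
assert np.allclose(phi.sum(axis=1), pred - pred.mean())  # additivity property
ranking = [names[i] for i in np.argsort(-np.abs(phi).mean(axis=0))]
print("features ranked by mean |SHAP|:", ranking)
```

Ranking features by mean absolute SHAP value is the same summary the abstract reports (novelty first for technical impact); here it simply recovers the largest synthetic weight.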

Keywords: patent influence, interpretable machine learning, predictive models, SHAP

Procedia PDF Downloads 25
25586 Native Language Identification with Cross-Corpus Evaluation Using Social Media Data: 'Reddit'

Authors: Yasmeen Bassas, Sandra Kuebler, Allen Riddell

Abstract:

Native language identification (NLI) is one of the growing subfields of natural language processing (NLP). The task is mainly concerned with predicting the native language of an author writing in a second language. In this paper, we investigate the performance of two types of features, content-based features vs. content-independent features, when they are evaluated on a different corpus (social media data from Reddit). In this NLI task, the models are trained on one corpus (TOEFL) and then evaluated on data from an external corpus (Reddit). Three classifiers are used: a baseline, a linear SVM, and logistic regression. Results show that content-based features are more accurate and robust than content-independent ones both within-corpus and across corpora.
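A toy sketch of the cross-corpus protocol only (train on one corpus, evaluate on a different one). The two-document "corpora", the language-marker tokens, and the nearest-centroid classifier over character trigrams are all invented stand-ins for the paper's TOEFL/Reddit data and its SVM and logistic-regression models.

```python
import numpy as np
from collections import Counter

def char_ngrams(text, n=3):
    # Character n-gram counts, a typical content-independent feature type.
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def vectorize(texts, vocab):
    M = np.zeros((len(texts), len(vocab)))
    for r, t in enumerate(texts):
        c = char_ngrams(t)
        for j, g in enumerate(vocab):
            M[r, j] = c[g]
    return M

# "Corpus A" (training) and "corpus B" (held-out, different register).
train = [("es", "que pasa que bueno que si"), ("de", "der hund der mann der tag")]
test  = [("es", "digo que claro que vamos"), ("de", "der weg und der berg")]

vocab = sorted(set().union(*(char_ngrams(t) for _, t in train)))
Xtr = vectorize([t for _, t in train], vocab)
centroids = {lab: Xtr[i] for i, (lab, _) in enumerate(train)}

correct = 0
for lab, text in test:
    v = vectorize([text], vocab)[0]
    guess = min(centroids, key=lambda k: np.linalg.norm(v - centroids[k]))
    correct += guess == lab
accuracy = correct / len(test)
print("cross-corpus accuracy:", accuracy)
```

The point of the protocol is that the feature extractor and model are fit on corpus A only; corpus B is touched solely at evaluation time.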

Keywords: NLI, NLP, content-based features, content independent features, social media corpus, ML

Procedia PDF Downloads 110
25585 Remote Vital Signs Monitoring in Neonatal Intensive Care Unit Using a Digital Camera

Authors: Fatema-Tuz-Zohra Khanam, Ali Al-Naji, Asanka G. Perera, Kim Gibson, Javaan Chahl

Abstract:

Conventional contact-based vital signs monitoring sensors such as pulse oximeters or electrocardiogram (ECG) electrodes may cause discomfort, skin damage, and infections, particularly in neonates with fragile, sensitive skin. Therefore, remote monitoring of vital signs is desired in both clinical and non-clinical settings to overcome these issues. Camera-based vital signs monitoring is a recent technology for these applications with many positive attributes. However, there are still limited camera-based studies on neonates in a clinical setting. In this study, the heart rate (HR) and respiratory rate (RR) of eight infants at the Neonatal Intensive Care Unit (NICU) in Flinders Medical Centre were remotely monitored using a digital camera applying color- and motion-based computational methods. The region of interest (ROI) was efficiently selected by incorporating an image decomposition method. Furthermore, spatial averaging, spectral analysis, band-pass filtering, and peak detection were used to extract both HR and RR. The experimental results were validated against ground truth data obtained from an ECG monitor and showed a strong correlation, with Pearson correlation coefficients (PCC) of 0.9794 and 0.9412 for HR and RR, respectively. The RMSE between camera-based data and ECG data was 2.84 beats/min for HR and 2.91 breaths/min for RR. A Bland-Altman analysis also showed close agreement between the two data sets, with mean biases of 0.60 beats/min and 1 breath/min, and limits of agreement of -4.9 to +6.1 beats/min and -4.4 to +6.4 breaths/min for HR and RR, respectively. Therefore, video camera imaging may replace conventional contact-based monitoring in the NICU and has potential applications in other contexts such as home health monitoring.
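A hedged sketch of the spectral step of such a pipeline only: band-limit a synthetic, spatially averaged ROI intensity trace in the frequency domain and read HR and RR off the spectral peaks. The sampling setup, rates, and frequency bands below are illustrative choices, not the paper's.

```python
import numpy as np

fs, T = 30.0, 20.0                      # assumed 30 fps camera, 20 s window
t = np.arange(0, T, 1 / fs)
hr_hz, rr_hz = 1.5, 0.3                 # 90 beats/min, 18 breaths/min (made up)
rng = np.random.default_rng(1)
signal = (0.5 * np.sin(2 * np.pi * hr_hz * t)       # cardiac component
          + 1.0 * np.sin(2 * np.pi * rr_hz * t)     # respiratory component
          + 0.1 * rng.normal(size=t.size))          # sensor noise

freqs = np.fft.rfftfreq(t.size, 1 / fs)
mag = np.abs(np.fft.rfft(signal))

def peak_in_band(lo, hi):
    band = (freqs >= lo) & (freqs <= hi)   # crude frequency-domain band-pass
    return freqs[band][np.argmax(mag[band])]

hr = 60 * peak_in_band(0.7, 3.0)   # plausible heart-rate band
rr = 60 * peak_in_band(0.1, 0.7)   # plausible respiration band
print(f"HR = {hr:.0f} beats/min, RR = {rr:.0f} breaths/min")
```

The real system adds ROI selection via image decomposition and time-domain peak detection on top of this frequency-domain estimate.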

Keywords: neonates, NICU, digital camera, heart rate, respiratory rate, image decomposition

Procedia PDF Downloads 92
25584 Developing Artificial Neural Networks (ANN) for Falls Detection

Authors: Nantakrit Yodpijit, Teppakorn Sittiwanchai

Abstract:

The number of older adults is rising rapidly, and the world's population is aging. Falls are one of the most common and serious health problems in the elderly; they may lead to acute and chronic injuries and deaths. Fall-prone individuals are at greater risk of decreased quality of life, lowered productivity and poverty, social problems, and additional health problems. A number of studies on falls prevention using fall detection systems have been conducted. Many available technologies for fall detection are laboratory-based and can incur substantial costs for falls prevention; the utilization of alternative technologies can potentially reduce these costs. This paper presents the design and development of a wearable fall detection system using an accelerometer and gyroscope as motion sensors for detecting body orientation and movement. Algorithms are developed to differentiate between Activities of Daily Living (ADL) and falls by comparing threshold-based values with Artificial Neural Networks (ANN). Results indicate the possibility of using the new threshold-based method with a neural network algorithm to reduce the number of false positives (false alarms) and improve the accuracy of the fall detection system.
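A sketch of the threshold half of such a hybrid scheme only (the ANN stage is omitted): flag a fall when the acceleration-magnitude trace shows an impact spike followed by lying-still samples. The 2.5 g impact and 0.5 g stillness thresholds, and the synthetic traces, are illustrative values, not the paper's.

```python
import numpy as np

def detect_fall(acc, impact_g=2.5, still_g=0.5, still_samples=20):
    mag = np.linalg.norm(acc, axis=1)          # |a| per sample, in g
    for i in np.flatnonzero(mag > impact_g):   # candidate impact instants
        tail = mag[i + 1:i + 1 + still_samples]
        if tail.size == still_samples and np.all(np.abs(tail - 1.0) < still_g):
            return True                        # impact, then near-1 g stillness
    return False

# Synthetic traces: a fall (spike then lying still) vs. walking (bounded sway).
fall = np.r_[np.full(30, 1.0), [3.2], np.full(40, 1.05)]
walk = 1.0 + 0.3 * np.sin(np.linspace(0, 20, 80))
fall_acc = np.column_stack([fall, np.zeros_like(fall), np.zeros_like(fall)])
walk_acc = np.column_stack([walk, np.zeros_like(walk), np.zeros_like(walk)])
print(detect_fall(fall_acc), detect_fall(walk_acc))  # True False
```

An ANN stage would then re-score the candidate windows this rule flags, which is how such hybrids cut false alarms.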

Keywords: aging, algorithm, artificial neural networks (ANN), fall detection system, motion sensors, threshold

Procedia PDF Downloads 476
25583 A Reinforcement Learning Based Method for Heating, Ventilation, and Air Conditioning Demand Response Optimization Considering Few-Shot Personalized Thermal Comfort

Authors: Xiaohua Zou, Yongxin Su

Abstract:

The reasonable operation of heating, ventilation, and air conditioning (HVAC) is of great significance in improving the security, stability, and economy of power system operation. However, the uncertainty of the operating environment, thermal comfort that varies across users, and the need for rapid decision-making pose challenges for HVAC demand response (DR) optimization. In this regard, this paper proposes a reinforcement learning-based method for HVAC demand response optimization considering few-shot personalized thermal comfort (PTC). First, an HVAC DR optimization framework based on a few-shot PTC model and deep reinforcement learning (DRL) is designed, in which the output of the few-shot PTC model serves as the input of the DRL agent. Then, a few-shot PTC model that distinguishes between awake and asleep states is established, which has excellent engineering usability. Next, based on soft actor-critic, an HVAC DR optimization algorithm considering the user's PTC is designed to deal with uncertainty and make decisions rapidly. Experimental results show that the proposed method can efficiently obtain the user's PTC temperature, reduce energy cost while ensuring the user's PTC, and achieve rapid decision-making under uncertainty.
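A greatly simplified stand-in for the comfort-vs-cost trade-off such a controller learns: tabular Q-learning on a toy one-room cooling model instead of soft actor-critic, with an invented comfort target, energy price, and one-degree-per-step dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
temps = np.arange(20, 31)             # discrete room temperatures, deg C
comfort, price = 24, 0.2              # illustrative comfort target and energy cost
Q = np.zeros((temps.size, 2))         # actions: 0 = AC off, 1 = AC on

def step(s, a):
    t = temps[s]
    t2 = np.clip(t - 1 if a else t + 1, temps[0], temps[-1])  # cool, or drift up
    reward = -abs(t2 - comfort) - price * a                   # comfort vs. cost
    return int(t2 - temps[0]), reward

for _ in range(5000):                 # Q-learning with epsilon-greedy exploration
    s = int(rng.integers(temps.size))
    for _ in range(30):
        a = int(rng.integers(2)) if rng.random() < 0.1 else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        Q[s, a] += 0.1 * (r + 0.9 * Q[s2].max() - Q[s, a])
        s = s2

hot_action = int(np.argmax(Q[temps.tolist().index(30)]))
print("learned action at 30 deg C:", "AC on" if hot_action else "AC off")
```

The paper's method replaces this table with a soft actor-critic network and replaces the fixed comfort constant with the output of the few-shot PTC model.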

Keywords: HVAC, few-shot personalized thermal comfort, deep reinforcement learning, demand response

Procedia PDF Downloads 57
25582 A Quantitative Analysis for the Correlation between Corporate Financial and Social Performance

Authors: Wafaa Salah, Mostafa A. Salama, Jane Doe

Abstract:

Corporate social performance (CSP) has recently become no less important than corporate financial performance (CFP). Debate still exists about the nature of the relationship between CSP and CFP: whether the correlation is positive, negative, or neutral. The objective of this study is to explore the relationship between corporate social responsibility (CSR) reports and CFP. The study uses accounting-based and market-based quantitative measures to quantify the financial performance of seven organizations listed on the Egyptian Stock Exchange in 2007-2014. It then uses information retrieval technologies to quantify the contribution of each of the three dimensions of the corporate social responsibility report (environmental, social, and economic). Finally, the correlation between these two sets of variables is examined in a single model to detect the correlations between them. This model is applied to seven firms that publish social responsibility reports. The results show a positive correlation between earnings per share (a market-based measure) and the economic dimension of the CSR report. On the other hand, total assets and property, plant and equipment (accounting-based measures) are positively correlated with the environmental and social dimensions of the CSR reports, while there is no significant relationship between ROA, ROE, or operating income and corporate social responsibility. This study contributes to the literature by providing more clarification of the relationship between CFP and isolated CSR activities in a developing country.
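At its core, each reported relationship is a Pearson correlation between a financial measure and a CSR-dimension score. A minimal sketch with entirely made-up numbers (not the study's data) for the EPS vs. economic-dimension pairing:

```python
import numpy as np

# Hypothetical per-firm values: earnings per share and the economic-dimension
# score extracted from each firm's CSR report. Both series are invented.
eps      = np.array([1.2, 1.5, 1.1, 1.9, 2.1, 1.7, 2.4])  # market-based measure
econ_dim = np.array([0.2, 0.3, 0.1, 0.5, 0.6, 0.4, 0.7])  # CSR economic score

r = np.corrcoef(eps, econ_dim)[0, 1]   # Pearson correlation coefficient
print(f"Pearson r = {r:.3f}")          # strongly positive for this toy data
```

With only seven firms, a significance test on r would matter as much as its sign, which is presumably why the study reports some relationships as non-significant.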

Keywords: financial, social, machine learning, corporate social performance, corporate social responsibility

Procedia PDF Downloads 291
25581 Active Learning Based on Science Experiments to Improve Scientific Literacy

Authors: Kunihiro Kamataki

Abstract:

In this study, active learning based on simple science experiments was developed in a first-year university class in order to improve students' scientific literacy. Through active learning based on a simple experiment on cloud generation in a plastic bottle, students increased their interest in global atmospheric problems and were able to discuss and find solutions to these problems constructively from various viewpoints: science and technology, politics, economics, diplomacy, and international relations. The results of their questionnaires and free-form descriptions of this class indicate that they improved their scientific literacy and their motivation to acquire knowledge in other lectures. It is thus suggested that the science experiment is a strong tool for rapidly stimulating intellectual curiosity, and that the connections linking the impression left by a science experiment with students' interest in social problems are very important for enhancing the learning effect of this kind of education.

Keywords: active learning, scientific literacy, simple scientific experiment, university education

Procedia PDF Downloads 238
25580 Application of Medical Information System for Image-Based Second Opinion Consultations: Georgian Experience

Authors: Kldiashvili Ekaterina, Burduli Archil, Ghortlishvili Gocha

Abstract:

Introduction: Medical information systems (MIS) are at the heart of information technology (IT) implementation policies in healthcare systems around the world, and different MIS architectures and application models have been developed. Despite obvious advantages and benefits, the application of MIS in everyday practice is slow. Objective: Based on an analysis of the existing models of MIS in Georgia, a multi-user web-based approach has been created. This presentation describes the architecture of the system and its application for image-based second opinion consultations. Methods: The MIS has been created with .Net technology and an SQL database architecture. It provides local (intranet) and remote (internet) access to the system and management of its databases. The MIS is a fully operational system, successfully used for medical data registration and management as well as for the creation, editing, and maintenance of electronic medical records (EMR). Five hundred Georgian-language electronic medical records from the cervical screening activity, illustrated by images, were selected for second opinion consultations. Results: The primary goal of the MIS is patient management; however, the system can also be successfully applied for image-based second opinion consultations. Discussion: The ideal of healthcare in the information age must be to create a situation where healthcare professionals spend more time creating knowledge from medical information and less time managing medical information. The application of easily available and adaptable technology, together with improvement of infrastructure conditions, is the basis for eHealth applications. Conclusion: The MIS is a promising and relevant technology solution that can be used successfully and effectively for image-based second opinion consultations.

Keywords: digital images, medical information system, second opinion consultations, electronic medical record

Procedia PDF Downloads 430
25579 Narrative Constructs and Environmental Engagement: A Textual Analysis of Climate Fiction’s Role in Shaping Sustainability Consciousness

Authors: Dean J. Hill

Abstract:

This paper undertakes an in-depth textual analysis of the climate fiction (cli-fi) genre, examining how writing in the genre expresses and facilitates the articulation of environmental consciousness through narrative form. The paper begins by situating cli-fi within the literary continuum of ecological narratives and identifying the unique textual characteristics and thematic preoccupations of the genre. It then shows how cli-fi transforms the esoteric nature of climate science into credible narrative forms through language use, metaphorical constructs, and narrative framing, and how descriptive and figurative language in depictions of nature and disaster makes climate change vivid and emotionally resonant. The work also points out the dialogic nature of cli-fi, whereby characters and narrators wrestle with the ethical dilemmas of environmental destruction, prompting readers to challenge and re-evaluate their standpoints on sustainability and ecological responsibility. The paper proceeds to analyse narrative voice and its role in eliciting empathy and reader involvement with the ecological material. By looking at how different narratorial perspectives shape the emotional and cognitive reactions of the reader, this study demonstrates the power of perspective in developing intimacy with the narrative's central concerns. Finally, the emotional arc of cli-fi narratives, running through themes of loss, hope, and resilience, is analysed in relation to how these elements marshal public feeling and discourse into action on climate change. The textual complexity of cli-fi therefore not only shows the hard reality of climate change but also influences public perception and behaviour toward a more sustainable future.

Keywords: cli-fi genre, ecological narratives, emotional arc, narrative voice, public perception

Procedia PDF Downloads 18
25578 Understanding Rural Teachers’ Perceived Intention of Using Play in ECCE Mathematics Classroom: Strength-Based Approach

Authors: Nyamela M. ‘Masekhohola, Khanare P. Fumane

Abstract:

Lesotho's downward trend in mathematics attainment at all levels is compounded by the absence of innovative approaches to teaching and learning in early childhood. Studies have shown, however, that play pedagogy can be used to mitigate the challenges of mathematics education. Despite the benefits of play pedagogy for rural learners, its full potential has not been realized in early childhood care and education (ECCE) classrooms to improve children's performance in mathematics, because the adoption of play pedagogy depends on a strength-based approach. The study explores the potential of play pedagogy to improve mathematics education in early childhood care and education in Lesotho. The strength-based approach is known for its advocacy of recognizing and utilizing children's strengths, capacities, and interests. However, this approach and its promising attributes are not well known in Lesotho. In particular, little is known about the attributes of play pedagogy that are essential to improving mathematics education in ECCE programs in Lesotho. To identify such attributes and strengthen mathematics education, this systematic review examines published evidence on the strengths of play pedagogy that support the teaching and learning of mathematics in ECCE. The purpose of this review is, therefore, to identify and define the strengths of play pedagogy that support mathematics education. Moreover, the study intends to understand rural teachers' perceived intention of using play in ECCE mathematics classrooms through a strength-based approach. Eight key strengths were found (cues for reflection, edutainment, mathematics language development, creativity and imagination, cognitive promotion, exploration, classification, and skills development). This study is the first to identify and define the strength-based attributes of play pedagogy for improving the teaching and learning of mathematics in ECCE centers in Lesotho.
The findings reveal which opportunities teachers find important for improving the teaching of mathematics as early as in ECCE programs. We conclude by discussing the implications of the literature for stimulating dialogues towards formulating strength-based approaches to teaching mathematics, as well as reflecting on the broader contributions of play pedagogy as an asset to improve mathematics in Lesotho and beyond.

Keywords: early childhood education, mathematics education, Lesotho, play pedagogy, strength-based approach

Procedia PDF Downloads 116
25577 Towards an Enhanced Quality of IPTV Media Server Architecture over Software Defined Networking

Authors: Esmeralda Hysenbelliu

Abstract:

The aim of this paper is to present an enhanced Quality of Experience (QoE) IPTV SDN-based media streaming server architecture for configuring, controlling, managing, and provisioning improved delivery of the IPTV service application with low cost, low bandwidth, and high security. Furthermore, a virtual QoE IPTV SDN-based topology is given to provide an improved IPTV service based on QoE control and management of multimedia service functionalities. Inside the OpenFlow SDN controller, two highly flexible and efficient service load-balancing systems are enabled: one based on a load-balancing module and one based on a GeoIP service. These two load-balancing systems greatly improve IPTV end-users' Quality of Experience (QoE) through optimal management of resources. Through the key functionalities of the OpenFlow SDN controller, this approach delivers several important features and opportunities for meeting the critical QoE metrics for the IPTV service, such as achieving very fast zapping time (channel switching time) of under 0.1 seconds. The approach also enables an easy and powerful transcoding system via the FFmpeg encoder, with the ability to customize streaming dimensions, bitrates, latency management, and maximum transfer rates, ensuring delivery of IPTV streaming services (audio and video) with high flexibility, low bandwidth, and the required performance. Unlike other architectures, this QoE IPTV SDN-based media streaming architecture provides the possibility of channel exchange between several IPTV service providers all over the world. This new functionality brings many benefits, such as increasing the number of TV channels received by end-users at low cost, decreasing stream failure time (channel failure time under 0.1 seconds), and improving the quality of streaming services.
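A toy sketch of the two balancing ideas just described: route a client to a regional server pool (a GeoIP-style lookup) and, within the pool, pick the least-loaded server. Region names, server pools, and load counts are invented for the demo and are not part of the paper's implementation.

```python
# server -> current number of active streams, per region (hypothetical data)
POOLS = {
    "EU": {"eu-1": 12, "eu-2": 7},
    "US": {"us-1": 3, "us-2": 9},
}

def route(region, pools=POOLS, default="US"):
    pool = pools.get(region, pools[default])   # GeoIP lookup stand-in
    server = min(pool, key=pool.get)           # least-loaded server in the pool
    pool[server] += 1                          # account for the new stream
    return server

print(route("EU"), route("EU"), route("ASIA"))   # eu-2 eu-2 us-1
```

In the real architecture these decisions would be pushed as OpenFlow rules by the SDN controller rather than made in the application as here.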

Keywords: improved quality of experience (QoE), OpenFlow SDN controller, IPTV service application, softwarization

Procedia PDF Downloads 131
25576 Isolation and Characterization of an Ethanol Resistant Bacterium from Sap of Saccharum officinarum for Efficient Fermentation

Authors: Rukshika S Hewawasam, Sisira K. Weliwegamage, Sanath Rajapakse, Subramanium Sotheeswaran

Abstract:

Biofuel is one of the emerging industries around the world, driven by the crisis in petroleum fuel, and fermentation is a cost-effective and eco-friendly process for producing it. Advances in microbes, substrates, and fermentation technologies therefore continue to reshape fermentation practice. One major problem in microbial ethanol fermentation is the low resistance of conventional microorganisms to high ethanol concentrations, which ultimately decreases the efficiency of the process. In the present investigation, an ethanol-resistant bacterium was isolated from the sap of Saccharum officinarum (sugar cane). The optimal culture conditions (pH, temperature, incubation period) and the microbiological, morphological, and biochemical characteristics, ethanol tolerance, sugar tolerance, and growth curve of the isolate were investigated. The isolated microorganism tolerated an ethanol concentration of 18% (V/V) and a glucose concentration of 40% (V/V) in the medium. Biochemical characterization revealed the isolate to be Gram-negative, non-motile, and negative for the indole, methyl red, Voges-Proskauer, citrate utilization, and urease tests, while positive for the oxidase test. Sucrose, glucose, fructose, maltose, dextrose, arabinose, raffinose, lactose, and saccharose can be utilized by this bacterium, a significant feature for effective fermentation. The fermentation process was carried out in glucose medium under optimum conditions (pH 4, temperature 30˚C, incubation for 72 hours), and maximum ethanol production was recorded as 12.0±0.6% (V/V). Methanol was not detected in the final product of the fermentation process. This bacterium is especially useful for bio-fuel production: owing to its high ethanol tolerance, it can be used to enhance the fermentation process over conventional microorganisms. Investigations are currently being conducted to establish the identity of the bacterium.

Keywords: bacterium, bio-fuel, ethanol tolerance, fermentation

Procedia PDF Downloads 316
25575 On the Exergy Analysis of the Aluminum Smelter

Authors: Ayoola T. Brimmo, Mohamed I. Hassan

Abstract:

The push to mitigate the aluminum smelting industry's enormous energy consumption and high emissions is now even more persistent with recent climate change developments. Common approaches have focused on improving energy efficiency in the pot line and cast house sections of the smelter. However, conventional energy efficiency analyses are based on the first law of thermodynamics, which does not shed proper light on the smelter's degradation of energy: it gives only a general idea of the furnace's performance, with no reference to the locations where improvement is possible according to the second law of thermodynamics. In this study, we apply exergy analyses to the pot line and cast house sections of the smelter to identify the locations and causes of energy degradation. The exergy analyses, which are based on real-life smelter conditions, highlight the possible locations for technology improvement in a typical smelter. With this established, methods of minimizing the smelter's exergy losses are assessed.
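A small worked example of the second-law idea the abstract relies on: heat Q crossing a boundary at temperature T carries exergy X = Q * (1 - T0/T), so the same heat loss destroys more work potential at pot-line temperatures than at cast-house ones. The temperatures below are illustrative, not the paper's measured conditions.

```python
T0 = 298.0                      # dead-state (ambient) temperature, K

def heat_exergy(Q, T):
    # Carnot factor: the work potential of heat Q available at temperature T.
    return Q * (1.0 - T0 / T)

Q = 100.0                                  # kW of heat loss, hypothetical
x_pot = heat_exergy(Q, 1233.0)             # pot line at ~960 C
x_cast = heat_exergy(Q, 973.0)             # cast house at ~700 C
print(f"pot line: {x_pot:.1f} kW exergy, cast house: {x_cast:.1f} kW exergy")
```

This is why an exergy map, unlike a first-law balance, ranks locations by how much work potential each loss actually destroys.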

Keywords: exergy analysis, electrolytic cell, furnace, heat transfer

Procedia PDF Downloads 271
25574 Possible Reasons for and Consequences of Generalizing Subgroup-Based Measurement Results to Populations: Based on Research Studies Conducted by Elementary Teachers in South Korea

Authors: Jaejun Jong

Abstract:

Many teachers in South Korea conduct research to improve the quality of their instruction. Unfortunately, many of them generalize measurement results based on one subgroup to other students or to the entire population, which can bias conclusions. This study aims to identify examples of the problems that can result from generalizing measurements based on one subgroup to an entire population or another group. Such a study is needed because teachers' instruction and class quality significantly affect the overall quality of education, yet the quality of research conducted by teachers can become questionable due to overgeneralization; identifying the potential problems of overgeneralization can thus improve the overall quality of education. The data in this study were gathered from 145 sixth-grade elementary school students in South Korea. The results showed that students in different classes can differ significantly in various ways; thus, generalizing the results of subgroups to an entire population can produce erroneous student predictions and evaluations, which can lead to inappropriate instruction plans. This result shows that identifying the reasons for such overgeneralization can significantly improve the quality of education.

Keywords: generalization, measurement, research methodology, teacher education

Procedia PDF Downloads 80
25573 Task Scheduling and Resource Allocation in the Cloud Based on the AHP Method

Authors: Zahra Ahmadi, Fazlollah Adibnia

Abstract:

Scheduling of tasks and the optimal allocation of resources in the cloud must account for the dynamic nature of tasks and the heterogeneity of resources. Applications based on scientific workflows are among the most widely used in this field and are characterized by high processing power and storage demands. To increase their efficiency, it is necessary to schedule tasks properly and select the best virtual machine in the cloud. The goals of the system are effective factors in task scheduling and resource selection, and they depend on various criteria such as time, cost, current workload, and processing power. Multi-criteria decision-making methods are a good choice in this field. In this research, a new method of task scheduling and resource allocation in a heterogeneous environment based on a modified AHP algorithm is proposed. In this method, the scheduling of input tasks is based on two criteria: execution time and size. Resource allocation combines the AHP algorithm with a first-come, first-served policy across clients. Resources are prioritized by the criteria of main memory size, processor speed, and bandwidth. To modify the AHP algorithm, this system adopts the Linear Max-Min and Linear Max normalization methods, which are the best choice for the algorithm and have a great impact on the ranking. The simulation results show a decrease in the average response time, turnaround time, and execution time of input tasks in the proposed method compared to similar (basic) methods.
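A sketch of the resource-prioritization step as described: normalize each criterion (memory, speed, bandwidth) with Linear Max-Min, then rank the virtual machines by a weighted sum. The VM data and the criterion weights, which the AHP pairwise comparisons would normally produce, are invented for illustration.

```python
import numpy as np

vms = ["vm-a", "vm-b", "vm-c"]
#                memory(GB)  speed(GHz)  bandwidth(Mbps)
X = np.array([[   4.0,        2.2,        100.0],
              [  16.0,        3.0,        250.0],
              [   8.0,        3.4,        150.0]])
weights = np.array([0.5, 0.3, 0.2])     # assumed AHP-derived priorities

# Linear Max-Min: (x - min) / (max - min) per criterion; all three criteria
# here are benefit criteria (larger is better).
norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
scores = norm @ weights
ranking = [vms[i] for i in np.argsort(-scores)]
print("VM priority ranking:", ranking)   # ['vm-b', 'vm-c', 'vm-a']
```

Linear Max normalization (x / max per criterion) would be a one-line swap here; the paper's point is that this normalization choice materially changes the ranking.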

Keywords: hierarchical analytical process, work prioritization, normalization, heterogeneous resource allocation, scientific workflow

Procedia PDF Downloads 128
25572 Microfluidic Paper-Based Electrochemical Biosensor

Authors: Ahmad Manbohi, Seyyed Hamid Ahmadi

Abstract:

A low-cost paper-based microfluidic device (PAD) for the multiplex electrochemical determination of glucose, uric acid, and dopamine in biological fluids was developed. Using wax printing, a PAD containing a central zone, six channels, and six detection zones was fabricated, and the electrodes were printed on the detection zones using a pre-made electrode template. For each analyte, two detection zones were used. The carbon working electrode was coated with chitosan-BSA (and enzymes for glucose and uric acid). To detect glucose and uric acid, enzymatic reactions were employed; these reactions involve enzyme-catalyzed redox reactions of the analytes and produce free electrons for electrochemical measurement. Calibration curves were linear (R² > 0.980) in the ranges of 0-80 mM for glucose, 0.09-0.9 mM for dopamine, and 0-50 mM for uric acid. Blood samples were successfully analyzed by the proposed method.
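An illustrative sketch of the calibration step only: fit a straight line to (concentration, current) pairs, check linearity against the R² > 0.980 criterion the abstract cites, and invert the curve to read an unknown sample. The readings below are synthetic, not the paper's measurements.

```python
import numpy as np

conc    = np.array([0.0, 10.0, 20.0, 40.0, 60.0, 80.0])   # glucose, mM (made up)
current = np.array([0.1, 2.2, 4.0, 8.3, 12.1, 16.2])      # sensor output, uA

slope, intercept = np.polyfit(conc, current, 1)           # linear calibration fit
fit = slope * conc + intercept
r2 = 1 - np.sum((current - fit) ** 2) / np.sum((current - current.mean()) ** 2)
print(f"sensitivity = {slope:.3f} uA/mM, R^2 = {r2:.4f}")

# Invert the calibration curve to read an unknown sample off its current.
unknown_current = 6.0
print(f"estimated concentration = {(unknown_current - intercept) / slope:.1f} mM")
```

Each analyte in the device would get its own such curve over its own linear range.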

Keywords: biological fluids, biomarkers, microfluidic paper-based electrochemical biosensors, multiplex

Procedia PDF Downloads 265
25571 Performance Comparison between PID and PD Controllers for 3- and 4-Cable-Based Robots

Authors: Fouad Inel, Lakhdar Khochemane

Abstract:

This article presents a comparison of response specifications between two controllers for three- and four-cable-based robots across various applications. The main objectives of this work are as follows. First, the direct and inverse geometric models are used to study and simulate the end-effector position of the robot with three and four cables; a graphical user interface has been implemented to visualize the position of the robot. Second, we present the determination of the static and dynamic tensions and the cable lengths required to follow different trajectories. Finally, we study the response of the systems in closed loop with Proportional-Integral-Derivative (PID) and Proportional-Derivative (PD) controllers and compare the results on the same examples using MATLAB/Simulink. We found that the PID method gives better performance, such as a faster speed of response and shorter settling time, compared to the PD controller.
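A minimal sketch, not the paper's cable-robot model: a double-integrator plant under a constant load disturbance, driven by PD and then PID control. The gains and the load are chosen for illustration; the point is the qualitative result the abstract reports, namely that the integral term removes the steady-state error PD leaves behind.

```python
def simulate(kp, ki, kd, ref=1.0, load=5.0, dt=0.001, t_end=20.0):
    # Plant: x'' = u - load (unit mass with constant disturbance force).
    x = v = integral = 0.0
    for _ in range(int(t_end / dt)):
        e = ref - x
        integral += e * dt
        u = kp * e + ki * integral - kd * v   # derivative taken on measurement
        v += (u - load) * dt                  # forward-Euler integration
        x += v * dt
    return ref - x                            # final tracking error

err_pd  = simulate(kp=20.0, ki=0.0,  kd=10.0)
err_pid = simulate(kp=20.0, ki=10.0, kd=10.0)
print(f"steady-state error: PD = {err_pd:.3f}, PID = {err_pid:.4f}")
```

At rest the PD loop must supply u = load through the proportional term alone, leaving an offset of load/kp (0.25 here); the PID integrator supplies that effort instead and drives the error to zero.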

Keywords: parallel cable-based robots, geometric modeling, dynamic modeling, graphical user interface, open loop, PID/PD controllers

Procedia PDF Downloads 428
25570 The Influence of Project-Based Learning and Outcome-Based Education: Interior Design Tertiary Students in Focus

Authors: Omneya Messallam

Abstract:

Technology has developed dramatically in most educational disciplines. For instance, the digital rendering subject, taught in both the Interior Design and Architecture fields, sees almost annual software updates. Many students and educators have argued that there will therefore be no need to learn manual rendering techniques. The Interior Design Visual Presentation 1 course (ID133) was chosen from the first level of the Interior Design (ID) undergraduate program, as it has been taught continually for six years. This time frame facilitates sound observation and critical analysis of the use of appropriate teaching methodologies. Furthermore, the researcher believes in the high value of manual rendering techniques. The course objectives are: to define the basic visual rendering principles, to recall theories and uses of various types of colours and hatches, to raise the learners’ awareness of the value of studying manual rendering techniques, and to prepare them to present their work professionally. The students are female Arab learners aged between 17 and 20. At the outset of the course, the majority of them demonstrated a negative attitude, lacking both motivation and confidence in manual rendering skills. This paper is a reflective appraisal of deploying two student-centred teaching pedagogies, Project-based learning (PBL) and Outcome-based education (OBE), with ID133 students. This research aims to develop teaching strategies that enhance the quality of teaching in this course over an academic semester. The outcome of this research emphasized the positive influence of applying such educational methods on the quality of students’ manual rendering skills in terms of materials, textiles, textures, lighting, and shade and shadow. Furthermore, the methods greatly motivated the students and raised their awareness of the importance of learning manual rendering techniques.

Keywords: project-based learning, outcome-based education, visual presentation, manual render, personal competences

Procedia PDF Downloads 142
25569 Recent Nanotechnological Advancements in Antimicrobial Edible Films for Food Packaging: A Review

Authors: Raana Babadi Fathipour

Abstract:

Researchers are now focusing on sustainable advancements in active packaging systems to meet the growing consumer demand for high-quality food with eco-friendly packaging. One significant advancement in this area is the inclusion of antimicrobial agents in biopolymer-based edible films, which effectively inhibit or kill pathogenic and spoilage microbes that can contaminate food. This technology also helps reduce the undesirable flavors caused by active compounds incorporated directly into the food. To further enhance the efficiency of antimicrobial bio-based packaging systems, nanotechnological concepts such as bio-nanocomposites and nanoencapsulation systems have been applied. This review examines the current state and applications of antimicrobial biodegradable films in the food packaging industry, while also highlighting ongoing research on the use of nanotechnology to develop innovative bio-based packaging systems.

Keywords: active packaging, antimicrobial edible films, bioactive agents, biopolymers, bio-nanocomposites

Procedia PDF Downloads 52
25568 Diffusion Adaptation Strategies for Distributed Estimation Based on the Family of Affine Projection Algorithms

Authors: Mohammad Shams Esfand Abadi, Mohammad Ranjbar, Reza Ebrahimpour

Abstract:

This work presents a solution to the distributed estimation problem in a diffusion network based on the adapt-then-combine (ATC) and combine-then-adapt (CTA) selective partial update normalized least mean squares (SPU-NLMS) algorithms. We also extend this approach to the dynamic selection affine projection algorithm (DS-APA), establishing ATC-DS-APA and CTA-DS-APA. The purpose of the ATC-SPU-NLMS and CTA-SPU-NLMS algorithms is to reduce computational complexity by updating only selected blocks of the weight coefficients at every iteration. In CTA-DS-APA and ATC-DS-APA, the number of input vectors is selected dynamically. Diffusion cooperation strategies based on these algorithms have been shown to provide good performance, which is illustrated with various experimental results.
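The adapt-then-combine structure can be sketched in a few lines. The following is a simplified ATC diffusion NLMS loop over a fully connected three-node network with uniform combination weights; the selective-partial-update and dynamic-selection refinements of the paper are omitted, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4                                   # filter length (illustrative)
w_true = rng.standard_normal(M)         # unknown vector to estimate

# Fully connected 3-node network with uniform combination weights.
neighbors = {0: [0, 1, 2], 1: [0, 1, 2], 2: [0, 1, 2]}
w = [np.zeros(M) for _ in range(3)]
mu = 0.5                                # NLMS step size

for _ in range(500):
    # Adapt step: each node runs an NLMS update on its own noisy data.
    psi = []
    for k in range(3):
        u = rng.standard_normal(M)
        d = u @ w_true + 0.01 * rng.standard_normal()
        e = d - u @ w[k]
        psi.append(w[k] + mu * e * u / (1e-8 + u @ u))
    # Combine step (ATC): average the neighbors' intermediate estimates.
    w = [np.mean([psi[j] for j in neighbors[k]], axis=0) for k in range(3)]
```

In the CTA variant the two steps are simply swapped: estimates are combined first and the adaptation is applied to the combined value.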

Keywords: selective partial update, affine projection, dynamic selection, diffusion, adaptive distributed networks

Procedia PDF Downloads 681
25567 Horizontal-Vertical and Enhanced-Unicast Interconnect Testing Techniques for Network-on-Chip

Authors: Mahdiar Hosseinghadiry, Razali Ismail, F. Fotovati

Abstract:

One of the most important and challenging tasks in testing network-on-chip based system-on-chips (NoC based SoCs) is verifying the communication entity. It is important because the communication entity transfers both data packets and test patterns for intellectual properties (IPs) during normal and test modes; ensuring NoC reliability is therefore required for reliable IP functionality and testing. It is challenging because of the time required for the test and the way test patterns must be transferred from the tester to the NoC components. In this paper, two testing techniques for mesh-based NoC interconnections are proposed. The first tests the interconnects one by one; the second divides the NoC interconnects into three parts: horizontal links of switches in even columns, horizontal links of switches in odd columns, and all vertical links. A design for testability (DFT) architecture is presented that sends test patterns directly to each switch under test and supports the proposed testing techniques by providing a loopback path in each switch. The simulation results show that the second proposed testing mechanism outperforms the others in terms of test time because it tests all the interconnects in only three phases, independent of the number of interconnects in the network, whereas the test time of the other methods depends strongly on the number of switches and interconnects in the NoC.
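The three-phase grouping can be made concrete. A minimal sketch, assuming an n × n mesh with (row, column) switch coordinates (a convention not specified above):

```python
# Minimal sketch (assumed coordinate convention): partition the links of an
# n x n mesh NoC into the three test phases described above.
def partition_links(n):
    even_h, odd_h, vertical = [], [], []
    for r in range(n):
        for c in range(n):
            if c + 1 < n:                   # horizontal link (r,c)-(r,c+1)
                (even_h if c % 2 == 0 else odd_h).append(((r, c), (r, c + 1)))
            if r + 1 < n:                   # vertical link (r,c)-(r+1,c)
                vertical.append(((r, c), (r + 1, c)))
    return even_h, odd_h, vertical

phases = partition_links(4)                 # three phases, 24 links in total
```

Whatever the mesh size, the test always completes in these three phases, which is the source of the constant-phase advantage claimed above.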

Keywords: network-on-chip, interconnection testing, horizontal-vertical testing, enhanced unicast

Procedia PDF Downloads 533
25566 Performance Based Seismic Retrofit of Masonry Infilled Reinforced Concrete Frames Using Passive Energy Dissipation Devices

Authors: Alok Madan, Arshad K. Hashmi

Abstract:

The paper presents a plastic analysis procedure based on the energy balance concept for performance based seismic retrofit of multi-story multi-bay masonry infilled reinforced concrete (R/C) frames with a ‘soft’ ground story using passive energy dissipation (PED) devices, with the objective of achieving a target performance level of the retrofitted R/C frame for a given seismic hazard level at the building site. The proposed energy based plastic analysis procedure was employed to develop performance based design (PBD) formulations for PED devices for a simulated application in the seismic retrofit of existing frame structures designed in compliance with the prevalent standard codes of practice. The PBD formulations developed for PED devices were implemented for the simulated seismic retrofit of a representative code-compliant masonry infilled R/C frame with a ‘soft’ ground story using friction dampers as the PED device. Non-linear dynamic analyses of the retrofitted masonry infilled R/C frames were performed to investigate the efficacy and accuracy of the proposed energy based plastic analysis procedure in achieving the target performance level under design level earthquakes. The results of the non-linear dynamic analyses demonstrate that the maximum inter-story drifts in the masonry infilled R/C frames with a ‘soft’ ground story retrofitted with friction dampers designed using the proposed PBD formulations are controlled within the target drifts under near-field as well as far-field earthquakes.

Keywords: energy methods, masonry infilled frame, near-field earthquakes, seismic protection, supplemental damping devices

Procedia PDF Downloads 278
25565 Robust Medical Image Watermarking Based on Contourlet and Extraction Using ICA

Authors: S. Saju, G. Thirugnanam

Abstract:

In this paper, a medical image watermarking algorithm based on the contourlet transform is proposed. Medical image watermarking is a special subcategory of image watermarking in the sense that the images have special requirements: watermarked medical images must not differ perceptually from their original counterparts, because the clinical reading of the images must not be affected. Watermarking techniques based on the wavelet transform are widely reported in the literature, but robustness and security are better with the contourlet transform than with the wavelet transform. The main challenge in exploiting geometry in images comes from the discrete nature of the data. In this paper, the original image is decomposed to two levels using the contourlet transform, and the watermark is embedded in the resulting sub-bands. Sub-band selection is based on the Peak Signal to Noise Ratio (PSNR) calculated between the watermarked and original images. To extract the watermark, Kernel ICA is used; a novel characteristic of this approach is that it does not require the transformation process to extract the watermark. Simulation results show that the proposed scheme is robust against attacks such as salt-and-pepper noise, median filtering, and rotation. Performance measures such as PSNR and similarity measure are evaluated and compared with the Discrete Wavelet Transform (DWT) to demonstrate the robustness of the scheme. Simulations were carried out using MATLAB software.
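The sub-band selection criterion above relies on the PSNR between the watermarked and original images. A minimal sketch of that measure for 8-bit images, computed on illustrative arrays rather than medical data:

```python
import numpy as np

# Minimal PSNR sketch for 8-bit images (illustrative arrays, not the paper's
# medical images); PSNR guides which contourlet sub-band gets the watermark.
def psnr(original, watermarked):
    mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                 # identical images
    return 10 * np.log10(255.0 ** 2 / mse)

img = np.full((8, 8), 100, dtype=np.uint8)
marked = img.copy()
marked[0, 0] += 5                           # small embedding distortion
value = psnr(img, marked)                   # higher PSNR = less visible mark
```

A higher PSNR means the embedding distortion is less visible, which matters for the perceptual-transparency requirement of medical images stated above.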

Keywords: digital watermarking, independent component analysis, wavelet transform, contourlet

Procedia PDF Downloads 511
25564 The Restrictions of the Householder’s ‘Double Two-Thirds Principles’ in Decision-Making for Elevators Addition to Existing Condominium

Authors: Haifeng Shi, Kun Song, Yili Zhao

Abstract:

In China, against the background of the extensive promotion of the ‘aging in place’ pension policy, most elders choose to remain in their current homes and communities, finding out of preference or necessity that they need to remodel their homes to fit their changing needs. The generation of elders born in the 1960s and 1970s almost all live in the same form of housing: condominiums built from 1982 to 2012. Based on a survey of existing multi-family housing, especially in Tianjin, it is found that the current ‘double two-thirds’ principle has become the threshold for modifying existing housing, particularly in projects adding elevators to existing condominiums (buildings of six floors or fewer constructed from 1982 to 2016 without elevators, in accordance with the previous building code). First, this article reviews local elevator-addition policies nationwide, most of which affirm the importance and necessity of community-based self-organization in carrying out elevator additions. Second, by comparing the three existing community management systems (owners’ congress, property management system, and community committee) in concrete instances, we find that the community-based ‘two-thirds’ principle is not conducive to implementing renovations of multi-owned property in the community or common accessibility modifications in a building. We then analyze property law and other community-management-related laws, pointing out the shortcomings of the existing community-based ‘two-thirds’ decision-making norms. The analysis shows that a unit-based method with a ‘100% principle’ is more capable of delivering common accessibility in Chinese condominiums: differing from existing laws, the unit-based principle is effective for the decision-making process, and the ‘100% principle’ protects the householders whose interests are most closely affected by modification in a multi-owned area.
These three aspects of the analysis suggest that establishing a unit-based self-organization mechanism is the preferred, and indeed inevitable, method of solving the problem of adding elevators to existing condominiums in China.
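The ‘double two-thirds’ consent rule under discussion can be sketched as a simple predicate: a renovation passes only when the consenting owners hold at least two-thirds of the exclusive floor area and also number at least two-thirds of all owners. The figures below are illustrative only:

```python
# Minimal sketch of the 'double two-thirds' consent rule discussed above:
# a renovation passes only if consenting owners hold at least two-thirds of
# the floor area AND make up at least two-thirds of the owner count.
def double_two_thirds(consenting_area, total_area,
                      consenting_owners, total_owners):
    return (consenting_area * 3 >= total_area * 2
            and consenting_owners * 3 >= total_owners * 2)

# Example: 7 of 12 owners holding 70% of the area consent -> fails on count.
passed = double_two_thirds(700, 1000, 7, 12)
```

The dual threshold is exactly why a minority of householders can block an elevator addition, which motivates the unit-based alternative argued for above.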

Keywords: aging in place, condominium, modification, multi-ownership

Procedia PDF Downloads 135
25563 Analysis and Modeling of Graphene-Based Percolative Strain Sensor

Authors: Heming Yao

Abstract:

Graphene-based percolative strain gauges could find applications in touch panels, artificial skins, and human motion detection because of their advantages over conventional strain gauges, such as flexibility and transparency. These gauges rely on a novel sensing mechanism that depends on strain-induced morphology changes: once a compressive or tensile strain is applied, the overlap area between neighboring flakes becomes smaller or larger, which is reflected in a considerable change of resistance. A tiny change in strain can thus act as a lever that tremendously increases the resistance of the sensor, equipping graphene-based percolative strain gauges with a high gauge factor. Despite ongoing research into the underlying sensing mechanism and the limits of sensitivity, no adequate understanding has been obtained of which intrinsic factors play the key role in setting the gauge factor, nor of how the sensitivity can be enhanced; such an understanding would be highly valuable and would provide guidelines for designing novel, easily produced strain sensors with a high gauge factor. We simulated the straining process by modeling graphene flakes and their percolative networks. We constructed a 3D resistance network by simulating the overlapping of graphene flakes and interconnecting the very large number of resistance elements obtained by discretizing each flake. As the strain increases, the overlapping flakes are displaced on the stretched simulated film, and a new resistance network with a smaller flake number density is formed. By solving the resistance network, we obtain the resistance of the simulated film under different strains.
Furthermore, by simulating the effect of the variable parameters, such as out-of-plane resistance, in-plane resistance, and flake size, we obtained the trend of the gauge factor with each of them. Comparison with experimental data verified the feasibility of our model and analysis. Increasing the out-of-plane resistance of the graphene flakes and the initial resistance of the flake-network sensor both improve the gauge factor, while smaller graphene flakes give a greater gauge factor. This work can serve as a guideline for improving the sensitivity and applicability of graphene-based strain sensors, and it also provides a method for finding the limits of the gauge factor of strain sensors based on graphene flakes. Moreover, our method can easily be transferred to predict the gauge factor of strain sensors based on other nano-structured transparent conductors, such as nanowires and carbon nanotubes, or on their hybrids with graphene flakes.
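The headline sensitivity metric can be written down directly. Gauge factor is the relative resistance change per unit strain, GF = (ΔR/R₀)/ε; the numbers below are illustrative, not simulation outputs from the paper:

```python
# Minimal sketch (illustrative numbers): gauge factor of a percolative film,
# GF = (dR / R0) / strain, from resistances before and after stretching.
def gauge_factor(r0, r_strained, strain):
    return (r_strained - r0) / r0 / strain

# e.g. a film whose resistance rises from 1.0 kOhm to 1.5 kOhm at 1% strain
gf = gauge_factor(1000.0, 1500.0, 0.01)
```

A conventional metal-foil gauge sits around GF ≈ 2, so the large strain-induced resistance changes of percolative flake networks translate into much higher values of this ratio.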

Keywords: graphene, gauge factor, percolative transport, strain sensor

Procedia PDF Downloads 401
25562 Molecular Communication Noise Effect Analysis of Diffusion-Based Channel for Considering Minimum-Shift Keying and Molecular Shift Keying Modulations

Authors: A. Azari, S. S. K. Seyyedi

Abstract:

One of the open and unaddressed challenges in nano-networking is the characterization of noise. Previous analyses, however, have concentrated on an end-to-end communication model with no separate modeling of the propagation channel and the noise. With separate signal propagation and noise models, the design and implementation of an optimum receiver become much easier. In this paper, we justify a separate additive Gaussian noise model for a nano-communication system based on the molecular communication channel, applicable to MSK and MOSK modulation schemes. The presented noise analysis is based on the Brownian motion process and advection molecular statistics, where the received random signal has a probability density function whose mean equals the mean number of received molecules. Finally, we provide the justification for treating the received signal magnitude as uncorrelated with the additive non-stationary white noise.
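The Brownian-motion basis of the model can be illustrated with a small simulation. The sketch below counts molecules, released at the origin, that cross a receiver threshold within a time window; all parameters are assumed for illustration and are not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal sketch (assumed parameters): 1-D Brownian motion of molecules
# released at the origin; count those reaching a receiver at x = 0.5
# within the simulated window. Each step adds sqrt(2*D*dt) Gaussian noise.
n_mol, steps, dt, D, x_rx = 2000, 200, 0.01, 1.0, 0.5
pos = np.zeros(n_mol)
arrived = np.zeros(n_mol, dtype=bool)
for _ in range(steps):
    pos += np.sqrt(2 * D * dt) * rng.standard_normal(n_mol)
    arrived |= pos >= x_rx                  # latch first-passage events
count = int(arrived.sum())                  # received-molecule count
```

Repeating such runs yields the statistics of the received count, whose mean is exactly the quantity the noise model above is built around.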

Keywords: molecular, noise, diffusion, channel

Procedia PDF Downloads 259
25561 Analyzing Test Data Generation Techniques Using Evolutionary Algorithms

Authors: Arslan Ellahi, Syed Amjad Hussain

Abstract:

Software testing is a vital process in the software development life cycle; software quality is assured by passing the software through the testing phase. We survey automatic test data generation techniques, a key research area of software testing, which aim at test automation that can ultimately decrease testing time. In this paper, we review approaches presented in the literature that use evolutionary search-based algorithms, such as the Genetic Algorithm and Particle Swarm Optimization (PSO), to drive the test data generation process. We also examine the quality of the generated test data, which increases or decreases the efficiency of testing. We propose test data generation techniques for model-based testing and have worked on the tuning and fitness function of the PSO algorithm.
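As one concrete instance of the search-based idea, a minimal PSO loop can drive a test input toward covering a branch by minimizing a branch-distance fitness. The target branch and all PSO parameters below are toy assumptions, not the authors' configuration:

```python
import random

random.seed(0)

# Minimal PSO sketch (toy fitness, not the authors' setup): search for a
# test input x that covers the branch `if x == 37`, using the branch
# distance |x - 37| as the fitness to minimize.
def fitness(x):
    return abs(x - 37.0)

n, w, c1, c2 = 20, 0.7, 1.5, 1.5            # swarm size, inertia, accel.
xs = [random.uniform(-100, 100) for _ in range(n)]
vs = [0.0] * n
pbest = xs[:]                                # personal bests
gbest = min(xs, key=fitness)                 # global best

for _ in range(200):
    for i in range(n):
        vs[i] = (w * vs[i]
                 + c1 * random.random() * (pbest[i] - xs[i])
                 + c2 * random.random() * (gbest - xs[i]))
        xs[i] += vs[i]
        if fitness(xs[i]) < fitness(pbest[i]):
            pbest[i] = xs[i]
    gbest = min(pbest, key=fitness)
```

When the loop ends, `gbest` is a test input at (or very near) the branch condition, i.e., test data that exercises the target branch.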

Keywords: search based, evolutionary algorithm, particle swarm optimization, genetic algorithm, test data generation

Procedia PDF Downloads 166
25560 A Multi-Output Network with U-Net Enhanced Class Activation Map and Robust Classification Performance for Medical Imaging Analysis

Authors: Jaiden Xuan Schraut, Leon Liu, Yiqiao Yin

Abstract:

Computer vision in medical diagnosis has achieved a high level of success in diagnosing diseases with high accuracy. However, conventional classifiers that produce an image-to-label result provide insufficient information for medical professionals to judge, raising concerns over the trust and reliability of a model whose results cannot be explained. To gain local insight into cancerous regions, separate tasks such as image segmentation need to be implemented to aid doctors in treating patients, which doubles the training time and cost and renders the diagnosis system inefficient and difficult for the public to accept. To tackle this issue and drive AI-first medical solutions further, this paper proposes a multi-output network that follows a U-Net architecture for the image segmentation output and features an additional convolutional neural network (CNN) module for an auxiliary classification output. Class activation maps (CAMs) provide insight into the feature maps that lead to a convolutional neural network’s classification; for lung diseases, the region of interest is enhanced by U-Net-assisted CAM visualization. Our proposed model therefore combines a segmentation model and a classifier to crop out only the lung region of a chest X-ray’s class activation map, providing a visualization that improves explainability while generating classification results simultaneously, which builds trust in AI-led diagnosis systems. The proposed U-Net model achieves 97.61% accuracy and a Dice coefficient of 0.97 on test data from the COVID-QU-Ex dataset, which includes both diseased and healthy lungs.

Keywords: multi-output network model, U-net, class activation map, image classification, medical imaging analysis

Procedia PDF Downloads 175
25559 Drying Shrinkage of Magnesium Silicate Hydrate Gel Cements

Authors: T. Zhang, X. Liang, M. Lorin, C. Cheeseman, L. J. Vandeperre

Abstract:

Cracks were observed when a magnesium silicate hydrate gel cement (prepared from 40% MgO and 60% silica fume) was dried. This drying cracking is believed to occur when unbound water evaporates from the binder. The shrinkage upon forced drying to 200 °C of mortars made from reactive magnesium oxide, silica fume, and sand was measured by dilatometry. The magnitude of the drying shrinkage was found to decrease as more sand or less water was added to the mortars, and it can be as low as 0.16% for a mortar containing 60 wt% sand at a water-to-cement ratio of 0.5, which is of a similar order of magnitude to that observed in Portland cement based mortars and concretes. A simple geometrical interpretation based on the packing of the particles in the mortar can explain the observed drying shrinkages; based on this analysis, the drying shrinkage of the hydration products at zero added solid is estimated to be 7.3% after 7 days of curing.
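Dilatometric drying shrinkage is simply the relative length change of the specimen. A minimal sketch, with illustrative specimen lengths rather than the paper's measurements:

```python
# Minimal sketch: linear drying shrinkage as measured by dilatometry,
# 100 * (L0 - L) / L0, with illustrative lengths (not the paper's data).
def shrinkage_percent(initial_length, dried_length):
    return 100.0 * (initial_length - dried_length) / initial_length

# e.g. a 50.00 mm mortar bar drying to 49.92 mm -> 0.16% linear shrinkage,
# the figure quoted above for the 60 wt% sand, w/c = 0.5 mortar
s = shrinkage_percent(50.00, 49.92)
```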

Keywords: magnesium silicate hydrate, shrinkage, dilatometry, gel cements

Procedia PDF Downloads 288
25558 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical passwords have existed for decades. Their major advantage is that they are easier to remember than alphanumeric passwords. Their disadvantage (especially for recognition-based passwords) is a smaller password space, which makes them more vulnerable to brute-force attacks; graphical passwords are also highly susceptible to shoulder surfing. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability, with significant results. We developed a gesture-based password application for data collection with two modes: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by another user for a fixed duration. Three durations, 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic a shoulder-surfing attack; after the timer expired, the password image was removed, and users were asked to replicate the password. A total of 74, 57, 50, and 44 users participated in Sessions 1 through 4, respectively. Machine learning algorithms were then applied to determine, from an entered password, whether the person is the genuine user or an imposter. Five algorithms were deployed to compare authentication performance: Decision Trees, Linear Discriminant Analysis, the Naive Bayes Classifier, Support Vector Machines (SVMs) with a Gaussian radial basis kernel, and K-Nearest Neighbors. Gesture-based password features vary from one entry to the next, making it difficult to distinguish a creator from an intruder.
For each password entered, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three classifiers were trained using data from the four sessions: Classifiers A, B, and C were trained and tested using data from the password creation session together with password replication at timers of 5, 10, and 15 seconds, respectively. The classification accuracies for Classifier A with the five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%; for Classifier B, 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%; and for Classifier C, 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%, respectively. SVMs with a Gaussian radial basis kernel outperform the other ML algorithms for gesture-based password authentication. The results confirm that the shorter the duration of the shoulder-surfing exposure, the higher the authentication accuracy. In conclusion, behavioral features extracted from gesture-based passwords lead to less vulnerable user authentication.
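The normalize-then-classify pipeline can be sketched with synthetic features. The sketch below min-max normalizes four features shaped like those above (score, length, speed, size) and authenticates with K-Nearest Neighbors, one of the five compared algorithms; all data and cluster locations are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data, NOT the study's: 50 genuine and 50 imposter entries with
# four features (score, length, speed, size). Cluster centers are invented.
genuine = rng.normal(loc=[0.8, 120, 1.0, 300],
                     scale=[0.05, 10, 0.1, 20], size=(50, 4))
imposter = rng.normal(loc=[0.5, 90, 1.6, 220],
                      scale=[0.05, 10, 0.1, 20], size=(50, 4))
X = np.vstack([genuine, imposter])
y = np.array([1] * 50 + [0] * 50)           # 1 = genuine, 0 = imposter

lo, hi = X.min(axis=0), X.max(axis=0)
Xn = (X - lo) / (hi - lo)                   # min-max normalization

def knn_predict(sample, k=5):
    # Majority vote among the k nearest normalized training entries.
    d = np.linalg.norm(Xn - sample, axis=1)
    return int(y[np.argsort(d)[:k]].sum() > k // 2)

# Classify a fresh genuine-looking attempt.
attempt = (np.array([0.82, 118, 0.95, 305]) - lo) / (hi - lo)
label = knn_predict(attempt)
```

Swapping `knn_predict` for an RBF-kernel SVM, as the study's best performer, follows the same normalize-then-classify structure.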

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 85