Search results for: structural and statistical pattern recognition
8409 Parallel PRBS Generation and Parallel BER Tester for 8-Gbps On-chip Interconnection Testing
Authors: Zhao Bin, Yan Dan Lei
Abstract:
In this paper, a multi-pattern parallel PRBS generator and a dedicated parallel BER tester are proposed for 8-Gbps on-chip interconnection testing. A unique fully parallel PRBS checker is also proposed. The proposed design, together with the custom-designed high-speed parallel-to-serial and serial-to-parallel circuits, will be used to test different on-chip interconnection transceivers. The design is implemented in TSMC 28nm CMOS technology with a working voltage of 1.0 V. The serial-to-parallel ratio is 8:1, so the parallel PRBS generation and BER testing can run at a lower speed. Keywords: PRBS, BER, high speed, generator
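The parallel-generation idea can be illustrated with a minimal sketch (not the authors' circuit): a serial PRBS7 linear-feedback shift register (standard polynomial x^7 + x^6 + 1) whose output bits are grouped into 8-bit words, so that downstream checker logic only needs to run at one eighth of the line rate. The function names below are illustrative.

```python
def make_prbs7():
    """Serial PRBS7 generator: 7-bit LFSR with polynomial x^7 + x^6 + 1."""
    state = 0x7F  # any non-zero seed works

    def next_bit():
        nonlocal state
        # Feedback is the XOR of stages 7 and 6 (bits 6 and 5 of the state).
        new = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | new) & 0x7F
        return new

    return next_bit


def parallel_words(n_words, width=8):
    """Group the serial bitstream into `width`-bit words, as an 8:1
    serializer front-end would present them to a parallel checker."""
    bit = make_prbs7()
    return [[bit() for _ in range(width)] for _ in range(n_words)]
```

A maximal-length PRBS7 sequence repeats every 2^7 - 1 = 127 bits, which is a convenient self-check for both the generator and a checker built against it.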
Procedia PDF Downloads 772
8408 Role of Vitamin-D in Reducing Need for Supplemental Oxygen Among COVID-19 Patients
Authors: Anita Bajpai, Sarah Duan, Ashlee Erskine, Shehzein Khan, Raymond Kramer
Abstract:
Introduction: This research focuses on exploring the beneficial effects, if any, of Vitamin-D in reducing the need for supplemental oxygen among hospitalized COVID-19 patients. Two questions are investigated: Q1) does having a healthy level of baseline Vitamin-D 25-OH (≥ 30 ng/ml) help, and Q2) does administering Vitamin-D therapy after the fact, during inpatient hospitalization, help? Methods/Study Design: This is a comprehensive, retrospective, observational study of all inpatients at RUHS from March through December 2020 who tested positive for COVID-19 based on real-time reverse transcriptase–polymerase chain reaction assay of nasal and pharyngeal swabs and rapid assay antigen test. To address Q1, we looked at all N1=182 patients whose baseline plasma Vitamin-D 25-OH was known and who needed supplemental oxygen. Of these, 121 patients had a healthy Vitamin-D level of ≥ 30 ng/ml while the remaining 61 patients had a low or borderline (≤ 29.9 ng/ml) level. Similarly, for Q2, we looked at a total of N2=893 patients who were given supplemental oxygen, of which 713 were not given Vitamin-D and 180 were given Vitamin-D therapy. The numerical value of the maximum oxygen flow rate (dependent variable) administered was recorded for each patient. The mean values and associated standard deviations for each group were calculated. These two sets of independent data served as the basis for independent, two-sample t-Test statistical analysis. To be accommodative of any reasonable benefit of Vitamin-D, a p-value of 0.10 (α < 10%) was set as the cutoff point for statistical significance. Results: Given the large sample sizes, the calculated statistical power for both our studies exceeded the customary norm of 80% or better (β < 0.2).
For Q1, the mean value for maximum oxygen flow rate for the group with a healthy baseline level of Vitamin-D was 8.6 L/min vs. 12.6 L/min for those with low or borderline levels, yielding a p-value of 0.07 (p < 0.10), with the conclusion that those with a healthy level of baseline Vitamin-D needed statistically significantly lower levels of supplemental oxygen. For Q2, the mean value for maximum oxygen flow rate for those not administered Vitamin-D was 12.5 L/min vs. 12.8 L/min for those given Vitamin-D, yielding a p-value of 0.87 (p > 0.10). We therefore concluded that there was no statistically significant difference in the use of oxygen therapy between those who were or were not administered Vitamin-D after the fact in the hospital. Discussion/Conclusion: We found that patients who had healthy levels of Vitamin-D at baseline needed statistically significantly lower levels of supplemental oxygen. Vitamin-D is well documented, including in a recent article in the Lancet, for its anti-inflammatory role as an adjuvant in the regulation of cytokines and immune cells. Interestingly, we found no statistically significant advantage for giving Vitamin-D to hospitalized patients. It may be a case of “too little too late”. A randomized clinical trial reported in JAMA also did not find any reduction in the hospital stay of patients given Vitamin-D. Such conclusions come with the caveat that any delayed marginal benefits may not have materialized promptly in the presence of a significant inflammatory condition. Since Vitamin-D is a low-cost, low-risk option, it may still be useful on an inpatient basis until more definitive findings are established. Keywords: COVID-19, vitamin-D, supplemental oxygen, vitamin-D in primary care
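The comparison of group means described above can be sketched with a pooled two-sample t statistic (a generic illustration, not the authors' actual computation; in practice the p-value would then come from the t distribution with n1 + n2 - 2 degrees of freedom, e.g. via scipy.stats.ttest_ind):

```python
from math import sqrt
from statistics import mean, variance


def two_sample_t(a, b):
    """Pooled-variance t statistic for two independent samples a and b."""
    na, nb = len(a), len(b)
    # Pooled sample variance, weighting each group's variance by its
    # degrees of freedom.
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))
```

A negative t here would indicate, as in Q1, that the first group's mean oxygen flow rate is lower than the second's.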
Procedia PDF Downloads 157
8407 Practice Educators' Perspective: Placement Challenges in Social Work Education in England
Authors: Yuet Wah Echo Yeung
Abstract:
Practice learning is an important component of social work education. Practice educators are charged with the responsibility to support and enable learning while students are on placement. They also play a key role in teaching students to integrate theory and practice, as well as assessing their performance. Current literature highlights the structural factors that make it difficult for practice educators to create a positive learning environment for students. Practice educators find it difficult to give sufficient attention to their students because of the lack of workload relief, the increasing emphasis on managerialism and bureaucratisation, and a range of competing organisational and professional demands. This paper reports the challenges practice educators face and how they manage these challenges in this context. Semi-structured face-to-face interviews were conducted with thirteen practice educators who support students in statutory and voluntary social care settings in the Northwest of England. Interviews were conducted between April and July 2017 and each interview lasted about 40 minutes. All interviews were recorded and transcribed. All practice educators are experienced social work practitioners with practice experience ranging from 6 to 42 years. On average they have acted as practice educators for 13 years and all together have supported 386 students. Our findings reveal that apart from the structural factors that impact how practice educators perform their roles, they also faced other challenges when supporting students on placement. They include difficulty in engaging resistant students, complexity in managing power dynamics in the context of practice learning, and managing the dilemmas of fostering a positive relationship with students whilst giving critical feedback. 
Suggestions to enhance the practice educators’ role include support from organisations and social work teams, effective communication with university tutors, and a forum for practice educators to share good practice and discuss placement issues. Keywords: social work education, placement challenges, practice educator, practice learning
Procedia PDF Downloads 195
8406 Control of Belts for Classification of Geometric Figures by Artificial Vision
Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez
Abstract:
The process of generating computer vision is called artificial vision. Artificial vision is a branch of artificial intelligence that allows the obtaining, processing, and analysis of any type of information, especially information obtained through digital images. Artificial vision is currently used in manufacturing for quality control and production, as these processes can be realized through algorithms for counting, positioning, and recognition of objects, measured by a single camera (or more). On the other hand, companies use assembly lines formed by conveyor systems with actuators on them for moving pieces from one location to another in their production. These devices must be programmed beforehand for good performance and must have a programmed logic routine. Nowadays production is the main target of every industry, together with quality and the fast elaboration of the different stages and processes in the chain of production of any product or service being offered. The principal aim of this project is to program a computer that recognizes geometric figures (circle, square, and triangle) through a camera, each figure with a different color, and link it with a group of conveyor systems to sort the mentioned figures into cubicles, which also differ from one another by having different colors. This project is based on artificial vision, therefore the methodology needed to develop it must be strict; it is detailed below: 1. Methodology: 1.1 The software used in this project is Qt Creator, which is linked with OpenCV libraries. Together, these tools are used to build the program that identifies colors and forms directly from the camera. 1.2 Imagery acquisition: To start using the OpenCV libraries it is necessary to acquire images, which can be captured by a computer’s web camera or a different specialized camera.
1.3 The recognition of RGB colors is realized in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. 1.4 To detect forms it is necessary to segment the images: the first step is converting the image from RGB to grayscale, to work with the dark tones of the image; then the image is binarized, which means rendering the figure in the image in white on a black background. Finally, we find the contours of the figure in the image and count its edges to identify which figure it is. 1.5 After the color and figure have been identified, the program links with the conveyor systems, which through the actuators will sort the figures into their respective cubicles. Conclusions: The OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera captures external characteristics for further processing. With the program developed for this project, any type of assembly line can be optimized, because images from the environment can be obtained and the process becomes more accurate. Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB
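Steps 1.3–1.4 above can be sketched in a few lines. This is a NumPy illustration of the grayscale conversion and binarization only (the project itself uses OpenCV, e.g. cv2.cvtColor, cv2.threshold, and cv2.findContours; the function names here are illustrative):

```python
import numpy as np


def to_grayscale(rgb):
    """Luma-weighted RGB -> grayscale, mirroring cv2.cvtColor(..., COLOR_RGB2GRAY)."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb @ weights).astype(np.uint8)


def binarize(gray, thresh=128):
    """Render the figure white (255) on a black (0) background,
    like cv2.threshold with THRESH_BINARY."""
    return np.where(gray > thresh, 255, 0).astype(np.uint8)
```

On the binarized image, contour extraction and edge counting (step 1.4's final part) would then distinguish a triangle (3 edges) from a square (4) or a circle (many).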
Procedia PDF Downloads 382
8405 The Role of Context in Interpreting Emotional Body Language in Robots
Authors: Jekaterina Novikova, Leon Watts
Abstract:
In the emerging world of human-robot interaction, people and robots will interact socially in real-world situations. This paper presents the results of an experimental study probing the interaction between situational context and emotional body language in robots. 34 people rated video clips of robots performing expressive behaviours in different situational contexts, both for emotional expressivity on the Valence-Arousal-Dominance dimensions and by selecting a specific emotional term from a list of suggestions. Results showed that contextual information enhanced recognition of a robot's emotional body language, although it did not override the emotional signals provided by robot expressions. Results are discussed in terms of design guidelines on how a robot's emotional body language can be used by roboticists developing social robots. Keywords: social robotics, non-verbal communication, situational context, artificial emotions, body language
Procedia PDF Downloads 290
8404 Quantification of the Non-Registered Electrical and Electronic Equipment for Domestic Consumption and Enhancing E-Waste Estimation: A Case Study on TVs in Vietnam
Authors: Ha Phuong Tran, Feng Wang, Jo Dewulf, Hai Trung Huynh, Thomas Schaubroeck
Abstract:
Fast growth and complex composition have made waste electrical and electronic equipment (e-waste) one of the most problematic waste streams worldwide. Precise information on its size at the national, regional and global level has therefore been highlighted as a prerequisite for a proper management system. However, this is a very challenging task, especially in developing countries where both a formal e-waste management system and the statistical data necessary for e-waste estimation, i.e. data on the production, sale and trade of electrical and electronic equipment (EEE), are often lacking. Moreover, there is an inflow of non-registered electronic and electric equipment, which ‘invisibly’ enters the domestic EEE market and is then used for domestic consumption. The non-registration/invisibility and (in most cases) illicit nature of this flow make it difficult or even impossible to capture in any statistical system. The e-waste generated from it is thus often uncounted in current e-waste estimations based on statistical market data. Therefore, this study focuses on enhancing e-waste estimation in developing countries and proposing a calculation pathway to quantify the magnitude of the non-registered EEE inflow. An advanced Input-Output Analysis model (the Sale-Stock-Lifespan model) has been integrated into the calculation procedure. In general, the Sale-Stock-Lifespan model helps improve the quality of the input data for modeling (i.e. it performs data consolidation to create a more accurate lifespan profile, and models a dynamic lifespan to take into account its changes over time), through which the quality of the e-waste estimation can be improved. To demonstrate the above objectives, a case study on televisions (TVs) in Vietnam has been employed. The results show that the amount of waste TVs in Vietnam has increased fourfold since 2000. This upward trend is expected to continue in the future. In 2035, a total of 9.51 million TVs are predicted to be discarded.
Moreover, estimation of the non-registered TV inflow shows that it might on average have contributed about 15% of the total TVs sold on the Vietnamese market during the period 2002 to 2013. To address potential uncertainties associated with the estimation models and input data, sensitivity analysis has been applied. The results show that both the waste and non-registered inflow estimations depend on two parameters, i.e. the number of TVs used per household and the lifespan. In particular, with a 1% increase in the TV in-use rate, the average market share of the non-registered inflow in the period 2002-2013 increases by 0.95%. However, it decreases from 27% to 15% when the constant unadjusted lifespan is replaced by the dynamic adjusted lifespan. The effect of these two parameters on the amount of waste TV generation for each year is more complex and non-linear over time. To conclude, despite the remaining uncertainty, this study is the first attempt to apply the Sale-Stock-Lifespan model to improve e-waste estimation in developing countries and to quantify the non-registered EEE inflow into domestic consumption. It can therefore be further improved in the future with more knowledge and data. Keywords: e-waste, non-registered electrical and electronic equipment, TVs, Vietnam
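The core bookkeeping of a sale-stock-lifespan estimate can be sketched as a convolution of past sales with a lifespan distribution (a generic illustration of this model family; the study's actual model additionally consolidates input data and uses a dynamic lifespan):

```python
def waste_generated(sales, lifespan_pmf, year):
    """E-waste arising in `year`: units sold in year y reach end-of-life
    after (year - y) years, weighted by the lifespan probability mass
    function, and are summed over all past sales years."""
    return sum(units * lifespan_pmf.get(year - y, 0.0)
               for y, units in sales.items())
```

Replacing `lifespan_pmf` with a year-dependent distribution is what "dynamic lifespan" would mean in this sketch.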
Procedia PDF Downloads 249
8403 Influence of Organizational Culture on Frequency of Disputes in Commercial Projects in Egypt: A Contractor’s Perspective
Authors: Omneya N. Mekhaimer, Elkhayam M. Dorra, A. Samer Ezeldin
Abstract:
Over recent decades, studies on organizational culture have gained global attention in the business management literature, where it has been established that the cultural factors embedded in an organization have an implicit yet significant influence on the organization’s success. Unlike many other industries, the construction industry is widely known to operate in a dynamic and adversarial environment; given the unique characteristics it denotes, the level of disputes in the construction industry has grown throughout the years. In the late 1990s, the International Council for Research and Innovation in Building and Construction (CIB) created a Task Group (TG-23), which later evolved in 2006 into Working Commission W112, with a strategic objective to promote research investigating the role and impact of culture in the construction industry worldwide. To that end, this paper aims to study the influence of the organizational culture of the contractor’s organization on the frequency of disputes between the owner and the contractor in commercial projects based in Egypt. This objective is achieved by using a quantitative approach through a survey questionnaire to explore the dominant cultural attributes that exist in the contractor’s organization, based on the Competing Values Framework (CVF) theory, which classifies organizational culture into four main cultural types: (1) clan, (2) adhocracy, (3) market, and (4) hierarchy. Accordingly, the collected data are statistically analyzed using the Statistical Package for the Social Sciences (SPSS 28) software, whereby a correlation analysis using the Pearson correlation is carried out to assess the relationship between these variables and their statistical significance using the p-value.
The results show that there is an influence of organizational culture attributes on the frequency of disputes, whereby market culture is identified as the most dominant organizational culture currently practiced in the contractor’s organization, which consequently contributes to increasing the frequency of disputes in commercial projects. These findings suggest that alternative management practices should be adopted rather than the existing ones, with the aim of minimizing dispute occurrence. Keywords: construction projects, correlation analysis, disputes, Egypt, organizational culture
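The Pearson correlation used in the analysis above can be sketched in plain Python (an illustration of the statistic itself, not of the SPSS workflow):

```python
from math import sqrt


def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Covariance numerator and the two standard-deviation factors.
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Values near +1 or -1 indicate a strong linear relationship; significance testing (the p-value mentioned above) additionally requires the sample size.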
Procedia PDF Downloads 113
8402 Application of Directed Acyclic Graphs for Threat Identification Based on Ontologies
Authors: Arun Prabhakar
Abstract:
Threat modeling is an important activity carried out in the initial stages of the development lifecycle that helps in building proactive security measures into the product. Though there are many techniques and tools available today, one of the common challenges with the traditional methods is the lack of a systematic approach to identifying security threats. The proposed solution describes an organized model, defined through ontologies, that helps in building patterns to enumerate threats. The concepts of graph theory are applied to build the pattern for discovering threats for any given scenario. This graph-based solution also brings other benefits, making it a customizable and scalable model. Keywords: directed acyclic graph, ontology, patterns, threat identification, threat modeling
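One way such a graph-based enumeration can look in practice (a minimal sketch; the node names and the adjacency-dict representation are illustrative, not the paper's ontology):

```python
def reachable_threats(dag, start, threats):
    """Walk a directed acyclic graph from a scenario node and collect
    every threat node reachable from it (iterative depth-first search)."""
    seen, stack, found = set(), [start], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if node in threats:
            found.add(node)
        stack.extend(dag.get(node, []))
    return found
```

Because the graph is acyclic, the `seen` set is only an efficiency guard; every scenario node deterministically yields the set of threat patterns it can lead to.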
Procedia PDF Downloads 143
8401 Vibration-Based Structural Health Monitoring of a 21-Story Building with Tuned Mass Damper in Seismic Zone
Authors: David Ugalde, Arturo Castillo, Leopoldo Breschi
Abstract:
Tuned Mass Dampers (TMDs) are an effective system for mitigating vibrations in building structures, traditionally used to protect high-rise buildings against earthquakes and wind loads. The Cámara Chilena de la Construcción (CChC) building, built in 2018 in Santiago, Chile, is a 21-story RC wall building equipped with a 150-ton TMD and instrumented with six permanent accelerometers, offering an opportunity to monitor the dynamic response of this damped structure. This paper presents the system identification of the CChC building using power spectral density plots of ambient vibration and two seismic events (5.5 Mw and 6.7 Mw). Linear models of the building with and without the TMD are used to compute the theoretical natural periods through modal analysis and to simulate the response of the building through response history analysis. Results show that the natural periods obtained from both ambient vibrations and earthquake records are quite similar to the theoretical periods given by the modal analysis of the building model. Some of the experimental periods are noticeable by simple inspection of the earthquake records. The accelerometers in the first story better captured the modes related to the building podium, while the upper accelerometers clearly captured the modes related to the tower. The earthquake simulation showed smaller accelerations in the model with the TMD, similar to those measured by the accelerometers. It is concluded that system identification through power spectral density is consistent with the expected dynamic properties. The structural health monitoring of the CChC building confirms the advantages of seismic protection technologies such as TMDs in seismically prone areas. Keywords: system identification, tuned mass damper, wall buildings, seismic protection
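The identification step, picking natural periods off power spectral density plots, can be sketched with a simple periodogram (a generic NumPy illustration, not the authors' processing chain):

```python
import numpy as np


def dominant_period(signal, dt):
    """Return the period (in seconds) at the peak of the one-sided
    periodogram of a uniformly sampled signal with sample spacing dt."""
    n = len(signal)
    # Detrend by removing the mean, then take the squared FFT magnitude.
    spec = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    freqs = np.fft.rfftfreq(n, d=dt)
    peak = freqs[np.argmax(spec[1:]) + 1]  # skip the DC bin
    return 1.0 / peak
```

Applied to an accelerometer record, the peaks of such a spectrum approximate the structure's natural periods, which is the comparison the abstract makes against the modal-analysis values.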
Procedia PDF Downloads 131
8400 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam
Authors: Sahand Golmohammadi, Sana Hosseini Shirazi
Abstract:
Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, various rock engineering classification methods are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and determination of support systems for underground structures in rock, including tunnels. In this method, six main parameters of the rock mass are required, namely the rock quality designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water parameter (Jw) and stress reduction factor (SRF). In this regard, in order to achieve a reasonable and optimal design, identifying the parameters that are effective for the stability of the mentioned structures is one of the most important goals and most necessary actions in rock engineering. It is therefore necessary to study the relationships between the parameters of a system, how they interact with each other and, ultimately, the whole system. In this research, an attempt has been made to determine the most effective parameters (key parameters) among the six rock mass parameters of the Q-system using the rock engineering system (RES) method, in order to improve the relationships between the parameters in the calculation of the Q value. The RES is, in fact, a method by which one can determine the degree of cause and effect of a system's parameters by constructing an interaction matrix. In this research, the geomechanical data collected from the water conveyor tunnel of Azad Dam were used to construct the interaction matrix of the Q-system. For this purpose, instead of using the conventional coding methods, which are always accompanied by defects such as uncertainty, the Q-system interaction matrix is coded using a technique that is in fact a statistical analysis of the data, determining the correlation coefficients between them.
In this way, the effect of each parameter on the system is evaluated with greater certainty. The results of this study show that the formed interaction matrix provides a reasonable estimate of the effective parameters in the Q-system. Among the six parameters of the Q-system, the SRF and Jr parameters have the maximum and minimum influence on the system, respectively, while the RQD and Jw parameters are the most and least influenced by the system, respectively. Therefore, by developing this method, we can obtain a more accurate relation for rock mass classification by weighting the required parameters in the Q-system. Keywords: Q-system, rock engineering system, statistical analysis, rock mass, tunnel
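For reference, the six parameters enter Barton's Q value as three quotients (relative block size, inter-block shear strength, and active stress), the standard formula that a weighting scheme such as the one proposed would modify:

```python
def q_value(rqd, jn, jr, ja, jw, srf):
    """Barton's rock mass quality: Q = (RQD/Jn) * (Jr/Ja) * (Jw/SRF)."""
    return (rqd / jn) * (jr / ja) * (jw / srf)
```

For example, RQD = 75, Jn = 15, Jr = 2, Ja = 2, Jw = 1, SRF = 2 gives Q = 2.5, in the "poor" rock class of the standard Q chart.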
Procedia PDF Downloads 76
8399 In Silico Modeling of Drugs Milk/Plasma Ratio in Human Breast Milk Using Structures Descriptors
Authors: Navid Kaboudi, Ali Shayanfar
Abstract:
Introduction: Feeding infants with safe milk from the beginning of their lives is an important issue. Drugs used by mothers can affect the composition of milk in a way that is not only unsuitable but also toxic for infants. Consumption of permeable drugs during that sensitive period by the mother could lead to serious side effects in the infant. Due to the ethical restrictions on drug testing in humans, especially in women during their lactation period, computational approaches based on structural parameters could be useful. The aim of this study is to develop mechanistic models to predict the M/P ratio of drugs during the breastfeeding period based on their structural descriptors. Methods: Two hundred and nine different chemicals with their M/P ratios were used in this study. All drugs were categorized into two groups based on their M/P value, following the Malone classification: 1) drugs with M/P > 1, which are considered high risk; 2) drugs with M/P ≤ 1, which are considered low risk. Thirty-eight chemical descriptors were calculated by ACD/Labs 6.00 and DataWarrior software in order to assess penetration during the breastfeeding period. Later on, four specific models based on the number of hydrogen bond acceptors, polar surface area, total surface area, and number of acidic oxygens were established for the prediction. The mentioned descriptors can predict the penetration with acceptable accuracy. For the remaining compounds (N = 147, 158, 160, and 174 for models 1 to 4, respectively) of each model, binary regression with SPSS 21 was done in order to obtain a model to predict the penetration ratio of compounds. Only structural descriptors with p-value < 0.1 remained in the final model.
Results and discussion: Four different models based on the number of hydrogen bond acceptors, polar surface area, and total surface area were obtained to predict the penetration of drugs into human milk during the breastfeeding period. About 3-4% of milk consists of lipids, and the amount of lipid increases after parturition. Lipid-soluble drugs diffuse along with fats from plasma to the mammary glands. Lipophilicity plays a vital role in predicting the penetration class of drugs during the lactation period. It was shown in the logistic regression models that compounds with a number of hydrogen bond acceptors, PSA and TSA above 5, 90 and 25, respectively, are less permeable to milk because they are less soluble in the fats of milk. The pH of milk is acidic, and because of that, basic compounds tend to be more concentrated in milk than in plasma, while acidic compounds may have lower concentrations in milk than in plasma. Conclusion: In this study, we developed four regression-based models to predict the penetration class of drugs during the lactation period. The obtained models can lead to higher speed in the drug development process, saving energy and costs. Milk/plasma ratio assessment of drugs requires multiple steps of animal testing, which has its own ethical issues. QSAR modeling could help scientists reduce the amount of animal testing, and our models are also eligible to do that. Keywords: logistic regression, breastfeeding, descriptors, penetration
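The single-descriptor cut-offs reported above can be phrased as a trivial rule-based classifier (an illustration of the reported thresholds only; the actual models are fitted logistic regressions, and the names below are made up):

```python
# Reported cut-offs: descriptor values above these were associated with
# lower milk penetration in the single-descriptor models.
CUTOFFS = {"hba": 5, "psa": 90, "tsa": 25}


def predicts_low_penetration(descriptor, value):
    """Single-descriptor rule: True if the value exceeds the reported
    cut-off, i.e. the compound is predicted to be less permeable to milk."""
    return value > CUTOFFS[descriptor]
```

A fitted logistic regression would replace this hard threshold with a probability curve, but the decision boundary it implies sits near these cut-off values.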
Procedia PDF Downloads 76
8398 An Empirical Study of the Moderation Effects of Commitment, Trust, and Relationship Value in the Relation of Goods and Services Related to Business to Business Brand Images on Customer Loyalty
Authors: Jorge Luis Morales Romero, Enrique Murillo Othón
Abstract:
Business-to-business (B2B) relationships generally go beyond a purely profit-based result, with firms seeking to maintain a relationship for many years because a breakup or switching to a new supplier can be very costly. Therefore, identifying the factors that determine a successful long-term relationship is of great interest to companies. A company’s reputation and the brand image that customers have of it are among the main factors that can sustain a successful relationship, because of the positive effect driven by the client’s loyalty. Additionally, the perception that a customer may have of a brand differs depending on whether it relates to goods or to services: customers create in their minds their own brand image based on their past experiences. Thus, a positive relationship is established between goods-related brand image, service-related brand image, and customer loyalty. The present investigation examines the boundary conditions of said relationship by testing the moderating effects of trust, commitment, and relationship value in a B2B environment. All the variables were tested independently as moderators for the service-related brand image/loyalty relation and for the goods-related brand image/loyalty relation, as these are assumed to be separate variables. Survey data were collected through interviews with customers that have both a product-buying relationship and a service relationship with a global B2B brand of healthcare equipment operating in the Mexican healthcare market. Interviewed respondents were the user, the purchasing manager, and/or the person responsible for equipment maintenance in the customer organization. Hence, they were appropriate informants regarding the B2B relationship with this healthcare brand. The moderation models were estimated using the PROCESS macro for the Statistical Package for the Social Sciences (SPSS) software.
Results show statistical evidence that both relationship value and trust are significant moderators of the service-related brand image/loyalty relation but not of the goods-related brand image/loyalty relation. On the other hand, commitment is a significant moderator of the goods-related brand image/loyalty relation but not of the service-related brand image/loyalty relation. Keywords: commitment, trust, relationship value, loyalty, B2B, moderator
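A moderation effect of the kind tested here is, at bottom, the coefficient of an interaction term in a regression. A minimal NumPy sketch (a generic illustration; the study itself used the PROCESS macro in SPSS, which also reports significance tests):

```python
import numpy as np


def moderation_fit(x, m, y):
    """OLS of y on [1, x, m, x*m]; the x*m coefficient is the moderation
    effect of m on the x -> y relation."""
    X = np.column_stack([np.ones_like(x), x, m, x * m])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # [intercept, b_x, b_m, b_interaction]
```

A non-zero (and significant) interaction coefficient means the slope of y on x depends on the level of the moderator m, which is what "trust moderates the brand image/loyalty relation" claims.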
Procedia PDF Downloads 98
8397 Effects of Intercropping Maize (Zea mays L.) with Jack Beans (Canavalia ensiformis L.) at Different Spacing and Weeding Regimes on Crops Productivity
Authors: Oluseun S. Oyelakin, Olalekan W. Olaniyi
Abstract:
A field experiment was conducted at Ido town in the Ido Local Government Area of Oyo State, Nigeria, to determine the effects of intercropping maize (Zea mays L.) with jack bean (Canavalia ensiformis L.) at different spacings and weeding regimes on crop productivity. The treatments were a 2 x 2 x 3 factorial arrangement involving two spatial crop arrangements: spacings of 75 cm x 50 cm and 90 cm x 42 cm (41.667 cm) with two plants per stand, resulting in a plant population of approximately 53,000 plants/hectare. A Randomized Complete Block Design (RCBD) with two cropping patterns (sole and intercrop) and three weeding regimes (weedy check, weeding once, and weeding twice) with three replicates was used. Data were analyzed with SAS (Statistical Analysis System) and statistical means separated using the Least Significant Difference (LSD) (P ≤ 0.05). Intercropping and crop spacing did not have a significant influence on the growth and yield parameters. The maize grain yield of 1.11 t/ha obtained under sole maize was comparable to 1.05 t/ha from maize/jack beans. Weeding regime significantly influenced growth and yield of maize in intercropping with jack beans. Weeding twice resulted in significantly greater growth than the other weeding regimes. Plant height at 6 Weeks After Sowing (WAS) under the weeding twice regime (3 and 6 WAS) was 83.9 cm, which was significantly different from 67.75 cm and 53.47 cm for the weeding once (3 WAS) and no weeding regimes, respectively. Moreover, the maize grain yield of 1.3 t/ha obtained from plots weeded twice was comparable to that of 1.23 t/ha from single weeding, and both were significantly higher than the 0.71 t/ha maize grain yield obtained from the no weeding control. The dry matter production of jack beans was reduced at some growth stages by intercropping maize with jack beans, though with no significant effect on the other growth parameters of the crop.
Crop spacing had no effect on the growth parameters of jack beans in the maize/jack beans intercrop, while comparable growth and dry matter production of jack beans were obtained in the maize/jack beans mixture with single weeding. Keywords: crop spacing, intercropping, growth parameter, weeding regime, sole cropping, WAS, weeks after sowing
Procedia PDF Downloads 149
8396 Examining the Development of Complexity, Accuracy and Fluency in L2 Learners' Writing after L2 Instruction
Authors: Khaled Barkaoui
Abstract:
Research on second-language (L2) learning tends to focus on comparing students with different levels of proficiency at one point in time. However, to understand L2 development, we need more longitudinal research. In this study, we adopt a longitudinal approach to examine changes in three indicators of L2 ability, complexity, accuracy, and fluency (CAF), as reflected in the writing of L2 learners on different tasks before and after a period of L2 instruction. Each of 85 Chinese learners of English at three levels of English language proficiency responded to two writing tasks (independent and integrated) before and after nine months of English-language study in China. Each essay (N = 276) was analyzed in terms of numerous CAF indices using both computer coding and human rating: number of words written, number of errors per 100 words, ratings of error severity, global syntactic complexity (MLS), complexity by coordination (T/S), complexity by subordination (C/T), clausal complexity (MLC), phrasal complexity (NP density), syntactic variety, lexical density, lexical variation, lexical sophistication, and lexical bundles. Results were then compared statistically across tasks, L2 proficiency levels, and time. Overall, task type had significant effects on fluency, on some syntactic complexity indices (complexity by coordination, structural variety, clausal complexity, phrasal complexity), and on lexical density, sophistication, and bundles, but not on accuracy. L2 proficiency had significant effects on fluency, accuracy, and lexical variation, but not on syntactic complexity. Finally, fluency, frequency of errors (but not accuracy ratings), syntactic complexity indices (clausal complexity, global complexity, complexity by subordination, phrasal complexity, structural variety), and lexical complexity (lexical density, variation, and sophistication) exhibited significant changes after instruction, particularly for the independent task.
We discuss the findings and their implications for assessment, instruction, and research on CAF in the context of L2 writing.
Keywords: second language writing, fluency, accuracy, complexity, longitudinal
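Several of the syntactic complexity indices listed are simple ratios of production-unit counts. A minimal sketch of those ratios (the counts below are hypothetical; the study obtained them via computer coding and human rating):

```python
def caf_indices(words, sentences, clauses, t_units):
    """Four syntactic complexity indices as ratios of per-essay counts."""
    return {
        "MLS": words / sentences,    # global syntactic complexity: mean length of sentence
        "T/S": t_units / sentences,  # complexity by coordination: T-units per sentence
        "C/T": clauses / t_units,    # complexity by subordination: clauses per T-unit
        "MLC": words / clauses,      # clausal complexity: mean length of clause
    }

# hypothetical counts for one essay
idx = caf_indices(words=300, sentences=20, clauses=45, t_units=25)
```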
Procedia PDF Downloads 156
8395 Assessing the Danger Factors Correlated With Dental Fear: An Observational Study
Authors: Mimoza Canga, Irene Malagnino, Giulia Malagnino, Alketa Qafmolla, Ruzhdie Qafmolla, Vito Antonio Malagnino
Abstract:
The goal of the present study was to analyze the risk factors regarding dental fear. This observational study was conducted from February 2020 to April 2022 in Albania. The sample was composed of 200 participants, of whom 40% were males and 60% were females. The participants' ages ranged from 35 to 75 years; we divided them into four age groups: 35-45, 46-55, 56-65, and 66-75 years old. Statistical analysis was performed using IBM SPSS Statistics 23.0. Data were scrutinized by the post hoc LSD test in an analysis of variance (ANOVA); P ≤ 0.05 values were considered significant, and data analysis included 95% confidence intervals (CI). The most prevalent age range in the sample was 56 to 65 years old (35.6% of the patients). In all, 50% of the patients had extreme fear that the dentist might be infected with COVID-19, 12.2% of them had low dental fear, and 37.8% had high dental fear. However, data collected from the current study indicated that a large proportion of patients (49.5%) had high dental fear regarding the dentist not respecting the quarantine due to COVID-19, compared with 37.2% who had low dental fear and 13.3% who had extreme dental fear. The present study confirmed that 22.2% of the participants had extreme fear of poor hygiene practices of the dentist, which have been associated with the transmission of COVID-19 infection, while 57.8% had high dental fear and 20% had low dental fear. The present study also showed that 50% of the patients stated that another factor causing extreme fear was pain felt after interventions in the oral cavity. Strong associations were observed between dental fear and pain (95% CI: 0.24-0.52, P ˂ .0001). The results of the present study also confirmed strong associations between dental fear and the fear that the dentist may be infected with COVID-19 (95% CI: 0.46-0.70, P ˂ .0001).
Similarly, the analysis demonstrated a statistically significant correlation between dental fear and poor hygiene practices of the dentist (95% CI: 0.82-1.02, P ˂ .0001). On the basis of our statistical data analysis, the dentist not respecting the quarantine due to COVID-19 also had a significant impact on dental fear (P ˂ .0001). This study identifies important risk factors that significantly increase dental fear.
Keywords: COVID-19, dental fear, pain, past dreadful experiences
Procedia PDF Downloads 144
8394 Risk of Heatstroke Occurring in Indoor Built Environment Determined with Nationwide Sports and Health Database and Meteorological Outdoor Data
Authors: Go Iwashita
Abstract:
The paper describes how the frequencies of heatstroke occurring in the indoor built environment are related to the outdoor thermal environment, using large statistical datasets. As statistical accident data on heatstroke, nationwide accident data were obtained from the National Agency for the Advancement of Sports and Health (NAASH). The meteorological database of the Japan Meteorological Agency supplied 1-hour average temperature, humidity, wind speed, solar radiation, and related data. Each heatstroke record from the NAASH database was linked to data from the meteorological station nearest to where the accident occurred. The analysis covered a 10-year period (2005-2014). During this period, 3,819 cases of heatstroke were reported in the NAASH database for the investigated secondary/high schools of the nine representative Japanese cities. Heatstroke most commonly occurred in the outdoor schoolyard at a wet-bulb globe temperature (WBGT) of 31°C and in the indoor gymnasium during athletic club activities at a WBGT > 31°C. The accident ratio (number of accidents during each club activity divided by the club's population) was highest in the gymnasium during female badminton club activities. Although badminton is played in a gymnasium, these WBGT results show that its risk level under hot and humid conditions equals that of baseball or rugby played in the schoolyard. Beyond sports, a high risk of heatstroke was also observed in school buildings during cultural activities. Based on these WBGT results, the risk level in indoor environments under hot and humid conditions can equal that in outdoor environments; control measures against hot and humid indoor conditions are therefore needed, such as installing air conditioning not only in schools but also in residences.
Keywords: accidents in schools, club activity, gymnasium, heatstroke
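The WBGT thresholds above can be reproduced from standard weightings; a sketch assuming the common ISO 7243 formulas (the abstract does not state which formulation was used):

```python
def wbgt(t_wet, t_globe, t_dry=None, outdoors=True):
    """WBGT (degrees C) using the common ISO 7243 weightings (an assumption here)."""
    if outdoors:
        if t_dry is None:
            raise ValueError("outdoor WBGT needs the dry-bulb temperature")
        return 0.7 * t_wet + 0.2 * t_globe + 0.1 * t_dry  # with solar load
    return 0.7 * t_wet + 0.3 * t_globe                    # indoors / no solar load

w_yard = wbgt(t_wet=29, t_globe=40, t_dry=33)             # a hot schoolyard afternoon
w_gym = wbgt(t_wet=29, t_globe=38, outdoors=False)        # an unventilated gymnasium
```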
Procedia PDF Downloads 219
8393 Comparative Study of Water Quality Parameters in the Proximity of Various Landfills Sites in India
Authors: Abhishek N. Srivastava, Rahul Singh, Sumedha Chakma
Abstract:
The rapid urbanization of developing countries is generating an enormous amount of waste, leading to the creation of unregulated landfill sites at various places for its disposal. The liquid waste, known as leachate, produced at these landfill sites severely affects the surrounding water quality. The water quality in the proximity of a landfill is affected by various physico-chemical parameters of leachate such as pH, alkalinity, total hardness, conductivity, chloride, total dissolved solids (TDS), total suspended solids (TSS), sulphate, nitrate, phosphate, fluoride, sodium, and potassium; biological parameters such as biochemical oxygen demand (BOD), chemical oxygen demand (COD), and faecal coliform; and heavy metals such as cadmium (Cd), lead (Pb), iron (Fe), mercury (Hg), arsenic (As), cobalt (Co), manganese (Mn), zinc (Zn), copper (Cu), chromium (Cr), and nickel (Ni). However, the distribution of these parameters in the leachate depends on the nature of the waste dumped at each landfill site, so it is very difficult to predict which leachate parameter is mainly responsible for water quality contamination. The present study undertakes a comparative analysis of the physical, chemical, and biological parameters of various landfills in India, viz. the Okhla, Ghazipur, and Bhalswa landfills in NCR Delhi, the Deonar landfill in Mumbai, the Dhapa landfill in Kolkata, and the Kodungayaiyur and Perungudi landfills in Chennai. Statistical analysis of the parameters was carried out using the Statistical Package for the Social Sciences (SPSS) and the LandSim 2.5 model to simulate the long-term effect of various parameters on different time scales. Further, the uncertainty of the various input parameters was characterized using the fuzzy alpha cut (FAC) technique to check the sensitivity of the water quality parameters in the proximity of the numerous landfill sites.
Finally, the study would help suggest the best methods for preventing pollution migration from landfill sites on a priority basis.
Keywords: landfill leachate, water quality, LandSim, fuzzy alpha cut
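The fuzzy alpha cut technique reduces each uncertain input to an interval at a chosen membership level α and propagates those intervals through the model. A minimal sketch using triangular fuzzy numbers (the parameter values below are hypothetical, not the study's data):

```python
def alpha_cut(tri, alpha):
    """Alpha-cut interval of a triangular fuzzy number (low, mode, high)."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def interval_mul(x, y):
    """Interval product: min/max over the four endpoint products."""
    products = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(products), max(products))

# hypothetical leachate concentration (mg/L) and attenuation factor as fuzzy inputs
conc = alpha_cut((100, 150, 220), alpha=0.5)
atten = alpha_cut((0.1, 0.2, 0.4), alpha=0.5)
downstream = interval_mul(conc, atten)  # propagated uncertainty interval
```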
Procedia PDF Downloads 127
8392 Factors Associated with Hotel Employees’ Loyalty: A Case Study of Hotel Employees in Bangkok, Thailand
Authors: Kevin Wongleedee
Abstract:
This research paper aimed to examine the reasons associated with hotel employees' loyalty, via a case study of 200 hotel employees in Bangkok, Thailand. The population of this study included all hotel employees working in Bangkok during January to March 2014. Based on the 200 respondents who answered the questionnaire, the data were compiled using SPSS, with mean and standard deviation utilized in the analysis. The findings revealed that the average mean of importance was 4.40, with a standard deviation of 0.7585. Moreover, the mean averages rank the factors by level of importance as follows: 1) salary, service charge cut, and benefits; 2) career development and possible advancement; 3) freedom of working, thinking, and ability to use my initiative; 4) training opportunities; 5) social involvement and positive environment; 6) fair treatment in the workplace and fair evaluation of job performance; and 7) personal satisfaction, participation, and recognition.
Keywords: hotel employees, loyalty, reasons, case study
Procedia PDF Downloads 410
8391 Anatomical Survey for Text Pattern Detection
Abstract:
The ultimate aim of machine intelligence is to explore and reproduce human capabilities, one of which is the ability to detect various text objects within one or more images displayed on any canvas, including prints, videos, or electronic displays. Multimedia data has increased rapidly in recent years, and the textual information present in multimedia contains important information about the image/video content. However, the commonly exercised human ability of detecting and differentiating text within an image still needs to be replicated technologically for computers. Hence, in this paper, a feature set based on an anatomical study of the human text-detection system is proposed. Subsequent examination bears testimony to the fact that the extracted features proved instrumental to text detection.
Keywords: biologically inspired vision, content based retrieval, document analysis, text extraction
Procedia PDF Downloads 452
8390 Use of Socially Assistive Robots in Early Rehabilitation to Promote Mobility for Infants with Motor Delays
Authors: Elena Kokkoni, Prasanna Kannappan, Ashkan Zehfroosh, Effrosyni Mavroudi, Kristina Strother-Garcia, James C. Galloway, Jeffrey Heinz, Rene Vidal, Herbert G. Tanner
Abstract:
Early immobility affects motor, cognitive, and social development. Current pediatric rehabilitation lacks the technology to provide the dosage needed to promote mobility for young children at risk. The addition of socially assistive robots to early interventions may help increase the mobility dosage. The aim of this study is to examine the feasibility of an early-intervention paradigm in which non-walking infants experience independent mobility while socially interacting with robots. A dynamic environment was developed in which both the child and the robot interact and learn from each other. The environment involves: 1) a range of physical activities that are goal-oriented, age-appropriate, and ability-matched for the child to perform; 2) automatic functions that perceive the child's actions through novel activity-recognition algorithms and decide appropriate actions for the robot; and 3) a networked visual data acquisition system that enables real-time assessment and provides the means to connect child behavior with robot decision-making in real time. The environment was tested by bringing a two-year-old boy with Down syndrome for eight sessions. The child presented delays throughout his motor development, the current one being in the acquisition of walking. During the sessions, the child performed physical activities that required complex motor actions (e.g., climbing an inclined platform and/or staircase). During these activities, a (wheeled or humanoid) robot either performed the action or stood at its end point 'signaling' for interaction. From these sessions, information was gathered to develop the activity-perception algorithms on which the robot bases its actions. A Markov Decision Process (MDP) is used to model the intentions of the child, and a 'smoothing' technique is used to help identify the model's parameters, a critical step when dealing with small data sets such as in this paradigm.
The child engaged in all activities and socially interacted with the robot across sessions. Over time, the child's mobility increased, and the frequency and duration of complex and independent motor actions also increased (e.g., taking independent steps). Simulation results on the combination of the MDP and smoothing support the use of this model in human-robot interaction; smoothing facilitates learning MDP parameters from small data sets. The paradigm is feasible and provides insight into how social interaction may elicit mobility actions, suggesting a new early-intervention paradigm for very young children with motor disabilities. Acknowledgment: This work has been supported by NIH under grant #5R01HD87133.
Keywords: activity recognition, human-robot interaction, machine learning, pediatric rehabilitation
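The role of smoothing can be illustrated with additive (Laplace) smoothing of transition counts, one common way to estimate MDP parameters from small data sets (a sketch only; the study does not specify its exact smoothing technique here):

```python
from collections import Counter

def smoothed_transition_probs(transitions, states, k=1.0):
    """Additive (Laplace) smoothing of MDP transition counts, per (state, action).

    transitions: list of observed (state, action, next_state) triples.
    Returns P[(s, a)][s'] with no zero probabilities, even for unseen outcomes.
    """
    counts = Counter(transitions)
    seen_sa = {(s, a) for s, a, _ in transitions}
    probs = {}
    for s, a in seen_sa:
        total = sum(counts[(s, a, s2)] for s2 in states) + k * len(states)
        probs[(s, a)] = {s2: (counts[(s, a, s2)] + k) / total for s2 in states}
    return probs

# toy child-robot data: does a robot cue lead the child to move?
data = [("idle", "cue", "move"), ("idle", "cue", "move"), ("idle", "cue", "idle")]
P = smoothed_transition_probs(data, states=["idle", "move"], k=1.0)
```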
Procedia PDF Downloads 295
8389 Controlling Differential Settlement of Large Reservoir through Soil Structure Interaction Approach
Authors: Madhav Khadilkar
Abstract:
Construction of a large standby reservoir was required to provide a secure water supply, and due to space constraints the new reservoir had to be constructed at the location of an abandoned open pond. Some investigations were carried out earlier to improve and re-commission the existing pond, but because the risk of settlement from voids in the underlying limestone had not been quantified, shallow foundations were not found feasible. The reservoir rests on hard strata over about three-quarters of its plan area, while one quarter rests on soil underlain by limestone with a considerably low subgrade modulus. Further investigations were carried out to ascertain the locations and extent of voids within the limestone; it was concluded that the risk due to lime dissolution was acceptably low, and the site was found geotechnically feasible. The hazard posed by limestone dissolution was addressed through an integrated structural and geotechnical analysis and design approach. Finite element analysis was carried out to quantify the stresses and differential settlement due to various probable loads and soil-structure interaction. Walls behaving as cantilevers under operational loads were found to undergo in-plane bending and tensile forces due to soil-structure interaction. A sensitivity analysis for varying soil subgrade modulus was carried out to check the variation in the response of the structure and the magnitude of the stresses developed. The base slab was additionally checked for loss of soil contact due to lime-pocket formation at random locations. The expansion and contraction joints were planned to receive minimal additional forces due to differential settlement.
The reservoir was designed to sustain the actions corresponding to the code's allowable deformation limits, and geotechnical measures were proposed to achieve the soil parameters assumed in the structural analysis.
Keywords: differential settlement, limestone dissolution, reservoir, soil structure interaction
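The sensitivity of settlement to subgrade modulus can be sketched with the simplest Winkler idealization, w = q/k (illustrative values only; the study used a full finite element soil-structure interaction model, not this hand calculation):

```python
def winkler_settlement(pressure_kpa, k_subgrade_kn_m3):
    """Settlement (m) of a slab on a Winkler foundation: w = q / k."""
    return pressure_kpa / k_subgrade_kn_m3

# hypothetical bearing pressure under the full reservoir
q = 120.0  # kPa
settlements = {k: winkler_settlement(q, k) for k in (5_000, 10_000, 20_000)}  # kN/m^3
# differential settlement between a soft zone and a stiff zone (m)
differential = settlements[5_000] - settlements[20_000]
```

Halving the subgrade modulus doubles the settlement, which is why the soft quarter of the footprint governs the differential settlement check.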
Procedia PDF Downloads 161
8388 A Deep Reinforcement Learning-Based Secure Framework against Adversarial Attacks in Power System
Authors: Arshia Aflaki, Hadis Karimipour, Anik Islam
Abstract:
Generative Adversarial Attacks (GAAs) threaten critical sectors, ranging from fingerprint recognition to industrial control systems. Existing Deep Learning (DL) algorithms are not robust enough against this kind of cyber-attack. As one of the most critical industries in the world, the power grid is no exception. In this study, a Deep Reinforcement Learning (DRL) based framework that assists the DL model in improving its robustness against generative adversarial attacks is proposed. Real-world smart-grid stability data, as an IIoT dataset, are used to test our method, which improves the classification accuracy of a deep learning model from around 57 percent to 96 percent.
Keywords: generative adversarial attack, deep reinforcement learning, deep learning, IIoT, generative adversarial networks, power system
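To illustrate how small, crafted perturbations degrade a classifier, the classic fast gradient sign method (FGSM) can be sketched on a logistic model (note: the paper concerns generative adversarial attacks and a DRL defense; this gradient attack is only a minimal stand-in, and all values are hypothetical):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """FGSM-style perturbation of one input of a logistic classifier.

    Steps each feature by eps in the direction that increases the
    cross-entropy loss for the true label y.
    """
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))       # sigmoid output
    grad_x = (p - y) * w               # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

x = np.array([0.2, -0.1, 0.4])
w = np.array([1.0, -2.0, 0.5])
x_adv = fgsm_perturb(x, w, b=0.0, y=1.0, eps=0.05)
```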
Procedia PDF Downloads 51
8387 The Principle Probabilities of Space-Distance Resolution for a Monostatic Radar and Realization in Cylindrical Array
Authors: Anatoly D. Pluzhnikov, Elena N. Pribludova, Alexander G. Ryndyk
Abstract:
In conjunction with the problem of target selection against a clutter background, an analysis is made of the influence of scanning rate on the spatial-temporal signal structure, the generalized multivariate correlation function, and the quality of resolution as the pulse repetition frequency increases. The possibility of space-distance resolution of objects, conditioned by range-to-angle conversion at an increased scanning rate, is substantiated. Calculations for a real cylindrical array at a high scanning rate are presented. The high scanning rate yields a signal-to-noise improvement of the order of 10 dB for space-time signal processing.
Keywords: antenna pattern, array, signal processing, spatial resolution
Procedia PDF Downloads 184
8386 Shape Management Method of Large Structure Based on Octree Space Partitioning
Authors: Gichun Cha, Changgil Lee, Seunghee Park
Abstract:
The objective of the study is to construct a shape management method contributing to the safety of large structures. In Korea, research on shape management is scarce because the technology is newly attempted. Terrestrial Laser Scanning (TLS) is used for measurements of large structures; it provides an efficient way to actively acquire accurate point clouds of object surfaces or environments. The point clouds provide a basis for rapid modeling in industrial automation, architecture, construction, or maintenance of civil infrastructure. TLS produces a huge amount of point cloud data, and registration, extraction, and visualization require processing this massive amount of scan data. The octree can be applied to shape management of large structures because the scan data are reduced in size while the data attributes are maintained. Octree space partitioning generates voxels of 3D space, and each voxel is recursively subdivided into eight sub-voxels. The point cloud of the scan data was converted to voxels and sampled. The experimental site is located at Sungkyunkwan University; the scanned structure is a steel-frame bridge, and the TLS used was a Leica ScanStation C10/C5. The scan data were condensed by 92%, and the octree model was constructed with a 2-millimeter resolution. This study presents octree space partitioning for handling point clouds, creating a basis for shape management of large structures such as double-deck tunnels, buildings, and bridges. The research is expected to improve the efficiency of structural health monitoring and maintenance. "This work is financially supported by 'U-City Master and Doctor Course Grant Program' and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2015R1D1A1A01059291)."
Keywords: 3D scan data, octree space partitioning, shape management, structural health monitoring, terrestrial laser scanning
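The voxel sampling step can be sketched as a one-level octree-style reduction: bucket points into a regular 3D grid and keep one representative per occupied voxel (a simplified sketch with made-up coordinates, not the study's implementation):

```python
def voxel_downsample(points, voxel_size):
    """Bucket (x, y, z) points into a regular grid and keep one centroid per occupied voxel."""
    voxels = {}
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        voxels.setdefault(key, []).append((x, y, z))
    # centroid of each occupied voxel becomes the sampled point
    return [tuple(sum(coord) / len(pts) for coord in zip(*pts)) for pts in voxels.values()]

# three points (m); the first two fall in the same 2 mm voxel
cloud = [(0.001, 0.001, 0.0), (0.0015, 0.0008, 0.0002), (0.010, 0.0, 0.0)]
sampled = voxel_downsample(cloud, voxel_size=0.002)  # 2 mm resolution, as in the study
```

A full octree would then recursively split each occupied voxel into eight children until the target resolution is reached.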
Procedia PDF Downloads 298
8385 Monocoque Systems: The Reuniting of Divergent Agencies for Wood Construction
Authors: Bruce Wrightsman
Abstract:
Construction and design are inexorably linked. Traditional building methodologies, including those using wood, comprise a series of material layers differentiated and separated from each other. This results in the separation of two agencies: the building envelope (skin) and the structure. From a material-performance standpoint, however, this reliance on additional materials is not an efficient strategy for the building. The merits of traditional platform framing are well known, yet its enormous effectiveness within wood-framed construction has seldom led to serious questioning of, and challenges to, what it means to build. There are several downsides to using this method, which are less widely discussed. The first and perhaps biggest downside is waste. Second, its reliance on wood assemblies forming walls, floors, and roofs conventionally nailed together through simple plate surfaces is structurally inefficient; it requires additional material, through plates, blocking, nailers, etc., for stability, which only adds to the material waste. In contrast, looking back at the history of wood construction in the airplane and boat manufacturing industries reveals a significant transformation in the relationship of structure with skin. The history of boat construction transformed from indigenous wood practices of birch-bark canoes, to copper sheathing over wood to improve performance in the late 18th century, to the evolution of merged assemblies that drives the industry today. In 1911, the Swiss engineer Emile Ruchonnet designed the first wood monocoque structure for an airplane, called the Cigare. The wing and tail assemblies consisted of thin, lightweight, often fabric skin stretched tightly over a wood frame. This stressed skin has evolved into semi-monocoque construction, in which the skin merges with structural fins that take additional forces, providing even greater strength with less material.
The monocoque, which translates to 'single shell,' is a structural system that supports loads and transfers them through an external enclosure system. Such systems have largely existed outside the domain of architecture; however, this uniting of divergent systems has been demonstrated to be lighter, utilizing less material than traditional wood building practices. This paper examines the role monocoque systems have played in the history of wood construction through the lineage of the boat- and airplane-building industries, and their design potential for wood building systems in architecture through a case-study examination of a unique wood construction approach. The innovative approach uses a wood monocoque system comprised of interlocking small wood members to create thin-shell assemblies for the walls, roof, and floor, increasing structural efficiency and wasting less than 2% of the wood. The goal of the analysis is to expand the work of practice and the academy in order to foster deeper, more honest discourse regarding the limitations and impact of traditional wood framing.
Keywords: wood building systems, material histories, monocoque systems, construction waste
Procedia PDF Downloads 81
8384 FEM for Stress Reduction by Optimal Auxiliary Holes in a Loaded Plate with Elliptical Hole
Authors: Basavaraj R. Endigeri, S. G. Sarganachari
Abstract:
Steel is widely used in machine parts, structural equipment, and many other applications. In many steel structural elements, holes of different shapes and orientations are made to satisfy design requirements. The presence of holes in steel elements creates stress concentration, which eventually reduces the mechanical strength of the structure. Therefore, it is of great importance to investigate the state of stress around the holes for the safe and proper design of such elements. From the literature survey, it is known that to date there is no analytical solution for reducing the stress concentration by providing auxiliary holes at a definite location and with definite radii in a steel plate; a numerical method can be used to determine the optimum location and radii of the auxiliary holes. In the present work, a steel plate with an elliptical hole subjected to uniaxial load is analyzed, and the effect of stress concentration is represented graphically, as is the effect on stress concentration of introducing auxiliary holes at an optimum location and with optimum radii. The finite element analysis package ANSYS 11.0 is used to analyze the steel plate, with the analysis carried out using the PLANE42 element. Further, the ANSYS optimization model is used to determine the location and radii of the auxiliary holes that minimize stress concentration. All the results for different hole-diameter to plate-width ratios are presented graphically: graphs for determining the locations and diameters of optimal auxiliary holes, and graphs of stress concentration versus the central-hole-diameter to plate-width ratio.
The finite element results of the study indicate that the stress concentration effect of a central elliptical hole in a uniaxially loaded plate can be reduced by introducing auxiliary holes on either side of the central hole.
Keywords: finite element method, optimization, stress concentration factor, auxiliary holes
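For context, the closed-form Inglis factor for an elliptical hole in an infinite plate under uniaxial tension gives the baseline stress concentration that the auxiliary holes aim to reduce (finite plate width and the auxiliary holes in the study's FEM will shift these values):

```python
def kt_elliptical_hole(a, b):
    """Inglis stress concentration factor Kt = 1 + 2a/b for an elliptical hole
    in an infinite plate, with semi-axis a perpendicular to the applied load
    and semi-axis b parallel to it."""
    return 1.0 + 2.0 * a / b

kt_circle = kt_elliptical_hole(1.0, 1.0)  # circular hole: the classic Kt = 3
kt_sharp = kt_elliptical_hole(4.0, 1.0)   # slender ellipse across the load: Kt = 9
```

The sharper the ellipse is transverse to the load, the higher Kt, which is why the elliptical-hole case benefits most from stress-relieving auxiliary holes.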
Procedia PDF Downloads 456
8383 Paper-Based Colorimetric Sensor Utilizing Peroxidase-Mimicking Magnetic Nanoparticles Conjugated with Aptamers
Authors: Min-Ah Woo, Min-Cheol Lim, Hyun-Joo Chang, Sung-Wook Choi
Abstract:
We developed a paper-based colorimetric sensor utilizing magnetic nanoparticles conjugated with aptamers (MNP-Apts) against E. coli O157:H7. The MNP-Apts were applied to a test sample solution containing the target cells, and the solution was simply dropped onto a PVDF (polyvinylidene difluoride) membrane. The membrane moves the sample radially, so that spots of different compounds form as concentric rings; the MNP-Apts on the membrane thus enabled specific recognition of the target cells through the generation of a colored ring by the MNP-promoted colorimetric reaction of TMB (3,3',5,5'-tetramethylbenzidine) and H2O2. This method could be applied to rapidly and visually detect various bacterial pathogens in less than 1 h without cell culturing.
Keywords: aptamer, colorimetric sensor, E. coli O157:H7, magnetic nanoparticle, polyvinylidene difluoride
Procedia PDF Downloads 453
8382 Graphene Metamaterials Supported Tunable Terahertz Fano Resonance
Authors: Xiaoyong He
Abstract:
The manipulation of THz waves is still a challenging task due to the lack of natural materials that interact with them strongly. Designed by tailoring the characteristics of unit cells (meta-molecules), the advance of metamaterials (MMs) may solve this problem. However, because of Ohmic and radiation losses, the performance of MM devices is subject to dissipation and low quality factor (Q-factor). This dilemma may be circumvented by Fano resonance, which arises from the destructive interference between a bright continuum mode and a dark discrete mode (or a narrow resonance). Unlike the symmetric Lorentz spectral curve, a Fano resonance shows a distinctly asymmetric line shape, an ultrahigh quality factor, and steep variations in the spectral curves. Fano resonance is usually realized through symmetry breaking. However, if concentric double rings (DR) are placed close to each other, the near-field coupling between them gives rise to two hybridized modes (bright and narrowband dark modes) because of the local asymmetry, resulting in the characteristic Fano line shape. Furthermore, from the practical viewpoint, it is highly desirable to modulate the Fano spectral curves conveniently, which is an important and interesting research topic. For current Fano systems, tunable spectral curves can be realized by adjusting the geometrical structural parameters or by magnetic fields biasing a ferrite-based structure; but due to the limited dispersion properties of active materials, it is still a tough task to tailor Fano resonance conveniently with fixed structural parameters. With its favorable properties of extreme confinement and high tunability, graphene is a strong candidate for achieving this goal. The DR structure supports the excitation of so-called "trapped modes," with the merits of a simple structure and high-quality resonances in thin structures.
By depositing a graphene circular DR on a SiO2/Si/polymer substrate, tunable Fano resonance has been theoretically investigated in the terahertz regime, including the effects of graphene Fermi level, structural parameters, and operation frequency. The results show that the Fano peak can be efficiently modulated because of the strong coupling between the incident waves and the graphene ribbons. As the Fermi level increases, the peak amplitude of the Fano curve increases, and the resonant peak shifts to higher frequency. The amplitude modulation depth of the Fano curves is about 30% as the Fermi level varies over the range of 0.1-1.0 eV. The optimum gap distance between the DR is about 8-12 μm, where the figure of merit peaks. As the graphene ribbon width increases, the Fano spectral curves broaden, and the resonant peak blue-shifts. The results are very helpful for developing novel graphene plasmonic devices, e.g., sensors and modulators.
Keywords: graphene, metamaterials, terahertz, tunable
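The asymmetric line shape can be written in the standard Fano form F(ε) = (q + ε)² / (1 + ε²) with ε = 2(ω − ω₀)/Γ; a minimal numeric sketch (parameter values are illustrative, not fitted to the study):

```python
def fano_lineshape(omega, omega0, gamma, q):
    """Normalized Fano profile F = (q + e)^2 / (1 + e^2), e = 2(w - w0)/gamma."""
    e = 2.0 * (omega - omega0) / gamma
    return (q + e) ** 2 / (1.0 + e ** 2)

# the asymmetry: a near-zero dip on one side of resonance, a peak on the other
dip = fano_lineshape(omega=0.9, omega0=1.0, gamma=0.2, q=1.0)   # e = -1
peak = fano_lineshape(omega=1.1, omega0=1.0, gamma=0.2, q=1.0)  # e = +1
```

The asymmetry parameter q sets how lopsided the curve is: q → 0 gives a pure dip (anti-resonance), large |q| recovers a Lorentzian peak.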
Procedia PDF Downloads 346
8381 Analysis of Formation Methods of Range Profiles for an X-Band Coastal Surveillance Radar
Authors: Nguyen Van Loi, Le Thanh Son, Tran Trung Kien
Abstract:
The paper deals with the problem of the formation of range profiles (RPs) for an X-band coastal surveillance radar. Two popular methods, the difference operator method and the window-based method, are reviewed and analyzed via two tests with different datasets. The test results show that although the original window-based method achieves better performance than the difference operator method, it has three main drawbacks: the use of 3 or 4 peaks of an RP for creating the windows; the extension of the window size using the power sum of the three adjacent cells on the left and right sides of the windows; and the same threshold applied to all types of vessels to finish the RP formation process. These drawbacks lead to inaccurate RPs when the signal-to-clutter ratio is low. Therefore, some improvements to the original window-based method are suggested.
Keywords: range profile, difference operator method, window-based method, automatic target recognition
Procedia PDF Downloads 129
8380 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication
Authors: Farhan A. Alenizi
Abstract:
Digital watermarking has evolved in the past years as an important means for data authentication and ownership protection. Image and video watermarking is well known in the field of multimedia processing; however, watermarking techniques for 3D objects have emerged as an important means for the same purposes, as 3D mesh models are in increasing use in scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the space or the transform domain. Unlike images and video, where the frames have regular structures in both the spatial and temporal domains, 3D objects are represented as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations which may be hard to tackle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it is still difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in the past years; it can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models, from both geometrical and topological aspects, has been useful in hiding data; however, doing so with minimal surface distortion to the mesh has attracted significant research in the field. A 3D mesh blind watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object.
An optimal method is developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process and reducing the computational complexity due to iterations and other factors. The technique relies on displacing the vertices' locations according to the modification of the variances of the vertices' norms. Statistical analyses were performed to establish the distributions that best fit each mesh, and hence to establish the bin sizes. Several optimizing approaches were introduced concerning mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative quality, and against both geometry and connectivity attacks. Moreover, the probability of true-positive detection versus the probability of false-positive detection was evaluated; to validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they also showed robustness. 3D watermarking is still a new field, but a promising one.
Keywords: watermarking, mesh objects, local roughness, Laplacian smoothing
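A toy version of norm-based spatial-domain embedding can illustrate the idea of modifying the variance of the vertices' norms (an illustrative sketch only, not the exact scheme proposed; the point set is hypothetical):

```python
import numpy as np

def embed_bit_in_norms(vertices, bit, delta=0.01):
    """Toy spatial-domain embedding: stretch or shrink the spread of vertex
    norms (distances to the centroid) to encode one bit, then push each
    vertex back out along its original radial direction."""
    v = np.asarray(vertices, dtype=float)
    center = v.mean(axis=0)
    r = np.linalg.norm(v - center, axis=1)          # vertex norms
    scale = 1.0 + delta if bit else 1.0 - delta     # bit controls the variance
    r_new = r.mean() + scale * (r - r.mean())
    direction = (v - center) / r[:, None]           # unit radial directions
    return center + direction * r_new[:, None]

pts = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0),
       (0.0, 0.0, 2.0), (1.0, 1.0, 1.0)]
marked_1 = embed_bit_in_norms(pts, bit=1, delta=0.05)
marked_0 = embed_bit_in_norms(pts, bit=0, delta=0.05)
```

A blind detector would recompute the norm statistics relative to the centroid and compare the variance against the unmarked expectation to read the bit back out.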
Procedia PDF Downloads 163