Search results for: feature normalization

268 The Ratio of Second to Fourth Digit Length Correlates with Cardiorespiratory Fitness in Male but Not in Female College Students

Authors: Cheng-Chen Hsu

Abstract:

Background: The ratio of the length of the second finger (index finger, 2D) to the fourth finger (ring finger, 4D) (2D:4D) is a putative marker of prenatal hormones. A low 2D:4D ratio is related to high prenatal testosterone (PT) levels. Physiological research has suggested that a low 2D:4D ratio is correlated with high sports ability. Aim: To examine the association between cardiorespiratory fitness and 2D:4D. Methods: Assessment of 2D:4D: images of hands were collected from participants using a computer scanner, with hands placed lightly on the surface of the plate. Image analysis was performed using Image-Pro Plus 5.0 software. Feature points were marked at the tip of the finger and at the center of the proximal crease on the second and fourth digits. Measurement was carried out automatically, and 2D:4D was calculated by dividing the second by the fourth digit length. YMCA 3-min Step Test: the test involves stepping up and down at a rate of 24 steps/min for 3 min; a tape recording of the correct cadence (96 beats/min) is played to assist the participant in keeping the correct pace. Following the step test, the participant immediately sits down and, within 5 s, the tester starts counting the pulse for 1 min. The score for the test, the total 1-min post-exercise heart rate, reflects the heart’s ability to recover quickly. Statistical analysis: Pearson’s correlation (r) was used for assessing the relationship between age, physical measurements, the one-minute heart rate after the YMCA 3-minute step test (HR) and 2D:4D. An independent-samples t-test was used for determining possible differences in HR between subjects with low and high values of 2D:4D. All statistical analyses were carried out with SPSS 18 for Windows. All P-values were two-tailed at P = 0.05, if not reported otherwise. Results: A median split by 2D:4D was applied, resulting in a high and a low group. The one-minute heart rate after the YMCA 3-minute step test was significantly different between the male right-hand 2D:4D groups (p = 0.024). However, there was no difference in left-hand 2D:4D values between groups in males, and no digit ratio difference between groups in females. Conclusion: The results showed that cardiopulmonary fitness is related to right 2D:4D only in men. We argue that prenatal testosterone may have an effect on cardiorespiratory fitness in males but not in females.
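
For readers who want to reproduce this kind of analysis, a minimal sketch of the statistical steps described above (Pearson’s r between 2D:4D and the post-test heart rate, followed by a median split and an independent-samples t-test) is given below; the input arrays are hypothetical placeholders, not the study’s data.

```python
# Minimal sketch of the statistical analysis described above (Pearson correlation
# and an independent-samples t-test after a median split on 2D:4D).
# The arrays below are hypothetical placeholders, not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
digit_ratio = rng.normal(0.96, 0.03, 60)   # right-hand 2D:4D per subject
recovery_hr = rng.normal(110, 15, 60)      # 1-min post-step-test heart rate

# Pearson correlation between 2D:4D and recovery heart rate
r, p_corr = stats.pearsonr(digit_ratio, recovery_hr)

# Median split into low and high 2D:4D groups, then independent-samples t-test
median = np.median(digit_ratio)
low_group = recovery_hr[digit_ratio <= median]
high_group = recovery_hr[digit_ratio > median]
t, p_ttest = stats.ttest_ind(low_group, high_group)

print(f"Pearson r = {r:.3f} (p = {p_corr:.3f}); t = {t:.3f} (p = {p_ttest:.3f})")
```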

Keywords: college students, digit ratio, finger, step test, fitness

Procedia PDF Downloads 248
267 Design and Construction Demeanor of a Very High Embankment Using Geosynthetics

Authors: Mariya Dayana, Budhmal Jain

Abstract:

Kannur International Airport Ltd. (KIAL) is a new greenfield airport project with airside development on an undulating terrain with an average height of 90 m above Mean Sea Level (MSL) and a maximum height of 142 m. Accommodating the desired runway length and the Runway End Safety Area (RESA) at both ends along the proposed alignment resulted in 45.5 million cubic meters of cutting and filling. The insufficient availability of land for the construction of a free-slope embankment at the RESA 07 end led to the design and construction of a Reinforced Soil Slope (RSS) with a maximum slope of 65 degrees. An embankment fill of 70 m average height with steep slopes located in a high-rainfall area is a unique feature of this project. The design and construction were challenging, the fill being asymmetrical with curves and bends. The fill was reinforced with high-strength uniaxial geogrids laid perpendicular to the slope. Weld mesh wrapped with coir mat acted as the facia units to protect it against surface failure. Face anchorage was also provided by wrapping the geogrids along the facia units where the slope angle was steeper than 45 degrees. Considering the high rainfall received at this tabletop airport site, an extensive drainage system was designed for the high embankment fill. Gabion walls up to 10 m in height were also designed and constructed along the boundary to accommodate the toe of the RSS fill beside the jeepable track at the base level. The design of the RSS fill was done using ReSSA software and verified by PLAXIS 2D modeling. Both slip-surface failure and wedge failure cases were considered in static and seismic analyses for local and global failure cases. The site-won excavated laterite soil was used as the fill material for the construction. Extensive field and laboratory tests were conducted during the construction of the RSS system for quality assurance. This paper presents a case study detailing the design and construction of a very high embankment using geosynthetics for the provision of the runway length and the RESA area.

Keywords: airport, embankment, gabion, high strength uniaxial geogrid, KIAL, laterite soil, PLAXIS 2D

Procedia PDF Downloads 141
266 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in “weight space”, where each populated T-F position contains an amplitude weight. The weight-space vector along with the atomic dictionary represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences: two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.
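
A minimal sketch of the final classification step described above is given below, assuming each sentence has already been reduced to a probability histogram over atomic (time-frequency dictionary) indices; the histograms here are random placeholders, not TIMIT-derived features.

```python
# Minimal sketch: each sentence is reduced to a probability histogram over atomic
# (time-frequency) indices, and a test sentence is assigned to the speaker whose
# training histogram is closest in Euclidean distance.
import numpy as np

def index_histogram(atom_indices, n_atoms):
    """Normalized histogram of dictionary-atom indices selected by matching pursuit."""
    counts = np.bincount(atom_indices, minlength=n_atoms).astype(float)
    return counts / counts.sum()

def classify(test_hist, speaker_hists):
    """Return the speaker whose training histogram has the lowest Euclidean distance."""
    return min(speaker_hists,
               key=lambda spk: np.linalg.norm(test_hist - speaker_hists[spk]))

n_atoms = 256
rng = np.random.default_rng(1)
speaker_hists = {
    "spk_A": index_histogram(rng.integers(0, n_atoms, 500), n_atoms),
    "spk_B": index_histogram(rng.integers(0, n_atoms, 500), n_atoms),
}
test_hist = index_histogram(rng.integers(0, n_atoms, 500), n_atoms)
print(classify(test_hist, speaker_hists))
```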

Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder

Procedia PDF Downloads 266
265 Comparative Coverage Analysis of Football and Other Sports by the Leading English Newspapers of India during FIFA World Cup 2014

Authors: Rajender Lal, Seema Kaushik

Abstract:

The FIFA World Cup, often simply called the World Cup, is an international association football competition contested by the senior men's national teams of the members of the Fédération Internationale de Football Association (FIFA), the sport's global governing body. The championship has been awarded every four years since the inaugural tournament in 1930, except in 1942 and 1946, when it was not held because of the Second World War. Its 20th edition took place in Brazil from 12 June to 13 July 2014 and was won by Germany. The World Cup is the most widely viewed and followed sporting event in the world, exceeding even the Olympic Games; the cumulative audience of all matches of the 2006 FIFA World Cup was estimated to be 26.29 billion, with an estimated 715.1 million people watching the final match, a ninth of the entire population of the planet. General-interest newspapers typically publish news articles and feature articles on national and international news as well as local news. The news includes political events and personalities, business and finance, crime, severe weather, and natural disasters; health and medicine, science, and technology; sports; and entertainment, society, food and cooking, clothing and home fashion, and the arts. It was therefore of interest to investigate how much coverage is given to this most widely viewed international event as compared to other sports in India. Hence, the present study was conducted with the aim of examining the comparative coverage of the FIFA World Cup 2014 and other sports in four leading newspapers of India, namely Hindustan Times, The Hindu, The Times of India, and The Tribune. Specific objectives were to measure the source of news, the type of news items and the placement of news related to the FIFA World Cup and other sports. A representative sample of ten editions each of the four English dailies was chosen for the purpose of the study. The analysis was based on the actual scanning of data from the representative sample of the dailies for the period of the competition. It can be concluded from the analysis that this event was given maximum coverage by the Hindustan Times, while other sports were equally covered by The Hindu.

Keywords: coverage analysis, FIFA World Cup 2014, Hindustan Times, The Hindu, The Times of India, The Tribune

Procedia PDF Downloads 264
264 Automatic Differential Diagnosis of Melanocytic Skin Tumours Using Ultrasound and Spectrophotometric Data

Authors: Kristina Sakalauskiene, Renaldas Raisutis, Gintare Linkeviciute, Skaidra Valiukeviciene

Abstract:

Cutaneous melanoma is a melanocytic skin tumour which has a very poor prognosis, as it is highly resistant to treatment and tends to metastasize. The thickness of a melanoma is one of the most important biomarkers for the stage of disease, prognosis and surgery planning. In this study, we hypothesized that the automatic analysis of spectrophotometric images and high-frequency ultrasonic 2D data can improve the differential diagnosis of cutaneous melanoma and provide additional information about tumour penetration depth. This paper presents a novel complex automatic system for non-invasive melanocytic skin tumour differential diagnosis and penetration depth evaluation. The system is composed of region-of-interest segmentation in spectrophotometric images and high-frequency ultrasound data, quantitative parameter evaluation, informative feature extraction and classification with a linear regression classifier. The segmentation of the melanocytic skin tumour region in the ultrasound image is based on parametric integrated backscattering coefficient calculation. The segmentation of the optical image is based on Otsu thresholding. In total, 29 quantitative tissue characterization parameters were evaluated by using ultrasound data (11 acoustical, 4 shape and 15 textural parameters) and 55 quantitative features of dermatoscopic and spectrophotometric images (using total melanin, dermal melanin, blood and collagen SIAgraphs acquired using the spectrophotometric imaging device SIAscope). In total, 102 melanocytic skin lesions (including 43 cutaneous melanomas) were examined by using the SIAscope and an ultrasound system with a 22 MHz center-frequency single-element transducer. The diagnosis and Breslow thickness (pT) of each melanocytic skin tumour were evaluated during routine histological examination after excision and used as a reference. The results of this study have shown that automatic analysis of spectrophotometric and high-frequency ultrasound data can improve the non-invasive classification accuracy of early-stage cutaneous melanoma and provide supplementary information about tumour penetration depth.
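
A minimal sketch of the Otsu-thresholding segmentation step mentioned for the optical images is given below; the file name, channel handling and clean-up parameters are illustrative assumptions rather than the authors' pipeline.

```python
# Minimal sketch of Otsu thresholding of a dermatoscopic/spectrophotometric image
# to isolate the lesion region of interest. File name and parameters are hypothetical.
import numpy as np
from skimage import io, color, filters, morphology

image = io.imread("lesion.png")            # hypothetical input image
gray = color.rgb2gray(image)               # work on a single intensity channel

threshold = filters.threshold_otsu(gray)   # Otsu's global threshold
lesion_mask = gray < threshold             # lesions are typically darker than skin

# Light clean-up of the binary mask before feature extraction
lesion_mask = morphology.remove_small_objects(lesion_mask, min_size=200)
lesion_mask = morphology.binary_closing(lesion_mask, morphology.disk(3))

print("Lesion area (pixels):", int(lesion_mask.sum()))
```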

Keywords: cutaneous melanoma, differential diagnosis, high-frequency ultrasound, melanocytic skin tumours, spectrophotometric imaging

Procedia PDF Downloads 249
263 Case-Based Reasoning Application to Predict Geological Features at Site C Dam Construction Project

Authors: Shahnam Behnam Malekzadeh, Ian Kerr, Tyson Kaempffer, Teague Harper, Andrew Watson

Abstract:

The Site C Hydroelectric dam is currently being constructed in north-eastern British Columbia on sub-horizontal sedimentary strata that dip approximately 15 meters from one bank of the Peace River to the other. More than 615 pressure sensors (Vibrating Wire Piezometers) have been installed on bedding planes (BPs) since construction began, with over 80 more planned before project completion. These pressure measurements are essential to monitor the stability of the rock foundation during and after construction and for dam safety purposes. BPs are identified by their clay gouge infilling, which varies in thickness from less than 1 mm to 20 mm and can be challenging to identify, as the core drilling process often disturbs or washes away the gouge material. Without the use of depth predictions from nearby boreholes, stratigraphic markers, and downhole geophysical data, it is difficult to confidently identify BP targets for the sensors. In this paper, a Case-Based Reasoning (CBR) method was used to develop an empirical model called the Bedding Plane Elevation Prediction (BPEP) to help geologists and geotechnical engineers predict geological features and bedding planes at new locations in a fast and accurate manner. To develop the CBR, a database was built from 64 pressure sensors already installed on key bedding planes BP25, BP28, and BP31 on the Right Bank, including bedding plane elevations and coordinates. Thirteen (20%) of the most recent cases were selected to validate and evaluate the accuracy of the developed model, while similarity was defined as the distance between previous cases and recent cases to predict the depth of significant BPs. The average difference between actual BP elevations and predicted elevations for the above BPs was ±55 cm; 69% of predicted elevations were within ±79 cm of actual BP elevations, and 100% of predicted elevations for new cases were within ±99 cm. Eventually, the actual results will be used to extend the database and improve BPEP so that it performs as a learning machine and predicts more accurate BP elevations for future sensor installations.
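
A minimal sketch of the case-based reasoning idea described above is shown below: the elevation of a bedding plane at a new borehole is predicted from the most similar (closest) previously logged cases. The coordinates and the inverse-distance weighting are illustrative assumptions, not the BPEP implementation.

```python
# Minimal sketch of nearest-case prediction of a bedding plane elevation,
# with similarity defined by horizontal distance. Coordinates are hypothetical.
import numpy as np

# Case base: (easting, northing, BP elevation in metres) for one bedding plane
cases = np.array([
    [1000.0, 2000.0, 412.3],
    [1050.0, 2020.0, 411.8],
    [1100.0, 1990.0, 411.1],
    [1020.0, 2080.0, 412.9],
])

def predict_elevation(x, y, cases, k=3):
    """Inverse-distance-weighted average of the k most similar (closest) cases."""
    d = np.hypot(cases[:, 0] - x, cases[:, 1] - y)
    nearest = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[nearest], 1e-6)   # avoid division by zero at a known case
    return float(np.average(cases[nearest, 2], weights=w))

print(f"Predicted BP elevation: {predict_elevation(1030.0, 2030.0, cases):.2f} m")
```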

Keywords: case-based reasoning, geological feature, geology, piezometer, pressure sensor, core logging, dam construction

Procedia PDF Downloads 57
262 Effect of Labisia pumila var. alata with a Structured Exercise Program in Women with Polycystic Ovarian Syndrome

Authors: D. Maryama AG. Daud, Zuliana Bacho, Stephanie Chok, DG. Mashitah PG. Baharuddin, Mohd Hatta Tarmizi, Nathira Abdul Majeed, Helen Lasimbang

Abstract:

Lifestyle, physical activity, food intake, genetics and medication are contributing factors to obesity, and some obese people are low or non-responders to exercise. Obesity is a very common clinical feature in women affected by Polycystic Ovarian Syndrome (PCOS). Labisia pumila var. alata (LP) is a local herb which has been widely used by Malay women in treating menstrual irregularities and painful menstruation and for postpartum well-being. Therefore, this study was carried out to investigate the effect of LP combined with a structured exercise program on the anthropometric, body composition and physical fitness performance of PCOS patients. A single-blind, parallel study design was used, whereby subjects were assigned to one of two 16-week structured exercise program (3 times a week) interventions: LP and exercise (LPE) or exercise only (E). All subjects in the LPE group were prescribed 200 mg LP once a day for 16 weeks. The training heart rate (HR) was monitored based on a percentage of the maximum HR (HRmax) achieved during a submaximal exercise test conducted at wk-0 and wk-8. Aerobic exercise intensity progressed from 25–30 min at 60–65% HRmax during the first week to 45 min at 75–80% HRmax by the end of the study. Anthropometric measures (body weight, Wt; waist circumference, WC; and hip circumference, HC), body composition (fat mass, FM; percentage body fat, %BF; fat-free mass, FFM) and physical fitness performance (push-ups to failure, PU; 1-minute sit-ups, SU; and aerobic step test, PVO2max) were measured at wk-0, wk-4, wk-8, wk-12, and wk-16. This study found that LP does not have a significant effect on the body composition, anthropometric or physical fitness performance of PCOS patients undergoing a structured exercise program; that is, LP does not improve the responses of PCOS patients to exercise in terms of anthropometric, body composition and physical fitness performance. Overall, the data show that PCOS patients respond to exercise by increasing their aerobic endurance and muscle endurance performance, with significant reductions in FM, %BF, HC, and Wt. Therefore, exercise programs for PCOS patients should focus on aerobic fitness and muscle endurance.
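
A minimal sketch of the heart-rate prescription used above, expressing training zones as percentages of the HRmax obtained in the submaximal test, is given below; the HRmax value is a hypothetical example.

```python
# Minimal sketch of training zones expressed as a percentage of the maximum heart
# rate (HRmax) measured in the submaximal test. The HRmax value is hypothetical.
def training_zone(hr_max, low_pct, high_pct):
    """Return (lower, upper) target heart rate in beats per minute."""
    return round(hr_max * low_pct), round(hr_max * high_pct)

hr_max = 180  # hypothetical HRmax from the submaximal exercise test
print("Week 1 zone (60-65% HRmax):", training_zone(hr_max, 0.60, 0.65))
print("Final weeks zone (75-80% HRmax):", training_zone(hr_max, 0.75, 0.80))
```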

Keywords: polycystic ovarian syndrome, Labisia pumila var. alata, body composition, aerobic endurance, muscle endurance, anthropometric

Procedia PDF Downloads 186
261 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm

Authors: Zachary Huffman, Joana Rocha

Abstract:

Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be represented by the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those of Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for the pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol’yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy, but, in general, they are most accurate under the specific Reynolds and Mach numbers they were developed for, while being less accurate under other flow conditions. Despite this, recent research into the possibility of using alternative methods for deriving the models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning. The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
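
A minimal sketch of a forward stepwise regression of the kind described above follows; the original work used R's stepwise tools, so this Python version, the candidate predictors and the AIC-based stopping rule are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of forward stepwise regression for a wall-pressure PSD model:
# predictors are added one at a time and kept only if they lower the AIC.
# All variable names and data below are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
# Hypothetical candidate predictors (e.g. nondimensional frequency, Mach and
# Reynolds numbers, boundary-layer thickness); the response stands in for log10(PSD).
X = pd.DataFrame({
    "log_freq": rng.uniform(2, 5, n),
    "mach": rng.uniform(0.05, 0.3, n),
    "reynolds": rng.uniform(1e5, 1e6, n),
    "delta": rng.uniform(0.01, 0.05, n),
})
y = -0.8 * X["log_freq"] + 2.0 * X["mach"] + rng.normal(0, 0.1, n)

def forward_stepwise(X, y):
    """Greedy forward selection: add the predictor that most reduces AIC, stop when none does."""
    selected, remaining = [], list(X.columns)
    best_aic = sm.OLS(y, np.ones(len(y))).fit().aic   # intercept-only model
    improved = True
    while improved and remaining:
        improved = False
        scores = [(sm.OLS(y, sm.add_constant(X[selected + [col]])).fit().aic, col)
                  for col in remaining]
        aic, col = min(scores)
        if aic < best_aic:
            best_aic, improved = aic, True
            selected.append(col)
            remaining.remove(col)
    return selected, best_aic

features, aic = forward_stepwise(X, y)
print("Selected predictors:", features, "AIC:", round(aic, 1))
```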

Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations

Procedia PDF Downloads 115
260 Regression-Based Approach for Development of a Cuff-Less Non-Intrusive Cardiovascular Health Monitor

Authors: Pranav Gulati, Isha Sharma

Abstract:

Hypertension and hypotension are known to have repercussions on the health of an individual, with hypertension contributing to an increased probability of cardiovascular disease and hypotension resulting in syncope. This prompts the development of a non-invasive, non-intrusive, continuous and cuff-less blood pressure monitoring system to detect blood pressure variations and to identify individuals with acute and chronic heart ailments; owing to the unavailability of such devices for practical daily use, it is difficult to screen and subsequently regulate blood pressure. The complexities that hamper the steady monitoring of blood pressure comprise the variations in physical characteristics from individual to individual and the postural differences at the site of monitoring. We propose to develop a continuous, comprehensive cardio-analysis tool based on reflective photoplethysmography (PPG). The proposed device, in the form of eyewear, captures the PPG signal and estimates the systolic and diastolic blood pressure using a sensor positioned near the temporal artery. The system relies on regression models based on the extraction of key points from a pair of PPG wavelets. The proposed system provides an edge over existing wearables, considering that it allows for uniform contact and pressure with the temporal site, in addition to minimal disturbance by movement. Additionally, the feature extraction algorithms enhance the integrity and quality of the extracted features by reducing unreliable data sets. We tested the system with 12 subjects, of which 6 served as the training dataset. For this, we measured the blood pressure using a cuff-based BP monitor (Omron HEM-8712) and at the same time recorded the PPG signal from our cardio-analysis tool. The complete test was conducted by using the cuff-based blood pressure monitor on the left arm while the PPG signal was acquired from the temporal site on the left side of the head. This acquisition served as the training input for the regression model on the selected features. The other 6 subjects were used to validate the model by conducting the same test on them. Results show that the developed prototype can robustly acquire the PPG signal and can therefore be used to reliably predict blood pressure levels.
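
A minimal sketch of the regression step described above is shown below, in which features extracted from PPG wavelets are regressed against cuff-measured systolic pressure using 6 training and 6 validation subjects; all feature names and numbers are hypothetical placeholders.

```python
# Minimal sketch: regress cuff-measured systolic BP on PPG-derived features, train on
# 6 subjects and validate on the other 6. All values are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
# Hypothetical per-subject feature vectors (e.g. pulse width, rise time, amplitude ratio)
features = rng.normal(size=(12, 3))
sbp_cuff = 115 + 8 * features[:, 0] + rng.normal(0, 3, 12)   # reference systolic BP (mmHg)

train_idx, test_idx = np.arange(6), np.arange(6, 12)         # 6 training / 6 validation subjects
model = LinearRegression().fit(features[train_idx], sbp_cuff[train_idx])

predicted = model.predict(features[test_idx])
print("Validation MAE (mmHg):", round(mean_absolute_error(sbp_cuff[test_idx], predicted), 1))
```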

Keywords: blood pressure, photoplethysmograph, eyewear, physiological monitoring

Procedia PDF Downloads 249
259 Polymeric Sustained Biodegradable Patch Formulation for Wound Healing

Authors: Abhay Asthana, Gyati Shilakari Asthana

Abstract:

It is patient compliance and stability, in combination with controlled drug delivery and biocompatibility, that form the core features in the present research and development of a sustained biodegradable patch formulation intended for wound healing. The aim was to impart sustained degradation, a sterile formulation, significant folding endurance, elasticity, biodegradability, bio-acceptability and strength. The optimized formulation was developed using components including the polymers hydroxypropyl methyl cellulose, ethyl cellulose and gelatin, citric acid-PEG-citric acid (CPEGC) triblock dendrimers and the active curcumin. The polymeric mixture was dissolved in geometric order in a suitable medium with continuous stirring under ambient conditions. With continued stirring, curcumin was added with the aid of DCM and methanol in an optimized ratio to obtain a homogeneous dispersion. The dispersion was sonicated at the optimum frequency for a given time and later cast to form a patch. All steps were carried out under strict aseptic conditions. The formulations within the acceptable working range were selected based on thickness, uniformity of drug content, smooth texture, flexibility and brittleness. The patch kept on stability in a sterile pack using butter paper displayed a folding endurance in the range of 20 to 23 folds, without any evidence of cracking, in the optimized formulation at room temperature (RT) (24 ± 2°C). The patch displayed acceptable parameters after stability studies conducted in refrigerated conditions (8 ± 0.2°C) and at RT (24 ± 2°C) for up to 90 days. Further, no significant changes were observed in critical parameters such as elasticity, biodegradability, drug release and drug content during the stability study conducted at RT (24 ± 2°C) for 45 and 90 days. The drug content was in the range of 95 to 102%, the moisture content did not exceed 19.2%, and the patch passed the content uniformity test. The percentage cumulative drug release was found to be 80% in 12 h and matched the biodegradation rate, with a correlation factor R² > 0.9. The biodegradable patch-based formulation developed shows promising results in terms of stability and release profiles.
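
A minimal sketch of the release/degradation comparison reported above is given below, computing the coefficient of determination (R²) between a cumulative drug-release profile and a biodegradation profile; the profiles are hypothetical placeholders, not the measured data.

```python
# Minimal sketch: compare a cumulative drug-release profile with a patch
# biodegradation profile and compute R^2. The profiles below are hypothetical.
import numpy as np
from scipy import stats

time_h = np.array([1, 2, 4, 6, 8, 10, 12])
cumulative_release_pct = np.array([12, 24, 42, 56, 67, 75, 80])   # % drug released
biodegradation_pct = np.array([10, 22, 40, 54, 66, 73, 79])       # % patch degraded

r, _ = stats.pearsonr(cumulative_release_pct, biodegradation_pct)
print("R^2 between release and biodegradation:", round(r**2, 3))
```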

Keywords: sustained biodegradation, wound healing, polymers, stability

Procedia PDF Downloads 316
258 Synthesis and Preparation of Carbon Ferromagnetic Nanocontainers for Cancer Therapy

Authors: L. Szymanski, Z. Kolacinski, Z. Kamiński, G. Raniszewski, J. Fraczyk, L. Pietrzak

Abstract:

In this article, the development and demonstration of a method and a model device for the hyperthermic selective destruction of cancer cells are presented. The method is based on the synthesis and functionalization of carbon nanotubes serving as nanocontainers for ferromagnetic material. The methodology for producing carbon ferromagnetic nanocontainers includes the synthesis of carbon nanotubes, chemical and physical characterization, increasing the content of ferromagnetic material, and biochemical functionalization involving the attachment of key addressing molecules. Biochemical functionalization of the ferromagnetic nanocontainers is necessary in order to increase selective binding to receptors presented on the surface of tumour cells. A multi-step modification procedure was finally used to attach folic acid to the surface of the ferromagnetic nanocontainers. Folic acid is a ligand of folate receptors, which are overexpressed in tumour cells. The presence of the ligand should ensure the specificity of the interaction between the ferromagnetic nanocontainers and tumour cells. The chemical functionalization comprises several steps: an oxidation reaction, transformation of carboxyl groups into more reactive ester or amide groups, incorporation of a spacer molecule (linker), and attachment of folic acid. Activation of the carboxylic groups was performed with a triazine coupling reagent (preparation of a superactive ester attached to the nanocontainers). The spacer molecules were designed and synthesized. In order to ensure the biocompatibility of the linkers, they were built from amino acids or peptides. Spacer molecules were synthesized using the SPPS method. Synthesis was performed on a 2-chlorotrityl resin. An important feature of the linker is its length; for this reason, peptide linkers containing from 2 to 4 -Ala- residues were synthesized. An independent synthesis of the conjugate of folic acid with 6-aminocaproic acid was carried out. The final step of the synthesis was connecting the conjugate with the spacer molecules and attaching it to the ferromagnetic nanocontainer surface. This article also contains information about the dedicated CVD and microwave plasma system used to produce nanotubes and ferromagnetic nanocontainers. The first tests of the device with the hyperthermia RF generator will be presented. The frequency of the RF generator was in the ranges from 10 to 14 MHz and from 265 to 621 kHz.

Keywords: synthesis of carbon nanotubes, hyperthermia, ligands, carbon nanotubes

Procedia PDF Downloads 265
257 Morphological Differentiation and Temporal Variability in Essential Oil Yield and Composition among Origanum vulgare ssp. hirtum L., Origanum onites L. and Origanum x intercedens from Ikaria Island (Greece)

Authors: A. Assariotakis, P. Vahamidis, P. Tarantilis, G. Economou

Abstract:

Greece, due to its geographical location and particular climatic conditions, presents a high biodiversity of medicinal and aromatic plants. Among them, the genus Origanum not only presents a wide distribution but also has great economic importance. After extensive surveys on Ikaria Island (Greece), 3 species of the genus Origanum were identified, namely Origanum vulgare ssp. hirtum (Greek oregano), Origanum onites (Turkish oregano) and Origanum x intercedens, a naturally occurring hybrid between O. hirtum and O. onites. The purpose of this study was to determine their morphological as well as their temporal variability in essential oil yield and composition under field conditions. For this reason, a plantation of each species was created using vegetative propagation and established at the experimental field of the Agricultural University of Athens (A.U.A.). From the establishment year and for the following two years (3 years of observations), several observations were taken during each growing season with the purpose of identifying the morphological differences among the studied species. Each year, the collected plant material (at the bloom stage) was air-dried at room temperature in the shade. The essential oil content was determined by hydrodistillation using a Clevenger-type apparatus. The chemical composition of the essential oils was investigated by Gas Chromatography-Mass Spectrometry (GC-MS). Significant differences were observed among the three oregano species in terms of plant height, leaf size and inflorescence features, as well as their biological cycle. The O. intercedens inflorescence presented more similarities with O. hirtum than with O. onites. It was found that calyx morphology could serve as a clear distinguishing feature between O. intercedens and O. hirtum: the calyx in O. hirtum presents five isometric teeth, whereas in O. intercedens it presents two longer and three shorter teeth. Essential oil content was significantly affected by genotype and year. O. hirtum presented a higher essential oil content than the other two species during the first year of cultivation; however, during the second year the hybrid (O. intercedens) recorded the highest values. Carvacrol, p-cymene and γ-terpinene were the main essential oil constituents of the three studied species. In O. hirtum the carvacrol content varied from 84.28 to 93.35%, in O. onites from 86.97 to 91.89%, whereas O. intercedens recorded the highest carvacrol content, from 89.25 to 97.23%.

Keywords: variability, oregano biotypes, essential oil, carvacrol

Procedia PDF Downloads 109
256 Implementing Lesson Study in Qatari Mathematics Classroom: A Case Study of a New Experience for Teachers through IMPULS-QU Lesson Study Program

Authors: Areej Isam Barham

Abstract:

The implementation of the Japanese lesson study approach in the mathematics classroom has grown worldwide as a model of professional development for teachers. In Qatar, the implementation of the IMPULS-QU lesson study program aimed to establish a robust organizational improvement model of professional development for mathematics teachers in Qatar schools. This study describes the implementation of a lesson study model at Al-Markhyia Independent Primary School through different stages and discusses how the planning process, the research lesson, and the post-lesson discussion contribute to providing teachers and researchers with a successful research lesson for teacher professional development. The research followed a case study approach in one mathematics classroom. Two teachers and one professional development specialist participated in the planning process. One teacher conducted the research lesson by introducing a problem-solving task related to the concept of the ‘mean’ in a mathematics class; 21 students in grade 6 participated in solving the mathematics problem, and 11 teachers, 4 professional development specialists, and 4 mathematics professors observed the research lesson. All of the previous participants except the students took part in a pre- and post-lesson discussion within this research. The study followed a qualitative research approach, analyzing the collected data through the different stages of the research lesson study. Observation, field notes, and semi-structured interviews were conducted to collect data to achieve the research aims. One feature of this lesson study research is that it describes the implementation of a lesson study as a new experience for one mathematics teacher and 21 students after 3 years of conducting the IMPULS-QU project in Al-Markhyia school. The research describes the various stages of the implementation of this lesson study model, starting from the planning process and ending with the post-lesson discussion process. Findings of the study also address the impact of the lesson study approach on teaching mathematics for the development of teachers, from their points of view. Results of the study show the benefits of using lesson study from the points of view of the participating teachers, their perceptions of the essential features of lesson study, and their needs for future development. The discussion of the study addresses different features and issues related to the implementation of the IMPULS-QU lesson study model in the mathematics classroom. In light of the study, the research presents recommendations and suggestions for future professional development.

Keywords: lesson study, mathematics education, mathematics teaching experience, teacher professional development

Procedia PDF Downloads 160
255 Investigation of a Novel Dual Band Microstrip/Waveguide Hybrid Antenna Element

Authors: Raoudane Bouziyan, Kawser Mohammad Tawhid

Abstract:

Microstrip antennas are low in profile, light in weight and conformal in structure, and are now developed for many applications. The main difficulty of the microstrip antenna is its narrow bandwidth. Several modern applications, like satellite communications, remote sensing, and multi-function radar systems, would find it useful if a dual-band antenna operated from a single aperture. Some applications require covering both transmitting and receiving frequency bands which are spaced apart. Providing multiple antennas to handle multiple frequencies and polarizations becomes especially difficult if the available space is limited, as with airborne platforms and submarine periscopes. Dual-band operation can be realized from a single feed using a slot-loaded or stacked microstrip antenna, or from two separately fed antennas sharing a common aperture. The former design, when used in arrays, has certain limitations, like a complicated beam-forming or diplexing network and difficulty in realizing good radiation patterns in both bands. The second technique provides more flexibility with a separate feed system, as the beams in each frequency band can be controlled independently. Another desirable feature of a dual-band antenna is easy adjustability of the upper and lower frequency bands. This work presents the investigation of a new dual-band antenna, which is a hybrid of microstrip and waveguide radiating elements. The low-band radiator is a Shorted Annular Ring (SAR) microstrip antenna and the high-band radiator is an aperture antenna. The hybrid antenna is realized by forming a waveguide radiator in the shorted region of the SAR microstrip antenna. It is shown that the upper-to-lower frequency ratio can be controlled by the proper choice of various dimensions and of the dielectric material. Operation in both linear and circular polarization is possible in either band. Moreover, both broadside and conical beams can be generated in either band from this antenna element. The Finite Element Method based software HFSS and the Method of Moments based software FEKO were employed to perform parametric studies of the proposed dual-band antenna. The antenna was not tested physically; therefore, in most cases, both HFSS and FEKO were employed to corroborate the simulation results.

Keywords: FEKO, HFSS, dual band, shorted annular ring patch

Procedia PDF Downloads 378
254 Software User Experience Enhancement through Collaborative Design

Authors: Shan Wang, Fahad Alhathal, Daniel Hobson

Abstract:

User-centered design skills play an important role in crafting a positive and intuitive user experience for software applications. Embracing a user-centric design approach involves understanding the needs, preferences, and behaviors of the end-users throughout the design process. This mindset not only enhances the usability of the software but also fosters a deeper connection between the digital product and its users. This paper covers a 6-month knowledge exchange collaboration project between an academic institution and an industry partner in 2023, which aimed to improve the user experience of a digital platform used as a knowledge management tool, to understand users' preferences for features, identify sources of frustration, and pinpoint areas for enhancement. This research applied one of the most effective methods of implementing user-centered design: co-design workshops for testing user onboarding experiences, which involve the active participation of users in the design process. More specifically, in January 2023, we organized eight workshops with a diverse group of 11 individuals. Throughout these sessions, we accumulated a total of 11 hours of qualitative data in both video and audio formats. Subsequently, we conducted an analysis of user journeys, identifying common issues and potential areas for improvement. This analysis was pivotal in guiding the knowledge management software in prioritizing feature enhancements and design improvements. Employing a user-centered design thinking process, we developed a series of graphic design solutions in collaboration with the software management tool company. These solutions were targeted at refining onboarding user experiences, workplace interfaces, and interactive design. Some of these design solutions were translated into tangible interfaces for the knowledge management tool. By actively involving users in the design process and valuing their input, developers can create products that are not only functional but also resonate with the end-users, ultimately leading to greater success in the competitive software landscape. In conclusion, this paper not only contributes insights into designing onboarding user experiences for software within a co-design approach but also presents key theories on leveraging the user-centered design process in software design to enhance overall user experiences.

Keywords: user experiences, co-design, design process, knowledge management tool, user-centered design

Procedia PDF Downloads 29
253 Impact of the Oxygen Content on the Optoelectronic Properties of the Indium-Tin-Oxide Based Transparent Electrodes for Silicon Heterojunction Solar Cells

Authors: Brahim Aissa

Abstract:

Transparent conductive oxides (TCOs) used as front electrodes in solar cells must simultaneously feature high electrical conductivity, low contact resistance with the adjacent layers, and an appropriate refractive index for maximal light in-coupling into the device. However, these properties may conflict with each other, motivating the search for TCOs with high performance. Additionally, due to the presence of temperature-sensitive layers in many solar cell designs (for example, in thin-film silicon and silicon heterojunction (SHJ) cells), low-temperature deposition processes are more suitable. Several deposition techniques have already been explored to fabricate high-mobility TCOs at low temperatures, including sputter deposition, chemical vapor deposition, and atomic layer deposition. Among this variety of methods, to the best of our knowledge, magnetron sputtering deposition is the most established technique, despite the fact that it can lead to damage of the underlying layers. Sn-doped In₂O₃ (ITO) is the most commonly used transparent electrode contact in SHJ technology. In this work, we studied the properties of ITO thin films grown by RF sputtering. Using different oxygen fractions in the argon/oxygen plasma, we prepared ITO films deposited on glass substrates, on one hand, and on a-Si (p and n-type):H/intrinsic a-Si/glass substrates, on the other hand. Hall effect measurements were systematically conducted together with total-transmittance (TT) and total-reflectance (TR) spectrometry. The electrical properties were drastically affected, whereas the TT and TR were found to be only slightly impacted by the oxygen variation. Furthermore, the time-of-flight secondary ion mass spectrometry (TOF-SIMS) technique was used to determine the distribution of various species throughout the thickness of the ITO and at the various interfaces. The depth profiling of indium, oxygen, tin, silicon, phosphorus, boron and hydrogen was investigated throughout the various thicknesses and interfaces, and the obtained results are discussed accordingly. Finally, the extreme conditions were selected to fabricate rear-emitter SHJ devices, and the photovoltaic performance was evaluated; the lower oxygen flow ratio was found to yield the best performance, attributed to a lower series resistance.
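
A minimal sketch of how Hall-effect measurements of the kind mentioned above translate into ITO transport parameters (carrier density, mobility and resistivity) is given below; the measured values used here are hypothetical examples, not the study's data.

```python
# Minimal sketch: derive bulk carrier density, mobility and resistivity of an ITO film
# from a sheet resistance and a Hall sheet carrier density. Example values are hypothetical.
E_CHARGE = 1.602e-19          # elementary charge (C)

def hall_parameters(sheet_resistance_ohm_sq, hall_sheet_density_cm2, thickness_cm):
    """Carrier concentration (cm^-3), mobility (cm^2/Vs) and resistivity (Ohm.cm)."""
    n = hall_sheet_density_cm2 / thickness_cm              # bulk carrier density
    resistivity = sheet_resistance_ohm_sq * thickness_cm   # rho = R_sheet * t
    mobility = 1.0 / (E_CHARGE * n * resistivity)          # mu = 1 / (q * n * rho)
    return n, mobility, resistivity

n, mu, rho = hall_parameters(sheet_resistance_ohm_sq=50.0,
                             hall_sheet_density_cm2=4.0e15,
                             thickness_cm=80e-7)           # ~80 nm film
print(f"n = {n:.2e} cm^-3, mobility = {mu:.1f} cm^2/Vs, resistivity = {rho:.2e} Ohm.cm")
```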

Keywords: solar cell, silicon heterojunction, oxygen content, optoelectronic properties

Procedia PDF Downloads 128
252 Assignment of Legal Personality to Robots: A Premature Meditation

Authors: Solomon Okorley

Abstract:

With the emergence of artificial intelligence, a proposition that has been made with increasing conviction is the need to assign legal personhood to robots. A major problem that arises when dealing with robots is the issue of liability: whom does one hold liable when a robot causes harm? The suggestion to assign legal personality to robots has been made to aid in the assignment of liability. This paper contends that it is premature to assign legal personhood to robots. The paper employed the doctrinal and comparative research methodology. The paper first discusses the various theories that underpin the granting of legal personhood to juridical personalities to ascertain whether these theories can aid the proposition to assign legal personhood to robots. These theories include fiction theory, aggregate theory, realist theory, and organism theory. Except for the aggregate theory, the fiction theory, the realist theory and the organism theory provide a good foundation for the proposal for legal personhood to be assigned to robots. The paper then considers whether robots should be assigned legal personhood from a jurisprudential approach. The legal positivists assert that no metaphysical presuppositions are needed to determine who could be a legal person: the sole deciding factor is engagement in legal relations, and this prerequisite could be fulfilled by robots. However, rationalists, religionists and naturalists assert that the satisfaction of the metaphysical criteria is the basis of legal personality, and since robots do not possess this feature, they cannot be assigned legal personhood. This differing perspective shows that the jurisprudential school of thought to which one belongs influences the decision whether to assign legal personhood to robots. The paper makes arguments for and against the assigning of legal personhood to robots. Assigning legal personhood to robots is necessary for the assignment of liability, and since robots are independent in their operation, they should be assigned legal personhood. However, it is argued that the degree of autonomy is insufficient: robots do not understand legal obligations; they do not have a will of their own, and the purported autonomy that they possess is an ‘imputed autonomy’. A crucial question to be asked is ‘whether it is desirable to confer legal personhood on robots’ and not ‘whether legal personhood should be assigned to robots’. This is due to the subjective nature of the responses to such a question as well as the peculiarities of countries in responding to it. The main argument in support of assigning legal personhood to robots is to aid in assigning liability. However, it is argued that conferring legal personhood on robots is not the only way to deal with liability issues. Since any of the stakeholders involved with the robot system can be held liable for an accident, it is not desirable to assign legal personhood to robots. It is forecast that in the epoch of strong artificial intelligence, granting robots legal personhood will be plausible; however, in the current era, it is premature.

Keywords: autonomy, legal personhood, premature, jurisprudential

Procedia PDF Downloads 35
251 Results of Three-Year Operation of 220kV Pilot Superconducting Fault Current Limiter in Moscow Power Grid

Authors: M. Moyzykh, I. Klichuk, L. Sabirov, D. Kolomentseva, E. Magommedov

Abstract:

Modern city electrical grids are forced to increase their density due to the increasing number of customers and requirements for reliability and resiliency. However, progress in this direction is often limited by the capabilities of existing network equipment. New energy sources or grid connections increase the level of short-circuit currents in the adjacent network, which can exceed the maximum ratings of equipment: the breaking capacity of circuit breakers and the thermal and dynamic current withstand qualities of disconnectors, cables, and transformers. The superconducting fault current limiter (SFCL) is a modern solution designed to deal with the increasing fault current levels in power grids. The key feature of this device is its instant (less than 2 ms) limitation of the current level due to the nature of the superconductor. In 2019, Moscow utilities installed a SuperOx SFCL in the city power grid to test the capabilities of this novel technology. The SFCL became the first in the Russian energy system and is currently the most powerful SFCL in the world. Modern SFCLs use second-generation high-temperature superconductors (2G HTS). Despite its name, HTS still requires the low temperature of liquid nitrogen for operation. As a result, the Moscow SFCL is built with a cryogenic system to provide cooling to the superconductor. The cryogenic system consists of three cryostats that contain the superconductor part and are filled with liquid nitrogen (three phases), three cryocoolers, one water chiller, three cryopumps, and pressure builders. All these components are controlled by an automatic control system. The SFCL has been continuously operating in the city grid for over three years. During that period of operation, numerous faults occurred, including cryocooler failure, chiller failure, pump failure, and others (like a cryogenic system power outage). All these faults were resolved without an SFCL shutdown, owing to the specially designed cryogenic system backups and the quick responses of the grid operator utilities and the SuperOx crew. The paper will describe in detail the results of SFCL operation and cryogenic system maintenance and what measures were taken to solve and prevent similar faults in the future.

Keywords: superconductivity, current limiter, SFCL, HTS, utilities, cryogenics

Procedia PDF Downloads 58
250 The Microstructure and Corrosion Behavior of High Entropy Metallic Layers Electrodeposited by Low and High-Temperature Methods

Authors: Zbigniew Szklarz, Aldona Garbacz-Klempka, Magdalena Bisztyga-Szklarz

Abstract:

Typical metallic alloys are based on one major alloying component, where the addition of other elements is intended to improve or modify certain properties, most of all the mechanical properties. However, in 1995 a new concept of metallic alloys was described and defined. High Entropy Alloys (HEA) contain at least five alloying elements in amounts from 5 to 20 at.%. A common feature of this type of alloy is the absence of intermetallic phases, high homogeneity of the microstructure and a unique chemical composition, which leads to materials with very high strength indicators, stable structures (also at high temperatures) and excellent corrosion resistance. Hence, HEA can be successfully used as substitutes for typical metallic alloys in various applications where sufficiently high properties are desirable. For fabricating HEA, a few routes are applied: 1/ from the liquid phase, i.e. casting (usually arc melting); 2/ from the solid phase, i.e. powder metallurgy (sintering methods preceded by mechanical synthesis); 3/ from the gas phase, e.g. sputtering; or 4/ other deposition methods like electrodeposition from liquids. The application of different production methods creates different HEA microstructures, which can entail differences in their properties. The last two methods also allow coatings with HEA structures to be obtained, hereinafter referred to as High Entropy Films (HEF). With reference to the above, the crucial aim of this work was the optimization of the manufacturing process of multi-component metallic layers (HEF) by low- and high-temperature electrochemical deposition (ED). The low-temperature deposition process was carried out at ambient or elevated temperature (up to 100 °C) in an organic electrolyte. The high-temperature electrodeposition (several hundred degrees Celsius), in turn, allowed the HEF layer to be formed by electrochemical reduction of metals from molten salts. The basic chemical composition of the coatings was CoCrFeMnNi (known as the Cantor alloy); however, it was modified with other, selected elements like Al or Cu. The optimization of the parameters that allow, as far as possible, a homogeneous and equimolar HEF composition to be obtained is the main result of the presented studies. In order to analyse and compare the microstructure, SEM/EBSD, TEM and XRD techniques were employed. Moreover, the determination of the corrosion resistance of the CoCrFeMnNi(Cu or Al) layers in selected electrolytes (i.e. organic and non-organic liquids) was no less important than the above-mentioned objectives.

Keywords: high entropy alloys, electrodeposition, corrosion behavior, microstructure

Procedia PDF Downloads 56
249 China and the Criminalization of Aggression. The Juxtaposition of Justice and the Maintenance of International Peace and Security

Authors: Elisabetta Baldassini

Abstract:

Responses to atrocities are always unique and context-dependent. They cannot be foretold nor easily prompted. However, the events of the twentieth century set the scene for the international community to explore new and more robust systems in response to war atrocities, with the ultimate goal being the restoration and maintenance of peace and security. The outlawry of war and the attribution of individual liability for international crimes were two major landmarks that set the roots for the development of international criminal law. From the London Conference (1945) for the establishment of the first international military tribunal in Nuremberg to Rome and the inauguration of the first permanent international criminal court, the development of international criminal law has shaped in itself a fluctuating degree of tension between justice and the maintenance of international peace and security, the cardinal dichotomy of this article. The adoption of judicial measures to achieve peace indeed set justice as an essential feature at the heart of the new international system. The black hole of this dichotomy is the crime of aggression. Aggression was at first the key component of a wide body of peace projects prosecuted under the charge of crimes against peace. However, the wide array of controversies around aggression, mostly related to its definition, its determination and the involvement of the Security Council, partly silenced these efforts and agreements. Notwithstanding the establishment of the International Criminal Court (ICC), jurisdiction over the crime of aggression was suspended until an agreement over the definition and the conditions for the Court’s exercise of jurisdiction was reached. Compromise over the crime was achieved in Kampala in 2010, and the Court’s jurisdiction over the crime of aggression was eventually activated on 17 July 2018. China has steadily supported the advancement of international criminal justice, together with the establishment of a permanent international judicial body to prosecute grave crimes, and has proactively participated in the various stages of the codification and development of the crime of aggression. However, China has also expressed systematic reservations and setbacks. With the use of primary and secondary sources, including semi-structured interviews, this research aims at analyzing the role that China has played throughout the substantive historical development of the crime of aggression, demonstrating a sharp inclination towards the maintenance of international peace and security. Such state behavior seems to reflect national and international political mechanisms that gravitate around a distinct rationale involving a share of culture and tradition.

Keywords: maintenance of peace and security, cultural expression of justice, crime of aggression, China

Procedia PDF Downloads 201
248 Implementing Critical Friends Groups in Schools

Authors: S. Odabasi Cimer, A. Cimer

Abstract:

Recently, the poor quality of education, low-achieving students, low performance in international exams and the little or no effect of education reforms on teaching in classrooms have been the main problems of education discussed in Turkey. Research has shown that the quality of an education system cannot exceed the quality of its teachers and teaching. Therefore, in-service training (INSET) courses are important for improving teacher quality and thereby the quality of education. However, according to research conducted on the evaluation of INSET courses in Turkey, they are not effective in improving the quality of teaching in the classroom. The main reason for this is that INSET courses are conducted and delivered in a limited time and presented theoretically, which does not meet the needs of teachers; as a result, the knowledge and skills taught are not used in the classrooms. Recently, developed countries have been using Critical Friends Groups (CFGs) successfully for the purpose of school-based training of teachers. CFGs are learning groups of 6-10 teachers aimed at fostering their capacities to undertake instructional and personal improvement and schoolwide reform. CFGs have been recognized as a critical feature in school reform, improving teaching practice and improving student achievement. In addition, in the USA, teachers have named CFGs one of the most powerful professional development activities in which they have ever participated. In Turkey, however, the concept is new. This study aimed to investigate the implications of the application, evaluation, and promotion of CFGs, which have the potential to contribute to teacher development and student learning in schools in Turkey. For this purpose, the study employed a qualitative approach and a case study methodology to implement the model in high schools. The research was conducted in two schools, and 13 teachers working in these schools participated. The study lasted two years, and the data were collected through various data collection tools, including interviews, meeting transcripts, questionnaires, portfolios, and diaries. The results of the study showed that CFGs contributed to the professional development of teachers and their students’ learning. They also contributed to a culture of collaborative work in the schools. A number of barriers and challenges which prevent effective implementation were also determined.

Keywords: critical friends group, education reform, science learning, teacher education

Procedia PDF Downloads 99
247 A Study of Possible Approach to Facilitate Social Sustainability of Industrial Land Redevelopment-Led Urban Regeneration

Authors: Hung Hing Chan, Tai-Shan Hu

Abstract:

Kaohsiung has been an industrial city of Taiwan for over a hundred years. Consequently, several abandoned industrial lands were left behind when the process of deindustrialization started, resulting in the decay of the adjacent urban communities. These industrial lands, which are brownfields potentially or already contaminated by hazardous substances, have created social injustice for the surrounding communities. The redevelopment of industrial land brings sustainable development to the communities, and the redevelopment can take different forms, depending on the natural conditions. This research studies possible approaches to facilitate the social sustainability of urban regeneration resulting from industrial land redevelopment projects, an aspect which has often been ignored. The aim of the research is to identify the best Western practices of brownfield redevelopment for facilitating the social aspect of sustainable urban regeneration and to make a contribution to the industrial land redevelopment of Taiwan. The research is conducted via literature review and case study. Industrial land redevelopment has been a social focus in blighted communities for promoting urban regeneration in the post-industrial age. The tendency of this kind of redevelopment is towards constructing the built environment; as a result, the environmental and economic aspects of sustainability of the redeveloped industrial land are boosted, while the social aspect is not necessarily better, since the local communities affected are rarely engaged in the decision-making process and adequate resource allocation to the projects is not guaranteed. To ensure that an improvement in social sustainability is reached, the recommendations of this research, such as civic engagement, the formation of a dedicated brownfield regeneration agency, and resource allocation to employ a brownfield process manager and to support strategic communication, should be incorporated into the real practices of industrial land-led urban regeneration. Besides, the case study also shows that the social sustainability of industrial land-led urban regeneration can be promoted by (1) upholding local features and public participation in the regeneration process, (2) allocating resources and enforcing a responsibility system, and (3) assuring financial resources for the urban regeneration projects and residents. Subsequent research will involve in-depth interviews with the village chiefs of the related communities in Kaohsiung and questionnaires with community members to understand their opinions regarding social sustainability, aiming at evaluating social sustainability and finding out which kind of redevelopment project tends to better support the social dimension of sustainable development.

Keywords: brownfield, industrial land, redevelopment, social sustainability, urban regeneration

Procedia PDF Downloads 193
246 Using Corpora in Semantic Studies of English Adjectives

Authors: Oxana Lukoshus

Abstract:

The methods of corpus linguistics, a well-established field of research, are being increasingly applied in cognitive linguistics. Corpus data are especially useful for quantitative studies of grammatical and other aspects of language. The main objective of this paper is to demonstrate how present-day corpora can be applied in semantic studies in general and in semantic studies of adjectives in particular. Polysemantic adjectives have been the subject of numerous studies, but most of them have been carried out on dictionary data. Undoubtedly, dictionaries are one of the basic data sources, but only at the initial steps of a research project: the author usually starts with an analysis of the lexicographic data, after which s/he comes up with a hypothesis. In the research conducted, three polysemantic synonyms, true, loyal, and faithful, were analyzed in terms of the differences and similarities in their semantic structure. A corpus-based approach to the study of these adjectives involves the following. After the analysis of the dictionary data, two corpora were consulted to study the distributional patterns of the words under study: the British National Corpus (BNC) and the Corpus of Contemporary American English (COCA). These corpora are continually updated and contain thousands of examples of the words under research, which makes them a useful and convenient data source. For the purpose of this study, there were no special requirements regarding genre, mode, or time of the texts included in the corpora. Out of the range of possibilities offered by corpus-analysis software (e.g. word lists, statistics of word frequencies, etc.), the most useful tool for the semantic analysis was extracting a list of co-occurrences for the given search words. Searching by lemmas (e.g. true, true to) and grouping the results by lemma proved to be the most efficient corpus features for the adjectives under study. Following the search process, the corpora provided a list of co-occurrences, which were then analyzed and classified. Not every co-occurrence was relevant for the analysis. For example, phrases like An enormous sense of responsibility to protect the minds and hearts of the faithful from incursions by the state was perceived to be the basic duty of the church leaders or ‘True,’ said Phoebe, ‘but I'd probably get to be a Union Official immediately were left out, as in the first example the faithful is a substantivized adjective and in the second example true is used alone, with no other parts of speech. The subsequent analysis of the corpus data provided the grounds for the distributional groups of the adjectives under study, which were then investigated with the help of a semantic experiment. To sum up, the corpus-based approach has proved to be a powerful, reliable, and convenient tool for obtaining data for further semantic study.
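
To make the co-occurrence step concrete, the following minimal sketch compiles a window-based co-occurrence list for a search word from a locally saved plain-text sample. It is only an illustration: the file name, window size, and regex tokenization are assumptions, not the BNC/COCA query interface actually used in the study.

```python
# Minimal window-based co-occurrence count for a search word; a sketch only,
# not the BNC/COCA interface used in the study. File name and window size
# are illustrative assumptions.
import re
from collections import Counter

def co_occurrences(text, search_word, window=4):
    """Count word types appearing within +/- `window` tokens of `search_word`."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == search_word:
            counts.update(tokens[max(0, i - window):i])   # left context
            counts.update(tokens[i + 1:i + 1 + window])   # right context
    counts.pop(search_word, None)  # drop the search word's co-occurrence with itself
    return counts

with open("corpus_sample.txt", encoding="utf-8") as f:  # hypothetical sample file
    sample = f.read()

for word, freq in co_occurrences(sample, "faithful").most_common(20):
    print(word, freq)
```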

Keywords: corpora, corpus-based approach, polysemantic adjectives, semantic studies

Procedia PDF Downloads 296
245 Proposal of a Rectenna Built by Using Paper as a Dielectric Substrate for Electromagnetic Energy Harvesting

Authors: Ursula D. C. Resende, Yan G. Santos, Lucas M. de O. Andrade

Abstract:

The recent and fast development of the internet, wireless and telecommunication technologies, and low-power electronic devices has made a considerable amount of electromagnetic energy available in the environment and has driven the expansion of smart application technologies. These applications are used in Internet of Things devices and in 4G and 5G solutions, and their main feature is the use of wireless sensors. Although these sensors are low-power loads, supplying them efficiently and reliably without a traditional battery poses a major challenge. Radio-frequency energy harvesting is especially suitable for powering wireless sensors through a rectenna, since it can be completely integrated into the distributed structure hosting the sensors, reducing cost, maintenance, and environmental impact. A rectenna is a device composed of an antenna and a rectifier circuit. The antenna collects as much radio-frequency radiation as possible and transfers it to the rectifier, a nonlinear circuit that converts the very low input radio-frequency energy into a direct-current voltage. In this work, a set of rectennas mounted on a paper substrate is proposed, which can be used as an inner coating of buildings while simultaneously harvesting electromagnetic energy from the environment. Each individual rectenna is composed of a 2.45 GHz patch antenna and a voltage-doubler rectifier circuit, built on the same paper substrate. The antenna contains a rectangular radiating element and a microstrip transmission line, designed and optimized using the CST simulation software to obtain S11 values below -10 dB at 2.45 GHz. To increase the amount of harvested power, eight individual rectennas incorporating metamaterial cells were connected in parallel, forming a system denominated the Electromagnetic Wall (EW). To evaluate the EW performance, it was positioned at a variable distance from an internet router and used to feed a 27 kΩ resistive load. The results showed that when more than one rectenna is connected in parallel, enough power can be harvested to feed very-low-consumption sensors. The 0.12 m² EW proposed in this work was able to harvest 0.6 mW from the environment. It was also observed that the metamaterial structures provide a marked increase in the amount of electromagnetic energy harvested, from 0.2 mW to 0.6 mW.
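
As an illustration of the starting point for such a design, the sketch below applies the standard transmission-line equations for a rectangular patch at 2.45 GHz. The paper-substrate permittivity and thickness are assumed placeholder values, not those reported by the authors, whose final geometry was optimized in CST.

```python
# First-pass rectangular patch sizing at 2.45 GHz from the textbook
# transmission-line equations; eps_r and h for the paper substrate are
# assumed values, not the authors' measured parameters.
import math

c = 299_792_458.0   # speed of light, m/s
f0 = 2.45e9         # design frequency, Hz
eps_r = 3.0         # assumed relative permittivity of the paper substrate
h = 0.5e-3          # assumed substrate thickness, m

# Patch width chosen for efficient radiation
W = c / (2 * f0) * math.sqrt(2 / (eps_r + 1))

# Effective permittivity accounts for fringing fields in air
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5

# Length extension due to fringing, then the physical patch length
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / ((eps_eff - 0.258) * (W / h + 0.8))
L = c / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL

print(f"W = {W * 1e3:.1f} mm, L = {L * 1e3:.1f} mm (eps_eff = {eps_eff:.2f})")
```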

Keywords: electromagnetic energy harvesting, metamaterial, rectenna, rectifier circuit

Procedia PDF Downloads 131
244 Detection of Powdery Mildew Disease in Strawberry Using Image Texture and Supervised Classifiers

Authors: Sultan Mahmud, Qamar Zaman, Travis Esau, Young Chang

Abstract:

Strawberry powdery mildew (PM) is a serious disease with a significant impact on strawberry production. Field scouting is still the main way to find PM disease; it is not only labor-intensive but also makes monitoring disease severity almost impossible. To reduce the losses caused by PM disease and achieve faster, automatic detection, this paper proposes an approach based on image texture, classified with support vector machines (SVM) and k-nearest neighbors (kNN). The methodology is an image processing pipeline composed of five main steps: image acquisition, pre-processing, segmentation, feature extraction, and classification. Two strawberry fields were used in this study; images of healthy leaves and of leaves infected with PM (Sphaerotheca macularis) were acquired under artificial cloud lighting conditions. Colour thresholding was used to segment all images before textural analysis, and the colour co-occurrence matrix (CCM) was used to extract textural features. Forty textural features related to physiological parameters of the leaves were extracted from CCMs of NTSC (National Television System Committee) luminance and hue, saturation, and intensity (HSI) images. The normalized feature data were used for training and validation of the developed classifiers. The classifiers were evaluated with internal, external, and cross-validation, and the best classifier was selected based on performance and accuracy. Experimental results showed that the SVM classifier achieved accuracies of 98.33%, 85.33%, 87.33%, 93.33%, and 95.0% on internal, external-I, external-II, 4-fold cross-, and 5-fold cross-validation, respectively, whereas the kNN classifier achieved 90.0%, 72.00%, 74.66%, 89.33%, and 90.3%. Overall, the SVM classifier detected PM disease with the highest overall accuracy of 91.86% at a processing time of 1.1211 seconds, indicating that the proposed approach can support accurate, automatic identification and recognition of strawberry PM disease.
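
A simplified sketch of the feature-normalization and classifier-comparison stage is given below. It uses gray-level co-occurrence statistics from the hue channel via scikit-image as a stand-in for the full 40-feature HSI colour co-occurrence set, and the file paths and labels are illustrative, not the study's dataset.

```python
# Sketch: co-occurrence texture features from the hue channel, z-score
# normalization, and SVM vs. kNN comparison with 5-fold cross-validation.
# A stand-in for the authors' 40-feature CCM pipeline; paths are hypothetical.
import numpy as np
from skimage import io, color, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

PROPS = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")

def texture_features(path):
    """Co-occurrence features computed on the hue channel of a leaf image."""
    hsv = color.rgb2hsv(io.imread(path)[..., :3])
    hue = img_as_ubyte(hsv[..., 0])
    glcm = graycomatrix(hue, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel() for p in PROPS])

# Hypothetical dataset: (image path, label) pairs, 1 = infected, 0 = healthy
samples = [("leaf_healthy_01.png", 0), ("leaf_infected_01.png", 1)]  # ...
X = np.array([texture_features(p) for p, _ in samples])
y = np.array([label for _, label in samples])

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    model = make_pipeline(StandardScaler(), clf)   # normalization + classifier
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```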

Keywords: powdery mildew, image processing, textural analysis, color co-occurrence matrix, support vector machines, k-nearest neighbors

Procedia PDF Downloads 101
243 Detection and Classification Strabismus Using Convolutional Neural Network and Spatial Image Processing

Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson

Abstract:

Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage characterizes the type and degree of the misalignment. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned-face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG-16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using the facial landmarks, the eye region is segmented from the aligned face and fed into the VGG-16 CNN, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies its type (exotropia, esotropia, or vertical deviation). If stage 1 detects strabismus, the eye-region image is fed into stage 2, which starts with the estimation of pupil center coordinates using a Mask R-CNN deep neural network. The distance between the pupil coordinates and the eye landmarks is then calculated, along with the angle that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. The model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviation, respectively, with corresponding FPRs of 5.26%, 5.55%, and 0%. The addition of a feature based on the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).
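
A condensed sketch of the stage-1 flow (face detection, eye-region crop, VGG-16 classification) is given below. It omits the landmark-based alignment step, and the saved model file, crop heuristic, and output classes are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of stage 1: Haar-cascade face detection, a rough eye-region crop,
# and classification with a fine-tuned VGG-16. The model file name, crop
# heuristic, and class ordering are hypothetical.
import cv2
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.applications.vgg16 import preprocess_input

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = load_model("strabismus_vgg16.h5")  # hypothetical fine-tuned VGG-16

def classify(image_path):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # Rough eye band: upper-middle portion of the detected face box
    eyes = img[y + h // 5: y + h // 2, x: x + w]
    eyes = cv2.resize(eyes, (224, 224)).astype("float32")
    batch = preprocess_input(np.expand_dims(eyes, 0))
    return model.predict(batch)[0]  # e.g. [normal, eso, exo, vertical] scores

print(classify("child_face.jpg"))  # hypothetical input image
```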

Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation

Procedia PDF Downloads 59
242 Collaborative Data Refinement for Enhanced Ionic Conductivity Prediction in Garnet-Type Materials

Authors: Zakaria Kharbouch, Mustapha Bouchaara, F. Elkouihen, A. Habbal, A. Ratnani, A. Faik

Abstract:

Solid-state lithium-ion batteries have garnered increasing interest in modern energy research due to their potential for safer, more efficient, and sustainable energy storage systems. Among the critical components of these batteries, the electrolyte plays a pivotal role, with LLZO garnet-based electrolytes showing significant promise. Garnet materials offer intrinsic advantages such as high Li-ion conductivity, wide electrochemical stability, and excellent compatibility with lithium metal anodes. However, optimizing ionic conductivity in garnet structures poses a complex challenge, primarily due to the multitude of potential dopants that can be incorporated into the LLZO crystal lattice. The complexity of material design, influenced by numerous dopant options, requires a systematic method to find the most effective combinations. This study highlights the utility of machine learning (ML) techniques in the materials discovery process to navigate the complex range of factors in garnet-based electrolytes. Collaborators from the materials science and ML fields worked with a comprehensive dataset previously employed in a similar study and collected from various literature sources. This dataset served as the foundation for an extensive data refinement phase, where meticulous error identification, correction, outlier removal, and garnet-specific feature engineering were conducted. This rigorous process substantially improved the dataset's quality, ensuring it accurately captured the underlying physical and chemical principles governing garnet ionic conductivity. The data refinement effort resulted in a significant improvement in the predictive performance of the machine learning model. Originally starting at an accuracy of 0.32, the model underwent substantial refinement, ultimately achieving an accuracy of 0.88. This enhancement highlights the effectiveness of the interdisciplinary approach and underscores the substantial potential of machine learning techniques in materials science research.
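
The flavour of this refinement-and-remodelling loop can be sketched as follows. The file name, column names (li_content, dopant_radius, dopant_charge, sintering_temp, log_conductivity), and the random-forest baseline are hypothetical stand-ins, not the study's actual schema or model.

```python
# Hedged sketch: IQR-based outlier removal, a garnet-specific engineered
# feature, and a before/after cross-validated check. All names are placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("llzo_garnet_dataset.csv")  # hypothetical literature-compiled file

def iqr_filter(frame, col):
    """Drop rows whose `col` value lies outside 1.5 IQR of the quartiles."""
    q1, q3 = frame[col].quantile([0.25, 0.75])
    margin = 1.5 * (q3 - q1)
    return frame[frame[col].between(q1 - margin, q3 + margin)]

def cv_r2(frame, features):
    """Mean 5-fold cross-validated R^2 of a random-forest baseline."""
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    return cross_val_score(model, frame[features], frame["log_conductivity"],
                           cv=5, scoring="r2").mean()

base = ["li_content", "dopant_radius", "sintering_temp"]
raw = df.dropna(subset=base + ["log_conductivity", "dopant_charge"])
print("raw data     R^2:", round(cv_r2(raw, base), 2))

clean = iqr_filter(raw, "log_conductivity")
# Engineered feature: lithium content normalized by dopant charge
clean = clean.assign(li_per_charge=clean["li_content"] / (clean["dopant_charge"] + 1e-9))
print("refined data R^2:", round(cv_r2(clean, base + ["li_per_charge"]), 2))
```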

Keywords: lithium batteries, all-solid-state batteries, machine learning, solid state electrolytes

Procedia PDF Downloads 31
241 Nondecoupling Signatures of Supersymmetry and an Lμ-Lτ Gauge Boson at Belle-II

Authors: Heerak Banerjee, Sourov Roy

Abstract:

Supersymmetry, one of the most celebrated frameworks for explaining experimental observations where the standard model (SM) falls short, is still awaiting experimental vindication. At the same time, the idea of additional gauge symmetry, in particular gauged Lμ-Lτ symmetric models, has generated significant interest. Such models have been extensively proposed to explain the tantalizing discrepancy between the predicted and measured values of the muon anomalous magnetic moment, alongside several other issues plaguing the SM. While very little parameter space within these models remains unconstrained, this work finds that the γ + Missing Energy (ME) signal at the Belle-II detector would be a smoking gun for supersymmetry (SUSY) in the presence of a gauged U(1)Lμ-Lτ symmetry. A remarkable consequence of breaking the enhanced symmetry that appears in the limit of degenerate sleptons is the nondecoupling of the radiative contribution of heavy charged sleptons to the γ-Z΄ kinetic mixing. The signal process, e⁺e⁻ → γZ΄ → γ + ME, is an outcome of this ubiquitous feature. Taking into account the severe constraints placed on gauged Lμ-Lτ models by several low-energy observables, it is shown that any significant excess in all but the highest photon energy bin would be an undeniable signature of such heavy scalar fields in SUSY coupling to the additional gauge boson Z΄. The number of signal events depends crucially on the logarithm of the ratio of the stau to smuon mass in the presence of SUSY. In addition, the number is inversely proportional to the e⁺e⁻ collision energy, making a low-energy, high-luminosity collider like Belle-II an ideal testing ground for this channel. This process can probe large swathes of the hitherto free slepton mass ratio vs. additional gauge coupling (gₓ) parameter space. More importantly, it can explore the narrow slice of Z΄ mass (MZ΄) vs. gₓ parameter space still allowed in gauged U(1)Lμ-Lτ models for superheavy sparticles. The finding that the signal significance is independent of the individual slepton masses is an exciting prospect, as is the prospect that signatures of superheavy SUSY particles that may have escaped detection at the LHC could show up at the Belle-II detector.
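
Schematically, and only as a reader's guide to the dependences quoted above rather than the authors' exact expressions, the slepton-loop kinetic mixing and the resulting event count scale roughly as shown below; numerical coefficients, phase-space, and efficiency factors are omitted, and the 1/s behaviour is the leading-order fall-off with collision energy.

```latex
% Schematic scaling only; coefficients and phase-space factors are omitted.
\[
  \epsilon_{\gamma Z'} \propto \frac{e\, g_X}{16\pi^{2}}
      \ln\frac{m_{\tilde{\tau}}^{2}}{m_{\tilde{\mu}}^{2}},
  \qquad
  N_{\gamma+\mathrm{ME}} \sim \mathcal{L}\,\sigma(e^{+}e^{-}\to\gamma Z')
      \propto \mathcal{L}\,\frac{\alpha\,\epsilon_{\gamma Z'}^{2}}{s}.
\]
```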

Keywords: additional gauge symmetry, electron-positron collider, kinetic mixing, nondecoupling radiative effect, supersymmetry

Procedia PDF Downloads 107
240 A Network Economic Analysis of Friendship, Cultural Activity, and Homophily

Authors: Siming Xie

Abstract:

In social networks, the term homophily refers to the tendency of agents with similar characteristics to link with one another, and it is robustly observed across many contexts and dimensions. The starting point of this research is the observation that the “type” of an agent is not a single exogenous variable. Agents, despite differences in race, religion, and other hard-to-alter characteristics, may share interests and engage in activities that cut across those predetermined lines. This research aims to capture the interaction of homophily effects in a model where agents have two-dimensional characteristics (e.g., race and a personal hobby such as basketball, which one either likes or dislikes), with biases both in meeting opportunities and in favor of same-type friendships. A novel feature of the model is a matching process with biased meeting probabilities on the different dimensions, which helps to explain the structuring process in multidimensional networks without missing layer interdependencies. The main contribution of this study is a welfare-based matching process for agents with multi-dimensional characteristics. In particular, the research shows that biases in meeting opportunities on one dimension lead to the emergence of homophily on the other dimension. The objective is to determine the pattern of homophily in network formation, which sheds light on our understanding of segregation and its remedies. By constructing a two-dimensional matching process, the study describes agents' homophilous behavior in a multidimensional social network and constructs a game in which minorities and majorities play different strategies. It also shows that the optimal strategy is determined by relative group size: society suffers more from social segregation when the two racial groups are of similar size. The research also has political implications: cultivating the same characteristics among agents helps diminish social segregation, but only if the minority group is small enough. The research includes both theoretical models and empirical analysis. After setting out the friendship formation model, the author first uses MATLAB to perform iteration calculations, then derives the corresponding mathematical proofs, and finally shows that the model is consistent with empirical evidence from high school friendships. The anonymized data come from the National Longitudinal Study of Adolescent Health (Add Health).
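
An illustrative Python analogue of a meeting process biased on one dimension only is sketched below (the study's own iterations were run in MATLAB). The minority share, hobby probabilities, trait correlation, and bias level are made-up parameters chosen to show how group-biased meetings can raise same-hobby contact rates; they are not the paper's calibration or its welfare-based matching rule.

```python
# Toy two-dimensional meeting process: meetings are biased only on the group
# dimension, and we compare same-hobby contact rates with and without the bias.
# All parameters are illustrative assumptions.
import random

random.seed(0)
N = 2000

def make_agent():
    minority = random.random() < 0.3                      # assumed minority share
    hobby = random.random() < (0.7 if minority else 0.4)  # assumed trait correlation
    return {"minority": minority, "hobby": hobby}

agents = [make_agent() for _ in range(N)]
by_group = {
    True: [a for a in agents if a["minority"]],
    False: [a for a in agents if not a["minority"]],
}

def same_hobby_rate(bias, trials=20000):
    """Share of meetings in which both agents have the same hobby trait."""
    hits = 0
    for _ in range(trials):
        a = random.choice(agents)
        # With probability `bias`, the meeting is restricted to a's own group.
        pool = by_group[a["minority"]] if random.random() < bias else agents
        hits += random.choice(pool)["hobby"] == a["hobby"]
    return hits / trials

print("no bias on the group dimension:   ", round(same_hobby_rate(0.0), 3))
print("strong bias on the group dimension:", round(same_hobby_rate(0.8), 3))
```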

Keywords: homophily, multidimension, social networks, friendships

Procedia PDF Downloads 144
239 The ‘Accompanying Spouse Dependent Visa Status’: Challenges and Constraints Faced by Zimbabwean Immigrant Women in Integration into South Africa’s Formal Labour Market

Authors: Rujeko Samanthia Chimukuche

Abstract:

Introduction: Transboundary migration at both regional and continental levels has become a defining feature of the 21st century. The recent global migration crisis, driven by economic strife and war, brings an age-old problem back to the fore, but with fresh challenges. Migration and forced displacement are issues that require long-term solutions. In South Africa, for example, much attention has been paid to xenophobic attacks and other issues at the nexus of immigrant and indigenous communities, while limited focus has been placed on integration, specifically the formal labour integration of immigrant communities and the gender inequalities that accompany it. Despite South Africa's notable efforts in hosting large numbers of immigrants, several challenges arise in integrating migrants into society, as it is often difficult to harmonize the interests of indigenous communities and those of foreign nationals. This study aimed to fill this gap by analyzing how stringent immigration and visa regulations prevent skilled migrant women spouses from taking up employment, which often results in societal harms, including domestic abuse and minimal or no access to essential services such as healthcare, education, and social welfare. Methods: Using a qualitative approach, the study first analyzed South African migration and labour policies in terms of mainstreaming the gender needs of skilled migrant women. Secondly, the study highlighted the migratory experiences and constraints of skilled Zimbabwean women migrant spouses in South African labour integration; the experiences of these women reveal the gender inequalities of the migration policies. Thirdly, Zimbabwean women's opportunities and challenges in integrating into the South African formal labour market were explored. Lastly, practical interventions to support the integration of skilled migrant women spouses into South Africa's formal labour market were suggested. Findings: Key findings show that gender dynamics are pivotal to migration patterns and to the mainstreaming of gender in migration policies. The study therefore contributes to the fields of gender and migration by examining ways in which the gender rights of skilled migrant women spouses can be incorporated into labour integration policy-making.

Keywords: accompanying spouse visa, gender-migration, labour-integration, Zimbabwean women

Procedia PDF Downloads 95