290 Sympathetic Skin Response and Reaction Times in Chronic Autoimmune Thyroiditis: An Overlooked Electrodiagnostic Study
Authors: Oya Umit Yemisci, Nur Saracgil Cosar, Tubanur Ozturk Sisman, Selin Ozen
Abstract:
Chronic autoimmune thyroiditis (AIT) may result in a wide spectrum of reversible abnormalities in neuromuscular function. Usually, proximal muscle-related symptoms and neuropathic findings such as mild axonal peripheral neuropathy have been reported. Sympathetic skin responses are useful in evaluating the sudomotor activity of the unmyelinated sympathetic fibers of the autonomic nervous system. Neurocognitive impairment may also be a prominent feature of hypothyroidism, particularly in elderly patients. Electromyographic reaction times are a highly sensitive parameter providing objective data concerning cognitive and motor functions. The aim of this study was to evaluate peripheral nerve function, sympathetic skin responses and electroneuromyographic (ENMG) reaction times in euthyroid and subclinically hypothyroid patients with a diagnosis of AIT and compare them to those of a control group. Thirty-five euthyroid patients, 19 patients with subclinical hypothyroidism and 35 age- and sex-matched healthy subjects were included in the study. Motor and sensory nerve conduction studies, sympathetic skin responses recorded from the hand and foot by stimulating the contralateral median nerve, and simple reaction times obtained by stimulating the tibial nerve and recording from the extensor indicis proprius muscle were performed on all patients and controls. Only median nerve sensory conduction velocities of the forearm were slower in patients with AIT compared to the control group (p=0.019). Otherwise, nerve conduction studies and sympathetic skin responses showed no significant difference between the patients and the control group. However, reaction times were shorter in the healthy subjects compared to AIT patients. Prolongation of the reaction times may be considered a parameter reflecting alterations in cognitive function related to the primary disease process in AIT.
Combining sympathetic skin responses with more quantitative tests such as cardiovascular tests and sudomotor axon reflex testing may allow us to determine higher rates of involvement of the autonomic nervous system in AIT.
Keywords: sympathetic skin response, simple reaction time, chronic autoimmune thyroiditis
Procedia PDF Downloads 150
289 Thermoelectric Blanket for Aiding the Treatment of Cerebral Hypoxia and Other Related Conditions
Authors: Sarayu Vanga, Jorge Galeano-Cabral, Kaya Wei
Abstract:
Cerebral hypoxia refers to a condition in which there is a decrease in oxygen supply to the brain. Patients suffering from this condition experience a decrease in their body temperature. While there is currently no cure for cerebral hypoxia, certain procedures, such as regulating body temperature, are used to aid in the treatment of the condition. Hypoxia is well known to reduce the body temperature of mammals, although the neural origins of this response remain uncertain. In order to speed recovery from this condition, it is necessary to maintain a stable body temperature. In this study, we present an approach to regulating body temperature for patients who suffer from cerebral hypoxia or other similar conditions. After a thorough literature study, we propose the use of thermoelectric blankets, which are temperature-controlled thermal blankets based on thermoelectric devices. These blankets are capable of heating up and cooling down the patient to stabilize body temperature. This feature is possible through the reversible effect that thermoelectric devices offer while behaving as thermal sensors, and it is an effective way to stabilize temperature. Thermoelectricity is the direct conversion of thermal to electrical energy and vice versa; the thermal-to-electrical conversion is known as the Seebeck effect and is characterized by the Seebeck coefficient. In such a configuration, the device has cooling and heating sides whose temperatures can be interchanged by simply switching the direction of the current input to the system. The design integrates various aspects, including a humidifier, a ventilation machine, IV-administered medication, air conditioning, a circulation device, and a body temperature regulation system. The proposed design includes thermocouples that trigger the blanket to increase or decrease to a set temperature through a medical temperature sensor.
Additionally, the proposed design allows an efficient way to control fluctuations in body temperature while being cost-friendly, with an expected cost of 150 dollars. We are currently developing a prototype of the design to collect thermal and electrical data under different conditions and also intend to perform an optimization analysis to improve the design even further. While this proposal was developed for treating cerebral hypoxia, it can also aid in the treatment of other related conditions, as fluctuations in body temperature are a common symptom of many illnesses.
Keywords: body temperature regulation, cerebral hypoxia, thermoelectric, blanket design
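The switching behaviour described above, where thermocouple readings trigger the blanket to heat or cool toward a set temperature, can be sketched as a simple hysteresis (bang-bang) controller. This is an illustrative sketch only, not the proposed device's control logic; the set point and deadband values are assumptions.

```python
def peltier_drive(body_temp_c, set_point_c=37.0, deadband_c=0.5):
    """Return the drive-current direction for a thermoelectric blanket.

    +1 -> heat the patient, -1 -> cool the patient, 0 -> hold.
    A deadband around the set point avoids rapid switching of the
    current direction (values here are illustrative assumptions).
    """
    error = set_point_c - body_temp_c
    if error > deadband_c:       # patient too cold: heat
        return +1
    if error < -deadband_c:      # patient too warm: cool
        return -1
    return 0
```

Reversing the sign of the returned drive corresponds to swapping the hot and cold sides of the thermoelectric module by switching the current direction, as described in the abstract.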
Procedia PDF Downloads 161
288 Superordinated Control for Increasing Feed-in Capacity and Improving Power Quality in Low Voltage Distribution Grids
Authors: Markus Meyer, Bastian Maucher, Rolf Witzmann
Abstract:
The ever-increasing amount of distributed generation in low voltage distribution grids (mainly PV and micro-CHP) can lead to reverse load flows from low to medium/high voltage levels at times of high feed-in. Reverse load flow leads to rising voltages that may even exceed the limits specified in the grid codes. Furthermore, the share of electrical loads connected to low voltage distribution grids via switched power supplies continuously increases. In combination with inverter-based feed-in, this results in high harmonic levels reducing overall power quality. Especially high levels of third-order harmonic currents can lead to neutral conductor overload, which is even more critical if lines with reduced neutral conductor cross-sections are used. This paper illustrates a possible concept for smart grids to increase the feed-in capacity, improve power quality and ensure safe operation of low voltage distribution grids at all times. The key feature of the concept is a hierarchically structured control strategy that runs on a superordinated controller connected to several distributed grid analyzers and inverters via broadband powerline (BPL). The strategy is devised to ensure both quick response times and the technically and economically reasonable use of the available inverters in the grid (PV inverters, batteries, stepless line voltage regulators). These inverters are provided with standard features for voltage control, e.g. voltage-dependent reactive power control. In addition, they can receive reactive power set points transmitted by the superordinated controller. To further improve power quality, the inverters are capable of active harmonic filtering as well as voltage balancing, whereas the latter is primarily done by the stepless line voltage regulators.
By additionally connecting the superordinated controller to the control center of the grid operator, supervisory control and data acquisition capabilities for the low voltage distribution grid are enabled, which allows easy monitoring and manual input. Such a low voltage distribution grid can also be used as a virtual power plant.
Keywords: distributed generation, distribution grid, power quality, smart grid, virtual power plant, voltage control
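The voltage-dependent reactive power control mentioned above is commonly realised as a piecewise-linear Q(V) droop curve. The sketch below illustrates the idea; the deadband breakpoints, slope, and reactive power limit are illustrative assumptions, not values from any specific grid code or from this paper.

```python
def qv_droop(v_pu, v_dead_low=0.97, v_dead_high=1.03, q_max_pu=0.4):
    """Piecewise-linear Q(V) droop: reactive power set point vs. voltage.

    Below the deadband the inverter injects reactive power (positive Q),
    above it the inverter absorbs (negative Q), saturating at +-q_max_pu.
    All quantities in per-unit; breakpoint values are assumptions.
    """
    if v_pu < v_dead_low:
        return q_max_pu * min(1.0, (v_dead_low - v_pu) / 0.05)
    if v_pu > v_dead_high:
        return -q_max_pu * min(1.0, (v_pu - v_dead_high) / 0.05)
    return 0.0
```

In the concept described above, the superordinated controller could override such a local curve by transmitting explicit reactive power set points over BPL.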
Procedia PDF Downloads 267
287 The Ratio of Second to Fourth Digit Length Correlates with Cardiorespiratory Fitness in Male College Students but Not in Female Students
Authors: Cheng-Chen Hsu
Abstract:
Background: The ratio of the length of the second finger (index finger, 2D) to the fourth finger (ring finger, 4D) (2D:4D) is a putative marker of prenatal hormones. A low 2D:4D ratio is related to high prenatal testosterone (PT) levels. Physiological research has suggested that a low 2D:4D ratio is correlated with high sports ability. Aim: To examine the association between cardiorespiratory fitness and 2D:4D. Methods: Assessment of 2D:4D: Images of hands were collected from participants using a computer scanner, with hands placed lightly on the surface of the plate. Image analysis was performed using Image-Pro Plus 5.0 software. Feature points were marked at the tip of the finger and at the center of the proximal crease on the second and fourth digits. Measurement was carried out automatically, and 2D:4D was calculated by dividing the second by the fourth digit length. YMCA 3-min Step Test: The test involves stepping up and down at a rate of 24 steps/min for 3 min; a tape recording of the correct cadence (96 beats/min) is played to assist the participant in keeping the correct pace. Following the step test, the participant immediately sits down and, within 5 s, the tester starts counting the pulse for 1 min. The score for the test, the total 1-min post-exercise heart rate, reflects the heart's ability to recover quickly. Statistical Analysis: Pearson's correlation (r) was used to assess the relationships between age, physical measurements, the one-minute heart rate after the YMCA 3-minute step test (HR) and 2D:4D. An independent-sample t-test was used to determine possible differences in HR between subjects with low and high values of 2D:4D. All statistical analyses were carried out with SPSS 18 for Windows. All P-values were two-tailed with significance set at P = 0.05, unless reported otherwise. Results: A median split by 2D:4D was applied, resulting in a high and a low group.
The one-minute heart rate after the YMCA 3-minute step test was significantly different between the male right-hand 2D:4D groups (p = 0.024). However, there was no difference between the left-hand 2D:4D groups in males, and no difference between the digit ratio groups in females. Conclusion: The results showed that cardiopulmonary fitness is related to right-hand 2D:4D only in men. We argue that prenatal testosterone may have an effect on cardiorespiratory fitness in males but not in females.
Keywords: college students, digit ratio, finger, step test, fitness
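The ratio computation and median split described above can be illustrated in a few lines. This is a generic sketch with made-up numbers, not the study's SPSS workflow or its data.

```python
def digit_ratio(index_len_mm, ring_len_mm):
    """2D:4D ratio: second digit length divided by fourth digit length."""
    return index_len_mm / ring_len_mm

def median_split_means(records):
    """Median split of (ratio, post_exercise_hr) records into low and high
    2D:4D groups about the median ratio; returns each group's mean 1-min
    post-exercise heart rate. Illustrative data handling only."""
    cut = sorted(r for r, _ in records)[len(records) // 2]  # upper median
    low = [hr for r, hr in records if r < cut]
    high = [hr for r, hr in records if r >= cut]
    return sum(low) / len(low), sum(high) / len(high)
```

The group means produced this way would then be compared with an independent-sample t-test, as in the study.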
Procedia PDF Downloads 275
286 Design and Construction Demeanor of a Very High Embankment Using Geosynthetics
Authors: Mariya Dayana, Budhmal Jain
Abstract:
Kannur International Airport Ltd. (KIAL) is a new greenfield airport project with airside development on an undulating terrain with an average height of 90 m above Mean Sea Level (MSL) and a maximum height of 142 m. Accommodating the desired runway length and Runway End Safety Area (RESA) at both ends along the proposed alignment resulted in 45.5 million cubic meters of cutting and filling. The insufficient availability of land for the construction of a free-slope embankment at the RESA 07 end led to the design and construction of a Reinforced Soil Slope (RSS) with a maximum slope of 65 degrees. An embankment fill of 70 m average height with steep slopes located in a high-rainfall area is a unique feature of this project. The design and construction were challenging, the fill being asymmetrical with curves and bends. The fill was reinforced with high-strength uniaxial geogrids laid perpendicular to the slope. Weld mesh wrapped with coir mat acted as the facia units to protect against surface failure. Face anchorage was also provided by wrapping the geogrids along the facia units where the slope angle was steeper than 45 degrees. Considering the high rainfall received at this tabletop airport site, an extensive drainage system was designed for the high embankment fill. Gabion walls up to 10 m in height were also designed and constructed along the boundary to accommodate the toe of the RSS fill beside the jeepable track at the base level. The design of the RSS fill was carried out using ReSSA software and verified by PLAXIS 2D modeling. Both slip surface failure and wedge failure cases were considered in static and seismic analyses for local and global failure cases. Site-won excavated laterite soil was used as the fill material for the construction. Extensive field and laboratory tests were conducted during the construction of the RSS system for quality assurance.
This paper presents a case study detailing the design and construction of a very high embankment using geosynthetics for the provision of the runway length and RESA area.
Keywords: airport, embankment, gabion, high strength uniaxial geogrid, KIAL, laterite soil, PLAXIS 2D
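As a simple point of reference for the slope stability checks mentioned above, the classical infinite-slope factor of safety for a dry Mohr-Coulomb slope can be computed directly. This textbook screening formula is far simpler than the slip-surface and wedge analyses performed in ReSSA and PLAXIS 2D, and the parameter values below are illustrative, not project data.

```python
import math

def infinite_slope_fos(c_kpa, phi_deg, gamma_kn_m3, depth_m, beta_deg):
    """Factor of safety of an infinite dry slope (Mohr-Coulomb strength):

    FS = (c' + gamma*z*cos^2(beta)*tan(phi)) / (gamma*z*sin(beta)*cos(beta))

    where c' is effective cohesion, phi the friction angle, gamma the unit
    weight, z the failure-plane depth and beta the slope angle.
    """
    b = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c_kpa + gamma_kn_m3 * depth_m * math.cos(b) ** 2 * math.tan(phi)
    driving = gamma_kn_m3 * depth_m * math.sin(b) * math.cos(b)
    return resisting / driving
```

For a cohesionless fill this reduces to FS = tan(phi)/tan(beta), which is below 1 for a 65-degree slope in ordinary soils; that deficit is what the high-strength uniaxial geogrid reinforcement must make up.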
Procedia PDF Downloads 163
285 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments
Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic
Abstract:
Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in “weight space” where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences: two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm.
The accuracy is not affected by AWGN and produces ~93% accuracy at 0 dB SNR.
Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder
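The matching pursuit step described above can be sketched as a greedy loop over a dictionary of unit-norm atoms: at each iteration, pick the atom with the largest inner product with the residual, record its (index, weight) pair, and subtract its contribution. This plain-Python toy uses a trivial dictionary rather than the Gabor atoms and learned features of the paper.

```python
def matching_pursuit(signal, dictionary, n_atoms=3):
    """Greedy matching pursuit over unit-norm atoms.

    Returns the sparse list of (atom_index, weight) picks, which plays
    the role of the populated 'weight space' vector in the abstract,
    together with the final residual.
    """
    residual = list(signal)
    picks = []
    for _ in range(n_atoms):
        # inner product of the residual with every atom
        scores = [sum(r * a for r, a in zip(residual, atom))
                  for atom in dictionary]
        best = max(range(len(dictionary)), key=lambda i: abs(scores[i]))
        w = scores[best]
        picks.append((best, w))
        residual = [r - w * a for r, a in zip(residual, dictionary[best])]
    return picks, residual
```

With an orthonormal dictionary the residual is driven to zero once every active atom has been picked; with an overcomplete Gabor dictionary the loop is simply run until the residual energy falls below a threshold.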
Procedia PDF Downloads 290
284 Comparative Coverage Analysis of Football and Other Sports by the Leading English Newspapers of India during FIFA World Cup 2014
Authors: Rajender Lal, Seema Kaushik
Abstract:
The FIFA World Cup, often simply called the World Cup, is an international association football competition contested by the senior men's national teams of the members of Fédération Internationale de Football Association (FIFA), the sport's global governing body. The championship has been awarded every four years since the inaugural tournament in 1930, except in 1942 and 1946, when it was not held because of the Second World War. Its 20th edition took place in Brazil from 12 June to 13 July 2014 and was won by Germany. The World Cup is the most widely viewed and followed sporting event in the world, exceeding even the Olympic Games; the cumulative audience of all matches of the 2006 FIFA World Cup was estimated to be 26.29 billion, with an estimated 715.1 million people watching the final match, a ninth of the entire population of the planet. General-interest newspapers typically publish news articles and feature articles on national and international news as well as local news. The news includes political events and personalities, business and finance, crime, severe weather and natural disasters; health and medicine, science and technology; sports; and entertainment, society, food and cooking, clothing and home fashion, and the arts. It is therefore of interest to investigate how much coverage was given to this most widely viewed international event as compared to other sports in India. Hence, the present study was conducted with the aim of examining the comparative coverage of the FIFA World Cup 2014 and other sports in four leading newspapers of India: Hindustan Times, The Hindu, The Times of India, and The Tribune. Specific objectives were to measure the source of news, the type of news items and the placement of news related to the FIFA World Cup and other sports. A representative sample of ten editions each of the four English dailies was chosen for the purpose of the study.
The analysis was based on the actual scanning of data from the representative sample of the dailies for the period of the competition. It can be concluded from the analysis that this event was given maximum coverage by the Hindustan Times, while other sports were equally covered by The Hindu.
Keywords: coverage analysis, FIFA World Cup 2014, Hindustan Times, The Hindu, The Times of India, The Tribune
Procedia PDF Downloads 286
283 Automatic Differential Diagnosis of Melanocytic Skin Tumours Using Ultrasound and Spectrophotometric Data
Authors: Kristina Sakalauskiene, Renaldas Raisutis, Gintare Linkeviciute, Skaidra Valiukeviciene
Abstract:
Cutaneous melanoma is a melanocytic skin tumour (MST) with a very poor prognosis, as it is highly resistant to treatment and tends to metastasize. The thickness of a melanoma is one of the most important biomarkers for the stage of disease, prognosis and surgery planning. In this study, we hypothesized that the automatic analysis of spectrophotometric images and high-frequency ultrasonic 2D data can improve the differential diagnosis of cutaneous melanoma and provide additional information about tumour penetration depth. This paper presents a novel complex automatic system for non-invasive melanocytic skin tumour differential diagnosis and penetration depth evaluation. The system is composed of region-of-interest segmentation in spectrophotometric images and high-frequency ultrasound data, quantitative parameter evaluation, informative feature extraction and classification with a linear regression classifier. The segmentation of the melanocytic skin tumour region in the ultrasound image is based on parametric integrated backscattering coefficient calculation. The segmentation of the optical image is based on Otsu thresholding. In total, 29 quantitative tissue characterization parameters were evaluated using ultrasound data (11 acoustical, 4 shape and 15 textural parameters), together with 55 quantitative features of dermatoscopic and spectrophotometric images (using the total melanin, dermal melanin, blood and collagen SIAgraphs acquired with the SIAscope spectrophotometric imaging device). In total, 102 melanocytic skin lesions (including 43 cutaneous melanomas) were examined using the SIAscope and an ultrasound system with a 22 MHz center-frequency single-element transducer. The diagnosis and Breslow thickness (pT) of each MST were evaluated during routine histological examination after excision and used as a reference.
The results of this study have shown that automatic analysis of spectrophotometric and high-frequency ultrasound data can improve the non-invasive classification accuracy of early-stage cutaneous melanoma and provide supplementary information about tumour penetration depth.
Keywords: cutaneous melanoma, differential diagnosis, high-frequency ultrasound, melanocytic skin tumours, spectrophotometric imaging
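The Otsu thresholding used above for segmenting the optical image picks the gray level that maximizes between-class variance of the intensity histogram. A minimal sketch of the standard algorithm, operating on a flat list of gray values rather than the study's spectrophotometric images:

```python
def otsu_threshold(gray_values, levels=256):
    """Otsu's method: return the threshold t that maximizes the
    between-class variance of the histogram (pixels <= t vs. > t)."""
    hist = [0] * levels
    for v in gray_values:
        hist[v] += 1
    total = len(gray_values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg, w_bg = 0.0, 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg                 # background class mean
        m_fg = (sum_all - sum_bg) / w_fg     # foreground class mean
        var_between = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a bimodal lesion/background histogram the returned threshold separates the two modes, which is the property the segmentation step relies on.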
Procedia PDF Downloads 270
282 Case-Based Reasoning Application to Predict Geological Features at Site C Dam Construction Project
Authors: Shahnam Behnam Malekzadeh, Ian Kerr, Tyson Kaempffer, Teague Harper, Andrew Watson
Abstract:
The Site C Hydroelectric dam is currently being constructed in north-eastern British Columbia on sub-horizontal sedimentary strata that dip approximately 15 meters from one bank of the Peace River to the other. More than 615 pressure sensors (vibrating wire piezometers) have been installed on bedding planes (BPs) since construction began, with over 80 more planned before project completion. These pressure measurements are essential to monitor the stability of the rock foundation during and after construction and for dam safety purposes. BPs are identified by their clay gouge infilling, which varies in thickness from less than 1 mm to 20 mm and can be challenging to identify, as the core drilling process often disturbs or washes away the gouge material. Without depth predictions from nearby boreholes, stratigraphic markers and downhole geophysical data, it is difficult to confidently identify BP targets for the sensors. In this paper, a Case-Based Reasoning (CBR) method was used to develop an empirical model called the Bedding Plane Elevation Prediction (BPEP) to help geologists and geotechnical engineers predict geological features and bedding planes at new locations in a fast and accurate manner. To develop the CBR model, a database was built from 64 pressure sensors already installed on the key bedding planes BP25, BP28 and BP31 on the Right Bank, including bedding plane elevations and coordinates. Thirteen (20%) of the most recent cases were selected to validate and evaluate the accuracy of the developed model, with similarity defined as the distance between previous cases and recent cases used to predict the depth of significant BPs. The average difference between actual and predicted BP elevations for the above BPs was ±55 cm; 69% of predicted elevations were within ±79 cm of actual BP elevations, and 100% of predicted elevations for new cases were within the ±99 cm range.
Eventually, the actual results will be used to expand the database and improve the BPEP so that it performs as a learning machine, predicting more accurate BP elevations for future sensor installations.
Keywords: case-based reasoning, geological feature, geology, piezometer, pressure sensor, core logging, dam construction
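The retrieve-and-reuse core of a CBR predictor like the one described can be sketched as nearest-case retrieval with inverse-distance weighting: find the k installed-sensor cases closest in plan distance and blend their bedding-plane elevations. This is a minimal illustration of the idea; the actual BPEP model and its similarity measure may differ in detail, and the case data below are made up.

```python
def predict_bp_elevation(cases, x, y, k=3):
    """Predict a bedding-plane elevation at plan position (x, y).

    cases: list of (x, y, elevation) tuples from installed sensors.
    Retrieves the k nearest cases and returns the inverse-distance
    weighted average of their elevations (retrieve + reuse steps of CBR).
    """
    def dist(c):
        return ((c[0] - x) ** 2 + (c[1] - y) ** 2) ** 0.5
    nearest = sorted(cases, key=dist)[:k]
    weights = []
    for c in nearest:
        d = dist(c)
        if d == 0:                     # exact match: reuse the case directly
            return c[2]
        weights.append((1.0 / d, c[2]))
    wsum = sum(w for w, _ in weights)
    return sum(w * z for w, z in weights) / wsum
```

The "revise/retain" steps of CBR correspond to the plan stated above: feeding actual drilled elevations back into the case database so later predictions improve.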
Procedia PDF Downloads 81
281 Object-Scene: Deep Convolutional Representation for Scene Classification
Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang
Abstract:
Traditional image classification is based on an encoding scheme (e.g. Fisher Vector, Vector of Locally Aggregated Descriptors) with low-level image features (e.g. SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) have richer information but lack geometric invariance. For scene classification, there are scattered objects of different sizes, categories, layouts, numbers and so on. It is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit the object-centric and scene-centric information, two CNNs trained separately on the ImageNet and Places datasets are used as the pre-trained models to extract deep convolutional features at multiple scales. This produces dense local activations. By analyzing the performance of different CNNs at multiple scales, it is found that each CNN works better in different scale ranges. A scale-wise CNN adaptation is reasonable, since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, which is then merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences. Hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different amounts of features are extracted. Third, the Fisher Vector representation based on the deep convolutional features is followed by a linear Support Vector Machine, which is a simple yet efficient way to classify the scene categories.
Experimental results show that the scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which shows that the representation can be applied to other visual recognition tasks.
Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization
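The scale-wise normalization and merging step described above can be sketched with plain lists standing in for the per-scale Fisher vectors: normalize each scale's encoding so that no scale dominates (finer scales yield more features), then average-pool element-wise. This is a simplified illustration of the idea; the paper's actual pipeline also involves power normalization and far higher-dimensional encodings.

```python
def l2_normalize(vec):
    """Scale a vector to unit L2 norm (unchanged if all-zero)."""
    n = sum(v * v for v in vec) ** 0.5
    return [v / n for v in vec] if n > 0 else vec

def scale_wise_merge(per_scale_vectors):
    """Merge per-scale encodings into one representation:
    L2-normalize each scale's vector, then average-pool element-wise,
    so each scale contributes equally to the final descriptor."""
    normed = [l2_normalize(v) for v in per_scale_vectors]
    dim = len(normed[0])
    return [sum(v[i] for v in normed) / len(normed) for i in range(dim)]
```

The merged vector would then be fed to the linear SVM classifier mentioned in the abstract.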
Procedia PDF Downloads 333
280 Effect of Labisia pumila var. alata with a Structured Exercise Program in Women with Polycystic Ovarian Syndrome
Authors: D. Maryama AG. Daud, Zuliana Bacho, Stephanie Chok, DG. Mashitah PG. Baharuddin, Mohd Hatta Tarmizi, Nathira Abdul Majeed, Helen Lasimbang
Abstract:
Lifestyle, physical activity, food intake, genetics and medication are contributing factors to obesity, and some obese people are low responders or non-responders to exercise. Obesity is a very common clinical feature in women affected by Polycystic Ovarian Syndrome (PCOS). Labisia pumila var. alata (LP) is a local herb that has been widely used by Malay women in treating menstrual irregularities, painful menstruation and postpartum well-being. Therefore, this study was carried out to investigate the effect of LP with a structured exercise program on the anthropometric, body composition and physical fitness performance of PCOS patients. A single-blind, parallel study design was used, whereby subjects were assigned to one of two 16-week structured exercise program (three times a week) interventions: LP and exercise (LPE), or exercise only (E). All subjects in the LPE group were prescribed 200 mg LP once a day for 16 weeks. The training heart rate (HR) was monitored as a percentage of the maximum HR (HRmax) achieved during the submaximal exercise test conducted at wk-0 and wk-8. Aerobic exercise intensity progressed from 25–30 min at 60–65% HRmax during the first week to 45 min at 75–80% HRmax by the end of the study. Anthropometric measures (body weight, Wt; waist circumference, WC; and hip circumference, HC), body composition (fat mass, FM; percentage body fat, %BF; fat-free mass, FFM) and physical fitness performance (push-ups to failure, PU; 1-minute sit-ups, SU; and aerobic step test, PVO2max) were measured at wk-0, wk-4, wk-8, wk-12 and wk-16. This study found that LP does not have a significant effect on the body composition, anthropometric or physical fitness performance of PCOS patients undergoing a structured exercise program; that is, LP does not improve the responses of PCOS patients to exercise in terms of anthropometric, body composition or physical fitness performance.
The overall data show that, as PCOS patients increase their aerobic endurance and muscle endurance performance, there are significant reductions in FM, %BF, HC and Wt. Therefore, exercise programs for PCOS patients should focus on aerobic fitness and muscle endurance.
Keywords: polycystic ovarian syndrome, Labisia pumila var. alata, body composition, aerobic endurance, muscle endurance, anthropometric
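The intensity progression above (from 60–65% to 75–80% HRmax) amounts to computing percentage bands of the measured maximum heart rate. A minimal sketch, taking HRmax as an input rather than deriving it from the study's submaximal step test:

```python
def training_hr_range(hr_max, low_pct, high_pct):
    """Target heart-rate band as percentages of HRmax,
    rounded to whole beats per minute."""
    return round(hr_max * low_pct / 100), round(hr_max * high_pct / 100)
```

For example, a subject with an HRmax of 180 bpm would train at 108–117 bpm in week one and 135–144 bpm by the end of the program.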
Procedia PDF Downloads 208
279 Development of a Turbulent Boundary Layer Wall-Pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm
Authors: Zachary Huffman, Joana Rocha
Abstract:
Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be represented by the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those of Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for the pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol’yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, they are most accurate under the specific Reynolds and Mach numbers they were developed for, while being less accurate under other flow conditions. Despite this, research into alternative methods for deriving the models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R, and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning approaches.
The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations
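The forward half of a stepwise regression can be sketched as a greedy loop: starting from an intercept-only model, repeatedly add the candidate feature that most reduces the residual sum of squares, stopping when no candidate improves the fit. This is a simplified illustration of the feature-selection idea; the study's R implementation (e.g. `step()`) selects on criteria such as AIC and also considers dropping terms, not just adding them.

```python
def ols_fit(X, y):
    """Least-squares coefficients [intercept, b1, ...] via normal equations."""
    rows = [[1.0] + list(x) for x in X]
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    for i in range(p):                         # Gaussian elimination
        piv = max(range(i, p), key=lambda k: abs(A[k][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for k in range(i + 1, p):
            f = A[k][i] / A[i][i]
            for j in range(i, p):
                A[k][j] -= f * A[i][j]
            b[k] -= f * b[i]
    beta = [0.0] * p
    for i in range(p - 1, -1, -1):             # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, p))) / A[i][i]
    return beta

def sse(X, y, beta):
    """Residual sum of squares for a fitted model."""
    return sum((yi - beta[0] - sum(w * xj for w, xj in zip(beta[1:], x))) ** 2
               for x, yi in zip(X, y))

def forward_stepwise(X, y, tol=1e-6):
    """Greedy forward selection: add the feature giving the largest SSE
    reduction; stop when no candidate improves the fit by more than tol.
    Returns the indices of the selected features, in order of selection."""
    n_feat = len(X[0])
    chosen, best_j = [], None
    best_sse = sum((yi - sum(y) / len(y)) ** 2 for yi in y)  # intercept-only
    improved = True
    while improved and len(chosen) < n_feat:
        improved = False
        for j in (j for j in range(n_feat) if j not in chosen):
            cols = chosen + [j]
            Xs = [[x[c] for c in cols] for x in X]
            s = sse(Xs, y, ols_fit(Xs, y))
            if s < best_sse - tol:
                best_sse, best_j, improved = s, j, True
        if improved:
            chosen.append(best_j)
    return chosen
```

On data generated from a single informative feature, the loop selects that feature and stops, which is exactly the redundant-variable filtering behaviour claimed above.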
Procedia PDF Downloads 135
278 Regression-Based Approach for Development of a Cuff-Less Non-Intrusive Cardiovascular Health Monitor
Authors: Pranav Gulati, Isha Sharma
Abstract:
Hypertension and hypotension are known to have repercussions on the health of an individual, with hypertension contributing to an increased risk of cardiovascular disease and hypotension resulting in syncope. This prompts the development of a non-invasive, non-intrusive, continuous and cuff-less blood pressure monitoring system to detect blood pressure variations and to identify individuals with acute and chronic heart ailments; due to the unavailability of such devices for practical daily use, it is currently difficult to screen and subsequently regulate blood pressure. The complexities that hamper steady monitoring of blood pressure comprise the variations in physical characteristics from individual to individual and the postural differences at the site of monitoring. We propose to develop a continuous, comprehensive cardio-analysis tool based on reflective photoplethysmography (PPG). The proposed device, in the form of eyewear, captures the PPG signal and estimates the systolic and diastolic blood pressure using a sensor positioned near the temporal artery. The system relies on regression models based on the extraction of key points from pairs of PPG wavelets. It provides an edge over existing wearables in that it allows for uniform contact and pressure at the temporal site, in addition to minimal disturbance by movement. Additionally, the feature extraction algorithms enhance the integrity and quality of the extracted features by rejecting unreliable data sets. We tested the system with 12 subjects, of which 6 served as the training dataset. For this, we measured blood pressure using a cuff-based BP monitor (Omron HEM-8712) and at the same time recorded the PPG signal from our cardio-analysis tool. The complete test was conducted using the cuff-based blood pressure monitor on the left arm while the PPG signal was acquired from the temporal site on the left side of the head.
This acquisition served as the training input for the regression model on the selected features. The other 6 subjects were used to validate the model by conducting the same test on them. Results show that the developed prototype can robustly acquire the PPG signal and can therefore be used to reliably predict blood pressure levels.Keywords: blood pressure, photoplethysmograph, eyewear, physiological monitoring
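The training/validation procedure described above can be sketched roughly as follows. The feature names, value ranges, and the assumption of a linear model are illustrative only; the abstract does not specify the study's actual regression form or feature set.

```python
import numpy as np

# Illustrative sketch: least-squares regression from PPG-derived features to
# systolic/diastolic blood pressure, mirroring the split described in the
# abstract (6 training subjects, 6 validation subjects). All feature names
# and numeric values here are hypothetical, not taken from the study.
rng = np.random.default_rng(0)

# Hypothetical per-subject features (seconds): e.g. systolic upstroke time,
# pulse width at half height, peak-to-dicrotic-notch interval.
X_train = rng.uniform([0.10, 0.20, 0.15], [0.25, 0.40, 0.35], size=(6, 3))
true_w = np.array([[-200.0, -120.0], [80.0, 40.0], [150.0, 90.0]])
bias = np.array([140.0, 95.0])
y_train = X_train @ true_w + bias  # columns: systolic, diastolic (mmHg)

# Fit a linear model with an intercept term via least squares.
A = np.hstack([X_train, np.ones((len(X_train), 1))])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

def predict_bp(features):
    """Predict [systolic, diastolic] in mmHg from a PPG feature vector."""
    return np.append(features, 1.0) @ coef

# Validate on the held-out subjects.
X_val = rng.uniform([0.10, 0.20, 0.15], [0.25, 0.40, 0.35], size=(6, 3))
y_val = X_val @ true_w + bias
errors = np.array([predict_bp(x) for x in X_val]) - y_val
print("max abs validation error (mmHg):", np.abs(errors).max())
```

Because the synthetic targets are exactly linear in the features, the fit recovers them; on real cuff-reference data the validation error would of course be nonzero.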
Procedia PDF Downloads 279
277 Polymeric Sustained Biodegradable Patch Formulation for Wound Healing
Authors: Abhay Asthana, Gyati Shilakari Asthana
Abstract:
Patient compliance and stability, in combination with controlled drug delivery and biocompatibility, form the core features of the present research and development of a sustained biodegradable patch formulation intended for wound healing. The aim was to impart sustained degradation, a sterile formulation, significant folding endurance, elasticity, biodegradability, bio-acceptability and strength. The optimized formulation was developed using components including the polymers hydroxypropyl methyl cellulose, ethyl cellulose and gelatin, citric acid-PEG-citric acid (CPEGC) triblock dendrimers, and the active ingredient curcumin. The polymeric mixture was dissolved in geometric order in a suitable medium through continuous stirring under ambient conditions. With continued stirring, curcumin was added with the aid of DCM and methanol in an optimized ratio to obtain a homogeneous dispersion. The dispersion was sonicated at an optimum frequency for a given time and later cast into patch form. All steps were carried out under strict aseptic conditions. The formulations within the acceptable working range were selected based on thickness, uniformity of drug content, smooth texture, flexibility and brittleness. The patch, kept on stability using butter paper in a sterile pack, displayed a folding endurance in the range of 20 to 23 times without any evidence of cracking in the optimized formulation at room temperature (RT, 24 ± 2°C). The patch displayed acceptable parameters after stability studies conducted under refrigerated conditions (8 ± 0.2°C) and at RT (24 ± 2°C) for up to 90 days. Further, no significant changes were observed in critical parameters such as elasticity, biodegradability, drug release and drug content during the stability study conducted at RT (24 ± 2°C) for 45 and 90 days. The drug content was in the range of 95 to 102%, the moisture content did not exceed 19.2%, and the patch passed the content uniformity test.
The percentage cumulative drug release was found to be 80% in 12 h and matched the biodegradation rate, with a correlation factor R² > 0.9 between drug release and biodegradation. The biodegradable patch formulation developed shows promising results in terms of stability and release profiles.
Keywords: sustained biodegradation, wound healing, polymers, stability
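The release-versus-biodegradation correlation quoted above (R² > 0.9) can be sketched as a simple coefficient-of-determination calculation. The time points and percentages below are illustrative placeholders, not the study's data:

```python
import numpy as np

# Minimal sketch of correlating a cumulative drug-release profile with a
# biodegradation profile and computing R^2. All values are illustrative.
t = np.array([1, 2, 4, 6, 8, 10, 12], dtype=float)                # hours
release = np.array([18, 30, 48, 62, 71, 76, 80], dtype=float)     # % drug released
degradation = np.array([16, 29, 46, 60, 70, 75, 79], dtype=float) # % patch degraded

# Linear fit of release against degradation, then R^2 of that fit.
slope, intercept = np.polyfit(degradation, release, 1)
predicted = slope * degradation + intercept
ss_res = np.sum((release - predicted) ** 2)
ss_tot = np.sum((release - release.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 = {r_squared:.4f}")
```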
Procedia PDF Downloads 332
276 Synthesis and Preparation of Carbon Ferromagnetic Nanocontainers for Cancer Therapy
Authors: L. Szymanski, Z. Kolacinski, Z. Kamiński, G. Raniszewski, J. Fraczyk, L. Pietrzak
Abstract:
This article presents the development and demonstration of a method and a model device for the hyperthermic selective destruction of cancer cells. The method is based on the synthesis and functionalization of carbon nanotubes serving as nanocontainers for ferromagnetic material. The methodology for producing carbon ferromagnetic nanocontainers includes the synthesis of carbon nanotubes, chemical and physical characterization, increasing the content of ferromagnetic material, and biochemical functionalization involving the attachment of addressing ligands. Biochemical functionalization of the ferromagnetic nanocontainers is necessary in order to increase their selective binding to receptors present on the surface of tumour cells. A multi-step modification procedure was finally used to attach folic acid to the surface of the ferromagnetic nanocontainers. Folic acid is a ligand of folate receptors, which are overexpressed in tumour cells. The presence of the ligand should ensure the specificity of the interaction between the ferromagnetic nanocontainers and tumour cells. The chemical functionalization comprises several steps: an oxidation reaction, transformation of carboxyl groups into more reactive ester or amide groups, incorporation of a spacer molecule (linker), and attachment of folic acid. Activation of the carboxylic groups was performed with a triazine coupling reagent (preparation of a superactive ester attached to the nanocontainers). The spacer molecules were designed and synthesized; in order to ensure the biocompatibility of the linkers, they were built from amino acids or peptides. The spacer molecules were synthesized using the SPPS method on a 2-chlorotrityl resin. An important feature of the linker is its length; for this reason, peptide linkers containing from 2 to 4 -Ala- residues were synthesized. An independent synthesis of the conjugate of folic acid with 6-aminocaproic acid was also performed.
The final step of the synthesis was coupling the conjugate with the spacer molecules and attaching it to the ferromagnetic nanocontainer surface. This article also contains information about the special CVD and microwave plasma systems used to produce the nanotubes and ferromagnetic nanocontainers. The first tests of the device with the hyperthermia RF generator will be presented. The frequencies of the RF generator were in the ranges from 10 to 14 MHz and from 265 to 621 kHz.
Keywords: synthesis of carbon nanotubes, hyperthermia, ligands, carbon nanotubes
Procedia PDF Downloads 286
275 Morphological Differentiation and Temporal Variability in Essential Oil Yield and Composition among Origanum vulgare ssp. hirtum L., Origanum onites L. and Origanum x intercedens from Ikaria Island (Greece)
Authors: A. Assariotakis, P. Vahamidis, P. Tarantilis, G. Economou
Abstract:
Greece, due to its geographical location and particular climatic conditions, presents a high biodiversity of medicinal and aromatic plants. Among them, the genus Origanum not only has a wide distribution but also great economic importance. After extensive surveys on Ikaria Island (Greece), 3 species of the genus Origanum were identified, namely Origanum vulgare ssp. hirtum (Greek oregano), Origanum onites (Turkish oregano) and Origanum x intercedens, a naturally occurring hybrid between O. hirtum and O. onites. The purpose of this study was to determine their morphological variability as well as their temporal variability in essential oil yield and composition under field conditions. For this reason, a plantation of each species was created using vegetative propagation and established at the experimental field of the Agricultural University of Athens (A.U.A.). From the establishment year and for the following two years (3 years of observations), several observations were taken during each growing season with the purpose of identifying the morphological differences among the studied species. Each year, the collected plant material (at bloom stage) was air-dried at room temperature in the shade. The essential oil content was determined by hydrodistillation using a Clevenger-type apparatus. The chemical composition of the essential oils was investigated by gas chromatography-mass spectrometry (GC-MS). Significant differences were observed among the three oregano species in terms of plant height, leaf size and inflorescence features, as well as in their biological cycle. The O. intercedens inflorescence presented more similarities with O. hirtum than with O. onites. It was found that calyx morphology could serve as a clear distinguishing feature between O. intercedens and O. hirtum: the calyx in O. hirtum presents five isometric teeth, whereas in O. intercedens two are tall and three shorter. Essential oil content was significantly affected by genotype and year. O. hirtum presented a higher essential oil content than the other two species during the first year of cultivation; however, during the second year the hybrid (O. intercedens) recorded the highest values. Carvacrol, p-cymene and γ-terpinene were the main essential oil constituents of the three studied species. In O. hirtum the carvacrol content varied from 84.28 to 93.35%, in O. onites from 86.97 to 91.89%, whereas O. intercedens recorded the highest carvacrol content, namely from 89.25 to 97.23%.
Keywords: variability, oregano biotypes, essential oil, carvacrol
Procedia PDF Downloads 126
274 Implementing Lesson Study in Qatari Mathematics Classroom: A Case Study of a New Experience for Teachers through IMPULS-QU Lesson Study Program
Authors: Areej Isam Barham
Abstract:
The implementation of the Japanese lesson study approach in the mathematics classroom has grown worldwide as a model of professional development for teachers. In Qatar, the implementation of the IMPULS-QU lesson study program aimed to establish a robust organizational improvement model of professional development for mathematics teachers in Qatari schools. This study describes the implementation of a lesson study model at Al-Markhyia Independent Primary School through its different stages, and discusses how the planning process, the research lesson, and the post-lesson discussion contribute to providing teachers and researchers with a successful research lesson for teacher professional development. The research followed a case study approach in one mathematics classroom. Two teachers and one professional development specialist participated in the planning process. One teacher conducted the research lesson by introducing a problem-solving task related to the concept of the 'mean' in a mathematics class; 21 students in grade 6 participated in solving the mathematics problem, while 11 teachers, 4 professional development specialists, and 4 mathematics professors observed the research lesson. All of these participants except the students took part in pre- and post-lesson discussions within this research. The study followed a qualitative research approach, analyzing the data collected through the different stages of the research lesson. Observation, field notes, and semi-structured interviews were conducted to collect data to achieve the research aims. One feature of this lesson study research is that it describes the implementation of a lesson study as a new experience for one mathematics teacher and 21 students after 3 years of conducting the IMPULS-QU project in Al-Markhyia school. The research describes the various stages of the implementation of this lesson study model, starting with the planning process and ending with the post-lesson discussion.
Findings of the study also address the impact of the lesson study approach on the teaching of mathematics and on the development of teachers from their points of view. Results of the study show the benefits of using lesson study from the points of view of the participating teachers, their perceptions of the essential features of lesson study, and their needs for future development. The discussion addresses different features and issues related to the implementation of the IMPULS-QU lesson study model in the mathematics classroom. In light of the study, the research presents recommendations and suggestions for future professional development.
Keywords: lesson study, mathematics education, mathematics teaching experience, teacher professional development
Procedia PDF Downloads 187
273 Investigation of a Novel Dual Band Microstrip/Waveguide Hybrid Antenna Element
Authors: Raoudane Bouziyan, Kawser Mohammad Tawhid
Abstract:
Microstrip antennas are low in profile, light in weight, conformable in structure, and are now developed for many applications. The main difficulty of the microstrip antenna is its narrow bandwidth. Several modern applications, such as satellite communications, remote sensing, and multi-function radar systems, would benefit from a dual-band antenna operating from a single aperture. Some applications require covering both transmitting and receiving frequency bands which are spaced apart. Providing multiple antennas to handle multiple frequencies and polarizations becomes especially difficult if the available space is limited, as with airborne platforms and submarine periscopes. Dual-band operation can be realized from a single feed using a slot-loaded or stacked microstrip antenna, or with two separately fed antennas sharing a common aperture. The former design, when used in arrays, has certain limitations, such as a complicated beam-forming or diplexing network and difficulty realizing good radiation patterns in both bands. The second technique provides more flexibility, since with separate feed systems the beams in each frequency band can be controlled independently. Another desirable feature of a dual-band antenna is easy adjustability of the upper and lower frequency bands. This thesis presents an investigation of a new dual-band antenna, which is a hybrid of microstrip and waveguide radiating elements. The low-band radiator is a shorted annular ring (SAR) microstrip antenna and the high-band radiator is an aperture antenna. The hybrid antenna is realized by forming a waveguide radiator in the shorted region of the SAR microstrip antenna. It is shown that the upper-to-lower frequency ratio can be controlled by the proper choice of the various dimensions and the dielectric material. Operation in both linear and circular polarization is possible in either band. Moreover, both broadside and conical beams can be generated in either band from this antenna element.
The Finite Element Method based software HFSS and the Method of Moments based software FEKO were employed to perform parametric studies of the proposed dual-band antenna. The antenna was not tested physically; therefore, in most cases, both HFSS and FEKO were employed to corroborate the simulation results.
Keywords: FEKO, HFSS, dual band, shorted annular ring patch
Procedia PDF Downloads 402
272 Software User Experience Enhancement through Collaborative Design
Authors: Shan Wang, Fahad Alhathal, Daniel Hobson
Abstract:
User-centered design skills play an important role in crafting a positive and intuitive user experience for software applications. Embracing a user-centric design approach involves understanding the needs, preferences, and behaviors of the end-users throughout the design process. This mindset not only enhances the usability of the software but also fosters a deeper connection between the digital product and its users. This paper covers a 6-month knowledge exchange collaboration project between an academic institution and an industry partner in 2023, which aimed to improve the user experience of a digital platform used as a knowledge management tool: to understand users' preferences for features, identify sources of frustration, and pinpoint areas for enhancement. This research applied one of the most effective methods of implementing user-centered design, co-design workshops for testing user onboarding experiences, which involve the active participation of users in the design process. More specifically, in January 2023, we organized eight workshops with a diverse group of 11 individuals. Throughout these sessions, we accumulated a total of 11 hours of qualitative data in both video and audio formats. Subsequently, we conducted an analysis of user journeys, identifying common issues and potential areas for improvement. This analysis was pivotal in guiding the knowledge management software in prioritizing feature enhancements and design improvements. Employing a user-centered design thinking process, we developed a series of graphic design solutions in collaboration with the software management tool company. These solutions were targeted at refining the onboarding user experience, workplace interfaces, and interactive design. Some of these design solutions were translated into tangible interfaces for the knowledge management tool.
By actively involving users in the design process and valuing their input, developers can create products that are not only functional but also resonate with end-users, ultimately leading to greater success in the competitive software landscape. In conclusion, this paper not only contributes insights into designing onboarding user experiences for software within a co-design approach but also presents key theories on leveraging the user-centered design process in software design to enhance overall user experiences.
Keywords: user experiences, co-design, design process, knowledge management tool, user-centered design
Procedia PDF Downloads 68
271 Impact of the Oxygen Content on the Optoelectronic Properties of the Indium-Tin-Oxide Based Transparent Electrodes for Silicon Heterojunction Solar Cells
Authors: Brahim Aissa
Abstract:
Transparent conductive oxides (TCOs) used as front electrodes in solar cells must simultaneously feature high electrical conductivity, low contact resistance with the adjacent layers, and an appropriate refractive index for maximal light in-coupling into the device. However, these properties may conflict with each other, motivating the search for TCOs with high performance. Additionally, due to the presence of temperature-sensitive layers in many solar cell designs (for example, in thin-film silicon and silicon heterojunction (SHJ) cells), low-temperature deposition processes are more suitable. Several deposition techniques have already been explored to fabricate high-mobility TCOs at low temperatures, including sputter deposition, chemical vapor deposition, and atomic layer deposition. Among this variety of methods, to the best of our knowledge, magnetron sputtering is the most established technique, despite the fact that it can damage the underlying layers. Sn-doped In₂O₃ (ITO) is the most commonly used transparent electrode contact in SHJ technology. In this work, we studied the properties of ITO thin films grown by RF sputtering. Using different oxygen fractions in the argon/oxygen plasma, we prepared ITO films deposited on glass substrates, on one hand, and on a-Si (p- and n-type):H/intrinsic a-Si/glass substrates, on the other hand. Hall effect measurements were systematically conducted together with total-transmittance (TT) and total-reflectance (TR) spectrometry. The electrical properties were drastically affected, whereas the TT and TR were found to be only slightly impacted by the oxygen variation. Furthermore, time-of-flight secondary ion mass spectrometry (TOF-SIMS) was used to determine the distribution of various species throughout the thickness of the ITO and at the various interfaces.
The depth profiling of indium, oxygen, tin, silicon, phosphorus, boron and hydrogen was investigated throughout the various thicknesses and interfaces, and the results obtained are discussed accordingly. Finally, the extreme conditions were selected to fabricate rear-emitter SHJ devices, and the photovoltaic performance was evaluated; the lower oxygen flow ratio was found to yield the best performance, attributed to a lower series resistance.
Keywords: solar cell, silicon heterojunction, oxygen content, optoelectronic properties
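The Hall effect measurements mentioned above are conventionally reduced to carrier concentration and mobility via the single-carrier relations n = 1/(e·|R_H|) and μ = |R_H|/ρ. A rough sketch, with resistivity and Hall-coefficient values that are assumed order-of-magnitude numbers for sputtered ITO rather than measurements from this work:

```python
# Textbook single-carrier Hall-effect relations for characterizing a TCO film.
# The numeric inputs below are assumed typical values, not data from the study.
E_CHARGE = 1.602e-19  # elementary charge (C)

def hall_characterize(resistivity, hall_coefficient):
    """From resistivity (ohm*cm) and Hall coefficient (cm^3/C), return
    carrier concentration n (cm^-3) and Hall mobility mu (cm^2/Vs)."""
    n = 1.0 / (E_CHARGE * abs(hall_coefficient))  # n = 1/(e*|R_H|)
    mu = abs(hall_coefficient) / resistivity      # mu = |R_H|/rho
    return n, mu

rho = 2.1e-4   # ohm*cm (assumed)
R_H = -6.2e-3  # cm^3/C; negative sign indicates n-type conduction (assumed)
n, mu = hall_characterize(rho, R_H)
print(f"n = {n:.2e} cm^-3, mobility = {mu:.1f} cm^2/Vs")
```

Increasing the oxygen fraction during sputtering typically reduces the oxygen-vacancy donor density, which these relations would register as a lower n.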
Procedia PDF Downloads 159
270 Polyclonal IgG glycosylation in Patients with Pediatric Appendicitis
Authors: Dalma Dojcsák, Csaba Váradi, Flóra Farkas, Tamás Farkas, János Papp, Béla Viskolcz
Abstract:
Background: Appendicitis is a common acute inflammatory condition in both children and adults, but current laboratory markers such as C-reactive protein (CRP), white blood cell count (WBC), absolute neutrophil count (ANC), and red blood cell count (RBC) lack specificity in detecting appendicitis-related inflammation. N-glycosylation, an asparagine-linked glycosylation process, plays a vital role in cellular interactions, angiogenesis, immune response, and effector functions. Altered N-glycosylation impacts tumor growth and both acute and chronic inflammatory processes. IgG, the second most abundant glycoprotein in serum, shows altered glycosylation patterns during inflammation, suggesting that IgG glycan modifications may serve as potential biomarkers for appendicitis. Specifically, increased levels of agalactosylated IgG glycans are a known feature of various inflammatory conditions, potentially including appendicitis. Identifying pediatric appendicitis remains challenging due to the absence of specific biomarkers, which makes diagnosis reliant on clinical symptoms, imaging such as ultrasound, and nonspecific lab indicators (e.g., CRP, WBC, ANC). In this study, we analyzed the IgG-derived N-glycome in pediatric patients with appendicitis compared with healthy controls. Methodology: The N-glycome was analyzed by high-performance liquid chromatography combined with mass spectrometry. IgG was isolated from serum samples on a Protein G column. The IgG-derived glycans were released by enzymatic deglycosylation and fluorescent tags were attached to each glycan moiety, which necessitated sample clean-up for reliable quantitation. Overall, 38 control serum samples and 40 serum samples from patients diagnosed with pediatric appendicitis were analyzed by HILIC-MS methods. Multivariate statistical tests were performed on the area-percentage-under-the-peak data derived from the integrated peaks obtained from the chromatograms.
Conclusions: Our results indicate that the altered IgG N-glycome in pediatric appendicitis is consistent with other observations: the glycosylation pattern reported so far for IgG is characterized by decreased galactosylation and sialylation, and an increase in fucosylation.
Keywords: N-glycosylation, liquid chromatography, mass spectrometry, inflammation, appendicitis, immunoglobulin G
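The data treatment described in the methodology, normalizing integrated chromatogram peak areas to area percentages per sample and then applying a multivariate method, might be sketched as follows. The peak areas, group structure, and the choice of PCA are illustrative assumptions; the abstract does not name the specific multivariate tests used:

```python
import numpy as np

# Illustrative sketch: integrated peak areas -> area percentages per sample,
# then PCA via SVD to compare groups. The data below are synthetic.
rng = np.random.default_rng(1)

# 8 samples x 5 glycan peaks of integrated areas (arbitrary units);
# the last 4 rows emulate a group with an elevated agalactosylated peak.
areas = rng.uniform(50, 150, size=(8, 5))
areas[4:, 0] *= 1.8

# Area percentage under the peak, per sample (each row sums to 100).
area_pct = 100.0 * areas / areas.sum(axis=1, keepdims=True)

# PCA on the mean-centered area percentages.
centered = area_pct - area_pct.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
scores = U * S                   # sample coordinates on the principal components
explained = S**2 / np.sum(S**2)  # fraction of variance per component
print("PC1 explains", round(100 * explained[0], 1), "% of variance")
```

In practice the PC scores of the patient and control groups would then be compared with an appropriate statistical test.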
Procedia PDF Downloads 16
269 Assignment of Legal Personality to Robots: A Premature Meditation
Authors: Solomon Okorley
Abstract:
With the emergence of artificial intelligence, a proposition that has been made with increasing conviction is the need to assign legal personhood to robots. A major problem that arises when dealing with robots is the issue of liability: whom does one hold liable when a robot causes harm? The suggestion to assign legal personality to robots has been made to aid in the assignment of liability. This paper contends that it is premature to assign legal personhood to robots. The paper employed the doctrinal and comparative research methodology. It first discusses the various theories that underpin the granting of legal personhood to juridical persons, to ascertain whether these theories can aid the proposition to assign legal personhood to robots. These theories include the fiction theory, aggregate theory, realist theory, and organism theory. Except for the aggregate theory, the fiction theory, the realist theory and the organism theory provide a good foundation for the proposal that legal personhood be assigned to robots. The paper then considers whether robots should be assigned legal personhood from a jurisprudential approach. The legal positivists assert that no metaphysical presuppositions are needed to determine who could be a legal person: the sole deciding factor is engagement in legal relations, and this prerequisite could be fulfilled by robots. However, rationalists, religionists and naturalists assert that satisfaction of the metaphysical criteria is the basis of legal personality, and since robots do not possess this feature, they cannot be assigned legal personhood. This differing perspective shows that the jurisprudential school of thought to which one belongs influences the decision whether to assign legal personhood to robots. The paper makes arguments for and against the assignment of legal personhood to robots.
Assigning legal personhood to robots is said to be necessary for the assignment of liability, and since robots are independent in their operation, they should be assigned legal personhood. However, it is argued that their degree of autonomy is insufficient: robots do not understand legal obligations, they do not have a will of their own, and the purported autonomy that they possess is an 'imputed autonomy'. A crucial question to be asked is 'whether it is desirable to confer legal personhood on robots' and not 'whether legal personhood should be assigned to robots'. This is due to the subjective nature of the responses to such a question as well as the peculiarities of individual countries' responses to it. The main argument in support of assigning legal personhood to robots is to aid in assigning liability. However, it is argued that conferring legal personhood on robots is not the only way to deal with liability issues: since any of the stakeholders involved with the robot system can be held liable for an accident, it is not desirable to assign legal personhood to robots. It is forecast that in the epoch of strong artificial intelligence, granting robots legal personhood will be plausible; in the current era, however, it is premature.
Keywords: autonomy, legal personhood, premature, jurisprudential
Procedia PDF Downloads 70
268 Results of Three-Year Operation of 220kV Pilot Superconducting Fault Current Limiter in Moscow Power Grid
Authors: M. Moyzykh, I. Klichuk, L. Sabirov, D. Kolomentseva, E. Magommedov
Abstract:
Modern city electrical grids are forced to increase their density due to the increasing number of customers and the requirements for reliability and resiliency. However, progress in this direction is often limited by the capabilities of existing network equipment. New energy sources or grid connections increase the level of short-circuit currents in the adjacent network, which can exceed the maximum ratings of the equipment: the breaking capacity of circuit breakers and the thermal and dynamic current withstand capabilities of disconnectors, cables, and transformers. The superconducting fault current limiter (SFCL) is a modern solution designed to deal with the increasing fault current levels in power grids. The key feature of this device is its near-instant (less than 2 ms) limitation of the current level due to the nature of the superconductor. In 2019, Moscow utilities installed a SuperOx SFCL in the city power grid to test the capabilities of this novel technology. It became the first SFCL in the Russian energy system and is currently the most powerful SFCL in the world. The modern SFCL uses second-generation high-temperature superconductor (2G HTS). Despite its name, HTS still requires the low temperature of liquid nitrogen for operation. As a result, the Moscow SFCL is built with a cryogenic system to cool the superconductor. The cryogenic system consists of three cryostats that contain the superconductor parts and are filled with liquid nitrogen (three phases), three cryocoolers, one water chiller, three cryopumps, and pressure builders. All these components are controlled by an automatic control system. The SFCL has been operating continuously on the city grid for over three years. During that period of operation, numerous faults occurred, including cryocooler failure, chiller failure, pump failure, and others (such as a cryogenic system power outage).
All these faults were resolved without an SFCL shutdown, thanks to the specially designed cryogenic system backups and the quick responses of the grid operator and the SuperOx crew. The paper will describe in detail the results of SFCL operation and cryogenic system maintenance, and what measures were taken to solve and prevent similar faults in the future.
Keywords: superconductivity, current limiter, SFCL, HTS, utilities, cryogenics
Procedia PDF Downloads 83
267 The Microstructure and Corrosion Behavior of High Entropy Metallic Layers Electrodeposited by Low and High-Temperature Methods
Authors: Zbigniew Szklarz, Aldona Garbacz-Klempka, Magdalena Bisztyga-Szklarz
Abstract:
Typical metallic alloys are based on one major alloying component, where the addition of other elements is intended to improve or modify certain properties, most of all the mechanical properties. In 1995, however, a new concept of metallic alloys was described and defined. High entropy alloys (HEA) contain at least five alloying elements in amounts from 5 to 20 at.%. Common features of this type of alloy are the absence of intermetallic phases, high homogeneity of the microstructure and a unique chemical composition, which leads to materials with very high strength indicators, stable structures (also at high temperatures) and excellent corrosion resistance. Hence, HEA can be successfully used as substitutes for typical metallic alloys in various applications where sufficiently high properties are desired. For fabricating HEA, a few routes are applied: 1/ from the liquid phase, i.e., casting (usually arc melting); 2/ from the solid phase, i.e., powder metallurgy (sintering methods preceded by mechanical synthesis); 3/ from the gas phase, e.g., sputtering; or 4/ other deposition methods, such as electrodeposition from liquids. Different production methods create different HEA microstructures, which can entail differences in their properties. The last two methods also allow coatings with HEA structures to be obtained, hereinafter referred to as high entropy films (HEF). With reference to the above, the crucial aim of this work was the optimization of the manufacturing process of multi-component metallic layers (HEF) by low- and high-temperature electrochemical deposition (ED). The low-temperature deposition process was carried out at ambient or elevated temperature (up to 100 °C) in an organic electrolyte. The high-temperature electrodeposition (several hundred degrees Celsius), in turn, allowed the HEF layer to be formed by electrochemical reduction of metals from molten salts. The basic chemical composition of the coatings was CoCrFeMnNi (known as the Cantor alloy).
However, it was modified with other selected elements such as Al or Cu. The optimization of the parameters that allow a composition of the HEF as homogeneous and equimolar as possible is the main result of the presented studies. In order to analyse and compare the microstructures, SEM/EBSD, TEM and XRD techniques were employed. Moreover, the determination of the corrosion resistance of the CoCrFeMnNi(Cu or Al) layers in selected electrolytes (i.e., organic and non-organic liquids) was no less important than the above-mentioned objectives.
Keywords: high entropy alloys, electrodeposition, corrosion behavior, microstructure
Procedia PDF Downloads 81
266 China and the Criminalization of Aggression. The Juxtaposition of Justice and the Maintenance of International Peace and Security
Authors: Elisabetta Baldassini
Abstract:
Responses to atrocities are always unique and context-dependent. They cannot be foretold nor easily prompted. However, the events of the twentieth century had set the scene for the international community to explore new and more robust systems in response to war atrocities, with the ultimate goal being the restoration and maintenance of peace and security. The outlawry of war and the attribution of individual liability for international crimes were two major landmarks that set the roots for the development of international criminal law. From the London Conference (1945) for the establishment of the first international military tribunal in Nuremberg to Rome at the inauguration of the first permanent international criminal court, the development of international criminal law has shaped in itself a fluctuating degree of tensions between justice and maintenance of international peace and security, the cardinal dichotomy of this article. The adoption of judicial measures to achieve peace indeed set justice as an essential feature at the heart of the new international system. Blackhole of this dichotomy is the crime of aggression. Aggression was at first the key component of a wide body of peace projects prosecuted under the charges of crimes against peace. However, the wide array of controversies around aggression mostly related to its definition, determination and the involvement of the Security Council silenced, partly, a degree of efforts and agreements. Notwithstanding the establishment of the International Criminal Court (ICC), jurisdiction over the crime of aggression was suspended until an agreement over the definition and the conditions for the Court’s exercise of jurisdiction was reached. Compromised over the crime was achieved in Kampala in 2010 and the Court’s jurisdiction over the crime of aggression was eventually activated on 17 July 2018. 
China has steadily supported the advancement of international criminal justice together with the establishment of a permanent international judicial body to prosecute grave crimes, and it has proactively participated in the various stages of the codification and development of the crime of aggression. However, China has also expressed systematic reservations and suffered setbacks. With the use of primary and secondary sources, including semi-structured interviews, this research aims at analyzing the role that China has played throughout the substantive historical development of the crime of aggression, demonstrating a sharp inclination towards the maintenance of international peace and security. Such state behavior seems to reflect national and international political mechanisms that gravitate around a distinct rationale involving a share of culture and tradition.
Keywords: maintenance of peace and security, cultural expression of justice, crime of aggression, China
Procedia PDF Downloads 228
265 Implementing Critical Friends Groups in Schools
Authors: S. Odabasi Cimer, A. Cimer
Abstract:
Recently, the poor quality of education, low-achieving students, weak international exam performance, and the little or no effect of education reforms on classroom teaching have been the main problems of education discussed in Turkey. Research has shown that the quality of an education system cannot exceed the quality of its teachers and teaching. Therefore, in-service training (INSET) courses are important for improving teacher quality and, thereby, the quality of education. However, according to research evaluating INSET courses in Turkey, they are not effective in improving the quality of teaching in the classroom. The main reason is that INSET courses are delivered in a limited time and presented theoretically, which does not meet the needs of teachers; as a result, the knowledge and skills taught are not used in the classrooms. Recently, developed countries have been using Critical Friends Groups (CFGs) successfully for the school-based training of teachers. CFGs are learning groups of 6-10 teachers aimed at fostering their capacity to undertake instructional and personal improvement and schoolwide reform. CFGs have been recognized as a critical feature in school reform, improving teaching practice and raising student achievement. In addition, in the USA, teachers have named CFGs one of the most powerful professional development activities in which they have ever participated. In Turkey, however, the concept is new. This study aimed to investigate the application, evaluation, and promotion of CFGs, which have the potential to contribute to teacher development and student learning in schools in Turkey. For this purpose, the study employed a qualitative approach and case study methodology to implement the model in high schools. The research was conducted in two schools, and 13 teachers working in these schools participated. 
The study lasted two years, and the data were collected through various data collection tools including interviews, meeting transcripts, questionnaires, portfolios, and diaries. The results of the study showed that CFGs contributed to the professional development of teachers and to their students’ learning. They also contributed to a culture of collaborative work in schools. A number of barriers and challenges which prevent effective implementation were also identified.
Keywords: critical friends group, education reform, science learning, teacher education
Procedia PDF Downloads 127
264 Efficient Residual Road Condition Segmentation Network Based on Reconstructed Images
Authors: Xiang Shijie, Zhou Dong, Tian Dan
Abstract:
This paper focuses on the application of real-time semantic segmentation technology to complex road condition recognition, aiming to address the critical issue of how to improve segmentation accuracy while ensuring real-time performance. Semantic segmentation technology has broad application prospects in fields such as autonomous vehicle navigation and remote sensing image recognition. However, current real-time semantic segmentation networks face significant technical challenges in balancing speed and accuracy. To tackle this problem, this paper conducts an in-depth study and proposes an innovative Guided Image Reconstruction Module. By resampling high-resolution images into a set of low-resolution images, this module effectively reduces computational complexity, allowing the network to extract features more efficiently within limited resources, thereby improving the performance of real-time segmentation tasks. In addition, a dual-branch network structure is designed to fully leverage the advantages of different feature layers. A novel Hybrid Attention Mechanism is also introduced, which can dynamically capture multi-scale contextual information and effectively enhance the focus on important features, thus improving the segmentation accuracy of the network in complex road conditions. Compared with traditional methods, the proposed model achieves a better balance between accuracy and real-time performance and demonstrates competitive results in road condition segmentation tasks. Experimental results show that this method not only significantly improves segmentation accuracy while maintaining real-time performance but also remains stable across diverse and complex road conditions, making it highly applicable in practical scenarios. 
By incorporating the Guided Image Reconstruction Module, the dual-branch structure, and the Hybrid Attention Mechanism, this paper presents a novel approach to real-time semantic segmentation tasks, which is expected to further advance the development of this field.
Keywords: hybrid attention mechanism, image reconstruction, real-time, road status recognition
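The abstract does not specify how the Guided Image Reconstruction Module performs its resampling. As a hedged illustration only, the core idea of decomposing a high-resolution image into a set of low-resolution images can be sketched with a lossless space-to-depth decomposition; the stride, array layout, and NumPy implementation below are assumptions for the sketch, not the authors' actual module:

```python
import numpy as np

def space_to_depth(image: np.ndarray, stride: int = 2) -> np.ndarray:
    """Decompose an (H, W, C) image into stride*stride low-resolution
    sub-images of shape (H//stride, W//stride, C), stacked on a new axis.
    Every pixel appears exactly once, so the decomposition is lossless."""
    h, w, _ = image.shape
    assert h % stride == 0 and w % stride == 0, "dimensions must divide evenly"
    subs = [image[i::stride, j::stride, :]
            for i in range(stride) for j in range(stride)]
    return np.stack(subs, axis=0)  # shape: (stride**2, H//stride, W//stride, C)

# A 4x4 RGB image becomes four 2x2 sub-images at half resolution.
img = np.arange(4 * 4 * 3, dtype=np.float32).reshape(4, 4, 3)
low_res_set = space_to_depth(img)
```

Each low-resolution sub-image could then be fed to the lighter branch of a dual-branch network; because no pixels are discarded, low-resolution features can in principle be recombined without information loss.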
Procedia PDF Downloads 25
263 Software User Experience Enhancement through User-Centered Design and Co-design Approach
Authors: Shan Wang, Fahad Alhathal, Hari Subramanian
Abstract:
User-centered design skills play an important role in crafting a positive and intuitive user experience for software applications. Embracing a user-centric design approach involves understanding the needs, preferences, and behaviors of the end-users throughout the design process. This mindset not only enhances the usability of the software but also fosters a deeper connection between the digital product and its users. This paper covers a 6-month knowledge exchange collaboration between an academic institution and an industry partner in the UK in 2023; it aims to improve the user experience of a digital platform used as a knowledge management tool, to understand users' preferences for features, identify sources of frustration, and pinpoint areas for enhancement. The research applied one of the most effective methods of implementing user-centered design, co-design workshops, to test the user onboarding experience through the active participation of users in the design process. More specifically, in January 2023, we organized eight co-design workshops with a diverse group of 11 individuals. Throughout these co-design workshops, we accumulated a total of 11 hours of qualitative data in both video and audio formats. Subsequently, we analyzed the user journeys, identifying common issues and potential areas for improvement and distilling them into three insights. This analysis was pivotal in guiding the knowledge management software team in prioritizing feature enhancements and design improvements. Employing a user-centered design thinking process, we developed a series of graphic design solutions in collaboration with the software management tool company. These solutions targeted the onboarding user experience, workplace interfaces, and interaction design. Some of these design solutions were translated into tangible interfaces for the knowledge management tool. 
By actively involving users in the design process and valuing their input, developers can create products that are not only functional but also resonate with the end-users, ultimately leading to greater success in the competitive software landscape. In conclusion, this paper not only contributes insights into designing onboarding user experiences for software within a co-design approach but also presents key theories on leveraging the user-centered design process in software design to enhance overall user experiences.
Keywords: user experience design, user-centered design, co-design approach, knowledge management tool
Procedia PDF Downloads 13
262 A Study of Possible Approach to Facilitate Social Sustainability of Industrial Land Redevelopment-Led Urban Regeneration
Authors: Hung Hing Chan, Tai-Shan Hu
Abstract:
Kaohsiung has been an industrial city of Taiwan for over a hundred years. Consequently, several industrial sites were left abandoned when deindustrialization began, resulting in the decay of the adjacent urban communities. These industrial lands, which are brownfields that are potentially or already contaminated by hazardous substances, have created social injustice for the surrounding communities. The redevelopment of industrial land brings sustainable development to the communities, and the redevelopment can take different forms depending on local conditions. This research studies possible approaches to facilitating the social sustainability of urban regeneration resulting from industrial land redevelopment projects, an aspect that has long been ignored. The aim of the research is to identify the best Western practices of brownfield redevelopment for facilitating the social aspect of sustainable urban regeneration and to make a contribution to the industrial land redevelopment of Taiwan. The research is conducted via literature review and case study. Industrial land redevelopment has been a social focus in blighted communities seeking to promote urban regeneration in the post-industrial age. This kind of redevelopment tends towards constructing the built environment; as a result, the environmental and economic aspects of sustainability of the redeveloped industrial land are boosted, while the social aspect is not necessarily improved, since the local communities affected are rarely engaged in the decision-making process and adequate resource allocation to the projects is not guaranteed. 
To ensure that an improvement in social sustainability is reached, the recommendations of this research, such as civic engagement, the formation of a dedicated brownfield regeneration agency, and resource allocation to employ a brownfield process manager and to support strategic communication, should be incorporated into the real practices of industrial land-led urban regeneration. Besides, the case study also shows that the social sustainability of industrial land-led urban regeneration can be promoted by (1) upholding local features and public participation in the regeneration process, (2) allocating resources and enforcing a responsibility system, and (3) assuring financial resources for the urban regeneration projects and residents. Subsequent research will involve in-depth interviews with the village chiefs of the related communities in Kaohsiung and questionnaires for the community members to comprehend their opinions regarding social sustainability, aiming at evaluating social sustainability and finding out which kind of redevelopment project tends to better support the social dimension of sustainable development.
Keywords: brownfield, industrial land, redevelopment, social sustainability, urban regeneration
Procedia PDF Downloads 219
261 Using Corpora in Semantic Studies of English Adjectives
Authors: Oxana Lukoshus
Abstract:
The methods of corpus linguistics, a well-established field of research, are being increasingly applied in cognitive linguistics. Corpus data are especially useful for quantitative studies of grammatical and other aspects of language. The main objective of this paper is to demonstrate how present-day corpora can be applied in semantic studies in general and in semantic studies of adjectives in particular. Polysemantic adjectives have been the subject of numerous studies, but most of them have been carried out on dictionaries. Undoubtedly, dictionaries are viewed as one of the basic data sources, but only at the initial steps of a research project: the author usually starts with the analysis of the lexicographic data, after which s/he comes up with a hypothesis. In the research conducted, the three polysemantic synonyms true, loyal and faithful were analyzed in terms of the differences and similarities in their semantic structure. A corpus-based approach to the study of the above-mentioned adjectives involves the following. After the analysis of the dictionary data, reference was made to two corpora to study the distributional patterns of the words under study: the British National Corpus (BNC) and the Corpus of Contemporary American English (COCA). These corpora are continually updated and contain thousands of examples of the words under research, which makes them a useful and convenient data source. For the purpose of this study, there were no special requirements regarding the genre, mode or time of the texts included in the corpora. Out of the range of possibilities offered by corpus-analysis software (e.g. word lists, statistics of word frequencies, etc.), the most useful tool for the semantic analysis was extracting a list of co-occurrences for the given search words. Searching by lemmas (e.g. true, true to) and grouping the results by lemmas proved to be the most efficient corpus features for the adjectives under study. 
Following the search process, the corpora provided a list of co-occurrences, which were then analyzed and classified. Not every co-occurrence was relevant to the analysis. For example, phrases like ‘An enormous sense of responsibility to protect the minds and hearts of the faithful from incursions by the state was perceived to be the basic duty of the church leaders’ or ‘True,’ said Phoebe, ‘but I'd probably get to be a Union Official immediately’ were left out, since in the first example the faithful is a substantivized adjective and in the second example true is used alone, with no other parts of speech. The subsequent analysis of the corpus data provided the grounds for the distribution groups of the adjectives under study, which were then investigated with the help of a semantic experiment. To sum up, the corpus-based approach has proved to be a powerful, reliable and convenient tool for obtaining data for further semantic study.
Keywords: corpora, corpus-based approach, polysemantic adjectives, semantic studies
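Querying the BNC or COCA requires their web interfaces or licensed exports, so the extraction step cannot be reproduced here directly. Purely as an illustrative sketch of the co-occurrence counting the study describes, the windowed counting below is an assumption standing in for the corpus software's concordance tooling; the function name, window size, and toy sentence are all invented for the example:

```python
from collections import Counter

def cooccurrences(tokens, target, window=3):
    """Count tokens appearing within `window` positions of each
    occurrence of `target`: a toy stand-in for a concordance query."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            # Count neighbors in the window, excluding the target itself.
            counts.update(t for j, t in enumerate(tokens[lo:hi], start=lo) if j != i)
    return counts

# Toy "corpus": in practice the hits would come from BNC/COCA concordance lines.
sample = "he stayed true to his word and remained loyal to the cause".split()
freq = cooccurrences(sample, "true")
```

A real pipeline would also lemmatize the tokens before counting, so that surface forms like truer or truest group under the lemma true, mirroring the lemma-based search the abstract recommends.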
Procedia PDF Downloads 315