Search results for: computer application
1146 Study of Open Spaces in Urban Residential Clusters in India
Authors: Renuka G. Oka
Abstract:
From chowks to streets to verandahs to courtyards, residential open spaces occupy a significant place in traditional urban neighborhoods of India. At various levels of intersection, the open spaces, with attributes like juxtaposition with the built fabric, scale, climate sensitivity and response, multi-functionality, etc., reflect and respond to the patterns of human interactions. These spaces also tend to be quite well utilized. On the other hand, imbalanced utilization of open spaces is commonly observed in newly/recently planned residential clusters. This may be due to a lack of activity generators nearby, wrong locations, excess provision, or improper incorporation of the aforementioned design attributes. These casual observations suggest the necessity for a systematic study of current residential open spaces. The exploratory study thus attempts to draw lessons through a structured inspection of residential open spaces to understand the effective environment as revealed through their use patterns. Here, residential open spaces are considered in a wider sense to incorporate all the un-built fabric around; these thus include both use spaces and access spaces. For the study, open spaces in ten exemplary housing clusters/societies built during the last ten years across India are studied. A threefold inquiry is attempted in this direction. The first relates to identifying and determining the effects of various physical functions like space organization, size, hierarchy, thermal and optical comfort, etc. on the performance of residential open spaces. The second part sets out to understand socio-cultural variations in values, lifestyle, and beliefs which determine activity choices and behavioral preferences of users for respective residential open spaces. The third inquiry further observes the application of these research findings to the design process to derive meaningful and qualitative design advice. 
However, the study also emphasizes developing a suitable framework of analysis and carving out appropriate methods and approaches to probe these aspects of the inquiry. Given this emphasis, a considerable portion of the research details the conceptual framework for the study. This framework is supported by an in-depth search of the available literature. The findings are worked into design solutions which integrate the open space systems with the overall design process for residential clusters. The open spaces in residential areas present great complexities both in terms of their use patterns and the determinants of their functional responses. The broad aim of the study is, therefore, to arrive at a reconsideration of the standards and qualitative parameters used by designers, on the basis of a more substantial inquiry into the use patterns of open spaces in residential areas.
Keywords: open spaces, physical and social determinants, residential clusters, use patterns
Procedia PDF Downloads 150
1145 Formulation and In Vivo Evaluation of Salmeterol Xinafoate Loaded MDI for Asthma Using Response Surface Methodology
Authors: Paresh Patel, Priya Patel, Vaidehi Sorathiya, Navin Sheth
Abstract:
The aim of the present work was to fabricate a Salmeterol Xinafoate (SX) metered dose inhaler (MDI) for asthma and to evaluate the SX-loaded solid lipid nanoparticles (SLNs) for pulmonary delivery. Solid lipid nanoparticles can be used to deliver particles to the lungs via MDI. A modified solvent emulsification diffusion technique was used to prepare Salmeterol Xinafoate loaded solid lipid nanoparticles, using Compritol 888 ATO as lipid, Tween 80 as surfactant, D-mannitol as cryoprotecting agent, and L-leucine to improve aerosolization behaviour. A Box-Behnken design was applied with 17 runs. 3-D response surface plots and contour plots were drawn, and the optimized formulation was selected based on minimum particle size and maximum % EE. The formulation was also characterized by % yield, in vitro diffusion study, scanning electron microscopy, X-ray diffraction, DSC and FTIR. Particle size and zeta potential were analyzed by a Zetatrac particle size analyzer, and aerodynamic properties were assessed by cascade impactor. Preconvulsion time was examined for the control and treatment groups and compared with the marketed product group. The MDI was evaluated for leakage test, flammability test, spray test and content per puff. By experimental design, particle size and % EE were found to be in the range of 119-337 nm and 62.04-76.77%, respectively, by the solvent emulsification diffusion technique. Morphologically, the particles had a spherical shape and uniform distribution. DSC and FTIR studies showed no interaction between drug and excipients. The zeta potential showed good stability of the SLNs. The respirable fraction was found to be 52.78%, indicating delivery to the deep parts of the lung such as the alveoli. The animal study showed that the fabricated MDI protects the lungs against histamine-induced bronchospasm in guinea pigs. The MDI showed sphericity of particles in the spray pattern, 96.34% content per puff, and was non-flammable. SLNs prepared by the solvent emulsification diffusion technique provide a desirable size for deposition into the alveoli. 
This delivery platform opens up a wide range of treatment applications for pulmonary diseases such as asthma via solid lipid nanoparticles.
Keywords: salmeterol xinafoate, solid lipid nanoparticles, Box-Behnken design, solvent emulsification diffusion technique, pulmonary delivery
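The Box-Behnken construction used in the optimization above can be sketched programmatically. A minimal illustration, assuming three formulation factors (the abstract does not name them explicitly) and five replicated centre points to reach the reported 17 runs:

```python
from itertools import combinations

def box_behnken(k, n_center=5):
    """Box-Behnken design in coded units: each pair of factors is varied over
    the four (+/-1, +/-1) corners while the remaining factors stay at 0,
    followed by replicated centre points."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs.extend([0] * k for _ in range(n_center))  # centre-point replicates
    return runs

design = box_behnken(3)   # 3 pairs x 4 corners + 5 centre points = 17 runs
print(len(design))
```

A quadratic response surface for particle size and % EE would then be fitted to measurements taken at these 17 factor settings.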
Procedia PDF Downloads 452
1144 Energy Efficiency Measures in Canada’s Iron and Steel Industry
Authors: A. Talaei, M. Ahiduzzaman, A. Kumar
Abstract:
In Canada, an increase in the production of iron and steel is anticipated to satisfy the increasing demand for iron and steel in the oil sands and automobile industries. It is predicted that GHG emissions from the iron and steel sector will increase continuously until 2030 and, with emissions of 20 million tonnes of carbon dioxide equivalent, the sector will account for more than 2% of total national GHG emissions, or 12% of industrial emissions (i.e. a 25% increase from 2010 levels). Therefore, there is an urgent need to improve the energy intensity and to implement energy efficiency measures in the industry to reduce the GHG footprint. This paper analyzes the current energy consumption in the Canadian iron and steel industry and identifies energy efficiency opportunities to improve the energy intensity and mitigate greenhouse gas emissions from this industry. In order to do this, a demand tree is developed representing the different iron and steel production routes and the technologies within each route. The main energy consumer within the industry is found to be fired heaters, accounting for 81% of overall energy consumption, followed by motor systems and steam generation, each accounting for 7% of total energy consumption. Eighteen different energy efficiency measures are identified which will help the efficiency improvement in various subsectors of the industry. In the sintering process, heat recovery from coolers provides a high potential for energy saving and can be integrated in both new and existing plants. Coke dry quenching (CDQ) has the same advantages. Within the blast furnace iron-making process, injection of large amounts of coal into the furnace appears to be more effective than any other option in this category. In addition, because coal-powered electricity is being phased out in Ontario (where the majority of iron and steel plants are located), there will be surplus coal that could be used in iron and steel plants. 
In the steel-making processes, the recovery of Basic Oxygen Furnace (BOF) gas and scrap preheating provide considerable potential for energy savings in the BOF and Electric Arc Furnace (EAF) steel-making processes, respectively. However, despite the energy savings potential, BOF gas recovery is not applicable in existing plants using steam recovery processes. Given that the share of EAF in steel production is expected to increase, the application potential of the technology will be limited. On the other hand, the long lifetime of the technology and the expected capacity increase of EAF make scrap preheating a justified energy saving option. This paper presents the results of the assessment of the above-mentioned options in terms of their costs and GHG mitigation potential.
Keywords: Iron and Steel Sectors, Energy Efficiency Improvement, Blast Furnace Iron-making Process, GHG Mitigation
Procedia PDF Downloads 399
1143 Spatial Pattern and Predictors of Malaria in Ethiopia: Application of Autologistic Spatial Regression
Authors: Melkamu A. Zeru, Yamral M. Warkaw, Aweke A. Mitku, Muluwerk Ayele
Abstract:
Introduction: Malaria is a severe health threat in the world, mainly in Africa. It is a major cause of health problems, in which the risks of morbidity and mortality associated with malaria cases are characterized by spatial variation across the country. This study aimed to investigate the spatial patterns and predictors of malaria distribution in Ethiopia. Methods: A weighted sample of 15,239 individuals with rapid diagnosis tests was obtained from the Central Statistical Agency and the Ethiopia malaria indicator survey of 2015. Global Moran's I and Moran scatter plots were used to determine the distribution of malaria cases, whereas the local Moran's I statistic was used to identify exposed areas. In data manipulation, machine learning was used for variable reduction, and the statistical software R, Stata, and Python were used for data management and analysis. The autologistic spatial binary regression model was used to investigate the predictors of malaria. Results: The final autologistic regression model reported that male clients had a significant positive effect on malaria cases as compared to female clients [AOR=2.401, 95% CI: (2.125-2.713)]. The distribution of malaria across the regions was different. The highest incidence of malaria was found in Gambela [AOR=52.55, 95% CI: (40.54-68.12)], followed by Beneshangul [AOR=34.95, 95% CI: (27.159-44.963)]. Similarly, individuals in Amhara [AOR=0.243, 95% CI: (0.195-0.303)], Oromiya [AOR=0.197, 95% CI: (0.158-0.244)], Dire Dawa [AOR=0.064, 95% CI: (0.049-0.082)], Addis Ababa [AOR=0.057, 95% CI: (0.044-0.075)], Somali [AOR=0.077, 95% CI: (0.059-0.097)], SNNPR [AOR=0.329, 95% CI: (0.261-0.413)] and Harari [AOR=0.256, 95% CI: (0.201-0.325)] were less likely to have malaria as compared with Tigray. Furthermore, for a one-meter increase in altitude, the odds of a positive rapid diagnostic test (RDT) decrease by 1.6% [AOR = 0.984, 95% CI: (0.984-0.984)]. 
The use of a shared toilet facility was found to be a risk factor for malaria in Ethiopia [AOR=1.671, 95% CI: (1.504-1.854)]. The spatial autocorrelation variable changes the constant from AOR = 0.471 for logistic regression to AOR = 0.164 for autologistic regression. Conclusions: This study found that the incidence of malaria in Ethiopia had a spatial pattern associated with socio-economic, demographic, and geographic risk factors. Spatial clustering of malaria cases occurred in all regions, and the risk of clustering differed across the regions. The risk of malaria was found to be higher for those who live in soil-floor houses as compared to those who live in houses with cement or ceramic floors. Similarly, households with thatched, metal-and-thin, and other roof types have a higher risk of malaria than houses roofed with ceramic tiles. Moreover, using a protective anti-mosquito net reduced the risk of malaria incidence.
Keywords: malaria, Ethiopia, autologistic, spatial model, spatial clustering
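Global Moran's I, used above to detect spatial clustering, is straightforward to compute. A hedged sketch with an illustrative binary contiguity weight matrix (the survey's actual spatial weights are not reproduced here):

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I: positive values indicate spatial clustering,
    values near zero indicate spatial randomness."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    n = x.size
    z = x - x.mean()                      # deviations from the mean
    num = (w * np.outer(z, z)).sum()      # weighted cross-products of neighbours
    den = (z ** 2).sum()
    return (n / w.sum()) * num / den

# Four sites in a row (rook contiguity); positivity clusters at one end
x = [1.0, 1.0, 0.0, 0.0]
w = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
print(morans_i(x, w))   # positive, consistent with spatial clustering
```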
Procedia PDF Downloads 37
1142 Remote Criminal Proceedings as an Implication to Rethink the Principles of Criminal Procedure
Authors: Inga Žukovaitė
Abstract:
This paper aims to present postdoctoral research on remote criminal proceedings in court. In this period, when most countries have introduced the possibility of remote criminal proceedings into their procedural laws, it is possible not only to identify the weaknesses and strengths of the legal regulation but also to assess the effectiveness of the instrument used and to develop an approach to the process. The example of some countries (for example, Italy) shows, on the one hand, that criminal procedure, based on orality and immediacy, does not lend itself to easy modifications that pose even a slight threat of devaluing these principles in a society with well-established traditions of this procedure. On the other hand, such strong opposition and criticism make us ask whether we are facing the possibility of rethinking the traditional ways to understand the safeguards in order to preserve their essence without devaluing their traditional package, but looking for new components to replace or compensate for the so-called “loss” of safeguards. The reflection on technological progress in the field of criminal procedural law indicates the need to rethink, on the basis of fundamental procedural principles, the safeguards that can replace or compensate for those that are in crisis as a result of the intervention of technological progress. Discussions in academic doctrine on the impact of technological interventions on the proceedings as such, or on the limits of such interventions, refer to the principles of criminal procedure as a point of reference. Against the background of technology's subordinate role, scholarly debate still addresses the issue of whether the court will not gradually become a mere site for the exercise of penal power, with the resultant consequences: the deformation of the procedure itself as a physical ritual. 
In this context, this work seeks to illustrate the relationship between remote criminal proceedings in court and the principle of immediacy, the concept of which is based on the application of different models of criminal procedure (inquisitorial and adversarial). The aim is to assess the challenges posed for legal regulation by the interaction of technological progress with the principles of criminal procedure. The main hypothesis to be tested is that the adoption of remote proceedings is directly linked to the prevailing model of criminal procedure: the more the principles of the inquisitorial model are applied to the criminal process, the more acceptable a remote criminal trial is; conversely, the more the criminal process is based on an adversarial model, the more the remote criminal process is seen as incompatible with the principle of immediacy. In order to achieve this goal, the following tasks are set: to identify whether there is a difference between the adversarial and inquisitorial models in assessing remote proceedings against the immediacy principle, and to analyse the main aspects of the regulation of remote criminal proceedings based on the examples of different countries (for example, Lithuania, Italy, etc.).
Keywords: remote criminal proceedings, principle of orality, principle of immediacy, adversarial model, inquisitorial model
Procedia PDF Downloads 69
1141 Tracing Sources of Sediment in an Arid River, Southern Iran
Authors: Hesam Gholami
Abstract:
Elevated suspended sediment loads in riverine systems resulting from accelerated erosion due to human activities are a serious threat to the sustainable management of watersheds and ecosystem services therein worldwide. Therefore, mitigation of deleterious sediment effects as a distributed or non-point pollution source in the catchments requires reliable provenance information. Sediment tracing or sediment fingerprinting, as a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and modified by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups comprising Monte Carlo simulation, Bayesian approaches and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework in the estimation of sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. 
The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated using 10 virtual sediment (VS) samples with known source contributions, using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%) and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique to quantify the sources of sediments in catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.
Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran
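The GLUE-style unmixing step can be sketched in a few lines: sample candidate source contributions, score each against the tracer signature of a target mixture, and retain a behavioural subset whose weighted mean gives the contribution estimate. The tracer means and the inverse-error likelihood below are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mean tracer concentrations for three sub-basin sources
# (rows: west, central, east; columns: two tracers) -- illustrative only.
sources = np.array([[12.0, 3.0], [8.0, 5.0], [20.0, 1.0]])
mixture = np.array([16.4, 1.9])            # target sediment sample

n = 100_000
p = rng.dirichlet(np.ones(3), size=n)      # random contributions summing to 1
pred = p @ sources                         # predicted mixture concentrations
sse = ((pred - mixture) ** 2).sum(axis=1)
likelihood = 1.0 / sse                     # simple inverse-error likelihood
behavioural = sse < np.quantile(sse, 0.01) # keep the best 1% as behavioural
wts = likelihood[behavioural] / likelihood[behavioural].sum()
estimate = (p[behavioural] * wts[:, None]).sum(axis=0)
print(estimate)   # likelihood-weighted mean contribution of each source
```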
Procedia PDF Downloads 75
1140 A Minimally Invasive Approach Using Bio-Miniatures Implant System for Full Arch Rehabilitation
Authors: Omid Allan
Abstract:
The advent of ultra-narrow diameter implants initially offered an alternative to wider conventional implants. However, their design limitations have restricted their applicability primarily to overdentures and cement-retained fixed prostheses, often with unpredictable long-term outcomes. The introduction of the new Miniature Implants has revolutionized the field of implant dentistry, leading to a more streamlined approach. The utilization of Miniature Implants has emerged as a promising alternative to the traditional approach that entails traumatic sequential bone drilling procedures and the use of conventional implants for full and partial arch restorations. The innovative "BioMiniatures" Implant System serves as a groundbreaking bridge connecting mini implants with standard implant systems. This system allows practitioners to harness the advantages of ultra-small implants, enabling minimally invasive insertion and facilitating the application of fixed screw-retained prostheses, which were previously available only with conventional wider implant systems. This approach streamlines full and partial arch rehabilitation with minimal or even no bone drilling, significantly reducing surgical risks and complications for clinicians while minimizing patient morbidity. The ultra-narrow diameter and self-advancing features of these implants eliminate the need for invasive and technically complex procedures such as bone augmentation and guided bone regeneration (GBR), particularly in cases involving thin alveolar ridges. Furthermore, the absence of a microgap between the implant and abutment eliminates the potential for micro-leakage and micro-pumping effects, effectively mitigating the risk of marginal bone loss and future peri-implantitis. The cumulative experience of restoring over 50 full and partial arch edentulous cases with this system has yielded an outstanding success rate exceeding 97%. 
The long-term success with a stable marginal bone level in the study firmly establishes these implants as a dependable alternative to conventional implants, especially for full arch rehabilitation cases. Full arch rehabilitation with these implants holds the promise of providing a simplified solution for edentulous patients, who typically present with atrophic narrow alveolar ridges, eliminating the need for extensive GBR and bone augmentation to restore their dentition with fixed prostheses.
Keywords: mini-implant, biominiatures, miniature implants, minimally invasive dentistry, full arch rehabilitation
Procedia PDF Downloads 77
1139 Chromium (VI) Removal from Aqueous Solutions by Ion Exchange Processing Using Eichrom 1-X4, Lewatit Monoplus M800 and Lewatit A8071 Resins: Batch Ion Exchange Modeling
Authors: Havva Tutar Kahraman, Erol Pehlivan
Abstract:
In recent years, environmental pollution from wastewater has risen critically. Effluents discharged from various industries cause this challenge. Different types of pollutants, such as organic compounds, oxyanions, and heavy metal ions, pose this threat to humans and all other living things. Heavy metals, in particular, are considered one of the main pollutant groups in wastewater. Therefore, there is a great need to apply and enhance water treatment technologies. Among the adopted treatment technologies, the adsorption process is one of the methods gaining more and more attention because of its easy operation, simplicity of design, and versatility. The ion exchange process is one of the preferred methods for the removal of heavy metal ions from aqueous solutions and has found widespread application in water remediation technologies during the past several decades. The purpose of this study is therefore the removal of hexavalent chromium, Cr(VI), from aqueous solutions. Cr(VI) is a well-known, highly toxic metal which modifies the DNA transcription process and causes important chromosomal aberrations. The treatment and removal of this heavy metal have received great attention in order to maintain allowed legal standards. The present paper investigates some aspects of the use of three anion exchange resins: Eichrom 1-X4, Lewatit Monoplus M800 and Lewatit A8071. Batch adsorption experiments were carried out to evaluate the adsorption capacity of these three commercial resins in the removal of Cr(VI) from aqueous solutions. The chromium solutions used in the experiments were synthetic solutions. Experiments on the parameters that affect adsorption (solution pH, adsorbent concentration, contact time, and initial Cr(VI) concentration) were performed at room temperature. High adsorption rates of metal ions for the three resins were reported at the onset, and plateau values were gradually reached within 60 min. 
The optimum pH for Cr(VI) adsorption was found to be 3.0 for all three resins. Adsorption decreases with increasing pH for the three anion exchangers. The suitability of the Freundlich, Langmuir and Scatchard models was investigated for the Cr(VI)-resin equilibrium. The results obtained in this study demonstrate excellent comparability between the three anion exchange resins, with Eichrom 1-X4 being the most effective, showing the highest adsorption capacity for the removal of Cr(VI) ions. The anion exchange resins investigated in this study can be used for the efficient removal of chromium from water and wastewater.
Keywords: adsorption, anion exchange resin, chromium, kinetics
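The Langmuir and Freundlich fits referred to above are commonly obtained from linearized forms of the isotherms. A sketch on hypothetical equilibrium data (the study's measured values are not reproduced here):

```python
import numpy as np

# Hypothetical equilibrium data: Ce (mg/L) and qe (mg/g) -- illustrative only.
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([12.0, 20.0, 30.0, 38.0, 44.0])

# Linearised Langmuir: Ce/qe = Ce/qmax + 1/(KL*qmax)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax, KL = 1.0 / slope, slope / intercept

# Linearised Freundlich: ln qe = ln KF + (1/n) ln Ce
fs, fi = np.polyfit(np.log(Ce), np.log(qe), 1)
KF, n = np.exp(fi), 1.0 / fs

print(f"Langmuir: qmax={qmax:.1f} mg/g, KL={KL:.3f} L/mg")
print(f"Freundlich: KF={KF:.2f}, n={n:.2f}")
```

Comparing the correlation coefficients of these linear fits (and of the Scatchard plot, qe/Ce versus qe) is the usual way to judge which model describes the equilibrium best.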
Procedia PDF Downloads 260
1138 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection
Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye
Abstract:
Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed even at a small angle. Once documents have been digitized through the scanning system and binarization has been achieved, document skew correction is required before further image analysis. Research efforts have been put into this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared based on performance criteria. The most important performance criteria are the accuracy of skew angle detection, the range of skew angles for detection, the speed of processing the image, the computational complexity and, consequently, the memory space used. The standard Hough Transform has successfully been implemented for the text document skew angle estimation application. However, the accuracy of the standard Hough Transform algorithm depends largely on how fine the angular step size is. Increased accuracy consequently consumes more time and memory space, especially where the number of pixels is considerably large. Whenever the Hough transform is used, there is always a tradeoff between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm resolves the contradiction between memory space, running time and accuracy. Our algorithm starts with a first step of angle estimation accurate to zero decimal places using the standard Hough Transform algorithm, achieving minimal running time and space but limited accuracy. 
Then, to increase accuracy, supposing the angle estimated using the basic Hough algorithm is x degrees, we run the basic algorithm again over a range around x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The procedure of our skew estimation and correction algorithm for text images is implemented using MATLAB. The memory space estimation and processing time are also tabulated, with the skew angle assumed to lie between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document
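The coarse-to-fine refinement can be sketched as follows. This is a simplified Python illustration of the idea (the paper's implementation is in MATLAB): each candidate angle is scored by how sharply the foreground pixels concentrate into Hough bins, and the angular step is narrowed around the current estimate at each iteration:

```python
import numpy as np

def hough_skew(img, thetas):
    """Score each candidate angle by the peakiness of its Hough projection:
    at the true skew, pixels of each text line collapse into few rho bins."""
    ys, xs = np.nonzero(img)                          # foreground pixel coords
    diag = int(np.hypot(*img.shape)) + 1
    scores = []
    for t in thetas:
        rad = np.deg2rad(t)
        rho = (ys * np.cos(rad) - xs * np.sin(rad)).astype(int) + diag
        hist = np.bincount(rho, minlength=2 * diag)
        scores.append((hist ** 2).sum())              # rewards concentration
    return thetas[int(np.argmax(scores))]

def coarse_to_fine_skew(img, lo=-45.0, hi=45.0, decimals=2):
    """Integer-degree pass first, then refine one decimal place at a time."""
    est = hough_skew(img, np.arange(lo, hi + 1.0, 1.0))
    step = 1.0
    for _ in range(decimals):
        step /= 10.0
        est = hough_skew(img, np.arange(est - 10 * step,
                                        est + 10 * step + step / 2, step))
    return est
```

Each refinement pass evaluates only about 20 candidate angles, so total work stays far below a single fine-step sweep of the whole angular range.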
Procedia PDF Downloads 160
1137 Impact of Climate Change on Flow Regime in Himalayan Basins, Nepal
Authors: Tirtha Raj Adhikari, Lochan Prasad Devkota
Abstract:
This research studied the hydrological regime of three glacierized river basins in the Khumbu, Langtang and Annapurna regions of Nepal using the Hydrologiska Byråns Vattenbalansavdelning (HBV) model, HBV-light 3.0. Future discharge scenarios are also studied using downscaled climate data derived from a statistical downscaling method. General Circulation Models (GCMs) successfully simulate future climate variability and climate change on a global scale; however, poor spatial resolution constrains their application for impact studies at a regional or local level. The downscaled precipitation and temperature data from Coupled Global Circulation Model 3 (CGCM3) were used for the climate projection, under the A2 and A1B SRES scenarios. In addition, observed historical temperature, precipitation and discharge data were collected from 14 different hydro-meteorological locations for the implementation of this study, which includes watershed and hydro-meteorological characteristics, trend analysis and water balance computation. The simulated precipitation and temperature were corrected for bias before implementation in the HBV-light 3.0 conceptual rainfall-runoff model to predict the flow regime, in which the Genetic Algorithm and Powell (GAP) optimization approach and then calibration were used to obtain several parameter sets which finally reproduced the observed streamflow. The analysis showed increasing trends in annual as well as seasonal precipitation, except in summer, during the period 2001-2060 for both the A2 and A1B scenarios over the three basins under investigation. In these river basins, the model projected warmer days in every season of the entire period from 2001 to 2060 for both the A1B and A2 scenarios. These warming trends are higher in maximum than in minimum temperatures throughout the year, indicating an increasing trend in daily temperature range due to the recent global warming phenomenon. 
Furthermore, there are decreasing trends in summer discharge in the Langtang Khola (Langtang region), whereas discharge is increasing in the Modi Khola (Annapurna region) as well as the Dudh Koshi (Khumbu region) river basin. The change in flow regime is more pronounced during the later parts of the future decades than during the earlier parts in all basins. Annual water surpluses of 1419 mm, 177 mm and 49 mm are observed in the Annapurna, Langtang and Khumbu regions, respectively.
Keywords: temperature, precipitation, water discharge, water balance, global warming
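The bias-correction step mentioned above is often done by linear scaling against the historical overlap period. A minimal sketch under that assumption (this abstract does not specify which correction method was used, and the series below are illustrative):

```python
import numpy as np

def correct_temperature(obs, sim_hist, sim_fut):
    """Additive linear scaling: shift the simulated series by the mean bias
    diagnosed over the historical (overlap) period."""
    return sim_fut + (obs.mean() - sim_hist.mean())

def correct_precipitation(obs, sim_hist, sim_fut):
    """Multiplicative linear scaling: rescale by the ratio of historical means
    (keeps corrected precipitation non-negative)."""
    return sim_fut * (obs.mean() / sim_hist.mean())

obs = np.array([10.0, 12.0, 11.0, 13.0])     # observed station series
sim_hist = np.array([8.0, 9.0, 10.0, 9.0])   # GCM output, same period
sim_fut = np.array([9.0, 11.0, 10.0])        # GCM projection
print(correct_temperature(obs, sim_hist, sim_fut))
```

By construction, applying either correction to the historical simulation reproduces the observed mean, which is the property these scaling methods guarantee.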
Procedia PDF Downloads 345
1136 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission
Authors: Tingwei Shu, Dong Zhou, Chengjun Guo
Abstract:
Semantic communication is an emerging form of communication that realizes intelligent communication by extracting the semantic information of data at the source, transmitting it, and recovering the data at the receiving end. It can effectively solve the problem of data transmission in situations of large data volume, low SNR and restricted bandwidth. With the development of Deep Learning, semantic communication has further matured and is gradually being applied in the fields of the Internet of Things, Unmanned Aerial Vehicle cluster communication, remote sensing scenarios, etc. We propose an improved semantic communication system for situations where the data volume is huge and spectrum resources are limited during the transmission of remote sensing images. At the transmitter, we need to extract the semantic information of remote sensing images, but there are some problems. The traditional semantic communication system based on Convolutional Neural Networks cannot take into account both the global and local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder instead of the mainstream CNN-based one to extract the image semantic features. In this paper, we first perform pre-processing operations on remote sensing images to improve their resolution in order to obtain images with more semantic information. We use the wavelet transform to decompose the image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally perform the inverse wavelet transform to obtain the preprocessed image. We adopt the improved Vision-Transformer structure as the semantic coder to extract and transmit the semantic information of remote sensing images. 
The Vision-Transformer structure can better handle the huge data volume and extract better image semantic features, and it adopts the multi-layer self-attention mechanism to better capture the correlation between semantic features and reduce redundant features. Secondly, to improve the coding efficiency, we reduce the quadratic complexity of the self-attention mechanism itself to linear so as to improve the image data processing speed of the model. We conducted experimental simulations on the RSOD dataset and compared the designed system with a semantic communication system based on CNN and with image coding methods such as BPG and JPEG to verify that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
Keywords: semantic communication, transformer, wavelet transform, data processing
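Reducing self-attention from quadratic to linear complexity typically relies on a kernel feature map, so that the d x d summary K^T V is computed once instead of the n x n matrix Q K^T. A sketch of this idea with the common phi(x) = elu(x) + 1 feature map (the abstract does not specify which linearization the authors use):

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard scaled dot-product attention: builds the full n x n matrix."""
    A = np.exp(Q @ K.T / np.sqrt(Q.shape[-1]))
    return (A / A.sum(axis=1, keepdims=True)) @ V

def linear_attention(Q, K, V):
    """Kernelised attention with phi(x) = elu(x) + 1: cost grows linearly in
    the sequence length n because only d x d summaries are formed."""
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))   # elu(x) + 1
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                    # (d, d_v): aggregated keys and values
    Z = Qp @ Kp.sum(axis=0)          # per-query normaliser, shape (n,)
    return (Qp @ KV) / Z[:, None]

rng = np.random.default_rng(0)
n, d = 196, 32                       # e.g. 14 x 14 image patches
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
print(linear_attention(Q, K, V).shape)
```

The reordering (Qp @ (Kp.T @ V)) instead of ((Qp @ Kp.T) @ V) is the entire trick: both give identical outputs for the kernelised weights, but the first never materialises the n x n matrix.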
Procedia PDF Downloads 80
1135 Pill-Box Dispenser as a Strategy for Therapeutic Management: A Qualitative Evaluation
Authors: Bruno R. Mendes, Francisco J. Caldeira, Rita S. Luís
Abstract:
Population ageing is directly correlated with an increase in medicine consumption. Beyond the latter and the polymedicated profile of the elderly, there is a need for pharmacotherapeutic monitoring due to cognitive and physical impairment. In this sense, the tracking, organization, and administration of medicines become a daily challenge, and the pill-box dispenser system a solution. The pill-box dispenser (system) consists of a small compartmentalized container for unit-dose organization, which means a container able to correlate the patient’s prescribed dose regimen with the time schedule of intake. In many European countries, this system is part of the pharmacist’s role in clinical pharmacy. Despite this simple solution, therapy compliance is only possible if the patient adheres to the system, so it is important to establish a qualitative and quantitative analysis of the patient's perception of the benefits and risks of the pill-box dispenser, as well as to identify the ideal system. The analysis was conducted through an observational study, based on the application of a standardized questionnaire structured with a 5-level Likert numerical scale and previously validated on the population. The study was performed during a limited period of time on a randomized sample of 188 participants. The questionnaire consisted of 22 questions: 6 background measures and 16 specific measures. The standards for the final comparative analysis were obtained from the state of the art on the subject. The study carried out using the Likert scale yielded degrees of agreement and discordance between measures (sample vs. standard) of 56.25% and 43.75%, respectively. It was concluded that the pill-box dispenser has greater acceptance among a younger population, which was not the initial target of the system; however, this bodes well for high adherence in the future.
Additionally, it was noted that the cost associated with this service is not a limiting factor for its use. The pill-box dispenser system, as currently implemented, shows an important weakness regarding the quality and effectiveness of the medicines, which is not understood by the patient, revealing a significant lack of health literacy where medicines are concerned. The characteristics of an ideal system remain unchanged, meaning that the size, appearance, and availability of information in the pill-box continue to be indispensable elements for compliance with the system. The pill-box dispenser remains unsuitable with regard to container size and the type of treatment to which it applies. Despite that, it might become a future standard for clinical pharmacy, allowing a differentiation of the pharmacist's role, as well as a wider range of applications to other age groups and treatments.
Keywords: clinical pharmacy, medicines, patient safety, pill-box dispenser
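The sample-vs-standard comparison on a 5-level Likert scale can be sketched as follows. The agreement criterion (sample median on the same side of the neutral midpoint as the standard) and all data below are hypothetical illustrations, not the study's measures or results:

```python
import numpy as np

# Hypothetical: 188 respondents x 16 specific measures on a 5-point Likert
# scale; 'standard' holds a literature-derived expected response per measure.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(188, 16))   # values 1..5
standard = rng.integers(1, 6, size=16)

# A measure counts as agreement when the sample median falls on the same
# side of the neutral midpoint (3) as the standard value.
medians = np.median(responses, axis=0)
agree = np.sign(medians - 3) == np.sign(standard - 3)
print(f"agreement: {agree.mean():.2%}, discordance: {1 - agree.mean():.2%}")
```

The two printed percentages sum to 100%, mirroring the paired agreement/discordance figures reported in the abstract.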
Procedia PDF Downloads 199
1134 Efficacy of Ergonomics Ankle Support on Squatting Pushing Skills during the Second Stage of Labor
Authors: Yu-Ching Lin, Meei-Ling Gau, Ghi-Hwei Kao, Hung-Chang Lee
Abstract:
Objective: To compare the pushing experiences and birth outcomes of three different pushing positions during the second stage of labor: semi-recumbent, squatting, and squatting with the aid of ergonomically designed ankle supports. Methods: A randomized controlled trial was conducted at a regional teaching hospital in northern Taiwan. Data were collected from 168 primiparous women in their 38th to 42nd gestational week. None of the participants received epidural analgesia during labor, and all were free of pregnancy- and labor-related complications. Intervention: During labor, after full cervical dilation and once the fetal head had descended to at least the +1 station and turned to the occiput anterior position, the experimental group was asked to push in the squatting position while wearing the ergonomically designed ankle supports; comparison group A was asked to push in the squatting position without these supports; and comparison group B was asked to push in a standard semi-recumbent position. Measures: The participants completed a demographic and obstetrics datasheet, the Short Form McGill Pain Questionnaire (MPQ-SF), and the Labor Pushing Experience scale within 4 hours postpartum. Conclusion: In terms of delivery time, the duration from the start of pushing to crowning averaged 25.52 minutes less for the experimental group (squatting with ankle supports) than for comparison group B (semi-recumbent) (F = 6.02, p < .05). Furthermore, the duration from the start of pushing to infant birth averaged 25.21 minutes less for the experimental group than for comparison group B (F = 6.14, p < .05). Moreover, the experimental group had a lower average VAS pain score (5.05 ± 3.22) than comparison group B, and its average McGill pain score was lower than that of both comparison groups (F = 18.12, p < .001).
In summary, the participants who delivered from a squatting position with ankle supports had better labor pushing experiences than their peers in the comparison groups. Results: In comparison to both unsupported squatting and semi-recumbent pushing, squatting with the aid of ergonomically designed ankle supports reduced pushing times, ameliorated labor pain, and improved the pushing experience. Clinical application and suggestion: The squatting-with-ankle-support intervention introduced in the present study may significantly reduce tiredness and difficulty in maintaining balance, as well as increase pushing efficiency. Thus, this intervention may reduce the caring needs of women during the second stage of labor. It may be introduced in midwifery education programs and in clinical practice as a method to improve the care of women during the second stage of labor.
Keywords: second stage of labor, pushing, squatting with ankle supports, squatting
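The group comparisons reported above rest on one-way ANOVA F-tests. The sketch below runs such a test on simulated pushing durations for three groups of 56 women each (168 total); all numbers are illustrative, not the trial data:

```python
import numpy as np
from scipy.stats import f_oneway

# Simulated pushing durations (minutes) for the three study arms.
rng = np.random.default_rng(1)
squat_support  = rng.normal(30, 10, 56)   # squatting + ankle supports
squat_plain    = rng.normal(40, 10, 56)   # squatting, no supports
semi_recumbent = rng.normal(55, 10, 56)   # semi-recumbent

# One-way ANOVA across the three independent groups.
F, p = f_oneway(squat_support, squat_plain, semi_recumbent)
print(f"F = {F:.2f}, p = {p:.4g}")
```

A significant F statistic (p < .05) would then justify post-hoc pairwise comparisons, as in the mean-difference results the abstract reports.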
Procedia PDF Downloads 275
1133 The Mental Health of Indigenous People During the COVID-19 Pandemic: A Scoping Review
Authors: Suzanne L. Stewart, Sarah J. Ponton, Mikaela D. Gabriel, Roy Strebel, Xinyi Lu
Abstract:
Indigenous Peoples have faced unique barriers to accessing and receiving culturally safe and appropriate mental health care, while also facing daunting rates of mental health diagnoses and comorbidities. Indigenous researchers and clinicians have well established that the current mental health issues in Indigenous communities are a direct result of colonization, by way of intergenerational trauma throughout Canada’s colonial history. These mental health barriers and challenges have been exacerbated during the COVID-19 pandemic. Throughout the pandemic, access to mental health, cultural, ceremonial, and community services was severely impacted and restricted; however, it is these same cultural activities and community resources that are key to supporting Indigenous mental health from a traditional and community-based perspective. This research employed a unique combination of a thorough, analytical scoping review of the existing literature on Indigenous mental health during the COVID-19 pandemic, alongside narrative interviews, employing an oral storytelling tradition methodology, with key community informants who provide comprehensive cultural services to the Indigenous community of Toronto, as well as across Canada. These key informant interviews provided a wealth of insights into virtual transitions of Indigenous care and mental health support; the intersection of historical underfunding and current financial navigation of technology infrastructure; accessibility and connection with Indigenous youth in remote locations; as well as maintaining community involvement and traditional practices during the pandemic.
Both the scoping review and the narrative interviews were meticulously analyzed for overarching narrative themes, in order to explore the extent of the literature on Indigenous mental health and services during COVID-19; identify gaps in this literature; identify barriers and supports for the Indigenous community; and explore the intersection of community and cultural impacts on mental health. Themes of the scoping review included: Historical Context; Challenges in Culturally-Based Services; and Strengths in Culturally-Based Services. Meta-themes across the narrative interviews included: Virtual Transitions; Financial Support for Indigenous Services; Health Service Delivery & Wellbeing; and Culture & Community Connection. The results of this scoping review and the narrative interviews provide a wide-ranging contribution to the mental health literature, as well as recommendations for policy, service provision, and autonomy in Indigenous health and wellbeing, together with crucial insights into the present and enduring mental health needs of Indigenous Peoples throughout the COVID-19 pandemic.
Keywords: indigenous community services, indigenous mental health, indigenous scoping review, Indigenous Peoples and COVID-19
Procedia PDF Downloads 243
1132 Discrete PID and Discrete State Feedback Control of a Brushed DC Motor
Authors: I. Valdez, J. Perdomo, M. Colindres, N. Castro
Abstract:
Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles, and other areas. In such control systems, the control action is provided by digital controllers with different compensation algorithms, designed to meet specific requirements for a given application. Due to the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordable cost, and ease of tuning, and that ultimately improve the performance of the controlled actuators. There is a vast range of compensation algorithms that could be used, although in industry most controllers are based on a PID structure. This research article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated in its two most common architectures: the PID position form (1 DOF) and the PID speed form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: a discrete state observer for tracking non-measurable variables, and a linear quadratic method that allows a compromise between the theoretical optimal control and the realization that most closely matches it. The performance of the compared control systems is evaluated through simulations on the Simulink platform, in which each of the system's hardware components is modeled as accurately as possible. The criteria by which the control systems are compared are reference tracking and disturbance rejection.
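As a sketch of the 1-DOF position-form PID mentioned above, the following closes a discrete PID loop around a toy first-order plant. The plant model, gains, and sample time are illustrative assumptions, not the motor model or tuning used in the paper's Simulink study:

```python
import numpy as np

def simulate(kp=5.0, ki=50.0, kd=0.01, Ts=0.01, steps=2000):
    """Discrete position-form PID on a toy plant y[k+1] = a*y[k] + b*u[k]."""
    a, b = 0.98, 0.02            # assumed first-order discrete plant
    y, integ, e_prev = 0.0, 0.0, 0.0
    history = []
    for _ in range(steps):
        r = 1.0                  # unit step reference (target position)
        e = r - y
        integ += e * Ts          # rectangular integration of the error
        # position-form PID: proportional + integral + backward-difference derivative
        u = kp * e + ki * integ + kd * (e - e_prev) / Ts
        e_prev = e
        y = a * y + b * u        # advance the plant one sample
        history.append(y)
    return np.array(history)

y = simulate()
print(f"final position: {y[-1]:.3f}")   # settles near the reference 1.0
```

The integral term drives the steady-state error to zero, which is why the output settles on the reference; the 2-DOF (speed-form) variant and the state-feedback designs compared in the article differ in how this control law is structured, not in this basic loop.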
In this investigation, accurate tracking of the reference signal is considered particularly important for a position control system because of the frequency and suddenness with which the control signal can change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft due to sudden load changes can be modeled as a disturbance that must be rejected to ensure reference tracking. Results show that 2-DOF PID controllers exhibit high performance in terms of the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, given the advantage that the state-space approach provides for modelling MIMO systems, such controllers can be expected to offer ease of tuning for disturbance rejection, assuming that their designer is experienced. An in-depth, multi-dimensional analysis of preliminary research results indicates that the state feedback control method is more satisfactory, but the PID control method is easier to implement in most control applications.
Keywords: control, DC motor, discrete PID, discrete state feedback
Procedia PDF Downloads 268
1131 User Experience in Relation to Eye Tracking Behaviour in VR Gallery
Authors: Veslava Osinska, Adam Szalach, Dominik Piotrowski
Abstract:
Contemporary VR technologies allow users to explore virtual 3D spaces where they can work, socialize, learn, and play. Users' interaction with the GUI and the pictures displayed involves perceptual and cognitive processes that can be monitored thanks to neuroadaptive technologies. These modalities provide valuable information about users' intentions, situational interpretations, and emotional states, which can be used to adapt an application or interface accordingly. Virtual galleries outfitted with specialized assets were designed using the Unity engine within the BITSCOPE project, in the frame of the CHIST-ERA IV program. Users' interaction with gallery objects raises questions about their visual interest in artworks and styles; moreover, attention, curiosity, and other emotional states can be monitored and analyzed. Natural gaze behavior data and eye positions were recorded by the built-in eye-tracking module of the HTC Vive VR headset. The eye-gaze results are grouped according to various user behavior schemes, and the corresponding perceptual-cognitive styles are recognized. In parallel, usability tests and surveys were used to identify the basic features of a user-centered interface for virtual environments across most of the project timeline. A total of sixty participants were selected from distinct university faculties and secondary schools. Users' prior knowledge of art was evaluated during a pretest, and in this way their level of art sensitivity was characterized. Data were collected over two months. Each participant gave written informed consent before participation. In the data analysis, nonlinear algorithms such as multidimensional scaling and the more recent t-distributed Stochastic Neighbor Embedding (t-SNE) were used to reduce the high-dimensional data to a relatively low-dimensional subspace. In this way, digital art objects can be classified by the multimodal time characteristics of eye-tracking measures, revealing signatures describing selected artworks.
Current research establishes the optimal position on the aesthetic-utility scale, because the interfaces of most contemporary applications need to be designed in both functional and aesthetic terms. The study also concerns an analysis of visual experience for subsamples of visitors differentiated, e.g., in terms of frequency of museum visits and cultural interests. Eye-tracking data may also show how to better allocate artefacts and paintings or increase their visibility where possible.
Keywords: eye tracking, VR, UX, visual art, virtual gallery, visual communication
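The dimensionality-reduction step described above can be sketched as follows; the gaze-feature matrix is simulated, and the t-SNE settings are illustrative assumptions rather than the project's actual pipeline:

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical setup: each of the 60 participants is described by a vector
# of eye-tracking measures (fixation counts/durations per artwork, etc. --
# simulated here), and t-SNE embeds them in 2-D so that clusters of
# perceptual-cognitive styles can be inspected.
rng = np.random.default_rng(42)
gaze_features = rng.normal(size=(60, 40))   # 60 viewers x 40 measures

embedding = TSNE(n_components=2, perplexity=15,
                 init="pca", random_state=0).fit_transform(gaze_features)
print(embedding.shape)  # (60, 2)
```

The perplexity parameter (here 15) must stay well below the number of samples; with only 60 participants a small value like this is typical.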
Procedia PDF Downloads 45
1130 Hierarchical Zeolites as Potential Carriers of Curcumin
Authors: Ewelina Musielak, Agnieszka Feliczak-Guzik, Izabela Nowak
Abstract:
Based on the latest data, it is expected that substances of therapeutic interest will be as natural as possible; therefore, active substances with the highest possible efficacy and low toxicity are sought. Among natural substances with therapeutic effects, those of plant origin stand out. Curcumin, isolated from the plant Curcuma longa, has proven to be particularly important from a medical point of view. Due to its ability to regulate many important transcription factors, cytokines, and protein kinases, curcumin has found use as an anti-inflammatory, antioxidant, antiproliferative, antiangiogenic, and anticancer agent. The unfavorable properties of curcumin, such as low solubility, poor bioavailability, and rapid degradation under neutral or alkaline pH conditions, limit its clinical application. These problems can be solved by combining curcumin with suitable carriers such as hierarchical zeolites, a new class of materials that exhibits several advantages. Hierarchical zeolites used as drug carriers enable delayed release of the active ingredient and promote drug transport to the desired tissues and organs. In addition, hierarchical zeolites play an important role in regulating micronutrient levels in the body and have been used successfully in cancer diagnosis and therapy. To load curcumin into hierarchical zeolites synthesized from commercial FAU zeolite, solutions containing curcumin, the carrier, and acetone were prepared. The prepared mixtures were then stirred on a magnetic stirrer for 24 h at room temperature. The curcumin-filled hierarchical zeolites were collected on a glass funnel, washed three times with acetone and distilled water, and then air-dried until completely dry. In addition, the effect of adding piperine to a zeolite carrier containing a sufficient amount of curcumin was studied.
The resulting products were weighed, and the percentage of pure curcumin in the hierarchical zeolite was calculated. All the synthesized materials were characterized by several techniques: elemental analysis, transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FT-IR), N2 adsorption, X-ray diffraction (XRD), and thermogravimetric analysis (TGA). The aim of the presented study was to improve the biological activity of curcumin by loading it into hierarchical zeolites based on FAU zeolite. The results showed that the loading efficiency of curcumin into hierarchical zeolites based on commercial FAU-type zeolite is enhanced by modifying the zeolite carrier itself. The hierarchical zeolites proved to be very good and efficient carriers of plant-derived active ingredients such as curcumin.
Keywords: carriers of active substances, curcumin, hierarchical zeolites, incorporation
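A hedged sketch of the weighing-based loading calculation mentioned above; all masses are illustrative placeholders, not the study's measurements:

```python
# Hypothetical masses (mg) for a drug-carrier loading experiment.
curcumin_fed = 50.0       # curcumin added to the loading solution
carrier_mass = 200.0      # hierarchical zeolite carrier, before loading
loaded_product = 235.0    # dried product after washing

curcumin_loaded = loaded_product - carrier_mass            # mass gain = loaded drug
loading_pct = 100 * curcumin_loaded / loaded_product       # % of product that is curcumin
efficiency_pct = 100 * curcumin_loaded / curcumin_fed      # % of fed curcumin retained
print(f"loading: {loading_pct:.1f}%, efficiency: {efficiency_pct:.1f}%")
# loading: 14.9%, efficiency: 70.0%
```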
Procedia PDF Downloads 99
1129 Holographic Art as an Approach to Enhance Visual Communication in Egyptian Community: Experimental Study
Authors: Diaa Ahmed Mohamed Ahmedien
Abstract:
Nowadays, it cannot be denied that the most important trends in interactive art have appeared as a result of significant scientific breakthroughs in the modern sciences, and holographic art is no exception: it is considered one of the major contemporary interactive art trends in the visual arts. The holographic technique emerged from applications of modern physics in the late 1940s, when Denis Gabor used it to improve the quality of electron microscope images; it later reached Margaret Benyon's art exhibitions and then passed through many procedures, over more than 70 years, that enhanced its technical and visual quality and its artistic applications. As a modest extension of these great efforts, this research aims to enroll a sample of ordinary people from the Egyptian community in a holographic recording program to record their cherished objects or antiques, and thereby examine their ability to interact with modern techniques in the visual communication arts. The research thus tries to answer three main questions: 'Can we use analog holographic techniques to unleash new theoretical and practical knowledge in interactive arts for the public in the Egyptian community?'; 'To what extent can holographic art become familiar to the public and enable them to produce interactive artistic samples?'; and 'Is it possible to build a holographic interactive program for ordinary people that leads them to a better understanding of visual communication and an awareness of interactive art trends?'
In its first part, this research relied on experimental methods: sessions were conducted in the laser lab at Cairo University, using an Nd:YAG laser (532 nm) and a holographic optical layout, with selected samples of Egyptian participants who were asked to record a cherished object after learning the recording methods. In its second part, discussion panels were held to discuss the results and how participants felt about their holographic artistic products, through surveys, questionnaires, note-taking, and critiques of the holographic artworks. The practical experiments and final discussions indicate that this experimental research enabled most participants to pass through a paradigm shift in their visual and conceptual experiences towards more interaction with contemporary visual art trends. The study thus emphasizes the role of a mature relationship between art, science, and technology in spreading interactive arts in our community through the latest scientific and artistic developments around the world, particularly among those who have never been enrolled in practical art programs before.
Keywords: Egyptian community, holographic art, laser art, visual art
Procedia PDF Downloads 480
1128 Polyvinyl Alcohol Incorporated with Hibiscus Extract Microcapsules as Combined Active and Intelligent Composite Film for Meat Preservation: Antimicrobial, Antioxidant, and Physicochemical Investigations
Authors: Ahmed F. Ghanem, Marwa I. Wahba, Asmaa N. El-Dein, Mohamed A. EL-Raey, Ghada E. A. Awad
Abstract:
Numerous attempts are being made to formulate suitable packaging materials for meat products. However, to the best of our knowledge, the incorporation of free hibiscus extract or its microcapsules into a pure polyvinyl alcohol (PVA) matrix as a packaging material for meat is seldom reported. Therefore, this study aims at protecting the aqueous crude extract of hibiscus flowers using the spray-drying encapsulation technique. Results of Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), and particle size analysis confirmed, respectively, the successful formation of the assembled capsules via strong interactions, their spherical rough morphology, and a particle size of ~235 nm. The obtained microcapsules also enjoy higher thermal stability than the free extract. The spray-dried particles were then incorporated into the casting solution of the pure PVA film at a concentration of 10 wt.%. The resulting free-standing composite films were investigated, and compared to the neat matrix, using several characterization techniques, such as FTIR, SEM, thermogravimetric analysis (TGA), mechanical testing, contact angle, water vapor permeability, and oxygen transmission. The results demonstrated variations in the physicochemical properties of the PVA film after the inclusion of the free extract and of the extract microcapsules. Moreover, biological studies emphasized the biocidal potential of the hybrid films against the microorganisms contaminating meat. Specifically, the microcapsules imparted not only antimicrobial but also antioxidant activities to the PVA matrix. Application of the prepared films to real meat samples showed low bacterial growth, with only a slight increase in pH over a storage time of up to 10 days at 4 °C, as further evidence of meat safety.
Moreover, the colors of the films did not change significantly until after 21 days, indicating the spoilage of the meat samples. No doubt, the dual functionality of the prepared composite films paves the way towards combined active and smart food packaging applications, which would play a vital role in food hygiene, including quality control and assurance.
Keywords: PVA, hibiscus, extraction, encapsulation, active packaging, smart and intelligent packaging, meat spoilage
Procedia PDF Downloads 91
1127 Vapour Liquid Equilibrium Measurement of CO₂ Absorption in Aqueous 2-Aminoethylpiperazine (AEP)
Authors: Anirban Dey, Sukanta Kumar Dash, Bishnupada Mandal
Abstract:
Carbon dioxide (CO2) is a major greenhouse gas responsible for global warming, and fossil fuel power plants are its main emitting sources. Therefore, the capture of CO2 is essential to keep emission levels within the standards. Carbon capture and storage (CCS) is considered an important option for the stabilization of atmospheric greenhouse gases and the minimization of global warming effects. There are three approaches to CCS: pre-combustion capture, where carbon is removed from the fuel prior to combustion; oxy-fuel combustion, where coal is combusted with oxygen instead of air; and post-combustion capture, where the fossil fuel is combusted to produce energy and CO2 is removed from the flue gases left after the combustion process. Post-combustion technology offers some advantages, as existing combustion technologies can still be used without major changes. A number of separation processes could be utilized as part of post-combustion capture technology, including (a) physical absorption, (b) chemical absorption, (c) membrane separation, and (d) adsorption. Chemical absorption is one of the most extensively used technologies for large-scale CO2 capture systems. The industrially important solvents used are primary amines such as monoethanolamine (MEA) and diglycolamine (DGA), secondary amines such as diethanolamine (DEA) and diisopropanolamine (DIPA), and tertiary amines such as methyldiethanolamine (MDEA) and triethanolamine (TEA). Primary and secondary amines react quickly and directly with CO2 to form stable carbamates, while tertiary amines do not react directly with CO2: in aqueous solution, they catalyze the hydrolysis of CO2 to form a bicarbonate ion and a protonated amine. Concentrated piperazine (PZ) has been proposed as a better solvent, as well as an activator, for CO2 capture from flue gas, with a 10% energy benefit compared to conventional amines such as MEA.
However, the application of concentrated PZ is limited by its low solubility in water at low temperature and lean CO2 loading. Following the performance of PZ, its derivative 2-aminoethylpiperazine (AEP), a cyclic amine, can be explored as an activator for the absorption of CO2. Vapour-liquid equilibrium (VLE) in CO2 capture systems is an important factor in the design of separation equipment and gas-treating processes. For proper thermodynamic modeling, accurate equilibrium data for the solvent system over a wide range of temperatures, pressures, and compositions are essential. The present work focuses on the determination of VLE data for the (AEP + H2O) system at 40 °C over various composition ranges.
Keywords: absorption, aminoethyl piperazine, carbon dioxide, vapour liquid equilibrium
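In a static VLE apparatus, the CO2 absorbed by the amine solution is commonly inferred from the pressure drop in a gas reservoir of known volume via the ideal-gas law. The sketch below illustrates that loading calculation with assumed numbers, not measurements from this work:

```python
# All quantities below are illustrative assumptions.
R = 8.314          # J/(mol*K), gas constant
T = 313.15         # K, i.e. the 40 degC isotherm reported in the abstract
V_gas = 0.5e-3     # m^3, gas reservoir volume (assumed)
p_initial = 500e3  # Pa, reservoir pressure before absorption (assumed)
p_equilib = 380e3  # Pa, pressure once equilibrium is reached (assumed)

# Ideal-gas estimate of moles of CO2 transferred into the liquid.
n_co2 = (p_initial - p_equilib) * V_gas / (R * T)
n_amine = 0.05                     # mol AEP charged to the cell (assumed)
alpha = n_co2 / n_amine            # CO2 loading, mol CO2 per mol amine
print(f"alpha = {alpha:.3f} mol CO2 / mol amine")   # alpha = 0.461 ...
```

Repeating such a measurement across CO2 partial pressures and amine compositions yields the equilibrium (loading vs. pressure) isotherm that thermodynamic models are fitted to.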
Procedia PDF Downloads 270
1126 Bioanalytical Method Development and Validation of Aminophylline in Rat Plasma Using Reverse Phase High Performance Liquid Chromatography: An Application to Preclinical Pharmacokinetics
Authors: S. G. Vasantharaju, Viswanath Guptha, Raghavendra Shetty
Abstract:
Introduction: Aminophylline is a methylxanthine derivative belonging to the bronchodilator class. A literature survey reveals that the reported methods rely on solid-phase extraction and liquid-liquid extraction, which are highly variable, time-consuming, costly, and laborious. The present work aims to develop a simple, highly sensitive, precise, and accurate high-performance liquid chromatography method for the quantification of aminophylline in rat plasma samples that can be utilized for preclinical studies. Method: A reverse-phase high-performance liquid chromatography method. Results: Selectivity: Aminophylline and the internal standard were well separated from the co-eluted components, and there was no interference from endogenous material at the retention times of the analyte and the internal standard. The LLOQ measurable with acceptable accuracy and precision for the analyte was 0.5 µg/mL. Linearity: The developed and validated method is linear over the range of 0.5-40.0 µg/mL. The coefficient of determination was greater than 0.9967, indicating the linearity of the method. Accuracy and precision: The accuracy and precision values for intra- and inter-day studies at low, medium, and high quality-control concentrations of aminophylline in plasma were within acceptable limits. Extraction recovery: The method produced consistent extraction recovery at all three QC levels. The mean extraction recovery of aminophylline was 93.57 ± 1.28%, while that of the internal standard was 90.70 ± 1.30%. Stability: The results show that aminophylline is stable in rat plasma under the studied stability conditions and that it is also stable for about 30 days when stored at -80˚C. Pharmacokinetic studies: The method was successfully applied to the quantitative estimation of aminophylline in rat plasma following oral administration to rats. Discussion: Preclinical studies require a rapid and sensitive method for estimating the drug concentration in rat plasma.
The method described in our article includes a simple protein-precipitation extraction technique with ultraviolet detection for quantification. The present method is simple and robust for fast, high-throughput sample analysis, with low analysis cost, for determining aminophylline in biological samples. In this proposed method, no interfering peaks were observed at the elution times of aminophylline and the internal standard. The method also had sufficient selectivity, specificity, precision, and accuracy over the concentration range of 0.5-40.0 µg/mL. An isocratic separation technique was used, underlining the simplicity of the presented method.
Keywords: aminophylline, preclinical pharmacokinetics, rat plasma, RP-HPLC
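The linearity assessment described above (calibration over 0.5-40.0 µg/mL with back-calculated accuracy) can be sketched as follows, using simulated peak-area ratios rather than the study's raw chromatographic data:

```python
import numpy as np

# Hypothetical calibration: analyte/internal-standard peak-area ratios
# vs nominal concentration across the validated range.
conc = np.array([0.5, 1, 2, 5, 10, 20, 40])          # ug/mL, nominal
rng = np.random.default_rng(7)
ratio = 0.12 * conc + 0.01 + rng.normal(0, 0.01, conc.size)  # simulated response

# Least-squares calibration line and coefficient of determination.
slope, intercept = np.polyfit(conc, ratio, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((ratio - pred) ** 2) / np.sum((ratio - ratio.mean()) ** 2)

# Back-calculated concentrations and accuracy (% of nominal), as used
# when judging calibration standards against acceptance limits.
back_calc = (ratio - intercept) / slope
accuracy = 100 * back_calc / conc
print(f"r^2 = {r2:.4f}")
```

An r-squared above ~0.99, together with back-calculated accuracies near 100% of nominal, is the usual evidence of linearity reported for such validated ranges.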
Procedia PDF Downloads 223
1125 Electronic Structure Studies of Mn Doped La₀.₈Bi₀.₂FeO₃ Multiferroic Thin Film Using Near-Edge X-Ray Absorption Fine Structure
Authors: Ghazala Anjum, Farooq Hussain Bhat, Ravi Kumar
Abstract:
Multiferroic materials are vital for new applications and memory devices, not only because of the presence of multiple types of domains but also because of the cross-correlation between the coexisting forms of magnetic and electrical order. In spite of wide studies on bulk multiferroic ceramics, their realization in thin-film form is still limited by some crucial problems. During the last few years, special attention has been devoted to the synthesis of thin films such as BiFeO₃, as they allow direct integration of the material into device technology. Therefore, as part of the exploration of new multiferroic thin films, a La₀.₈Bi₀.₂Fe₀.₇Mn₀.₃O₃ (LBFMO3) thin film was prepared and characterized on a LaAlO₃ (LAO) substrate with LaNiO₃ (LNO) as the buffer layer. The fact that all electrical and magnetic properties are closely related to the electronic structure makes it essential to study the electronic structure of the system under study; without this knowledge, one can never be sure about the mechanisms responsible for the different properties exhibited by the thin film. A literature review reveals that studies on changes in the atomic and hybridization states in multiferroic samples are still insufficient, except for a few. The technique of X-ray absorption spectroscopy (XAS) has made great strides towards the goal of providing such information, yielding a unique signature for a given material. In this context, an electronic structure study of the elements present in the LBFMO₃ multiferroic thin film on an LAO substrate with an LNO buffer layer, synthesized by the RF sputtering technique, is timely. We report electronic structure studies of a well-characterized LBFMO3 multiferroic thin film on an LAO substrate with LNO as the buffer layer using near-edge X-ray absorption fine structure (NEXAFS). The present exploration was performed to determine the valence state and crystal-field symmetry of the ions present in the system.
NEXAFS data for the O K-edge spectra reveal a slight shift in peak position along with a growth in the intensity of the low-energy feature. Studies of the Mn L₃,₂-edge spectra indicate the presence of an Mn³⁺/Mn⁴⁺ network, apart from a very small contribution from Mn²⁺ ions in the system, which substantiates the magnetic properties exhibited by the thin film. The Fe L₃,₂-edge spectra, along with the spectra of reference compounds, reveal that the Fe ions are present in the +3 state. The electronic structure and valence states are found to be in accordance with the magnetic properties exhibited by the LBFMO/LNO/LAO thin film.
Keywords: magnetic, multiferroic, NEXAFS, x-ray absorption fine structure, XMCD, x-ray magnetic circular dichroism
Procedia PDF Downloads 161
1124 Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth
Authors: Ella Tyuryumina, Alexey Neznanov
Abstract:
This study is an attempt to obtain reliable data on the natural history of breast cancer growth. We analyze the opportunities for using classical mathematical models (the exponential, logistic, Gompertz, and von Bertalanffy tumor growth models) to describe the growth of the primary tumor and the secondary distant metastases of human breast cancer. The research aim is to improve the accuracy of predicting breast cancer progression using an original mathematical model, referred to as CoMPaS, and the corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and the secondary distant metastases; 2) developing an adequate and precise CoMPaS that reflects the relations between the primary tumor and the secondary distant metastases; 3) analyzing the scope of application of CoMPaS; 4) implementing the model as a software tool. The foundation of CoMPaS is the exponential tumor growth model, described by deterministic nonlinear and linear equations. CoMPaS corresponds to the TNM classification. It allows calculation of the different growth periods of the primary tumor and the secondary distant metastases: 1) the ‘non-visible period’ of the primary tumor; 2) the ‘non-visible period’ of the secondary distant metastases; 3) the ‘visible period’ of the secondary distant metastases. CoMPaS is validated on clinical data of 10-year and 15-year survival depending on the tumor stage and the diameter of the primary tumor. The new predictive tool: 1) is a solid foundation for future studies of breast cancer growth models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes a forecast using only current patient data, whereas the others rely on additional statistical data.
The CoMPaS model and predictive software: a) fit the clinical trial data; b) detect the different growth periods of the primary tumor and the secondary distant metastases; c) forecast the period in which the secondary distant metastases appear; d) have a higher average prediction accuracy than the other tools; e) can improve forecasts of breast cancer survival and facilitate the optimization of diagnostic tests. CoMPaS calculates: the number of doublings for the ‘non-visible’ and ‘visible’ growth periods of the secondary distant metastases, and the tumor volume doubling time (days) for the ‘non-visible’ and ‘visible’ growth periods of the secondary distant metastases. CoMPaS enables, for the first time, prediction of the ‘whole natural history’ of the growth of the primary tumor and the secondary distant metastases at each stage (pT1, pT2, pT3, pT4), relying only on the size of the primary tumor. Summarizing: a) CoMPaS correctly describes the primary tumor growth of stages IA, IIA, IIB, IIIB (T1-4N0M0) without metastases in the lymph nodes (N0); b) it facilitates understanding of the appearance period and inception of the secondary distant metastases.
Keywords: breast cancer, exponential growth model, mathematical model, metastases in lymph nodes, primary tumor, survival
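The exponential core of such a model relates tumor diameters, the number of volume doublings, and the doubling time in a way that can be sketched briefly. The snippet below is an illustrative reconstruction, not the authors' implementation; the starting diameter and doubling time are assumed example values.

```python
import math

def doublings_between(d0_mm: float, d1_mm: float) -> float:
    """Number of volume doublings to grow from diameter d0 to d1.

    For a roughly spherical tumor, volume scales with the cube of
    the diameter, so n = log2(V1 / V0) = 3 * log2(d1 / d0).
    """
    return 3.0 * math.log2(d1_mm / d0_mm)

def growth_period_days(d0_mm: float, d1_mm: float,
                       doubling_time_days: float) -> float:
    """Duration of exponential growth between two diameters."""
    return doublings_between(d0_mm, d1_mm) * doubling_time_days

# Illustrative values only: a single cell (~0.01 mm) growing to a
# 10 mm primary tumor, with an assumed volume doubling time of 100 days.
n_doublings = doublings_between(0.01, 10.0)          # about 30 doublings
non_visible = growth_period_days(0.01, 10.0, 100.0)  # about 2990 days
```

Running the same relation between the detection thresholds of the primary tumor and the secondary distant metastases is what lets a model of this family separate 'non-visible' from 'visible' growth periods.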
Procedia PDF Downloads 342
1123 Educational Debriefing in Prehospital Medicine: A Qualitative Study Exploring Educational Debrief Facilitation and the Effects of Debriefing
Authors: Maria Ahmad, Michael Page, Danë Goodsman
Abstract:
‘Educational’ debriefing – a construct distinct from clinical debriefing – is used following simulated scenarios and is central to learning and development in fields ranging from aviation to emergency medicine. However, little research into educational debriefing in prehospital medicine exists. This qualitative study explored the facilitation and effects of prehospital educational debriefing and identified obstacles to debriefing, using London’s Air Ambulance Pre-Hospital Care Course (PHCC) as a model. Method: Ethnographic observations of moulages and debriefs were conducted over two consecutive days of the PHCC in October 2019. Detailed contemporaneous field notes were made and analysed thematically. Subsequently, seven one-to-one, semi-structured interviews were conducted with four PHCC debrief facilitators and three course participants to explore their experiences of prehospital educational debriefing. Interview data were manually transcribed and analysed thematically. Results: Four overarching themes were identified: the approach to the facilitation of debriefs, the effects of debriefing, facilitator development, and obstacles to debriefing. The unpredictable debriefing environment was seen as both hindering and, paradoxically, benefitting educational debriefing. Despite using varied debriefing structures, facilitators emphasised similar key debriefing components, including exploring participants’ reasoning and sharing experiences to improve learning and prevent future errors. Debriefing was associated with three principal effects: releasing emotion; learning and improving, particularly compound participant learning as participants progressed through scenarios; and the application of learning to clinical practice. Facilitator training and feedback were central to facilitator learning and development. Several obstacles to debriefing were identified, including mismatch of participant and facilitator agendas, performance pressure, and time.
Interestingly, when used appropriately in the educational environment, these obstacles may paradoxically enhance learning. Conclusions: Educational debriefing in prehospital medicine is complex. It requires the establishment of a safe learning environment, an understanding of participant agendas, and facilitator experience to maximise participant learning. Aspects unique to prehospital educational debriefing were identified, notably the unpredictable debriefing environment, interdisciplinary working, and the paradoxical benefit of educational obstacles for learning. This research also highlights aspects of educational debriefing not extensively detailed in the literature, such as compound participant learning, the display of ‘professional honesty’ by facilitators, and facilitator learning, which require further exploration. Future research should also explore educational debriefing in other prehospital services.
Keywords: debriefing, prehospital medicine, prehospital medical education, pre-hospital care course
Procedia PDF Downloads 219
1122 Mechanical Behavior of Sandwiches with Various Glass Fiber/Epoxy Skins under Bending Load
Authors: Emre Kara, Metehan Demir, Şura Karakuzu, Kadir Koç, Ahmet F. Geylan, Halil Aykul
Abstract:
While polymeric-foam-cored sandwiches have been produced for many years, there has recently been a growing and notable interest in the use of sandwiches consisting of an aluminum foam core, because of some of their distinctive mechanical properties, such as high bending stiffness and high load-carrying and energy absorption capacities. These properties make them very useful in the transportation industry (automotive, aerospace, and shipbuilding), where the "lightweight design" philosophy and the safety of vehicles are very important aspects. Therefore, in this study, sandwich panels with an aluminum alloy foam core and various types and thicknesses of glass fiber reinforced polymer (GFRP) skins, produced via the Vacuum Assisted Resin Transfer Molding (VARTM) technique, were obtained using a commercial two-component toughened epoxy-based adhesive. The aim of this contribution was the analysis of the bending response of sandwiches with various glass fiber reinforced polymer skins. Three-point bending tests were performed on the sandwich panels at different values of support span distance using a universal static testing machine, in order to clarify the effects of the type and thickness of the GFRP skins in terms of peak load, energy efficiency, and absorbed energy. The GFRP skins were easily bonded to the aluminum alloy foam core under a press machine at very low pressure. The main results of the bending tests are: force-displacement curves, peak force values, absorbed energy, collapse mechanisms, and the influence of the support span length and the GFRP skins. The results of the experimental investigation showed that the sandwich with the skin made of thicker S-Glass fabric failed at the highest load and absorbed the highest amount of energy compared to the other sandwich specimens. Increasing the support span distance decreased the peak force and absorbed energy values for each type of panel.
The common collapse mechanism of the panels was core shear failure, which was not affected by the skin material or the support span distance.
Keywords: aluminum foam, collapse mechanisms, light-weight structures, transport application
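Since the absorbed energy in a three-point bending test is the area under the recorded force-displacement curve, it can be recovered with a simple trapezoidal integration. The following is a generic sketch of that calculation, not the authors' code, and the example readings are invented.

```python
def absorbed_energy_joules(displacement_mm, force_n):
    """Absorbed energy as the area under a force-displacement curve,
    computed with the trapezoidal rule.

    With displacement in mm and force in N, the raw area is in
    N*mm (mJ), so divide by 1000 to obtain joules.
    """
    area_mj = 0.0
    for i in range(1, len(displacement_mm)):
        dx = displacement_mm[i] - displacement_mm[i - 1]
        area_mj += 0.5 * (force_n[i] + force_n[i - 1]) * dx
    return area_mj / 1000.0

# Made-up readings: a panel loaded to a 2 kN peak over 12 mm of travel.
disp = [0.0, 3.0, 6.0, 9.0, 12.0]
load = [0.0, 900.0, 2000.0, 1500.0, 400.0]
energy = absorbed_energy_joules(disp, load)  # joules absorbed up to 12 mm
```

Comparing this area up to the peak force with the total area is one common way to express the energy efficiency of a panel.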
Procedia PDF Downloads 399
1121 Prosodic Realization of Focus in the Public Speeches Delivered by Spanish Learners of English and English Native Speakers
Authors: Raúl Jiménez Vilches
Abstract:
Native (L1) speakers can prosodically mark one part of an utterance and make it more relevant as opposed to the rest of the constituents. Conversely, non-native (L2) speakers encounter problems when it comes to marking information structure prosodically in English. In fact, the L2 speaker’s choice of prosodic realization of focus is often unclear and obscures the intended pragmatic meaning and the communicative value in general. This paper reports some of the findings obtained in an L2 prosodic training course for Spanish learners of English within the context of public speaking. More specifically, it analyses the effects of the course experiment on the non-native production of the tonic syllable to mark focus and compares it with the public speeches delivered by native English speakers. The whole experimental training was executed over eighteen input sessions (1,440 minutes of total time), all of which took place in the classroom. In particular, the first part of the course provided explicit instruction on the recognition and production of the tonic syllable and on how the tonic syllable is used to express focus. The non-native and native oral presentations were acoustically analyzed using the Praat software for speech analysis (7,356 words in total). The investigation adopted mixed and embedded methodologies. Quantitative information is needed when measuring the phonetic realization of focus acoustically. Qualitative data such as questionnaires, interviews, and observations were also used to interpret the quantitative data. The embedded experiment design was implemented through the analysis of the public speeches before and after the intervention. Results indicate that, even after the L2 prosodic training course, Spanish learners of English still show some major inconsistencies in marking focus effectively.
Although there was occasional improvement regarding the choice of location and word classes, Spanish learners were, in general, far from achieving results similar to those obtained by the English native speakers in the two types of focus. The prosodic realization of focus seems to be one of the hardest areas of the English prosodic system for Spanish learners to master. A funded research project is in the process of moving the present classroom-based experiment to an online environment (a mobile app) and determining whether focus can be used more effectively through CAPT (Computer-Assisted Pronunciation Training) tools.
Keywords: focus, prosody, public speaking, Spanish learners of English
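The acoustic side of such an analysis rests on pitch (F0) and intensity measurements of the tonic syllable. Praat implements far more robust pitch trackers, but the principle behind a frame-level F0 estimate can be sketched with a naive autocorrelation peak search; all parameter values here are illustrative assumptions.

```python
import math

def estimate_f0(samples, sample_rate, fmin=75.0, fmax=500.0):
    """Toy autocorrelation-based F0 estimate (Hz) for one voiced frame.

    Searches for the lag (between the periods implied by fmax and
    fmin) that maximizes the self-similarity of the waveform.
    """
    lag_min = int(sample_rate / fmax)
    lag_max = min(int(sample_rate / fmin), len(samples) - 1)
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, lag_max + 1):
        corr = sum(samples[i] * samples[i - lag]
                   for i in range(lag, len(samples)))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

# A synthetic 200 Hz vowel-like tone sampled at 8 kHz.
frame = [math.sin(2 * math.pi * 200 * t / 8000) for t in range(800)]
f0 = estimate_f0(frame, 8000)  # close to 200 Hz
```

A study of tonic-syllable placement would combine such F0 tracks with intensity and duration measures over syllable-sized windows, which is exactly what Praat scripts automate.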
Procedia PDF Downloads 101
1120 A Bottleneck-Aware Power Management Scheme in Heterogeneous Processors for Web Apps
Authors: Inyoung Park, Youngjoo Woo, Euiseong Seo
Abstract:
With the advent of WebGL, Web apps can now provide high-quality graphics by utilizing the underlying graphics processing units (GPUs). Although Web apps are becoming common and popular, the current power management schemes, which were devised for conventional native applications, are suboptimal for Web apps because of the additional layer, the Web browser, between the OS and the application. The Web browser, running on a CPU, issues GL commands, which render the images displayed by the currently running Web app, to the GPU, and the GPU processes them. The size and number of issued GL commands determine the processing load of the GPU. While the GPU is processing the GL commands, the CPU simultaneously executes the other compute-intensive threads. The actual user experience is determined by either CPU processing or GPU processing, depending on which of the two is the more demanded resource. For example, when the GPU work queue is saturated by outstanding commands, lowering the performance level of the CPU does not affect the user experience, because it is already degraded by the delayed execution of GPU commands. Consequently, it is desirable to lower the CPU or GPU performance level to save energy when the other resource is saturated and becomes a bottleneck in the execution flow. Based on this observation, we propose a power management scheme specialized for the Web app runtime environment. This approach raises two technical challenges: identifying the bottleneck resource and determining the appropriate performance level for the unsaturated resource. The proposed power management scheme uses the CPU utilization level of the Window Manager to tell which resource, if any, is the bottleneck. The Window Manager draws the final screen using the processed results delivered from the GPU. Thus, the Window Manager is on the critical path that determines the quality of the user experience and is executed purely by the CPU.
The proposed scheme uses a weighted average of the Window Manager utilization to prevent excessive sensitivity and fluctuation. We classified Web apps into three categories using analysis results that measure frames-per-second (FPS) changes under diverse CPU/GPU clock combinations. The results showed that the capability of the CPU decides the user experience when the Window Manager utilization is above 90%; consequently, the proposed scheme decreases the performance level of the GPU by one step. On the contrary, when its utilization is less than 60%, the bottleneck usually lies in the GPU, and it is desirable to decrease the performance of the CPU. Even for the processing unit that is not on the critical path, an excessive performance drop can occur and may adversely affect the user experience. Therefore, our scheme lowers the frequency gradually until it finds an appropriate level, periodically checking the CPU utilization. The proposed scheme reduced energy consumption by 10.34% on average in comparison to the conventional Linux kernel, and it worsened FPS by only 1.07% on average.
Keywords: interactive applications, power management, QoS, Web apps, WebGL
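The decision rule described above — smooth the Window Manager's utilization, then step down whichever unit is off the critical path — could look roughly like the following sketch. The smoothing weight is an assumed value, not one reported by the authors; only the 90%/60% thresholds come from the abstract.

```python
ALPHA = 0.3  # weight of the newest sample in the moving average (assumed)

class BottleneckGovernor:
    """Pick which processing unit, if any, to slow down, based on the
    Window Manager's CPU utilization."""

    def __init__(self):
        self.smoothed = 0.0  # weighted average of WM utilization (%)

    def update(self, wm_util_percent: float) -> str:
        # The weighted average damps per-frame fluctuation.
        self.smoothed = (ALPHA * wm_util_percent
                         + (1.0 - ALPHA) * self.smoothed)
        if self.smoothed > 90.0:
            # Window Manager (pure CPU work) is saturated: the CPU is
            # the bottleneck, so the GPU frequency can step down.
            return "lower_gpu_one_step"
        if self.smoothed < 60.0:
            # Window Manager mostly waits on rendering results: the
            # GPU is the bottleneck, so the CPU can step down.
            return "lower_cpu_one_step"
        return "keep_levels"
```

Stepping down one level per decision, rather than jumping straight to a low frequency, is what lets the scheme back off before the non-critical unit itself starts to hurt the user experience.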
Procedia PDF Downloads 193
1119 Logistics and Supply Chain Management Using Smart Contracts on Blockchain
Authors: Armen Grigoryan, Milena Arakelyan
Abstract:
The idea of smart logistics is still quite a complicated one. It can be used to market products to a large number of customers or to acquire raw materials of the highest quality at the lowest cost in geographically dispersed areas. The use of smart contracts in logistics and supply chain management has the potential to revolutionize the way goods are tracked, transported, and managed. Smart contracts are simply computer programs, written in one of the blockchain programming languages (Solidity, Rust, Vyper), that are capable of self-execution once predetermined conditions are met. They can be used to automate and streamline many of the traditional manual processes currently used in logistics and supply chain management, including the tracking and movement of goods, the management of inventory, and the facilitation of payments and settlements between different parties in the supply chain. Logistics, which is concerned with transporting products between parties, is currently a core area for companies. Still, the problem of this sector is that its scale may lead to delays and defaults in the delivery of goods, as well as other issues. Moreover, large distributors require a large number of workers to meet all the needs of their stores. All this may contribute to long delays in order processing and increases the likelihood of losing orders. In an attempt to address this problem, companies have automated their procedures, contributing to a significant increase in the number of businesses and distributors in the logistics sector. Hence, blockchain technology and smart-contract-based legal agreements seem to be suitable concepts for redesigning and optimizing collaborative business processes and supply chains. The main purpose of this paper is to examine the scope of blockchain technology and smart contracts in the field of logistics and supply chain management.
This study discusses the research question of how, and to what extent, smart contracts and blockchain technology can facilitate and improve the implementation of collaborative business structures for sustainable entrepreneurial activities in smart supply chains. The intention is to provide a comprehensive overview of the existing research on the use of smart contracts in logistics and supply chain management and to identify any gaps or limitations in the current knowledge on this topic. This review aims to provide a summary and evaluation of the key findings and themes that emerge from the research, as well as to suggest potential directions for future research on the use of smart contracts in logistics and supply chain management.
Keywords: smart contracts, smart logistics, smart supply chain management, blockchain and smart contracts in logistics, smart contracts for controlling supply chain management
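As an illustration of the self-execution idea, the escrow logic that a supply-chain smart contract might encode can be sketched in plain Python (an on-chain version would be written in Solidity or a similar language). All names and rules below are illustrative assumptions, not a deployed contract.

```python
class ShipmentContract:
    """Toy state machine mimicking a supply-chain escrow contract:
    the buyer's payment is locked at funding time and released to the
    seller automatically once delivery is confirmed."""

    def __init__(self, buyer: str, seller: str, price: int):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.state = "CREATED"
        self.escrow = 0

    def deposit(self, amount: int) -> None:
        # Precondition checks play the role of a contract's require().
        assert self.state == "CREATED" and amount == self.price
        self.escrow = amount
        self.state = "FUNDED"

    def confirm_shipment(self) -> None:
        assert self.state == "FUNDED"
        self.state = "IN_TRANSIT"

    def confirm_delivery(self) -> dict:
        # Self-executing settlement: no intermediary moves the funds.
        assert self.state == "IN_TRANSIT"
        self.state = "DELIVERED"
        payout, self.escrow = self.escrow, 0
        return {"to": self.seller, "amount": payout}
```

On a blockchain, every state transition would be a transaction visible to all parties, which is what gives the supply chain its shared, tamper-evident audit trail.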
Procedia PDF Downloads 97
1118 Comparative Quantitative Study on Learning Outcomes of Major Study Groups of an Information and Communication Technology Bachelor Educational Program
Authors: Kari Björn, Mikael Soini
Abstract:
Reforms of the higher education system, in particular the 2014 reform of the Finnish Universities of Applied Sciences, are discussed. The new steering model is based on major legislative changes, output-oriented funding, and open information. The governmental steering reform, especially the financial model and the resulting institution-level responses, such as curriculum reforms, is discussed, focusing especially on engineering programs. The paper is motivated by the management need to establish objective steering-related performance indicators and to apply them consistently across all educational programs. The close relationship to the governmental steering and funding model implies that internally derived indicators can be directly applied. Metropolia University of Applied Sciences (MUAS), the case institution, is briefly introduced, focusing on engineering education in Information and Communications Technology (ICT) and its related programs. The reform forced the consolidation of previously separate smaller programs into fewer units of student application. Under the new curriculum, ICT students have a common first year before they apply for a Major. A framework of parallel and longitudinal comparisons is introduced and applied across Majors on two campuses. The new, externally introduced performance criteria are applied internally to the ICT Majors, using data from before and after the program merger. A comparative performance of the Majors after completion of the joint first year is established, including previously omitted Majors for completeness of analysis. Some new research questions resulting from the transfer of Majors between campuses and from quota setting are discussed. The practical orientation identifies best practices to share and targets needing the most attention for improvement. This level of analysis is directly applicable at the student group and teaching team level, where corrective actions are possible once needs are identified. The analysis is quantitative, and the nature of the corrective actions is not discussed.
Causal relationships and factor analysis are omitted because the campuses, their staff, and various pedagogical implementation details still contain too many undetermined factors for our limited data. Such qualitative analysis is left for further research. Further study must, however, be guided by the relevance of the observations.
Keywords: engineering education, integrated curriculum, learning outcomes, performance measurement
Procedia PDF Downloads 242
1117 Modernization of Translation Studies Curriculum at Higher Education Level in Armenia
Authors: A. Vahanyan
Abstract:
The paper touches upon the problem of revising and modernizing the current curricula on Translation Studies at Armenian Higher Education Institutions (HEIs). In a contemporary world where the quality and speed of services are highly valued, certain higher education centers in Armenia do not demonstrate enough flexibility in revising and amending the courses taught. This issue affects various curricula at the university level, and Translation Studies curricula in particular. Technological innovations that are of great help to translators have long since been smoothly integrated into the global translation industry. According to the European Master's in Translation (EMT) framework, translation service provision comprises linguistic, intercultural, information mining, thematic, and technological competencies. Therefore, to form the competencies mentioned above, the curriculum should be seriously restructured to meet modern education and job market requirements, and relevant courses should be proposed. New courses, in particular, should focus on the formation of technological competencies. These suggestions are based on the author's research into the problem across various HEIs in Armenia. The updated curricula should include courses aimed at familiarization with various computer-assisted translation (CAT) tools (MemoQ, Trados, OmegaT, Wordfast, etc.) in the translation process and with the creation of glossaries and termbases compatible with different platforms, which will ensure consistency in the translation of similar texts and speed up the translation process itself. Another aspect that may be strengthened via curriculum modification is the introduction of interdisciplinary and Project-Based Learning courses, which will develop the information mining and thematic competencies that are also of great importance.
Of course, amending the existing curriculum with the mentioned courses will require corresponding faculty development via trainings, workshops, and seminars. Finally, the provision of extensive internships with translation agencies is strongly recommended, as it will ensure the synthesis of the theoretical background and the practical skills highly required in this specific area. Summing up, the restructuring and modernization of the existing curricula on Translation Studies should focus on three major aspects: the introduction of new courses that meet global quality standards of education, professional development for faculty, and the integration of extensive internships supervised by experts in the field.
Keywords: competencies, curriculum, modernization, technical literacy, translation studies
Procedia PDF Downloads 131