Search results for: original metaphor
146 Effects of Soil Neutron Irradiation in Soil Carbon Neutron Gamma Analysis
Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert
Abstract:
The carbon sequestration question of modern times requires the development of an in-situ method of measuring soil carbon over large landmasses. Traditional chemical analytical methods used to evaluate large land areas require extensive soil sampling prior to processing for laboratory analysis; collectively, this is labor-intensive and time-consuming. An alternative method is to apply nuclear physics analysis, primarily in the form of pulsed fast-thermal neutron-gamma soil carbon analysis. This method is based on measuring the gamma-ray response that appears upon neutron irradiation of soil. A specific gamma line with an energy of 4.438 MeV appearing upon neutron irradiation can be attributed to soil carbon nuclei. Based on measuring gamma line intensity, assessments of soil carbon concentration can be made. This method can be applied directly in the field using a specially developed pulsed fast-thermal neutron-gamma system (PFTNA system). This system conducts in-situ analysis in a scanning mode coupled with GPS, which provides soil carbon concentration and distribution over large fields. The system has radiation shielding to minimize the dose rate (within radiation safety guidelines) for safe operator usage. Questions concerning the effect of neutron irradiation on soil health are addressed. Information regarding the absorbed neutron and gamma dose received by soil and its distribution with depth is discussed in this study. This information was generated based on Monte-Carlo simulations (MCNP6.2 code) of neutron and gamma propagation in soil. The resulting data were used for the analysis of possible induced irradiation effects. The physical, chemical and biological effects of neutron soil irradiation were considered. From a physical aspect, we considered neutron (produced by the PFTNA system) induction of new isotopes and estimated the possibility of an increased post-irradiation gamma background by comparison to the natural background. An insignificant increase in gamma background appeared immediately after irradiation but returned to original values after several minutes due to the decay of short-lived new isotopes. From a chemical aspect, possible radiolysis of water (present in soil) was considered. Based on simulations of water radiolysis, we concluded that the gamma dose rate used cannot drive radiolysis at notable rates. Possible effects of neutron irradiation (by the PFTNA system) on soil biota were also assessed experimentally. No notable changes were noted at the taxonomic level, nor was functional soil diversity affected. Our assessment suggested that the use of a PFTNA system with a neutron flux of 1e7 n/s for soil carbon analysis does not notably affect soil properties or soil health.
Keywords: carbon sequestration, neutron gamma analysis, radiation effect on soil, Monte-Carlo simulation
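To illustrate the decay argument above (the abstract does not name the activation products; Al-28, formed by neutron capture on Al-27 in soil minerals, is a representative short-lived example and is assumed here for illustration):

```latex
% Decay of induced activity after irradiation stops:
A(t) = A_0 \, e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{T_{1/2}}
% For ^{28}Al, T_{1/2} \approx 2.24 min, so 15 min after irradiation:
% A/A_0 = e^{-(\ln 2)\,(15/2.24)} \approx 0.01
```

This is consistent with the observation that the gamma background returns to its original value within several minutes.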
145 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing
Authors: Rowan P. Martnishn
Abstract:
During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables – historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours’ worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to clean and format all the data. First, a basic spelling and grammar check was applied, along with a Python script for normalized formatting, which used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, in which words frequently mistranscribed were recorded and then replaced throughout all other documents. Then, to remove banter and side comments, the transcripts were spliced into paragraphs (separated by change in speaker) and all paragraphs with fewer than 300 characters were removed. Next, a keyword extractor – a form of natural language processing in which significant words in a document are selected – was run on each paragraph of every interview. Every proper noun was put into a data structure corresponding to its respective interview. From there, a Bidirectional and Auto-Regressive Transformer (BART) summary model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been reduced from about 60 hours’ worth of data to 20. The data was further processed through light, manual observation – any summaries that fit the criteria of the proposed deliverable were selected, along with their locations within the document. This narrowed the data down to about 5 hours’ worth of processing. The qualitative researchers were then able to find 8 more connections in addition to our previous 4, exceeding our minimum quota of 3 to satisfy the grant. Major findings of the study and subsequent curation of this methodology raised a conceptual finding crucial to working with qualitative data of this magnitude. In the use of artificial intelligence, there is a general trade-off in a model between breadth of knowledge and specificity. If the model is too general, the user risks leaving out important data; if the tool is too specific, it has not seen enough data to be useful. Thus, this methodology proposes a solution to this trade-off. The data is never altered outside of grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Secondly, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can review the raw data instead of using highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
Keywords: BART model, keyword extractor, natural language processing, qualitative coding
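A minimal sketch of the reduction stage described above. The 300-character cutoff is from the abstract; the BART checkpoint, the helper names, and the capitalization-based proper-noun heuristic are illustrative assumptions, not the authors' exact implementation:

```python
# Sketch of the transcript-reduction pipeline: drop short speaker turns,
# collect proper nouns, and BART-summarize paragraphs that mention them.
import re
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def proper_nouns_in(text):
    """Crude heuristic: capitalized words not at the start of a sentence."""
    nouns = set()
    for sentence in re.split(r"[.!?]+\s*", text):
        words = sentence.split()
        for w in words[1:]:  # skip the sentence-initial word
            if w[:1].isupper() and w[1:].islower():
                nouns.add(w.strip(",;:"))
    return nouns

def reduce_transcript(speaker_turns):
    """speaker_turns: list of paragraph strings, one per change of speaker."""
    substantive = [p for p in speaker_turns if len(p) >= 300]  # drop banter
    nouns = set()
    for p in substantive:
        nouns |= proper_nouns_in(p)
    summaries = []
    for p in substantive:
        if any(n in p for n in nouns):  # paragraphs mentioning a proper noun
            out = summarizer(p, max_length=60, min_length=15, do_sample=False)
            summaries.append(out[0]["summary_text"])
    return nouns, summaries
```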
144 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System
Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee
Abstract:
This work demonstrates a web crawler-based generalized end-to-end open domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage ranking process using the MS-MARCO dataset, trained on 500K queries, to extract the most relevant text passage and thereby shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in the year 2016. Use of any such dataset proves to be inefficient with respect to any questions that have time-varying answers. For illustration, consider the query “Where will the next Olympics be?” The gold answer for this query, as given in the GNQ dataset, is “Tokyo”. Since the dataset was collected in 2016, and the next Olympics after 2016 were the 2020 Games held in Tokyo, that answer was correct. But if the same question is asked in 2022, then the answer is “Paris, 2024”. Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted using an analysis-based approach from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the possibility of developing into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation
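A hedged sketch of a time-aware match in the spirit described above. The abstract does not specify the metric's exact form; the gold-answer timeline structure, the top-n matching rule, and the validity dates in the example are illustrative assumptions:

```python
# Accept a prediction if it matches a gold answer valid at the query date.
from datetime import date

def time_aware_match(predictions, gold_timeline, query_date, n=3):
    """predictions: ranked answer strings from the QA system.
    gold_timeline: (valid_from, valid_to, answer) tuples for one question."""
    valid = {a.lower() for (start, end, a) in gold_timeline
             if start <= query_date <= end}
    return any(p.lower() in valid for p in predictions[:n])

# Example for "Where will the next Olympics be?" (dates illustrative):
timeline = [(date(2016, 8, 22), date(2021, 8, 8), "Tokyo"),
            (date(2021, 8, 9), date(2024, 8, 11), "Paris")]
print(time_aware_match(["Paris", "Tokyo"], timeline, date(2022, 6, 1)))  # True
```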
143 Analysis and Comparison of Asymmetric H-Bridge Multilevel Inverter Topologies
Authors: Manel Hammami, Gabriele Grandi
Abstract:
In recent years, multilevel inverters have become more attractive for single-phase photovoltaic (PV) systems due to their known advantages over conventional H-bridge pulse width-modulated (PWM) inverters. They offer improved output waveforms, smaller filter size, lower total harmonic distortion (THD), higher output voltages, and other benefits. The most common multilevel converter topologies presented in the literature are the neutral-point-clamped (NPC), flying capacitor (FC), and cascaded H-bridge (CHB) converters. In both NPC and FC configurations, the number of components increases drastically with the number of levels, which leads to a complex control strategy, high volume, and high cost. In contrast, increasing the number of levels in the cascaded H-bridge configuration is a flexible solution. However, it needs isolated power sources for each stage, and it can be applied to PV systems only in the case of PV sub-fields. In order to improve the ratio between the number of output voltage levels and the number of components, several hybrid and asymmetric topologies of multilevel inverters have been proposed in the literature, such as the FC asymmetric H-bridge (FCAH) and the NPC asymmetric H-bridge (NPCAH) topologies. Another asymmetric multilevel inverter configuration that could have interesting applications is the cascaded asymmetric H-bridge (CAH), which is based on a modular half-bridge (two switches and one capacitor, also called a level doubling network, LDN) cascaded to a full H-bridge in order to double the number of output voltage levels. This solution has the same number of switches as the above-mentioned AH configurations (i.e., six) and just one capacitor (as in the FCAH). CAH is becoming popular due to its simple, modular, and reliable structure, and it can be considered a retrofit that can be added in series to an existing H-bridge configuration in order to double the output voltage levels. In this paper, an original and effective method for the analysis of the DC-link voltage ripple is given for single-phase asymmetric H-bridge multilevel inverters based on a level doubling network (LDN). Different possible configurations of the asymmetric H-bridge multilevel inverters are considered, and the input voltage and current are analytically determined and numerically verified in Matlab/Simulink for the case of cascaded asymmetric H-bridge multilevel inverters. A comparison between the FCAH and CAH configurations is made on the basis of the DC-link voltage ripple analysis for the DC source (i.e., the PV system). The peak-to-peak DC voltage ripple amplitudes are analytically calculated over the fundamental period as a function of the modulation index. On the basis of the maximum peak-to-peak values of the low-frequency and switching ripple voltage components, the DC capacitors can be designed. Reference is made to unity output power factor, as in most grid-connected PV generation systems. Simulation results are presented in the full paper in order to prove the effectiveness of the proposed developments in all operating conditions.
Keywords: asymmetric inverters, dc-link voltage, level doubling network, single-phase multilevel inverter
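As a hedged illustration of the capacitor design step mentioned above (the abstract does not give its closed-form ripple expressions; the following is the generic sizing relation that such peak-to-peak analyses feed into):

```latex
% Generic DC-link capacitor sizing from the maximum peak-to-peak ripple:
C_{dc} \;\geq\; \frac{\Delta Q_{\max}(m)}{\Delta V_{\max}}
% \Delta Q_{\max}(m): peak-to-peak charge ripple over the fundamental
% period, a function of the modulation index m;
% \Delta V_{\max}: allowed peak-to-peak DC-link voltage ripple.
```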
142 Influence of Kneading Conditions on the Textural Properties of Alumina Catalysts Supports for Hydrotreating
Authors: Lucie Speyer, Vincent Lecocq, Séverine Humbert, Antoine Hugon
Abstract:
Mesoporous alumina is commonly used as a catalyst support for the hydrotreating of heavy petroleum cuts. The fabrication process usually involves the synthesis of the boehmite AlOOH precursor, a kneading-extrusion step, and a calcination in order to obtain the final alumina extrudates. Alumina is described as a complex porous medium, generally agglomerates constituted of aggregated nanocrystallites. Its porous texture directly influences active phase deposition, mass transfer, and the catalytic properties. It follows that each step of the fabrication of the supports plays a role in the building of their porous network and has to be well understood to optimize the process. The synthesis of boehmite by precipitation of aluminum salts has been extensively studied in the literature, and various parameters, such as temperature or pH, are known to influence the size and shape of the crystallites and the specific surface area of the support. The calcination step, through the topotactic transition from boehmite to alumina, determines the final properties of the support and can tune the surface area, pore volume, and pore diameters relative to those of boehmite. The kneading-extrusion step, however, has been the subject of very few studies. It generally consists of two stages: an acid kneading, then a basic kneading, in which the boehmite powder is introduced into a mixer and successively treated with an acid and a base solution to form an extrudable paste. During the acid kneading, the induced positive charges on the hydroxyl surface groups of boehmite create an electrostatic repulsion which tends to separate the aggregates and even, depending on the conditions, the crystallites. The basic kneading, by reducing the surface charges, leads to a flocculation phenomenon and can control the reforming of the overall structure. The separation and reassembling of the particles constituting the boehmite paste have an obvious influence on the textural properties of the material. In this work, we focus on the influence of the kneading step on alumina catalyst supports. Starting from an industrial boehmite, extrudates are prepared using various kneading conditions. The samples are studied by nitrogen physisorption in order to analyze the evolution of the textural properties, and by synchrotron small-angle X-ray scattering (SAXS), a more original method which brings information about agglomeration and aggregation of the samples. The coupling of physisorption and SAXS enables a precise description of the samples, as well as accurate monitoring of their evolution as a function of the kneading conditions. The latter are found to have a strong influence on the pore volume and pore size distribution of the supports. A mechanism for the evolution of the texture during the kneading step is proposed and could be attractive for optimizing the texture of the supports and thus their catalytic performances.
Keywords: alumina catalyst support, kneading, nitrogen physisorption, small-angle X-ray scattering
141 Application of Combined Cluster and Discriminant Analysis to Make the Operation of Monitoring Networks More Economical
Authors: Norbert Magyar, Jozsef Kovacs, Peter Tanos, Balazs Trasy, Tamas Garamhegyi, Istvan Gabor Hatvani
Abstract:
Water is one of the most important common resources, and as a result of urbanization, agriculture, and industry, it is becoming more and more exposed to potential pollutants. The prevention of the deterioration of water quality is a crucial task for environmental scientists. To achieve this aim, the operation of monitoring networks is necessary. In general, these networks have to meet many important requirements, such as representativeness and cost efficiency. However, existing monitoring networks often include sampling sites which are unnecessary. With the elimination of these sites, the monitoring network can be optimized and can operate more economically. The aim of this study is to illustrate the applicability of CCDA (Combined Cluster and Discriminant Analysis) to the field of water quality monitoring and to optimize the monitoring networks of a river (the Danube), a wetland-lake system (Kis-Balaton & Lake Balaton), and two surface-subsurface water systems on the watershed of Lake Neusiedl/Lake Fertő and in the Szigetköz area over a period of approximately two decades. CCDA combines two multivariate data analysis methods: hierarchical cluster analysis and linear discriminant analysis. Its goal is to determine homogeneous groups of observations, in our case sampling sites, by comparing the goodness of preconceived classifications obtained from hierarchical cluster analysis with random classifications. The main idea behind CCDA is that if the ratio of correctly classified cases for a grouping is higher than at least 95% of the ratios for the random classifications, then at the level of significance (α=0.05) the given sampling sites do not form a homogeneous group. Because sampling on Lake Neusiedl/Lake Fertő was conducted at the same time at all sampling sites, it was possible to visualize the differences between the sampling sites belonging to the same or different groups on scatterplots. Based on the results, the monitoring network of the Danube yields redundant information over certain sections, so that of 12 sampling sites, 3 could be eliminated without loss of information. In the case of the wetland (Kis-Balaton), one pair of sampling sites out of 12, and in the case of Lake Balaton, 5 out of 10 could be discarded. For the groundwater system of the catchment area of Lake Neusiedl/Lake Fertő, all 50 monitoring wells are necessary; there is no redundant information in the system. The number of sampling sites on Lake Neusiedl/Lake Fertő itself can be decreased to approximately half of the original number. Furthermore, neighbouring sampling sites were compared pairwise using CCDA, and the results were plotted on diagrams or isoline maps showing the locations of the greatest differences. These results can help researchers decide where to place new sampling sites. The application of CCDA proved to be a useful tool in the optimization of monitoring networks for different types of water bodies. Based on the results obtained, the monitoring networks can be operated more economically.
Keywords: combined cluster and discriminant analysis, cost efficiency, monitoring network optimization, water quality
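A minimal sketch of the CCDA homogeneity test described above, assuming scikit-learn's LDA as the discriminant step; function names, the resubstitution scoring, and the number of random relabelings are illustrative assumptions:

```python
# A grouping is judged non-homogeneous if its correct-classification ratio
# exceeds the 95th percentile of ratios obtained for random relabelings.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def classification_ratio(X, labels):
    """Ratio of correctly classified cases for a given grouping."""
    lda = LinearDiscriminantAnalysis().fit(X, labels)
    return float((lda.predict(X) == labels).mean())

def ccda_test(X, labels, n_random=1000, alpha=0.05, seed=0):
    """Compare a preconceived grouping against random relabelings."""
    rng = np.random.default_rng(seed)
    observed = classification_ratio(X, labels)
    random_ratios = [classification_ratio(X, rng.permutation(labels))
                     for _ in range(n_random)]
    threshold = np.quantile(random_ratios, 1 - alpha)
    # observed > threshold: at level alpha, the sampling sites
    # do not form one homogeneous group.
    return observed, threshold, observed > threshold
```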
140 Decomposition of the Discount Function Into Impatience and Uncertainty Aversion. How Neurofinance Can Help to Understand Behavioral Anomalies
Authors: Roberta Martino, Viviana Ventre
Abstract:
Intertemporal choices are choices under conditions of uncertainty in which the consequences are distributed over time. The Discounted Utility Model is the essential reference for describing the individual in the context of intertemporal choice. The model is based on the idea that the individual selects the alternative with the highest utility, which is calculated by multiplying the cardinal utility of the outcome, as if its receipt were instantaneous, by the discount function, which decreases the utility value according to how far the actual receipt of the outcome lies from the moment the choice is made. Initially, the discount function was assumed to have an exponential form, whose rate of decrease over time is constant, in line with the profile of a rational investor described by classical economics. Empirical evidence instead called for the formulation of alternative, hyperbolic models that better represented the actual actions of the investor. Attitudes that do not comply with the principles of classical rationality are termed anomalous, i.e., difficult to rationalize and describe through normative models. The development of behavioral finance, which describes investor behavior through cognitive psychology, has shown that deviations from rationality are due to the bounded rationality of human beings. This means that when a choice is made in a very difficult and information-rich environment, the brain strikes a compromise between the cognitive effort required and the selection of an alternative. Moreover, the evaluation and selection of the alternative, as well as the collection and processing of information, are dynamics conditioned by systematic distortions of the decision-making process: the behavioral biases involving the individual's emotional and cognitive system. In this paper, we present an original decomposition of the discount function to investigate the psychological principles of hyperbolic discounting. The curve can be decomposed into two components: the first component is responsible for the smaller decrease in the outcome as time increases and is related to the individual's impatience; the second component relates to the change in the direction of the tangent vector to the curve and indicates how much the individual perceives the indeterminacy of the future, indicating his or her aversion to uncertainty. This decomposition allows interesting conclusions to be drawn with respect to the concept of impatience and the emotional drives involved in decision-making. The contribution that neuroscience can make to decision theory and intertemporal choice theory is vast, as it would allow the description of the decision-making process as the relationship between the individual's emotional and cognitive factors. Neurofinance is a discipline that uses a multidisciplinary approach to investigate how the brain influences decision-making. Indeed, considering that the decision-making process is linked to the activity of the prefrontal cortex and amygdala, neurofinance can help determine the extent to which anomalous attitudes respect the principles of rationality.
Keywords: impatience, intertemporal choice, neurofinance, rationality, uncertainty
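For context (the abstract does not reproduce the authors' decomposition; the following shows the standard hyperbolic discount function and its instantaneous discount rate, the quantity usually identified with impatience, while the curvature of D governs the change in tangent direction the authors associate with uncertainty aversion):

```latex
% Standard hyperbolic discount function and instantaneous discount rate:
D(t) = \frac{1}{1 + k t}, \qquad
\rho(t) = -\frac{D'(t)}{D(t)} = \frac{k}{1 + k t}
% \rho(t) declines with delay t, in contrast to the exponential case
% D(t) = e^{-\rho t}, where \rho is constant.
```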
139 Piled Critical Size Bone-Biomimetic and Biominerizable Nanocomposites: Formation of Bioreactor-Induced Stem Cell Gradients under Perfusion and Compression
Authors: W. Baumgartner, M. Welti, N. Hild, S. C. Hess, W. J. Stark, G. Meier Bürgisser, P. Giovanoli, J. Buschmann
Abstract:
Perfusion bioreactors are used to solve problems in tissue engineering in terms of sufficient nutrient and oxygen supply. Such problems especially occur in critical size grafts, because vascularization after implantation is often too slow, ending in necrotic cores. Biominerizable and biocompatible nanocomposite materials are attractive and suitable scaffold materials for bone tissue engineering because they offer mineral components in organic carriers – mimicking natural bone tissue. In addition, human adipose-derived stem cells (ASCs) can potentially be used to enhance bone healing, as they are capable of differentiating towards osteoblasts or endothelial cells, among others. In the present study, electrospun nanocomposite disks of poly-lactic-co-glycolic acid and amorphous calcium phosphate nanoparticles (PLGA/a-CaP) were seeded with human ASCs, and eight disks were stacked in a bioreactor running with normal culture medium (no differentiation supplements). Under continuous perfusion and uniaxial cyclic compression, load-displacement curves were assessed as a function of time. Stiffness and energy dissipation were recorded. Moreover, stem cell densities in the layers of the piled scaffold were determined, as well as their morphologies and differentiation status (endothelial cell differentiation, chondrogenesis, and osteogenesis). While the stiffness of the cell-free constructs increased over time, caused by the transformation of the a-CaP nanoparticles into flake-like apatite, ASC-seeded constructs showed a constant stiffness. Stem cell density gradients were histologically determined, with a linear increase in the flow direction from the bottom to the top of the 3.5 mm high pile (r² > 0.95). Cell morphology was influenced by the flow rate, with stem cells becoming more roundish at higher flow rates. Less than 1% osteogenesis was found upon osteopontin immunostaining at the end of the experiment (9 days), while neither endothelial cell differentiation nor chondrogenesis was triggered under these conditions. All ASCs had mainly remained in their original pluripotent status within this time frame. In summary, we have fabricated a critical size bone graft based on a biominerizable bone-biomimetic nanocomposite with preserved stiffness when seeded with human ASCs. The special feature of this bone graft was that ASC densities inside the piled construct varied with a linear gradient, which is a good starting point for tissue engineering interfaces such as bone-cartilage, where the bone tissue is cell-rich while the cartilage exhibits low cell densities. As such, this tissue-engineered graft may act as a bone-cartilage interface after the corresponding differentiation of the ASCs.
Keywords: bioreactor, bone, cartilage, nanocomposite, stem cell gradient
138 Disability in the Course of a Chronic Disease: The Example of People Living with Multiple Sclerosis in Poland
Authors: Milena Trojanowska
Abstract:
Disability is a phenomenon whose meanings and definitions have evolved over the decades. This became the trigger to start a project to answer the question of what disability constitutes in the course of an incurable chronic disease. The chosen research group is people living with multiple sclerosis. The contextual phase of the research was participant observation at the Polish Multiple Sclerosis Society, the largest NGO in Poland supporting people living with MS and their relatives. The research techniques used in the project are (in order of implementation): group interviews with people living with MS and their relatives, narrative interviews, the asynchronous technique, and participant observation during events organised for people living with MS and their relatives. The researcher is currently conducting follow-up interviews, as inaccuracies in the respondents' narratives were identified during the data analysis. Interviews and supplementary research techniques were used over the four years of the research, and the researcher also benefited from experience gained from 12 years of working with NGOs (diaries, notes). The research was carried out in Poland with the participation of people living in this country only. The research has been based on grounded theory methodology in the constructivist perspective developed by Kathy Charmaz. The goal was to follow the idea that research must be reliable, original, and useful. The aim was to construct an interpretive theory that assumes the temporality and processuality of social life. The Atlas.ti software, a program from the CAQDAS (Computer-Assisted Qualitative Data Analysis Software) group, was used to collect and analyse the research material. Several key factors influencing the construction of a disability identity by people living with multiple sclerosis were identified:
- the course of interaction with significant relatives,
- the expectation of identification with disability (expressed by close relatives),
- economic profitability (pension, allowances),
- institutional advantages (e.g. parking card),
- independence and autonomy (not equated with physical condition, but with access to adapted infrastructure and resources to support daily functioning),
- the way a person with MS construes the meaning of disability,
- physical and mental state,
- medical diagnosis of illness.
In addition, it has been shown that making an assumption about the experience of disability in the course of MS is a form of cognitive reductionism leading to further phenomena, such as the expectation that the person with MS construct a social identity as a person with a disability (e.g. giving up work) and the occurrence of institutional inequalities. It can also be a determinant of the choice of a life strategy that limits social and individual functioning, even if this necessity is not driven by the person's physical or psychological condition. The results of the research are important for the development of knowledge about the phenomenon of disability. They indicate the contextuality and complexity of the disability phenomenon, which in the light of the research is a set of different phenomena of heterogeneous nature and multifaceted causality. This knowledge can also be useful for institutions and organisations in the non-governmental sector supporting people with disabilities and people living with multiple sclerosis.
Keywords: disability, multiple sclerosis, grounded theory, Poland
137 Performing Arts and Performance Art: Interspaces and Flexible Transitions
Authors: Helmi Vent
Abstract:
This four-year artistic research project has set the goal of exploring the adaptable transitions within the realms between the two genres. This paper singles out one research question from the entire project for its focus, namely how and under what circumstances such transitions between a reinterpretation and a new creation can take place during the performative process. The film documentation accompanying the project was produced at the Mozarteum University in Salzburg, Austria, as well as on diverse everyday stages at various locations. The model institution that hosted the project is LIA – Lab Inter Arts, under the direction of Helmi Vent. LIA combines artistic research with performative applications. The project participants are students from various artistic fields of study. The film documentation forms a central platform for the entire project. The films function as audiovisual records of performative origins and development processes, serve as the basis for analysis and evaluation, including the self-evaluation of the recorded material, and also serve as illustrative and discussion material in relation to the topic of this paper. Regarding the 'interspaces' and variable 'transitions': the performing arts in western cultures generally orient themselves toward existing original compositions – most often in the interconnected fields of music, dance, and theater – with the goal of reinterpreting and rehearsing a pre-existing score, choreographed work, libretto, or script and presenting that piece to an audience. The essential tool in this reinterpretation process is generally the artistic 'language' performers learn over the course of their main studies. Thus, speaking is combined with singing, playing an instrument is combined with dancing, or with pictorial or sculpturally formed works, in addition to many other variations. If the performing arts rid themselves of their designations from time to time and initially follow the emerging, diffusely gliding transitions into the unknown, the artistic language the performer has learned becomes a creative resource. The illustrative film excerpts depicting the realms between performing arts and performance art present insights into the ways the project participants embrace unknown and explorative processes, thus allowing new performative designs or concepts to be invented between the participants' acquired cultural and artistic skills and their own creations – according to their own ideas and issues, sometimes with their direct involvement, fragmentary, provisional, left as a rough draft, or fully composed. All in all, it is an evolutionary process whose key parameters cannot be distilled down to their essence. Rather, they stem from a subtle inner perception, from deep-seated emotions, imaginations, and non-discursive decisions, which ultimately result in an artistic statement rising to the visible and audible surface. Within these realms between performing arts and performance art and their extremely flexible transitions, exceptional opportunities can be found to grasp and realise art itself as a research process.
Keywords: art as research method, Lab Inter Arts (LIA), performing arts, performance art
136 Enhancing the Effectiveness of Witness Examination through Deposition System in Korean Criminal Trials: Insights from the U.S. Evidence Discovery Process
Authors: Qi Wang
Abstract:
With the expansion of trial-centered principles, the importance of witness examination in Korean criminal proceedings has been increasingly emphasized. However, several practical challenges have emerged in courtroom examinations, including concerns about witnesses’ memory deterioration due to prolonged trial periods, the possibility of inaccurate testimony due to courtroom anxiety and tension, risks of testimony retraction, and witnesses’ refusal to appear. These issues have led to a decline in the effective utilization of witness testimony. This study analyzes the deposition system, which is widely used in the U.S. evidence discovery process, and examines its potential implementation within the Korean criminal procedure framework. Furthermore, it explores the scope of application, procedural design, and measures to prevent potential abuse if the system were to be adopted. Under the adversarial litigation structure that has evolved through several amendments to the Criminal Procedure Act, the deposition system, although conducted pre-trial, serves as a preliminary procedure to facilitate efficient and effective witness examination during trial. This system not only aligns with the goal of discovering substantive truth but also upholds the practical ideals of trial-centered principles while promoting judicial economy. Furthermore, with the legal foundation established by Article 266 of the Criminal Procedure Act and related provisions, this study concludes that the implementation of the deposition system is both feasible and appropriate for the Korean criminal justice system. The specific functions of depositions include providing case-related information to refresh witnesses’ memory as a preliminary to courtroom examination, pre-reviewing existing statement documents to enhance trial efficiency, and conducting preliminary examinations on key issues and anticipated questions. The subsequent courtroom witness examination focuses on verifying testimony through public and cross-examination, identifying and analyzing contradictions in testimony, and conducting double verification of testimony credibility under judicial supervision. Regarding operational aspects, both prosecution and defense may request depositions, subject to court approval. The deposition process involves video or audio recording, complete documentation by court reporters, and the preparation of transcripts, with copies provided to all parties and the original included in court records. The admissibility of deposition transcripts is recognized under Article 311 of the Criminal Procedure Act. Given prosecutors’ advantageous position in evidence collection, which may lead to indifference or avoidance of depositions, the study emphasizes the need to reinforce prosecutors’ public interest status and objective duties. Additionally, it recommends strengthening pre-employment ethics education and post-violation disciplinary measures for prosecutors.
Keywords: witness examination, deposition system, Korean criminal procedure, evidence discovery, trial-centered principle
135 Computer Aided Discrimination of Benign and Malignant Thyroid Nodules by Ultrasound Imaging
Authors: Akbar Gharbali, Ali Abbasian Ardekani, Afshin Mohammadi
Abstract:
Introduction: Thyroid nodules have an incidence of 33-68% in the general population, and some 5-15% of these nodules are malignant. Early detection and treatment of thyroid nodules increase the cure rate and provide optimal treatment. Among medical imaging methods, ultrasound is the technique of choice for assessment of thyroid nodules. Confirming the diagnosis usually demands repeated fine-needle aspiration biopsy (FNAB), so current management carries morbidity and non-zero mortality. Objective: To explore the diagnostic potential of automatic texture analysis (TA) methods in differentiating benign and malignant thyroid nodules in ultrasound imaging, in order to support reliable diagnosis and monitoring of thyroid nodules in their early stages without the need for biopsy. Material and Methods: The thyroid US image database consists of 70 patients (26 benign and 44 malignant), reported by a radiologist and proven by biopsy. Two slices per patient were loaded into Mazda software version 4.6 for automatic texture analysis. Regions of interest (ROIs) were defined within the abnormal part of the thyroid nodule ultrasound images. Gray levels within an ROI were normalized according to three schemes – N1: default or original gray levels; N2: ±3 sigma, i.e., dynamic intensity limited to µ±3σ; and N3: intensity limited to the 1%-99% percentile range. Up to 270 multiscale texture feature parameters per ROI per normalization scheme were computed from the well-known statistical methods employed in Mazda software. From the statistical point of view, not all calculated texture feature parameters are useful for texture analysis. Therefore, per normalization scheme, the features were reduced to the 10 best and most effective ones, based on the maximum Fisher coefficient and the minimum probability of classification error combined with average correlation coefficients (POE+ACC). We analyzed these features under two standardization states (standard (S) and non-standard (NS)) with Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Non-Linear Discriminant Analysis (NDA). A 1-NN classifier was used to distinguish between benign and malignant tumors. Confusion matrix and receiver operating characteristic (ROC) curve analyses were used to formulate more reliable criteria for the performance of the employed texture analysis methods. Results: The results demonstrated the influence of the normalization schemes and reduction methods on the discrimination power of the obtained features and on the classification results. The feature subset selected under 1%-99% normalization, POE+ACC reduction, and NDA texture analysis yielded a high discrimination performance, with an area under the ROC curve (Az) of 0.9722 in distinguishing benign from malignant thyroid nodules, corresponding to a sensitivity of 94.45%, a specificity of 100%, and an accuracy of 97.14%. Conclusions: Our results indicate that computer-aided diagnosis is a reliable method and can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.
Keywords: ultrasound imaging, thyroid nodules, computer aided diagnosis, texture analysis, PCA, LDA, NDA
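A hedged sketch of the reduction-and-classification stage described above, using scikit-learn with LDA in place of Mazda's NDA; the pipeline layout and the cross-validated scoring are illustrative assumptions, with feature extraction assumed done upstream in Mazda:

```python
# Standardize texture features, project with LDA, classify with 1-NN,
# and report ROC AUC plus the confusion matrix.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score, confusion_matrix

def evaluate_features(features, labels):
    """features: (n_rois, n_texture_params); labels: 0 = benign, 1 = malignant."""
    clf = make_pipeline(StandardScaler(),
                        LinearDiscriminantAnalysis(n_components=1),
                        KNeighborsClassifier(n_neighbors=1))
    pred = cross_val_predict(clf, features, labels, cv=5)
    return roc_auc_score(labels, pred), confusion_matrix(labels, pred)
```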
134 Analysis of Overall Thermo-Elastic Properties of Random Particulate Nanocomposites with Various Interphase Models
Authors: Lidiia Nazarenko, Henryk Stolarski, Holm Altenbach
Abstract:
In the paper, a (hierarchical) approach to the analysis of thermo-elastic properties of random composites with interphases is outlined and illustrated. It is based on a statistical homogenization method – the method of conditional moments – combined with the recently introduced notion of the energy-equivalent inhomogeneity, which, in this paper, is extended to include thermal effects. After an exposition of the general principles, the approach is applied to the investigation of the effective thermo-elastic properties of a material with randomly distributed nanoparticles. The basic idea of the equivalent inhomogeneity is to replace the inhomogeneity and its surrounding interphase by a single equivalent inhomogeneity of constant stiffness tensor and coefficient of thermal expansion, combining the thermal and elastic properties of both. The equivalent inhomogeneity is then perfectly bonded to the matrix, which allows composites with interphases to be analyzed using techniques devised for problems without interphases. From the mechanical viewpoint, the definition of the equivalent inhomogeneity is based on Hill's energy equivalence principle, applied to the problem consisting only of the original inhomogeneity and its interphase. It is more general than the definitions proposed in the past in that, conceptually and practically, it allows consideration of inhomogeneities of various shapes and various models of interphases. This is illustrated for spherical particles with two models of interphases: the Gurtin-Murdoch material surface model and the spring layer model. The resulting equivalent inhomogeneities are subsequently used to determine the effective thermo-elastic properties of randomly distributed particulate composites. The effective stiffness tensor and coefficient of thermal expansion of the material with the so-defined equivalent inhomogeneities are determined by the method of conditional moments. Closed-form expressions for the effective thermo-elastic parameters of a composite consisting of a matrix and randomly distributed spherical inhomogeneities are derived for the bulk and shear moduli, as well as for the coefficient of thermal expansion. Dependence of the effective parameters on the interphase properties is included in the resulting expressions, exhibiting analytically the nature of the size effects in nanomaterials. As a numerical example, an epoxy matrix with randomly distributed spherical glass particles is investigated. The dependence of the effective bulk and shear moduli, as well as of the effective thermal expansion coefficient, on the particle volume fraction (for different radii of nanoparticles) and on the nanoparticle radius (for a fixed volume fraction of nanoparticles) is compared for the different interphase models and discussed in the context of other theoretical predictions. Possible applications of the proposed approach to short-fiber composites with various types of interphases are discussed.
Keywords: effective properties, energy equivalence, Gurtin-Murdoch surface model, interphase, random composites, spherical equivalent inhomogeneity, spring layer model
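Schematically (notation assumed here, not taken from the paper), the energy-equivalent inhomogeneity is defined by requiring that, under the same boundary conditions, its strain energy match the combined energy of the original inhomogeneity and its interphase:

```latex
% Energy equivalence defining the stiffness tensor C^{eq} of the
% equivalent inhomogeneity (schematic; notation assumed):
\frac{1}{2}\int_{V_{eq}} \boldsymbol{\varepsilon} : \mathbf{C}^{eq} :
\boldsymbol{\varepsilon}\, \mathrm{d}V
\;=\; U_{\text{inhomogeneity}} + U_{\text{interphase}}
```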
133 Polyurethane Membrane Mechanical Property Study for a Novel Carotid Covered Stent
Authors: Keping Zuo, Jia Yin Chia, Gideon Praveen Kumar Vijayakumar, Foad Kabinejadian, Fangsen Cui, Pei Ho, Hwa Liang Leo
Abstract:
The carotid artery is the major vessel supplying blood to the brain. Carotid artery stenosis is one of the three major causes of stroke, and stroke is the fourth leading cause of death and the first leading cause of disability in most developed countries. Although there is increasing interest in carotid artery stenting for the treatment of cervical carotid artery bifurcation atherosclerotic disease, currently available bare metal stents cannot provide adequate protection against the detachment of plaque fragments over the diseased carotid artery, which can result in the formation of micro-emboli and subsequent stroke. Our research group has recently developed a novel preferential covered-stent for the carotid artery that aims to prevent friable fragments of atherosclerotic plaques from flowing into the cerebral circulation while preserving the flow of the external carotid artery. Preliminary animal studies have demonstrated the potential of this novel covered-stent design for the treatment of carotid atherosclerotic stenosis. The purpose of this study is to evaluate the biomechanical properties of PU membranes of different concentration configurations in order to refine the stent coating technique and enhance the clinical performance of our novel carotid covered stent. Results from this study also provide the material property information crucial for accurate simulation analysis of our stents. Method: Medical-grade polyurethane (ChronoFlex AR) was used to prepare PU membrane specimens. Different PU membrane configurations were subjected to uniaxial testing: 22%, 16%, and 11% PU solutions were made by mixing the original solution with appropriate amounts of dimethylacetamide (DMAC). The specimens were then immersed in physiological saline solution for 24 hours before testing. All specimens were moistened with saline solution before mounting and subsequent uniaxial testing. The specimens were preconditioned by loading each PU membrane sample to a peak stress of 5.5 MPa for 10 consecutive cycles at a rate of 50 mm/min, and were then stretched to failure at the same loading rate. Result: The results showed that the stress-strain response curves of all PU membrane samples exhibited nonlinear characteristics. The ultimate failure stress of the 22% PU membrane was significantly higher than that of the 16% membrane (p<0.05). In general, our preliminary results showed that the lower concentration PU membrane is stiffer than the higher concentration one. From the perspective of mechanical properties, the 22% PU membrane is the better choice for the covered stent. Interestingly, the hyperelastic Ogden model is able to accurately capture the nonlinear, isotropic stress-strain behavior of the PU membrane, with an R² of 0.9977 ± 0.00172. This result will be useful for future biomechanical analysis of our stent designs and will play an important role in the computational modeling of our covered stent fatigue study.
Keywords: carotid artery, covered stent, nonlinear, hyperelastic, stress, strain
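For reference, the Ogden model mentioned above has the standard incompressible strain-energy form below; the fitted order N and material constants are not given in the abstract:

```latex
% Ogden strain-energy function (incompressible form), with principal
% stretches \lambda_i and material constants \mu_p, \alpha_p:
W = \sum_{p=1}^{N} \frac{\mu_p}{\alpha_p}
    \left(\lambda_1^{\alpha_p} + \lambda_2^{\alpha_p}
        + \lambda_3^{\alpha_p} - 3\right)
% For uniaxial tension (\lambda_1 = \lambda, \lambda_2 = \lambda_3 =
% \lambda^{-1/2}), the nominal stress follows as
% P(\lambda) = \sum_{p=1}^{N} \mu_p
%   \left(\lambda^{\alpha_p - 1} - \lambda^{-\alpha_p/2 - 1}\right)
```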
132 The Charge Exchange and Mixture Formation Model in the ASz-62IR Radial Aircraft Engine
Authors: Pawel Magryta, Tytus Tulwin, Paweł Karpiński
Abstract:
The ASz-62IR engine is a radial aircraft engine with 9 cylinders, produced by the Polish company WSK "PZL-KALISZ" S.A. and currently being developed by the above company and Lublin University of Technology. To support the technological development of this unit effectively, it was decided to create a simulation model. The model of the ASz-62IR was developed with AVL BOOST software, a tool dedicated to one-dimensional modeling of internal combustion engines. This model can be used to calculate the parameters of air and fuel flow in an intake system, including charging devices, as well as combustion and exhaust flow to the environment. The main purpose of this model is the analysis of the charge exchange and mixture formation in this engine. For this purpose, the model consists of elements such as: the air inlet, throttle system, compressor connector, charging compressor, inlet pipes and injectors, outlet pipes, fuel injection, and a model of fuel mixing and evaporation. The model of charge exchange and mixture formation was based on the model of mass flow rate in the intake and exhaust pipes, and also on the calculation of gas property values such as the gas constant or thermal capacity. It was built on equations describing isentropic flow: the energy equation for flow under steady conditions was transformed into the mass flow equation. In the model, the flow coefficient μσ was used, which varies with the stroke/valve opening and was determined in a steady flow state. The geometry of the inlet channels and other key components was mapped with reference to the technical documentation of the engine and empirical measurements of the structural elements. The volume of elements on the charge flow path between the air inlet and the exhaust outlet was measured by CAD mapping of the structure. The original characteristics of the engine's compressor, taken from the technical documentation, were entered into the model. Additionally, the model uses a general model for the transport of the chemical compounds of the mixture; 7 compounds are used, i.e., fuel, O2, N2, CO2, H2O, CO, and H2. A gasoline fuel with a calorific value of 43.5 MJ/kg and a stoichiometric air-fuel ratio of 14.5 was used. Indirect injection into the intake manifold is used in this model. The model assumes the following simplifications: the mixture is homogeneous at the beginning of combustion and, accordingly, the mixture stoichiometric coefficient A/F remains constant during combustion; combusted and non-combusted charges show identical pressures and temperatures, although their compositions change. As a result of the simulation studies based on the model described above, the basic parameters of the combustion process, charge exchange, and mixture formation in the cylinders were obtained. The AVL BOOST software is very useful for piston engine performance simulations. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
Keywords: aviation propulsion, AVL Boost, engine model, charge exchange, mixture formation
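The steady isentropic mass flow relation referred to above is commonly written in 1-D engine codes in the following standard form (given here for illustration; the subscripts are assumed: 01 for upstream stagnation conditions, 2 for the downstream station):

```latex
% Isentropic mass flow through a valve/restriction with effective flow
% coefficient \mu\sigma (standard form; notation assumed):
\dot{m} = \mu\sigma \, A \, \frac{p_{01}}{\sqrt{R\,T_{01}}}\;\psi,
\qquad
\psi = \sqrt{\frac{2\kappa}{\kappa - 1}
\left[\left(\frac{p_2}{p_{01}}\right)^{2/\kappa}
    - \left(\frac{p_2}{p_{01}}\right)^{(\kappa+1)/\kappa}\right]}
% For pressure ratios below the critical value, \psi is held at its
% maximum (choked flow).
```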
131 Linguistic and Cultural Human Rights for Indigenous Peoples in Education
Authors: David Hough
Abstract:
Indigenous peoples can generally be described as the original or first peoples of a land prior to colonization. While there is no single definition of indigenous peoples, the United Nations has developed a general understanding based on self-identification and historical continuity with pre-colonial societies. Indigenous peoples are often traditional holders of unique languages, knowledge systems, and beliefs who possess valuable knowledge and practices which support the sustainable management of natural resources. They often have social, economic, and political systems, languages, and cultures which are distinct from those of the dominant groups in the society or state where they live. They generally resist attempts by the dominant culture at assimilation and endeavour to maintain and reproduce their ancestral environments and systems as distinctive peoples and communities. In 2007, the United Nations General Assembly passed a declaration on the rights of indigenous peoples, known as UNDRIP. It – in addition to other international instruments such as ILO 169 – sets out far-reaching guidelines which, among other things, attempt to protect and promote indigenous languages and cultures. Articles 13 and 14 of the declaration state the following regarding language, culture, and education: Article 13, Paragraph 1: Indigenous peoples have the right to revitalize, use, develop and transmit to future generations their histories, languages, oral traditions, philosophies, writing systems, and literatures, and to designate and retain their own names for communities, places and persons. Article 14, Paragraph 1: Indigenous peoples have the right to establish and control their educational systems and institutions providing education in their own languages, in a manner appropriate to their cultural methods of teaching and learning. These two articles call for the right of self-determination in education: Article 13 gives indigenous peoples the right to control the content of their teaching, while Article 14 states that the teaching of this content should be based on methods of teaching and learning which are appropriate to indigenous peoples. This paper reviews an approach to furthering linguistic and cultural human rights for indigenous peoples in education which supports UNDRIP. It has been employed in countries in Asia and the Pacific, including the Republic of the Marshall Islands, the Federated States of Micronesia, Far East Russia, and Nepal. It is based on bottom-up, community-based initiatives where students, teachers, and local knowledge holders come together to produce classroom materials in their own languages that reflect their traditional beliefs and value systems. These may include such things as knowledge about herbal medicines and traditional healing practices, local history, numerical systems, weights and measures, astronomy and navigation, canoe building, weaving and mat making, life rituals, feasts, festivals, songs, poems, etc. Many of these materials can then be mainstreamed into math, science, language arts, and social studies classes.
Keywords: Indigenous peoples, linguistic and cultural human rights, materials development, teacher training, traditional knowledge
130 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection
Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa
Abstract:
Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate dense point clouds. The classification of airborne laser scanning (ALS) point clouds is a very important task that still remains a real challenge for many scientists. Support vector machine (SVM) is one of the most used statistical learning algorithms based on kernels. SVM is a non-parametric method, and it is recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs robust non-linear classification of samples. In practice, the data are rarely linearly separable. SVMs are able to map the data into a higher-dimensional space where they become linearly separable, while the kernel allows all computations to be performed in the original space. This is one of the main reasons that SVMs are well suited for high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain their recent adoption. In this poster, SVM classification of ALS LiDAR data is proposed. Firstly, connected component analysis is applied to cluster the point cloud. Secondly, the resulting clusters are fed into the SVM classifier. A radial basis function (RBF) kernel is used due to the small number of parameters (C and γ) that need to be chosen, which decreases the computation time. In order to optimize the classification rates, parameter selection is explored: it consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation. The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data used are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The class ground and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets were selected randomly several times. The obtained results demonstrated that parameter selection can orient the search within a restricted interval of (C and γ) that can be further explored, but does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision tree. The comparison showed the superiority of the SVM classifier using parameter selection for LiDAR data over the other classifiers.
Keywords: classification, airborne LiDAR, parameters selection, support vector machine
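A minimal sketch of the (C, γ) grid search with 5-fold cross-validation described above, using scikit-learn; the grid values and the feature scaling step are illustrative assumptions:

```python
# RBF-kernel SVM tuned by grid search over C and gamma with 5-fold CV.
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

def tune_svm(X, y):
    """X: per-cluster feature vectors; y: labels (ground + 3 roof classes)."""
    grid = {"svc__C": [0.1, 1, 10, 100, 1000],
            "svc__gamma": [1e-3, 1e-2, 1e-1, 1.0]}
    pipe = Pipeline([("scale", StandardScaler()),
                     ("svc", SVC(kernel="rbf"))])
    search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy", n_jobs=-1)
    search.fit(X, y)
    return search.best_params_, search.best_score_
```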
Procedia PDF Downloads 148129 A Greener Approach towards the Synthesis of an Antimalarial Drug Lumefantrine
Authors: Luphumlo Ncanywa, Paul Watts
Abstract:
Malaria is a disease that kills approximately one million people annually; most of these deaths occur among children and pregnant women in sub-Saharan Africa. Malaria continues to be one of the major causes of death, especially in poor countries in Africa, so decreasing the burden of malaria and saving lives is essential. There is a major concern about malaria parasites being able to develop resistance towards antimalarial drugs. People are still dying because medicines remain unaffordable in the less well-off countries of the world; if drug costs were reduced so that more people could receive treatment, the number of deaths in Africa could be massively reduced. There is a shortage of pharmaceutical manufacturing capability within many of the countries in Africa. However, one has to question how Africa would actually manufacture drugs, active pharmaceutical ingredients or medicines developed within these research programs. It is quite likely that such manufacturing would be outsourced overseas, hence increasing the cost of production and potentially limiting the full benefit of the original research. As a result, the last few years have seen major interest in developing more effective and cheaper technology for manufacturing generic pharmaceutical products. Micro-reactor technology (MRT) is an emerging technique that enables those working in research and development to rapidly screen reactions utilizing continuous flow, leading to the identification of reaction conditions that are suitable for use at a production level. This emerging technique will be used to develop antimalarial drugs. It is this system flexibility that has the potential to reduce both the time taken and the risk associated with transferring reaction methodology from research to production. Using an approach referred to as scale-out or numbering up, a reaction is first optimized within the laboratory using a single micro-reactor, and in order to increase production volume, the number of reactors employed is simply increased. The overall aim of this research project is to develop and optimize synthetic processes for antimalarial drugs in continuous processing. This will provide a step change in pharmaceutical manufacturing technology that will increase the availability and affordability of antimalarial drugs on a worldwide scale, with a particular emphasis on Africa in the first instance. The research will determine the best chemistry and technology to define the lowest-cost manufacturing route to pharmaceutical products. We are currently developing a method to synthesize lumefantrine in continuous flow, using the batch process as a benchmark. Lumefantrine is a dichlorobenzylidene derivative effective for the treatment of various types of malaria; it is an antimalarial drug used with artemether for the treatment of uncomplicated malaria. The results obtained when synthesizing lumefantrine in a batch process are transferred into a continuous flow process in order to develop an even better and reproducible process. The development of an appropriate synthetic route for lumefantrine is therefore significant for the pharmaceutical industry. Consequently, if better (and cheaper) manufacturing routes to antimalarial drugs could be developed and implemented where needed, antimalarial drugs would be far more likely to be available to those in need.Keywords: antimalarial, flow, lumefantrine, synthesis
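The scale-out ("numbering up") idea above reduces to simple arithmetic; the sketch below illustrates it with invented throughput figures that are not data from this project.

```python
# Toy illustration of "numbering up": once a single micro-reactor is
# optimized, production volume is raised by running identical reactors
# in parallel rather than enlarging the reactor. All figures below are
# invented for illustration only.
import math

single_reactor_output_g_per_h = 5.0   # assumed optimized throughput
hours_per_year = 8000                 # assumed annual operating hours
target_kg_per_year = 10_000           # assumed production target

annual_per_reactor_kg = single_reactor_output_g_per_h * hours_per_year / 1000
n_reactors = math.ceil(target_kg_per_year / annual_per_reactor_kg)
print(f"{n_reactors} parallel micro-reactors needed "
      f"({annual_per_reactor_kg:.0f} kg/year each)")
```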
Procedia PDF Downloads 204128 The Power-Knowledge Relationship in the Italian Education System between the 19th and 20th Century
Authors: G. Iacoviello, A. Lazzini
Abstract:
This paper focuses on the development of the study of accounting in the Italian education system between the 19th and 20th centuries. It also focuses on the subsequent formation of a scientific and experimental forma mentis that would prepare students for administrative and managerial activities in industry, commerce and public administration. From a political perspective, the period was characterized by two dominant movements - liberalism (1861-1922) and fascism (1922-1945) - that deeply influenced accounting practices and the entire Italian education system. The materials used in the study include both primary and secondary sources. The primary sources are numerous original documents issued between 1890 and 1935 by the government and maintained in the Historical Archive of the State in Rome. The secondary sources have supported both the development of the theoretical framework and the definition of the historical context. This paper assigns to the educational system the role of cultural producer. Foucauldian analysis identifies the problem confronted by the critical intellectual in finding a way to deploy knowledge through a 'patient labour of investigation' that highlights the contingency and fragility of the circumstances that have shaped current practices and theories. Education can be considered a powerful and political process, providing students with values, ideas, and models that they will subsequently use to discipline themselves, remaining as close to these models as possible. It is impossible for power to be exercised without knowledge, just as it is impossible for knowledge not to engender power. The power-knowledge relationship can be usefully employed to explain how power operates within society and how mechanisms of power affect everyday lives. Power is employed at all levels and through many dimensions, including government. Schools exercise ‘epistemological power’ – a power to extract knowledge of individuals from individuals. Because knowledge is a key element in the operation of power, the procedures applied to the formation and accumulation of knowledge cannot be considered neutral instruments for the presentation of the real. Consequently, the same institutions that produce and spread knowledge can be considered part of the ‘power-knowledge’ interrelation. Individuals have become both objects and subjects in the development of knowledge. Just as education plays a fundamental role in shaping all aspects of communities, the structural changes resulting from economic, social and cultural development affect the educational systems. Analogously, the important changes related to social and economic development required legislative intervention to regulate the functioning of different areas of society. Knowledge can become a means of social control used by the government to manage populations. It can be argued that the evolution of Italy’s education systems is coherent with the idea that power and knowledge do not exist independently but instead are coterminous. This research contributes to this line of analysis by examining the role of the state in the development of accounting education in Italy.Keywords: education system, government, knowledge, power
Procedia PDF Downloads 140127 Evaluating the Effectiveness of Mesotherapy and Topical 2% Minoxidil for Androgenic Alopecia in Females, Using Topical 2% Minoxidil as a Common Treatment
Authors: Hamed Delrobai Ghoochan Atigh
Abstract:
Androgenic alopecia (AGA) is a common form of hair loss, impacting approximately 50% of females and leading to reduced self-esteem and quality of life. It causes progressive follicular miniaturization in genetically predisposed individuals. Mesotherapy (a minimally invasive procedure), topical 2% minoxidil, and oral finasteride have emerged as popular treatment options in the realm of cosmetics. However, the efficacy of mesotherapy compared to other options remains unclear. This study aims to assess the effectiveness of mesotherapy when added to topical 2% minoxidil treatment for female androgenic alopecia. Mesotherapy, also known as intradermotherapy, is a technique that entails administering multiple intradermal injections of a carefully composed mixture of compounds in low doses, applied at various points in close proximity to or directly over the affected areas. This study involves a randomized controlled trial with 100 female participants diagnosed with androgenic alopecia. The subjects were randomly assigned to two groups: Group A applied topical 2% minoxidil twice daily and took an oral finasteride tablet. For Group B, 10 mesotherapy sessions were added to this treatment. The injections were administered every week in the first month of treatment, every two weeks in the second month, and monthly thereafter for four consecutive months. The response assessment was made at baseline, at the 4th session, and finally after 6 months, when the treatment was complete. Clinical photographs, a 7-point Likert scale patient self-evaluation, and a 7-point Likert scale assessment tool were used to measure the effectiveness of the treatment. During this evaluation, a significant and visible improvement in hair density and thickness was observed. The study demonstrated a significant increase in treatment efficacy in Group B compared to Group A post-treatment, with no adverse effects. Based on the findings, it appears that mesotherapy offers a significant improvement in female AGA over minoxidil-based treatment alone. Hair loss stopped in Group B after one month, and improvement in hair density and thickness was observed after the third month. The findings from this study provide valuable insights into the efficacy of mesotherapy in treating female androgenic alopecia. Our evaluation offers a detailed assessment of hair growth parameters, enabling a better understanding of the treatment's effectiveness. The potential of this promising technique is significantly enhanced when it is carried out in a medical facility, guided by appropriate indications and skillful execution. An interesting observation in our study is that in areas where the hair had turned grey, the newly regrown hair does not retain its original grey color; instead, it becomes darker. The results contribute to evidence-based decision-making in dermatological practice and offer different insights into the treatment of female pattern hair loss.Keywords: androgenic alopecia, female hair loss, mesotherapy, topical 2% minoxidil
Procedia PDF Downloads 103126 From Oral to Written: Translating the Dawot (Epic Poem), Revitalizing Appreciation for Indigenous Literature
Authors: Genevieve Jorolan-Quintero
Abstract:
The recording as well as the preservation of indigenous literature is an important task, as it deals with a significant heritage of pre-colonial culture. The beliefs and traditions of a people are reflected in their oral narratives, such as the folk epic, which must be written down to ensure their preservation. The epic poem, for instance, known as dawot among the Mandaya, one of the indigenous communities in the southern region of the Philippines, narrates the customs, the ways of life, and the adventures of an ancient people. Nabayra, an expert on Philippine folkloric studies, stresses that the dawot, still extant after centuries and of unknown origin, was handed down to the magdadawot (bard) by word of mouth, forming the greatest bulk of Mandaya oral tradition. Unhampered by modern means of communication to distract her/him, the magdadawot has a sharp memory of the intricacies of the ancient art of chanting the panayday (verses) of the epic poem. The dawot has several hullubaton (episodes), each of which takes several nights to chant. The language used in these oral traditions is archaic Mandaya, no longer spoken or clearly understood by the present generation. There is urgency to the task of recording and writing down what remains of the epic poem, since the singers and storytellers who have retained the memory and the skill of chanting and narrating the dawot and other forms of oral tradition in their original forms are getting fewer. The few who are gifted and skilled enough to transmit these ancient arts and wisdom are old and dying. Unlike the other Philippine epics (i.e., the Darangen, the Ulahingan, the Hinilawod, etc.), the Mandaya epic is yet to be recognized and given its rightful place among the recorded epics in Philippine folk literature. The general aim of this study was to put together an intangible heritage, the Mandaya hullubaton (episodes of the dawot), in order to preserve and promote appreciation for the oral traditions and cultural legacy of the Mandaya. It was able to record, transcribe, and translate four hullubaton of the folk epic into two languages, Visayan and English, to ensure understanding of their contents and significance among non-Mandaya audiences. Evident in the contents of the episodes are the cultural practices, ideals, life values, and traditions of the ancient Mandaya. While the conquests and adventures of the Mandaya heroes Lumungtad, Dilam, and Gambong highlight heroic virtues, the role of the Mandaya matriarch in family affairs is likewise stressed. The recording and the translation of the hullubaton and the dawot into commonly spoken languages will not only promote knowledge and understanding of Mandaya culture but will also stimulate in the members of this cultural community a sense of pride in their literature and culture. Knowledge about the indigenous cultural system and philosophy derived from their oral literature will serve as a springboard to further comparative research dealing with indigenous mores and belief systems among the different tribes in the Philippines, in Asia, in Africa, and in other countries of the world.Keywords: Dawot, epic poem, Mandaya, Philippine folk literature
Procedia PDF Downloads 445125 The Impact of Non State Actor’s to Protect Refugees in Kurdistan Region of Iraq
Authors: Rozh Abdulrahman Kareem
Abstract:
The displacement of individuals has become a common concern for international players. It mostly occurs in Islamic states, as religion is considered the most common cause of this form of displacement. Therefore, this thesis aims to depict the reality of the situation of refugees, particularly in the Kurdistan Region of Iraq (KRI), illustrating how they are treated and protected and whether that treatment meets the protection standard envisaged in the 1951 Refugee Convention. Overall, the aim is to touch on the issue of the protection of refugees here by non-governmental organizations and the government. In light of this, it focuses on the adequate protection of refugees in relation to refugee law. In the Middle East, including Iraq, there have been multiple reports of violations of refugee law and human rights. Protection involves providing physical security to the concerned parties, a functional administration with legal structures, and an infrastructural setup that could help citizens exercise their rights. The KRI has provided refugees with various benefits, including education, access to residency, and employment. It has also provided transitional support in various social dimensions, such as gender-based violence. The 1951 Convention on the Status of Refugees tried to resolve this problem, whereby the principle of ‘nonrefoulement’ under Article 33 was adopted. Nonrefoulement was enacted to protect refugees from forcible return to their countries of origin. However, the convention never addressed an unusual scenario regarding the application of this principle: extradition treaties. Even though some scholarly articles exist regarding the problems of refugees, the interplay between nonrefoulement and extradition treaties has never been explained in detail in the available books on refugee law and practice. Each year, millions of refugees seek protection from foreign countries for fear of being tortured, victimized, or executed. People seeking international protection are vulnerable and insecure. The main objective of protection is to provide security to people susceptible to inhuman treatment, distress, oppression, or other human rights violations if they are returned to their own countries. The refugee situation may get worse in the near future. Like several nations within the Middle East, Iraq is not a signatory to the globally acknowledged legal framework for the protection of refugees. Iraq's first refugee law, of 1971, was issued only for military or political causes. This law also establishes benefits such as the right to education and health services and the right to employment, just as for Iraqi nationals. The other legislative instrument, Law 21 of the Ministry of Migration of Iraq, widened the description of a migrant to incorporate the definition from the refugee resolution. Nonetheless, there is a lack of overall consistency in the protection provided under these laws regarding rights and entitlements. A Memorandum of Understanding was signed in October 2016 by the UNHCR and the Iraqi government to develop the protection of refugees. Under the terms of this MoU, the Iraqi government is obligated to provide identity documents to asylum seekers, while the UNHCR provides further guidance.Keywords: law, refugee, protection, Kurdistan
Procedia PDF Downloads 64124 Inverted Diameter-Limit Thinning: A Promising Alternative for Mixed Populus tremuloides Stands Management
Authors: Ablo Paul Igor Hounzandji, Benoit Lafleur, Annie DesRochers
Abstract:
Introduction: Populus tremuloides [Michx] regenerates rapidly and abundantly by root suckering after harvest, creating stands with interconnected stems. Pre-commercial thinning can be used to concentrate growth on fewer stems to reach merchantability faster than in un-thinned stands. However, conventional thinning methods are typically designed to reach even spacing between residual stems (1,100 stems ha⁻¹, evenly distributed), which can lead to treated stands consisting of weaker/smaller stems compared to the original stands. Considering the nature of P. tremuloides regeneration, with a large underground biomass of interconnected roots, inverted diameter-limit thinning, which keeps the most vigorous and largest stems regardless of their spatial distribution, could be more beneficial to post-thinning stand productivity because it would reduce the imbalance between roots and leaf area caused by thinning. Aims: This study aimed to compare stand and stem productivity of P. tremuloides stands thinned with a conventional thinning treatment (CT; 1,100 stems ha⁻¹, evenly distributed), two levels of inverted diameter-limit thinning (DL1 and DL2, keeping the largest 1,100 or 2,200 stems ha⁻¹, respectively, regardless of their spatial distribution), and an un-thinned control treatment. Because DL treatments can create substantial or frequent gaps in the thinned stands, we also aimed to evaluate the potential of this treatment to recreate mixed conifer-broadleaf stands by fill-planting Picea glauca seedlings. Methods: Three replicate 21-year-old sucker-regenerated aspen stands were thinned in 2010 according to four treatments: CT, DL1, DL2, and un-thinned control. Picea glauca seedlings were underplanted in gaps created by the DL1 and DL2 treatments. Stand productivity per hectare, stem quality (diameter and height, volume stem⁻¹), and survival and height growth of fill-planted P. glauca seedlings were measured 8 years post-treatment. Results: Productivity, volume, diameter, and height were better in the treated stands (CT, DL1, and DL2) than in the un-thinned control. Productivity of the CT and DL1 stands was similar, at 4.8 m³ ha⁻¹ year⁻¹. At the tree level, diameter and height of the trees in the DL1 treatment were 5% greater than those in the CT treatment. The average volume of trees in the DL1 treatment was 11% higher than in the CT treatment. Survival after 8 years of fill-planted P. glauca seedlings was 2% greater in the DL1 than in the DL2 treatment. The DL1 treatment also produced taller seedlings (+20 cm). Discussion: Results showed that DL treatments were effective in producing post-thinned stands with larger stems without affecting stand productivity. In addition, we showed that these treatments are suitable for introducing slower-growing conifer seedlings such as Picea glauca in order to re-create or maintain mixed stands despite the aggressive nature of P. tremuloides sucker regeneration.Keywords: Aspen, inverted diameter-limit, mixed forest, populus tremuloides, silviculture, thinning
Procedia PDF Downloads 148123 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and its distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between the reconstruction error and distinctiveness of images and their memorability scores. This suggests that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
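A minimal sketch of the two per-image measures described above, with a linear (PCA) autoencoder standing in for the VGG-based one so the example runs self-contained; the images and memorability scores are random stand-ins, not MemCat data.

```python
# Per-image reconstruction error and latent-space distinctiveness,
# correlated with memorability. A PCA "autoencoder" replaces the
# VGG-based one; all data below are random stand-ins.
import numpy as np
from scipy.stats import spearmanr
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
images = rng.normal(size=(500, 256))         # stand-in flattened images
memorability = rng.uniform(0, 1, size=500)   # stand-in memorability scores

pca = PCA(n_components=32).fit(images)       # linear encoder/decoder
latent = pca.transform(images)
recon = pca.inverse_transform(latent)

# Reconstruction error: per-image mean squared difference
recon_error = ((images - recon) ** 2).mean(axis=1)

# Distinctiveness: Euclidean distance to nearest neighbor in latent space
dists = np.linalg.norm(latent[:, None, :] - latent[None, :, :], axis=-1)
np.fill_diagonal(dists, np.inf)              # exclude self-distance
distinctiveness = dists.min(axis=1)

print("rho(error, memorability):", spearmanr(recon_error, memorability)[0])
print("rho(distinctiveness, memorability):",
      spearmanr(distinctiveness, memorability)[0])
```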
Procedia PDF Downloads 92122 Cement Matrix Obtained with Recycled Aggregates and Micro/Nanosilica Admixtures
Authors: C. Mazilu, D. P. Georgescu, A. Apostu, R. Deju
Abstract:
Cement mortars and concretes are some of the most used construction materials in the world, with global cement production expected to grow to approximately 5 billion tons by 2030. However, cement is an energy-intensive material, the cement industry being responsible for approximately 7% of the world's CO2 emissions. Also, natural aggregates are non-renewable, exhaustible resources which must be used efficiently. A way to reduce the negative impact on the environment is the use of additional hydraulically active materials as a partial substitute for cement in mortars and concretes and/or the use of recycled concrete aggregates (RCA) for the recovery of construction waste, according to EU Directive 2018/851. One of the most effective active hydraulic admixtures is microsilica and, more recently, with technological development at the nanometric scale, nanosilica. Studies carried out in recent years have shown that the introduction of SiO2 nanoparticles into the cement matrix improves its properties, even compared to microsilica. This is due to the very small size of the nanosilica particles (<100 nm) and their very large specific surface, which helps to accelerate cement hydration and acts as a nucleating agent to generate even more calcium hydrosilicate, which densifies and compacts the structure. Cementitious compositions containing recycled concrete aggregates (RCA) present, in general, inferior properties compared to those obtained with natural aggregates. Depending on the degree of replacement of natural aggregate, the workability of mortars and concretes with RCA decreases, mechanical resistance decreases, and drying shrinkage increases; all of this is determined, in particular, by the old mortar attached to the original aggregate in the RCA, which makes its porosity high and causes the mixture of components to require more water for preparation. The present study aims to use micro- and nanosilica to increase the performance of mortars and concretes obtained with RCA. The research focused on two types of cementitious systems: a special mortar composition used for encapsulating low-level radioactive waste (LLW), and a structural concrete composition, class C30/37, with the combination of exposure classes XC4+XF1 and slump class S4. The mortar was made with 100% recycled aggregate of the 0-5 mm fraction and, in the case of concrete, 30% recycled aggregate was used for the 4-8 and 8-16 mm fractions, according to EN 206, Annex E. The recycled aggregate was obtained from a concrete made specially for this study, which after 28 days was crushed with a Retsch jaw crusher and further separated into granulometric fractions by sieving. The partial replacement of cement was done progressively, in the case of the mortar composition, with microsilica (3, 6, 9, 12, 15 wt.%), nanosilica (0.75, 1.5, 2.25 wt.%), and mixtures of micro- and nanosilica. The optimal combination of silica, from the point of view of mechanical resistance, was later also used in the concrete composition. For the chosen cementitious compositions, the influence of micro- and/or nanosilica on the properties in the fresh state (workability, rheological characteristics) and hardened state (mechanical resistance, water absorption, freeze-thaw resistance, etc.) is highlighted.Keywords: cement, recycled concrete aggregates, micro/nanosilica, durability
Procedia PDF Downloads 68121 A Nonlinear Feature Selection Method for Hyperspectral Image Classification
Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo
Abstract:
For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon, given the difficulty of collecting training samples. Hence, many studies have developed feature selection methods, such as the F-score and HSIC (Hilbert-Schmidt Independence Criterion), to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we proposed a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with a different bandwidth for each feature. Moreover, it considers both the within-class separability and the between-class separability. A genetic algorithm was applied to tune these bandwidths so as to obtain the smallest within-class separability and the largest between-class separability simultaneously. This indicates the corresponding feature space is more suitable for classification, and the corresponding nonlinear classification boundary can separate the classes very well. These optimal bandwidths also show the importance of the bands for hyperspectral image classification. The reciprocals of these bandwidths can be viewed as band weights: the smaller the bandwidth, the larger the weight of the band, and the more important it is for classification. Hence, the descending order of the reciprocals of the bandwidths gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset; all non-background samples were used to form the testing dataset. A support vector machine was applied to classify these testing samples based on the selected feature subsets. According to the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by applying the proposed method, F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas F-score and HSIC select 168 and 217 features, respectively. Moreover, the classification accuracies increase dramatically using only the first few features. The classification accuracies with respect to feature subsets of 10, 20, 50, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the selected features (110 features) of the proposed method, the corresponding classification accuracy (0.84164) is close to the highest classification accuracy, 0.8795. For the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set, similar results were obtained. These results illustrate that the proposed method can efficiently find feature subsets to improve hyperspectral image classification. One can apply the proposed method to determine the suitable feature subset first, according to specific purposes; researchers can then use only the corresponding sensors to obtain the hyperspectral image and classify the samples. This can not only improve the classification performance but also reduce the cost of obtaining hyperspectral images.Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine
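A minimal sketch of the per-feature-bandwidth (generalized RBF) kernel and the band ranking by reciprocal bandwidth described above; the genetic-algorithm tuning loop is omitted, and the data and bandwidths are random stand-ins.

```python
# Generalized RBF kernel with one bandwidth per feature, a simple
# separability score, and band ranking by reciprocal bandwidth.
# In practice the bandwidths sigma would come from the genetic algorithm.
import numpy as np

def generalized_rbf(X, Z, sigma):
    """k(x, z) = exp(-sum_d (x_d - z_d)^2 / (2 * sigma_d^2))"""
    d2 = (((X[:, None, :] - Z[None, :, :]) ** 2) / (2 * sigma**2)).sum(-1)
    return np.exp(-d2)

def separability(X, y, sigma):
    """Mean within-class minus mean between-class kernel similarity."""
    K = generalized_rbf(X, X, sigma)
    same = y[:, None] == y[None, :]
    np.fill_diagonal(same, False)     # ignore self-similarity
    np.fill_diagonal(K, 0.0)
    return K[same].mean() - K[~same].mean()

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 6))         # stand-in for 6 spectral bands
y = rng.integers(0, 3, size=120)      # stand-in class labels
sigma = rng.uniform(0.5, 5.0, size=6) # stand-in tuned bandwidths

print("separability:", separability(X, y, sigma))
band_order = np.argsort(1.0 / sigma)[::-1]  # large weight = important band
print("band importance order:", band_order)
```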
Procedia PDF Downloads 265120 Hiveopolis - Honey Harvester System
Authors: Erol Bayraktarov, Asya Ilgun, Thomas Schickl, Alexandre Campo, Nicolis Stamatios
Abstract:
Traditional means of harvesting honey are often stressful for honeybees: each time honey is collected, a portion of the colony can die. As a consequence, the colonies’ resilience to environmental stressors decreases, which ultimately contributes to the global problem of honeybee colony losses. As part of the project HIVEOPOLIS, we design and build a different kind of beehive, incorporating technology to reduce the negative impacts of beekeeping procedures, including honey harvesting. A first step in maintaining more sustainable honey harvesting practices is to design honey storage frames that can automate the honey collection procedures. This way, beekeepers save time, money, and labor by not having to open the hive and remove frames, and the honeybees' nest stays undisturbed. This system shows promising features, e.g., high reliability, which could be a key advantage compared to current honey harvesting technologies. Our original concept of fractional honey harvesting has been to encourage the removal of honey only from "safe" locations and at levels that would leave the bees enough high-nutritional-value honey. In this abstract, we describe the current state of our honey harvester, its technology, and areas for improvement. The honey harvester works by separating the honeycomb cells away from the comb foundation; the movement and the elastic nature of honey support this functionality. The honey sticks to the foundation because of surface tension forces amplified by the geometry. In the future, by monitoring the weight, and therefore the capped honey cells, on our honey harvester frames, we will be able to remove honey as soon as the weight measuring system reports that the comb is ready for harvesting. Higher-viscosity or crystallized honey causes challenges in temperate locations when a smooth flow of honey is required. We use resistive heaters to soften the propolis and wax and unglue the moving parts during extraction. These heaters can also melt the honey slightly to the flow state needed. Precise control of these heaters allows us to operate the device for several purposes. We use ‘Nitinol’ springs that are activated by heat as an actuation method. Unlike conventional stepper or servo motors, which we also evaluated throughout development, the springs and heaters take up less space and reduce the overall system complexity. Honeybee acceptance was unknown until we actually inserted a device inside a hive. We observed bees not only walking on the artificial comb but also building wax, filling gaps with propolis, and storing honey. This also shows that bees don’t mind living in spaces and hives built from 3D-printed materials. We do not yet have data to prove that the plastic materials do not affect the chemical composition of the honey. We succeeded in automatically extracting stored honey from the device, demonstrating a useful extraction flow and overall effective operation.Keywords: honey harvesting, honeybee, hiveopolis, nitinol
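A hypothetical sketch of the weight-triggered extraction cycle described above; the threshold, the softening temperature, and the hardware callback API are all assumptions for illustration, not the HIVEOPOLIS firmware.

```python
# Hypothetical weight-based harvest trigger: soften wax/propolis with
# the resistive heaters, then actuate the heat-activated Nitinol
# springs once the comb weight indicates enough capped honey.
# All thresholds, temperatures, and callbacks below are assumptions.
HARVEST_WEIGHT_G = 1500.0   # assumed weight of a harvest-ready comb
SOFTEN_TEMP_C = 40.0        # assumed wax/propolis softening temperature

def maybe_harvest(read_weight, set_heater, pull_springs):
    """Poll the frame scale and run one extraction cycle when ready."""
    if read_weight() < HARVEST_WEIGHT_G:
        return False               # comb not full yet; leave bees alone
    set_heater(SOFTEN_TEMP_C)      # unglue propolis/wax, ease honey flow
    pull_springs()                 # heat-activated Nitinol actuation
    set_heater(0.0)                # cool down after the cycle
    return True

# Example with stub hardware callbacks:
if maybe_harvest(lambda: 1620.0, lambda t: None, lambda: None):
    print("extraction cycle completed")
```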
Procedia PDF Downloads 109119 The Psycho-Linguistic Aspect of Translation Gaps in Teaching English for Specific Purposes
Authors: Elizaveta Startseva, Elena Notina, Irina Bykova, Valentina Ulyumdzhieva, Natallia Zhabo
Abstract:
With the various existing models of intercultural communication that contain a vast number of stages for foreign language acquisition, there is a need for conscious perception of the foreign culture. Such a process is associated with the emergence of linguistic conflict and with students' consistent desire to resolve the problem of language differences, along with cultural discrepancies. The aim of this study is to present modern ways and methods of removing psycholinguistic conflict through skills development in professional translation and intercultural communication. The study was conducted in groups of first- to fourth-year students of the Medical Institute and the Agro-Technological Institute of RUDN University. In the course of training, students acquired knowledge in such disciplines as basic English grammar and vocabulary, phonetics, lexicology, introduction to linguistics, theory of translation, and annotating and referencing media texts and texts in their specialty. The students learned to present their research work and participated in university and external conferences with their reports and presentations. Common strategies for removing linguistic and cultural conflict can be attributed to the development of such abilities of a language personality as commitment to communication and cooperation, the formation of cultural awareness and empathy towards other cultures, realistic self-esteem, emotional stability, tolerance, etc. The process of mastering a foreign language and the culture of the target language leads to a reduplication of linguistic identity and the successive formation of the so-called 'secondary linguistic personality.' In our study, we tried to approach the problem comprehensively, focusing on translation gaps in technical and non-technical language, which still lack a typology that could classify all of the lacunas on a single principle. When acquiring the background knowledge, students learn to overcome the difficulties posed by the nation-specific and linguistic differences of the cultures in contact, i.e., to eliminate the gaps (to fill them in and compensate for them). Compensation of gaps is a means of fixing them and the initial phase of their elimination; it is followed, in some cases but not in others, by the filling of semantic voids (plenus). The concept of plenus occurs in most cases of translation gaps, for example in transcription and transliteration (of interculturalisms and exoticisms) and in replication (reproduction of the morphemic structure of words or idioms). In all the above cases, the task of the translator is to ensure an identical response from the receptors of the original and translated texts, since any statement is created with the goal of achieving a communicative effect; hence, pragmatic potential is the most important part of its content. The practical value of our work lies in improving the methodology of teaching English for specific purposes on the basis of the psycholinguistic concept of the secondary language personality.Keywords: lacuna, language barrier, plenus, secondary language personality
Procedia PDF Downloads 291118 Shock-Induced Densification in Glass Materials: A Non-Equilibrium Molecular Dynamics Study
Authors: Richard Renou, Laurent Soulard
Abstract:
Lasers are widely used in glass material processing, from waveguide fabrication to channel drilling. The gradual damage of glass optics under UV lasers is also an important issue to be addressed. Glass materials (including metallic glasses) can undergo permanent densification under laser-induced shock loading. Despite increased interest in interactions between lasers and glass materials, little is known about the structural mechanisms involved under shock loading. For example, the densification process in silica glasses occurs between 8 GPa and 30 GPa; above 30 GPa, the glass material returns to its original density after relaxation. Investigating these unusual mechanisms in silica glass will provide an overall better understanding of glass behaviour. Non-Equilibrium Molecular Dynamics (NEMD) simulations were carried out in order to gain insight into the microscopic structure of silica glass under shock loading. The shock was generated by the use of a piston impacting the glass material at high velocity (from 100 m/s up to 2 km/s). Periodic boundary conditions were used in the directions perpendicular to the shock propagation to model an infinite system; one-dimensional shock propagations were therefore studied. Simulations were performed with the STAMP code developed by the CEA. A very specific structure is observed in a silica glass: oxygen atoms around silicon atoms are organized in tetrahedrons. Those tetrahedrons are linked and tend to form rings inside the structure. A significant amount of empty cavities is also observed in glass materials. In order to understand how shock loading impacts the overall structure, the tetrahedrons, the rings and the cavities were thoroughly analysed. An elastic behaviour is observed when the shock pressure is below 8 GPa. This is consistent with the Hugoniot Elastic Limit (HEL) of 8.8 GPa estimated experimentally for silica glasses. Behind the shock front, the ring structure and the cavity distribution are impacted: the ring volume is smaller, and most cavities disappear with increasing shock pressure. However, the tetrahedral structure is not affected. The elasticity of the glass structure is therefore related to ring shrinking and cavity closing. Above the HEL, the shock pressure is high enough to impact the tetrahedral structure. An increasing number of hexahedrons and octahedrons are formed with increasing pressure. The large rings break to form smaller ones. The cavities, however, are not impacted, as most cavities are already closed under an elastic shock. After the material relaxation, a significant amount of hexahedrons and octahedrons is still observed, and most of the cavities remain closed. The overall ring distribution after relaxation is similar to the equilibrium distribution. The densification process is therefore related to two structural mechanisms: a change in the coordination of silicon atoms and cavity closing. To sum up, non-equilibrium molecular dynamics simulations were carried out to investigate silica behaviour under shock loading. Analysing the structure led to interesting conclusions about the elastic and densification mechanisms in glass materials. This work will be completed with a detailed study of the mechanism occurring above 30 GPa, where no sign of densification is observed after the material relaxation.Keywords: densification, molecular dynamics simulations, shock loading, silica glass
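One analysis step described above, classifying silicon atoms by their oxygen coordination (4 for tetrahedrons, 5 for hexahedrons, 6 for octahedrons, in the authors' naming), can be sketched as follows; the cutoff, the random coordinates, and the omission of periodic boundary wrapping are simplifying assumptions.

```python
# Classify Si atoms by oxygen coordination from a snapshot of positions.
# In a real trajectory the histogram would shift from 4 toward 5 and 6
# under shock; positions here are random stand-ins, and periodic
# boundary wrapping is omitted for brevity.
import numpy as np
from collections import Counter

CUTOFF = 2.0  # assumed Si-O bond cutoff, in angstroms

rng = np.random.default_rng(3)
si = rng.uniform(0, 20, size=(50, 3))   # stand-in Si positions
o = rng.uniform(0, 20, size=(100, 3))   # stand-in O positions

# Count O neighbors within the cutoff for every Si atom
dists = np.linalg.norm(si[:, None, :] - o[None, :, :], axis=-1)
coordination = (dists < CUTOFF).sum(axis=1)

print(Counter(coordination.tolist()))  # histogram of Si coordination numbers
```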
Procedia PDF Downloads 222117 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization
Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon
Abstract:
The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with durations of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds were developed. To take full advantage of these fast signals, a novel scheme for recovering the signal waveform, based on ideas from Tikhonov regularization (TR) and Compressive Sensing methods, is presented. The prior distribution of the sparse representation is evaluated based on a linear transformation of the training set of signal waveforms using the Principal Component Analysis (PCA) decomposition. Besides the advantage of including the additional information from training signals, a further benefit of the TR approach is that the problem of signal recovery has an optimal solution which can be determined explicitly. Moreover, from Bayes theory, the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial for introducing and proving the formula for calculating the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector built from a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed. It is shown that the PCA basis offers a high level of information compression and accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered waveform of the signals, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated based on information from four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After applying the information from four voltage levels to recover the signal waveform, the spatial resolution improves to 0.94 cm. Moreover, this result is only slightly worse than the one evaluated using the original raw signal, for which the spatial resolution is equal to 0.93 cm. This is very important information, since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction of the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest may be utilized.Keywords: plastic scintillators, positron emission tomography, statistical analysis, tikhonov regularization
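A minimal numerical sketch of the closed-form Tikhonov recovery described here, with a PCA prior learned from training waveforms; the signal model, the sizes, and the noise level are illustrative assumptions, not the actual J-PET processing chain.

```python
# Tikhonov-regularized waveform recovery in a PCA basis: the waveform
# is expanded in components learned from training signals, and the
# regularized least-squares solution is computed in closed form.
# All signals and dimensions below are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(4)
T, K, M = 200, 10, 8   # waveform length, PCA components, samples kept
NOISE = 0.05           # assumed measurement noise level

# Training waveforms -> PCA basis B and per-component prior variances
train = rng.normal(size=(1000, T)).cumsum(axis=1)  # stand-in signals
mean = train.mean(axis=0)
_, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
B = Vt[:K].T                           # T x K PCA basis
prior_var = (s[:K] ** 2) / len(train)  # prior variance per component

# Measurement: the true waveform observed at M points (the analog of
# the four-threshold time samples), with additive noise
true = rng.normal(size=(1, T)).cumsum(axis=1).ravel()
idx = np.sort(rng.choice(T, size=M, replace=False))
y = true[idx] + NOISE * rng.normal(size=M)

# Closed-form Tikhonov (MAP) solution for the PCA coefficients:
#   c_hat = (A^T A / noise^2 + diag(1/prior_var))^-1 A^T (y - mean) / noise^2
A = B[idx]                             # M x K measurement matrix
reg = np.diag(1.0 / prior_var)
c_hat = np.linalg.solve(A.T @ A / NOISE**2 + reg,
                        A.T @ (y - mean[idx]) / NOISE**2)
recovered = mean + B @ c_hat
print("recovery RMSE:", np.sqrt(((recovered - true) ** 2).mean()))
```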
Procedia PDF Downloads 447