Search results for: ABC equipment classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3568

328 Illegal Anthropogenic Activity Drives Large Mammal Population Declines in an African Protected Area

Authors: Oluseun A. Akinsorotan, Louise K. Gentle, Md. Mofakkarul Islam, Richard W. Yarnell

Abstract:

High levels of anthropogenic activity such as habitat destruction, poaching and encroachment into natural habitat have resulted in significant global wildlife declines. In order to protect wildlife, many protected areas such as national parks have been created. However, it is argued that many protected areas are protected in name only and are often exposed to continued, and often illegal, anthropogenic pressure. In West African protected areas, declines of large mammals were documented between 1962 and 2008. This study aimed to produce occupancy estimates of the remaining large mammal fauna in the third largest National Park in Nigeria, Old Oyo, to compare these estimates with historic estimates, and to quantify levels of illegal anthropogenic activity using a multi-disciplinary approach. Large mammal populations and levels of illegal anthropogenic activity were assessed using empirical field data (camera trapping and transect surveys) in combination with data from questionnaires completed by local villagers and park rangers. Four of the historically recorded species in the park, lion (Panthera leo), hunting dog (Lycaon pictus), elephant (Loxodonta africana) and buffalo (Syncerus caffer), were neither detected during field studies nor reported by respondents. In addition, occupancy estimates for hunters and illegal grazers were higher than those for the majority of large mammal species inside the park. This finding was reinforced by responses from the villagers and rangers, whose perception was that large mammal densities in the park were declining and that a large proportion of the local people were entering the park to hunt wild animals and graze their domestic livestock. Our findings also suggest that widespread poverty, a lack of alternative livelihood opportunities, a culture of consuming bushmeat, a lack of education and awareness of the value of protected areas, and weak law enforcement are some of the reasons for the illegal activity.
Law enforcement authorities were often constrained by insufficient on-site personnel and a lack of modern equipment and infrastructure to deter illegal activities. We conclude that there is a need to address the issue of illegal hunting and livestock grazing, via provision of alternative livelihoods, in combination with community outreach programmes that aim to improve conservation education and awareness and develop the capacity of the conservation authorities in order to achieve conservation goals. Our findings have implications for the conservation management of all protected areas that are available for exploitation by local communities.
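
The occupancy estimates discussed above can be illustrated with a naive occupancy calculation, i.e. the fraction of survey sites (camera-trap stations or transects) at which a species or activity was detected. The detection histories and species below are hypothetical, and real occupancy models additionally correct for imperfect detection; this is only a minimal sketch of the quantity being compared.

```python
# Naive occupancy sketch: fraction of survey sites with at least one detection.
# Detection histories below are hypothetical (1 = detected at site, 0 = not);
# real occupancy models also estimate and correct for detection probability.
histories = {
    "hunters":         [1, 1, 0, 1, 1, 0, 1, 1],
    "illegal_grazers": [1, 0, 1, 1, 0, 1, 1, 0],
    "roan_antelope":   [0, 1, 0, 0, 1, 0, 0, 0],  # hypothetical large mammal
}

naive_occupancy = {sp: sum(h) / len(h) for sp, h in histories.items()}
print(naive_occupancy)
```

Under these toy numbers, illegal activities occupy more sites than the mammal species, mirroring the pattern reported in the abstract.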

Keywords: camera trapping, conservation, extirpation, illegal grazing, large mammals, national park, occupancy estimates, poaching

Procedia PDF Downloads 271
327 Budgetary Performance Model for Managing Pavement Maintenance

Authors: Vivek Hokam, Vishrut Landge

Abstract:

An ideal maintenance program for an industrial road network is one that would maintain all sections at a sufficiently high level of functional and structural condition. However, due to constraints such as budget, manpower and equipment, it is not possible to carry out maintenance on all the needy industrial road sections within a given planning period. A rational and systematic priority scheme needs to be employed to select and schedule industrial road sections for maintenance. Priority analysis is a multi-criteria process that determines the best ranking of sections for maintenance based on several factors. Priority setting requires difficult decisions: is it more important to repair a section in poor functional condition (e.g., one giving an uncomfortable ride) or one in poor structural condition, i.e., a section in danger of becoming structurally unsound? It would seem, therefore, that any rational priority-setting approach must consider the relative importance of the functional and structural condition of each section. Existing maintenance priority indices and pavement performance models tend to focus mainly on pavement condition, traffic criteria, etc. There is a need to develop a model suited to the limited budget provisions for pavement maintenance. Linear programming is one of the most popular and widely used quantitative techniques. A linear programming model provides an efficient method for determining an optimal decision chosen from a large number of possible decisions. The optimum decision is one that meets a specified management objective, subject to various constraints and restrictions. The objective here is mainly the minimization of the maintenance cost of roads in an industrial area. In order to determine the objective function for the distress model, realistic data must be fitted into the formulation.
Each type of repair is quantified over a number of stretches, with 1000 m taken as one stretch; the section under study is 3750 m long. These quantities enter an objective function that maximizes the number of repairs per stretch. The distresses observed in this section are potholes, surface cracks, rutting and ravelling. The distress data are measured manually by observing each distress level on a stretch of 1000 m. The maintenance and rehabilitation measures currently followed are based on subjective judgment; hence, there is a need to adopt a scientific approach in order to use the limited resources effectively. It is also necessary to determine pavement performance and deterioration prediction relationships more accurately, together with the economic benefits to the road network in terms of vehicle operating cost. The road network should deliver the best results achievable from the available funds. In this paper, the objective function for the distress model is determined by linear programming, and a deterioration model that accounts for overloading is discussed.
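
The paper formulates this as a linear program; the sketch below illustrates the same budget-constrained repair-maximization idea with a brute-force search over integer repair counts. The unit costs, needed-repair counts, and budget are hypothetical illustrative values, not data from the study.

```python
from itertools import product

# Hypothetical per-stretch repair data (illustrative, not from the study):
# unit repair cost and maximum needed repairs for each distress type
# observed on a 3750 m section split into 1000 m stretches.
distress = {
    "potholes":  {"cost": 120, "needed": 4},
    "cracks":    {"cost": 60,  "needed": 5},
    "rutting":   {"cost": 200, "needed": 3},
    "ravelling": {"cost": 90,  "needed": 4},
}
budget = 700  # hypothetical maintenance budget

names = list(distress)
best_plan, best_repairs = None, -1
# Exhaustive search over feasible integer repair counts
# (a toy stand-in for the linear-programming solution in the paper).
for counts in product(*(range(distress[n]["needed"] + 1) for n in names)):
    cost = sum(c * distress[n]["cost"] for c, n in zip(counts, names))
    if cost <= budget and sum(counts) > best_repairs:
        best_repairs, best_plan = sum(counts), dict(zip(names, counts))

print(best_plan, best_repairs)
```

A real instance would be solved with an LP/ILP solver rather than enumeration, but the structure (maximize repairs subject to a budget constraint) is the same.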

Keywords: budget, maintenance, deterioration, priority

Procedia PDF Downloads 173
326 An Absolute Femtosecond Rangefinder for Metrological Support in Coordinate Measurements

Authors: Denis A. Sokolov, Andrey V. Mazurkevich

Abstract:

In the modern world, there is an increasing demand for highly precise measurements in various fields, such as aircraft manufacturing, shipbuilding, and rocket engineering. This has resulted in the development of measuring instruments capable of measuring the coordinates of objects within a range of up to 100 meters, with an accuracy of up to one micron. The calibration process for such optoelectronic measuring devices (trackers and total stations) involves comparing their measurement results to a reference measurement on a linear or spatial basis. The reference used in such measurements can be a reference base or a reference rangefinder capable of measuring angle increments (EDM); the base serves as a set of reference points for this purpose. The concept of an EDM for replicating the unit of measurement has been implemented on a mobile platform, which allows angular changes in the direction of the laser radiation in two planes. To determine the distance to an object, a high-precision interferometer of the authors' own design is employed. The laser radiation travels to corner reflectors, which form a spatial reference with precisely known positions. When the femtosecond pulses from the reference arm and the measuring arm coincide, an interference signal is created, repeating at the frequency of the laser pulses. The distance between reference points determined by the interference signals is calculated in accordance with recommendations from the International Bureau of Weights and Measures for the indirect measurement of the time of flight of light according to the definition of the meter. This distance is D/2 = c/(2nF), approximately 2.5 meters, where c is the speed of light in a vacuum, n is the refractive index of the medium, and F is the repetition frequency of the femtosecond pulses.
The achieved Type A measurement uncertainty for distances to reflectors 64 m away (N·D/2, where N is an integer), spaced 1 m apart relative to each other, does not exceed 5 microns. The angular uncertainty is calculated theoretically, since standard high-precision ring encoders will be used; it is not a focus of this study. The Type B uncertainty components are not taken into account either, as the components that contribute most do not depend on the selected coordinate measuring method. This technology is being explored in the context of laboratory applications under controlled environmental conditions, where an advantage in terms of accuracy can be achieved. In general, the EDM tests showed high accuracy, and theoretical calculations and experimental studies on an EDM prototype have shown that the Type A uncertainty of distance measurements to reflectors can be less than 1 micrometer. The results of this research will be used to develop a highly accurate mobile absolute rangefinder designed for the calibration of high-precision laser trackers and laser rangefinders, as well as other equipment, using a 64-meter laboratory comparator as a reference.
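
The reference spacing D/2 = c/(2nF) can be checked numerically; the repetition frequency below is an assumed value chosen to reproduce the roughly 2.5 m spacing quoted in the abstract, and n is an approximate refractive index for air.

```python
# Reference spacing for the femtosecond EDM, from D/2 = c / (2 n F).
# F is an assumed pulse-repetition frequency (not stated in the abstract),
# chosen so that D/2 comes out near the quoted 2.5 m; n ~ 1.00027 for air.
c = 299_792_458.0  # speed of light in vacuum, m/s (exact by definition)
n = 1.00027        # refractive index of air (approximate)
F = 59.96e6        # femtosecond pulse repetition frequency, Hz (assumed)

half_D = c / (2 * n * F)
print(f"D/2 = {half_D:.4f} m")  # close to the ~2.5 m quoted in the text
```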

Keywords: femtosecond laser, pulse correlation, interferometer, laser absolute range finder, coordinate measurement

Procedia PDF Downloads 27
325 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a strong ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to analysis. Such data contain millions of short sequence reads from fragmented DNA, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use and time-consuming, and they rely on a large number of parameters that introduce variability and affect the estimation of microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most such methods use the concept of word and sentence embeddings, which creates a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach, metagenome2vec, that classifies patients into disease groups directly from raw metagenomic reads. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which a sequence is most likely to come; and (iv) training a multiple instance learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to each genome's influence on the prediction.
Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further work, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
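
Step (i) of the approach, building a k-mer vocabulary from raw reads, can be sketched as follows; the reads and the choice k=4 are illustrative stand-ins for streamed fastq data, and the resulting tokens would feed a word2vec-style embedding model.

```python
# Step (i) of metagenome2vec: build a k-mer vocabulary from raw reads.
# Toy reads and k=4 are illustrative; real pipelines stream fastq files
# and pass the k-mer tokens to an embedding model.
from collections import Counter

def kmers(read, k=4):
    """Return all overlapping k-mers of a read."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

reads = ["ATGCGTAC", "GCGTACGT"]  # stand-ins for fastq sequences
vocab = Counter(kmer for r in reads for kmer in kmers(r))
print(vocab.most_common(3))
```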

Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine

Procedia PDF Downloads 101
324 Comparison of Two Strategies in Thoracoscopic Ablation of Atrial Fibrillation

Authors: Alexander Zotov, Ilkin Osmanov, Emil Sakharov, Oleg Shelest, Aleksander Troitskiy, Robert Khabazov

Abstract:

Objective: Thoracoscopic surgical ablation of atrial fibrillation (AF) can be performed with two technologies: the first strategy uses the AtriCure device (bipolar, non-irrigated, non-clamping), and the second uses the Medtronic device (bipolar, irrigated, clamping). The study presents a comparative analysis of clinical outcomes of the two strategies in thoracoscopic ablation of AF using the AtriCure vs. Medtronic devices. Methods: In a two-center study, 123 patients underwent thoracoscopic ablation of AF in the period from 2016 to 2020. Patients were divided into two groups: the first group comprised patients treated with the AtriCure device (N=63), and the second those treated with the Medtronic device (N=60). Patients were comparable in age, gender, and initial severity of their condition. Group 1 was 65% male with a median age of 57 years, while group 2 was 75% male with a median age of 60 years. Group 1 included patients with paroxysmal AF (14.3%), persistent AF (68.3%), and long-standing persistent AF (17.5%); in group 2 the proportions were 13.3%, 13.3%, and 73.3%, respectively. Median ejection fraction and indexed left atrial volume were 63% and 40.6 ml/m2 in group 1, and 56% and 40.5 ml/m2 in group 2. In addition, group 1 consisted of 39.7% of patients with chronic heart failure (NYHA Class II) and 4.8% with chronic heart failure (NYHA Class III), versus 45% and 6.7% in group 2, respectively. Follow-up consisted of laboratory tests, chest X-ray, ECG, 24-hour Holter monitoring, and cardiopulmonary exercise testing. Duration of freedom from AF, the distant mortality rate, and the prevalence of cerebrovascular events were compared between the two groups. Results: Exit block was achieved in all patients. According to the Clavien-Dindo classification of surgical complications, the fraction of adverse events was 14.3% and 16.7% in the 1st and 2nd groups, respectively.
The mean follow-up period was 50.4 (31.8; 64.8) months in the 1st group and 30.5 (14.1; 37.5) months in the 2nd group (P=0.0001). In group 1, total freedom from AF was achieved in 73.3% of patients, of whom 25% had additional antiarrhythmic drug (AAD) therapy or catheter ablation (CA); in group 2 the figures were 90% and 18.3%, respectively (for total freedom from AF, P<0.02). At follow-up, the distant mortality rate was 4.8% in the 1st group, while in the 2nd there were no fatal events. The prevalence of cerebrovascular events was higher in the 1st group than in the 2nd (6.7% vs. 1.7%, respectively). Conclusions: Despite the relatively shorter follow-up of the 2nd group, the strategy using the Medtronic device showed quite encouraging results. Further research is needed to evaluate the effectiveness of this strategy over the long term.

Keywords: atrial fibrillation, clamping, ablation, thoracoscopic surgery

Procedia PDF Downloads 84
323 Correlation Between the Toxicity Grade of the Adverse Effects in the Course of the Immunotherapy of Lung Cancer and Efficiency of the Treatment in Anti-PD-L1 and Anti-PD-1 Drugs - Own Clinical Experience

Authors: Anna Rudzińska, Katarzyna Szklener, Pola Juchaniuk, Anna Rodzajweska, Katarzyna Machulska-Ciuraj, Monika Rychlik-Grabowska, Michał Łoziński, Agnieszka Kolak-Bruks, Sławomir Mańdziuk

Abstract:

Introduction: Immune checkpoint inhibition (ICI) belongs to the modern forms of anti-cancer treatment. Due to constant development and continuous research in the field of ICI, many aspects of the treatment are yet to be discovered. One of the less researched aspects of ICI treatment is the influence of adverse effects on the treatment success rate. It is suspected that adverse events in the course of ICI treatment indicate a better response rate and correlate with longer progression-free survival. Methodology: The research was conducted using the documentation of the Department of Clinical Oncology and Chemotherapy. Data of patients with a lung cancer diagnosis who were treated between 2019 and 2022 and received ICI treatment were analyzed. Results: Of the 133 patients whose data were analyzed, the vast majority were diagnosed with non-small cell lung cancer. The majority of the patients did not experience adverse effects. Most adverse effects reported were classified as grade 1 or grade 2 according to the CTCAE classification, and most involved skin, thyroid, and liver toxicity. Statistical significance was found for the association of adverse effect incidence with overall survival (OS) and progression-free survival (PFS) (p=0.0263), and for the time of toxicity onset with OS and PFS (p<0.001). The number of toxicity sites was statistically significant for prolonged PFS (p=0.0315). The highest OS was noted in the group presenting grade 1 and grade 2 adverse effects. Conclusions: The obtained results confirm prolonged OS and PFS in patients burdened with adverse effects, mostly in the group presenting mild to intermediate (grade 1 and grade 2) adverse effects and late toxicity onset. At the same time, our results suggest a correlation between the treatment response rate and both the toxicity grade of the adverse effects and the time of toxicity onset.
Similar results were obtained in several comparable studies, which showed a tendency toward better survival with mild and moderate toxicity; meanwhile, other studies in the area suggested an advantage in patients with any toxicity, regardless of grade. These contradictory results strongly suggest the need for further research on this topic, with a focus on additional factors influencing the course of the treatment.

Keywords: adverse effects, immunotherapy, lung cancer, PD-1/PD-L1 inhibitors

Procedia PDF Downloads 64
322 Segmented Pupil Phasing with Deep Learning

Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan

Abstract:

Context: The segmented-telescope concept is unavoidable when building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows a large telescope to fit within a reduced volume (JWST) or an even smaller one (a standard CubeSat). CubeSats impose tight constraints on the available computational budget and payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet one of the challenges of wavefront sensing is the non-linearity between the image intensity and the phase aberrations. Moreover, for Earth observation, the object is unknown and not repeatable. Recently, several studies have suggested neural networks (NNs) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear, image-friendly problem solvers. Aims: We study in this paper the prospect of using NNs to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without a dedicated wavefront sensor. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution. In order to reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope's focal-plane detector, used for imaging, will also serve as the wavefront sensor. In this work, we study a point source, i.e.,
the point spread function (PSF) of the optical system, as input to a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual wavefront error, i.e., below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computation time of less than 30 ms, which translates to a small computational burden. These results open the way to further study of larger aberrations and noise.
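
The non-linear relation between segment phases and focal-plane intensity, which motivates the neural-network approach, can be illustrated with a toy piston-only model of the on-axis far-field intensity. This is a simplification for illustration only, not the VGG-net method of the paper.

```python
# On-axis far-field intensity of a segmented pupil with piston errors:
# I(0) is proportional to |sum_j exp(i * phi_j)|^2, a toy model of why the
# focal-plane intensity is a non-linear function of the segment phases.
import cmath

def strehl(phases_rad):
    """Normalized on-axis intensity (Strehl-like ratio) for segment pistons."""
    n = len(phases_rad)
    field = sum(cmath.exp(1j * p) for p in phases_rad)
    return abs(field) ** 2 / n ** 2

print(strehl([0.0, 0.0, 0.0]))       # co-phased segments: maximum intensity
print(strehl([0.0, cmath.pi, 0.0]))  # one segment out of phase: sharp drop
```

The same intensity can arise from different phase combinations, which is precisely the degeneracy a learned non-linear estimator must resolve.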

Keywords: wavefront sensing, deep learning, deployable telescope, space telescope

Procedia PDF Downloads 79
321 Fiberoptic Intubation Skills Training Improves Emergency Medicine Resident Comfort Using Modality

Authors: Nicholus M. Warstadt, Andres D. Mallipudi, Oluwadamilola Idowu, Joshua Rodriguez, Madison M. Hunt, Soma Pathak, Laura P. Weber

Abstract:

Endotracheal intubation is a core procedure performed by emergency physicians. The procedure is high-risk, and failure results in substantial morbidity and mortality. Fiberoptic intubation (FOI) is the standard of care in difficult airway protocols, yet no widespread practice exists for training emergency medicine (EM) residents in the technical acquisition of FOI skills. Simulation on mannequins is commonly utilized to teach advanced airway techniques. As part of a program to introduce FOI into our ED, residents received hands-on training in FOI as part of our weekly resident education conference. We hypothesized that prior to the hands-on training, residents had little experience with FOI and were uncomfortable with the fiberoptic modality. We further hypothesized that resident comfort with FOI would increase following the training. The education intervention consisted of two hours of focused airway teaching and skills acquisition for PGY 1-4 residents. One hour was dedicated to four case-based learning stations focusing on standard, pediatric, facial trauma, and burn airways. Direct, video, and fiberoptic airway equipment was available to use at the residents’ discretion to intubate mannequins at each station. The second hour involved direct instructor supervision and immediate feedback during deliberate practice of FOI on a mannequin. Prior to the hands-on training, a pre-survey was sent via email to all EM residents at NYU Grossman School of Medicine. The pre-survey asked how many FOIs residents had performed in the ED, OR, and on a mannequin. The pre-survey and a post-survey asked residents to rate their comfort with FOI on a 5-point Likert scale ("extremely uncomfortable", "somewhat uncomfortable", "neither comfortable nor uncomfortable", "somewhat comfortable", and "extremely comfortable"). The post-survey was administered on site immediately following the training.
A two-sample chi-square test of independence was performed to compare self-reported resident comfort on the pre- and post-survey (α ≤ 0.05). Thirty-six of a total of 70 residents (51.4%) completed the pre-survey. Of the pre-survey respondents, 34 residents (94.4%) had performed 0, 1 resident (2.8%) had performed 1, and 1 resident (2.8%) had performed 2 FOIs in the ED. Twenty-five residents (69.4%) had performed 0, 6 residents (16.7%) had performed 1, 2 residents (5.6%) had performed 2, 1 resident (2.8%) had performed 3, and 2 residents (5.6%) had performed 4 FOIs in the OR. Seven residents (19.4%) had performed 0, and 16 residents (44.4%) had performed 5 or more FOIs on a mannequin. Twenty-nine residents (41.4%) attended the hands-on training, and 27 of the 29 (93.1%) completed the post-survey. Self-reported resident comfort with FOI increased significantly from the pre-survey to the post-survey (p = 0.00034). Twenty-one of 27 residents (77.8%) reported being “somewhat comfortable” or “extremely comfortable” with FOI on the post-survey, compared to 9 of 35 residents (25.7%) on the pre-survey. We show that dedicated FOI training is associated with increased learner comfort with the technique. Future directions include studying technical competency, skill retention, translation to direct patient care, and the optimal frequency and methodology of future FOI education.
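
As a sketch, the reported comparison can be reproduced by collapsing the Likert responses into comfortable vs. not comfortable and applying a Pearson chi-square test to the resulting 2x2 table. The authors' exact table construction is not given, so the statistic below need not match their p = 0.00034.

```python
# 2x2 Pearson chi-square test on the survey counts reported above
# ("comfortable" = "somewhat" or "extremely comfortable"): 9/35 pre, 21/27 post.
# Collapsing the 5-point scale to a binary table is an assumption of this sketch.
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for table [[a, b], [c, d]], no Yates correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

stat = chi2_2x2(9, 26, 21, 6)  # rows: pre/post; cols: comfortable/not
print(round(stat, 2))
```

A statistic this large on 1 degree of freedom corresponds to a very small p-value, consistent with the significant difference the abstract reports.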

Keywords: airway, emergency medicine, fiberoptic intubation, medical simulation, skill acquisition

Procedia PDF Downloads 163
320 The Extension of the Kano Model by the Concept of Over-Service

Authors: Lou-Hon Sun, Yu-Ming Chiu, Chen-Wei Tao, Chia-Yun Tsai

Abstract:

It is common practice for many companies to ask employees to provide heart-touching service for customers and to emphasize an attitude of 'customer first'. However, services may not necessarily gain praise, and may actually be considered excessive, if customers do not appreciate such behaviors. In reality, many restaurant businesses try to provide as much service as possible without considering whether over-provision may lead to negative customer reception. A survey of 894 people in Britain revealed that 49 percent of respondents consider over-attentive waiters the most annoying aspect of dining out. It can be seen that merely aiming to exceed customers’ expectations without actually addressing their needs only further distances and dissociates the standard of service from the goal of customer satisfaction itself. Over-service is defined as 'service provided that exceeds customer expectations, or that customers simply deem redundant, resulting in negative perception'. It was found that customers’ reactions and complaints concerning over-service are not as intense as those against service failures caused by the inability to meet expectations; consequently, it is more difficult for managers to become aware of the existence of over-service. Thus, the ability to manage over-service behaviors is a significant topic for consideration. The Kano model classifies customer preferences into five categories: attractive, one-dimensional, must-be, indifferent, and reverse quality attributes. The model remains very popular among researchers exploring quality attributes and customer satisfaction. Nevertheless, several studies have indicated that Kano’s model cannot fully capture the nature of service quality. The concept of over-service can be used to restructure the model and provide a better understanding of the service quality construct.
In this research, the structure of Kano's two-dimensional questionnaire will be used to classify the factors into the different dimensions. The same questions will be used in a second questionnaire to identify the over-service experienced by the respondents. The findings from these two questionnaires will be used to analyze the relationship between the service quality classification and over-service behaviors. The subjects of this research are customers of fine-dining chain restaurants. Three hundred questionnaires will be issued based on the stratified random sampling method. Items for measurement will be derived from the DINESERV scale; the tangible dimension of the questionnaire will be eliminated because this research focuses on employee behaviors. Quality attributes of the Kano model are often regarded as an instrument for improving customer satisfaction. The extension of the Kano model will not only develop a better understanding of customer needs and expectations but also enhance the management of service quality.
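
The Kano two-dimensional questionnaire pairs a functional ("if present") and a dysfunctional ("if absent") question per factor and classifies the pair via a standard evaluation table; a minimal sketch of that table is below. Extending the mapping with an over-service category, as proposed here, would be a straightforward addition to this lookup.

```python
# Standard Kano evaluation-table lookup (a common textbook version; the paper's
# extended, over-service-aware table would modify this mapping).
answers = ["like", "must-be", "neutral", "live-with", "dislike"]

def kano_classify(functional, dysfunctional):
    """Classify a factor from its functional/dysfunctional answer pair."""
    f, d = answers.index(functional), answers.index(dysfunctional)
    if f == 0 and d == 0 or f == 4 and d == 4:
        return "questionable"
    if f == 0:
        return "one-dimensional" if d == 4 else "attractive"
    if f == 4 or d == 0:
        return "reverse"
    return "must-be" if d == 4 else "indifferent"

print(kano_classify("like", "dislike"))     # one-dimensional attribute
print(kano_classify("neutral", "dislike"))  # must-be attribute
```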

Keywords: consumer satisfaction, DINESERV, Kano model, over-service

Procedia PDF Downloads 136
319 Optimum Drilling States in Down-the-Hole Percussive Drilling: An Experimental Investigation

Authors: Joao Victor Borges Dos Santos, Thomas Richard, Yevhen Kovalyshen

Abstract:

Down-the-hole (DTH) percussive drilling is an excavation method that is widely used in the mining industry due to its high efficiency in fragmenting hard rock formations. A DTH hammer system consists of a fluid-driven (air or water) piston and a drill bit; the reciprocating movement of the piston transmits its kinetic energy to the drill bit by means of stress waves that propagate through the drill bit towards the rock formation. In the percussive drilling literature, the existence of an optimum drilling state (Sweet Spot) is reported in some laboratory and field experimental studies: an optimum rate of penetration is achieved for a specific range of axial thrust (or weight-on-bit), beyond which the rate of penetration decreases. Several authors advance different explanations as possible root causes of the Sweet Spot, but a universal explanation or consensus does not yet exist. The experimental investigation in this work was initiated with drilling experiments conducted at a mining site. A full-scale drilling rig (equipped with a DTH hammer system) was instrumented with high-precision sensors sampled at a very high rate (kHz). Data were collected while two boreholes were being excavated, and an in-depth analysis of the recorded data confirmed that optimum performance can be achieved for specific ranges of input thrust (weight-on-bit). The high sampling rate made it possible to identify the bit penetration at each single impact (of the piston on the drill bit) as well as the impact frequency. These measurements provide a direct way to identify when the hammer does not fire and drilling occurs without percussion, with the bit advancing the borehole by shearing the rock. The second stage of the experimental investigation was conducted in a laboratory environment with a custom-built apparatus dubbed Woody. Woody allows the drilling of shallow holes, a few centimetres deep, by successive discrete impacts from a piston.
After each individual impact, the bit's angular position is incremented by a fixed amount, the piston is moved back to its initial position at the top of the barrel, and the air pressure and thrust are reset to their pre-set values. The goal is to explore whether the observed optimum drilling state stems from the interaction between the drill bit and the rock (during impact) or is governed by the overall system dynamics (between impacts). The experiments were conducted on samples of Calca Red, with a drill bit of 74 millimetres outside diameter and weight-on-bit ranging from 0.3 kN to 3.7 kN. Results show that, under the same piston impact energy and a constant angular displacement of 15 degrees between impacts, the average drill bit rate of penetration is independent of the weight-on-bit, which suggests that the Sweet Spot is not caused by intrinsic properties of the bit-rock interface.
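
The per-impact measurements described above combine into an average rate of penetration as ROP = mean penetration per impact x impact frequency. The values below are illustrative, not data from the study; a zero entry stands for an impact at which the hammer failed to fire.

```python
# Average ROP reconstructed from per-impact penetration, as enabled by the
# kHz-sampled field data. All numbers here are hypothetical illustrations.
per_impact_mm = [0.8, 0.0, 0.9, 0.85, 0.0, 0.95]  # 0.0 = hammer did not fire
impact_freq_hz = 25.0                              # assumed impact frequency

firing = [p for p in per_impact_mm if p > 0]
misfire_fraction = 1 - len(firing) / len(per_impact_mm)
rop_mm_per_s = (sum(per_impact_mm) / len(per_impact_mm)) * impact_freq_hz
print(f"ROP = {rop_mm_per_s:.1f} mm/s, misfires = {misfire_fraction:.0%}")
```

Separating firing from non-firing impacts in this way is what lets the authors distinguish percussive advance from shearing-only advance.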

Keywords: optimum drilling state, experimental investigation, field experiments, laboratory experiments, down-the-hole percussive drilling

Procedia PDF Downloads 62
318 Generation of Knowledge with Self-Learning Methods for Ophthalmic Data

Authors: Klaus Peter Scherer, Daniel Knöll, Constantin Rieder

Abstract:

Problem and Purpose: Intelligent systems are available and helpful for supporting human decision processes, especially when complex surgical eye interventions are necessary and must be performed. Normally, such a decision support system consists of a knowledge-based module, which provides the actual assistance through explanation and logical reasoning processes. The interview-based acquisition and generation of the complex knowledge itself is crucial, because there are many correlations between the complex parameters. So, in this project, (semi-)automated self-learning methods are researched and developed to enhance the quality of such a decision support system. Methods: For ophthalmic data sets of real patients in a hospital, advanced data mining procedures are very helpful. In particular, subgroup analysis methods are developed, extended and used to analyze and uncover the correlations and conditional dependencies between the structured patient data. After finding causal dependencies, a ranking must be performed for the generation of rule-based representations. For this, anonymized patient data are transformed into a special machine-readable format. The imported data are used as input for conditional probability algorithms to calculate the parameter distributions with respect to a given goal parameter. Results: In the field of knowledge discovery, advanced methods and applications were employed to produce operation- and patient-related correlations. New knowledge was generated by finding causal relations between the operational equipment, the medical instances and the patient-specific history through a dependency ranking process. After transformation into association rules, logic-based representations were available for the clinical experts to evaluate the new knowledge. The structured data sets take account of about 80 parameters as characteristic features per patient.
For patient groups of different sizes (100, 300, 500), both single-target and multi-target values were set for the subgroup analysis, so the newly generated hypotheses could be interpreted with regard to their dependence on, or independence of, the patient number. Conclusions: The aim and advantage of such a semi-automated self-learning process are the extension of the knowledge base by finding new parameter correlations. The discovered knowledge is transformed into association rules and serves as the rule-based representation of the knowledge in the knowledge base. Moreover, more than one goal parameter of interest can be considered by the semi-automated learning process. With ranking procedures, the strongest premises and also conjunctively associated conditions can be found to conclude the goal parameter of interest. In this way, the knowledge hidden in structured tables or lists can be extracted as a rule-based representation. This is a real assistance power for the communication with the clinical experts.
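The conditional-probability ranking described above can be sketched in a few lines. The data layout, the premise format, and the ranking key (conditional probability, then lift over the base rate) are hypothetical stand-ins for the structured patient parameters the abstract mentions, not the authors' actual implementation:

```python
def conditional_probability(records, premise, target):
    """P(target | premise) and its lift over the base rate P(target).

    records: list of dicts of structured patient parameters
    premise: dict of {parameter: value} conditions (the rule premise)
    target:  (parameter, value) pair (the goal parameter)
    """
    key, value = target
    matching = [r for r in records if all(r.get(k) == v for k, v in premise.items())]
    if not matching:
        return 0.0, 0.0
    p_cond = sum(1 for r in matching if r.get(key) == value) / len(matching)
    p_base = sum(1 for r in records if r.get(key) == value) / len(records)
    return p_cond, (p_cond / p_base if p_base else 0.0)

def rank_premises(records, candidate_premises, target):
    """Rank candidate premises so the strongest ones (highest conditional
    probability, then highest lift) come first, ready to be rewritten as
    association rules premise -> target."""
    scored = [(premise, *conditional_probability(records, premise, target))
              for premise in candidate_premises]
    return sorted(scored, key=lambda t: (t[1], t[2]), reverse=True)
```

Conjunctive premises fall out of the same code, since a premise dict with several keys is already a conjunction of conditions.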

Keywords: expert system, knowledge-based support, ophthalmic decision support, self-learning methods

Procedia PDF Downloads 234
317 Nonconventional Method for Separation of Rosmarinic Acid: Synergic Extraction

Authors: Lenuta Kloetzer, Alexandra C. Blaga, Dan Cascaval, Alexandra Tucaliuc, Anca I. Galaction

Abstract:

Rosmarinic acid, an ester of caffeic acid and 3-(3,4-dihydroxyphenyl)lactic acid, is considered a valuable compound for the pharmaceutical and cosmetic industries due to its antimicrobial, antioxidant, antiviral, anti-allergic, and anti-inflammatory effects. It can be obtained by extraction from vegetable or animal materials, by chemical synthesis, or by biosynthesis. Regardless of the production method, the separation and purification process requires large amounts of raw materials and laborious stages, leading to high costs and limitations of the separation technology. This study focused on the separation of rosmarinic acid by synergic reactive extraction with a mixture of two extractants, one acidic (di-(2-ethylhexyl) phosphoric acid, D2EHPA) and one basic (Amberlite LA-2). The studies were performed in experimental equipment consisting of an extraction column in which the phases were mixed by means of a perforated disk, 45 mm in diameter with 20% free section, maintained at the initial contact interface between the aqueous and organic phases. The vibrations had a frequency of 50 s⁻¹ and an amplitude of 5 mm. The extraction was carried out in two solvents with different dielectric constants (n-heptane and dichloromethane), in which the extractant mixture was dissolved at varying concentrations. The pH value of the initial aqueous solution was varied between 1 and 7. The efficiency of the studied extraction systems was quantified by distribution and synergic coefficients. For calculating these parameters, the rosmarinic acid concentrations in the initial aqueous solution and in the raffinate were measured by HPLC. The influences of extractant concentrations and solvent polarity on the efficiency of rosmarinic acid separation by synergic extraction with a mixture of Amberlite LA-2 and D2EHPA were analyzed.
In the reactive extraction system with a constant concentration of Amberlite LA-2 in the organic phase, increasing the D2EHPA concentration leads to a decrease of the synergic coefficient. This is because the increase of D2EHPA concentration prevents the formation of amine adducts and, consequently, reduces the hydrophobicity of the interfacial complex with rosmarinic acid. For these reasons, the diminution of the synergic coefficient is more pronounced for dichloromethane. By maintaining a constant D2EHPA concentration and increasing the concentration of Amberlite LA-2, the synergic coefficient can become higher than 1, its highest values being reached for n-heptane. Depending on the solvent polarity and the D2EHPA amount in the solvent phase, the synergic effect is observed for Amberlite LA-2 concentrations over 20 g/l dissolved in n-heptane. Thus, by increasing the concentration of D2EHPA from 5 to 40 g/l, the minimum Amberlite LA-2 concentration corresponding to synergism increases from 20 to 40 g/l for the solvent with lower polarity, namely n-heptane, while no synergic effect is recorded for dichloromethane. By analysing the influences of the main factors (organic phase polarity, extractant concentration in the mixture) on the efficiency of synergic extraction of rosmarinic acid, the strongest synergic effect was found to correspond to the extractant mixture containing 5 g/l D2EHPA and 40 g/l Amberlite LA-2 dissolved in n-heptane.
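A minimal sketch of how the two efficiency measures could be computed from the HPLC data. The abstract gives no formulas, so two common conventions are assumed here: equal phase volumes (so the organic-phase concentration is the difference between initial and raffinate concentrations) and a synergic coefficient defined as D(mixture) / (D(Amberlite LA-2) + D(D2EHPA)):

```python
def distribution_coefficient(c_initial, c_raffinate):
    """Distribution coefficient D = c_organic / c_aqueous at equilibrium,
    with c_organic taken as (c_initial - c_raffinate), i.e. assuming equal
    aqueous and organic phase volumes (a simplifying assumption)."""
    return (c_initial - c_raffinate) / c_raffinate

def synergic_coefficient(d_mixture, d_amberlite, d_d2ehpa):
    """One common convention: S = D(mixed extractants) / (D1 + D2).
    S > 1 indicates a synergic effect of the extractant mixture."""
    return d_mixture / (d_amberlite + d_d2ehpa)
```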

Keywords: Amberlite LA-2, di(2-ethylhexyl) phosphoric acid, rosmarinic acid, synergic effect

Procedia PDF Downloads 264
316 Design and Evaluation of a Prototype for Non-Invasive Screening of Diabetes – Skin Impedance Technique

Authors: Pavana Basavakumar, Devadas Bhat

Abstract:

Diabetes is a disease which often goes undiagnosed until its secondary effects are noticed. Early detection of the disease is necessary to avoid serious consequences which could lead to the death of the patient. Conventional invasive tests for the screening of diabetes are mostly painful, time-consuming and expensive. There is also a risk of infection involved; therefore, it is essential to develop non-invasive methods to screen for and estimate the level of blood glucose. Extensive research is going on from this perspective, involving various techniques that explore optical, electrical, chemical and thermal properties of the human body that directly or indirectly depend on the blood glucose concentration. Thus, non-invasive blood glucose monitoring has grown into a vast field of research. In this project, an attempt was made to devise a prototype for the screening of diabetes by measuring the electrical impedance of the skin and building a model to predict a patient's condition based on the measured impedance. The prototype passes a negligible constant current (0.5 mA) across a subject's index finger through tetrapolar silver electrodes and measures the output voltage across a wide range of frequencies (10 kHz – 4 MHz). The measured voltage is proportional to the impedance of the skin. The impedance was acquired in real time for further analysis. The study was conducted on over 75 subjects with permission from the institutional ethics committee; along with impedance, the subjects' blood glucose values were also measured using the conventional method. Nonlinear regression analysis was performed on the features extracted from the impedance data to obtain a model that predicts blood glucose values for a given set of features. When the predicted data were plotted on Clarke's Error Grid, only 58% of the predicted values were clinically acceptable.
Since the objective of the project was to screen for diabetes rather than to estimate actual blood glucose, the data were classified into three classes, 'NORMAL FASTING', 'NORMAL POSTPRANDIAL' and 'HIGH', using a linear Support Vector Machine (SVM). The classification accuracy obtained was 91.4%. The developed prototype was economical, fast and pain-free; thus, it can be used for mass screening of diabetes.
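The three-class screening pipeline (feature extraction from an impedance sweep, then classification) can be sketched as follows. The paper used a linear SVM; to keep this sketch dependency-free, a nearest-centroid classifier stands in for it, and the two features shown are hypothetical, since the abstract does not list the actual features used:

```python
def extract_features(spectrum):
    """Toy feature extraction from an impedance-vs-frequency sweep:
    mean impedance magnitude and low/high-frequency ratio
    (hypothetical features, for illustration only)."""
    mean_z = sum(spectrum) / len(spectrum)
    ratio = spectrum[0] / spectrum[-1]
    return (mean_z, ratio)

def train_centroids(samples):
    """samples: dict mapping class label -> list of feature tuples.
    Returns the per-class mean feature vector (centroid)."""
    centroids = {}
    for label, feats in samples.items():
        n = len(feats)
        centroids[label] = tuple(sum(f[i] for f in feats) / n
                                 for i in range(len(feats[0])))
    return centroids

def classify(features, centroids):
    """Assign the class whose centroid is nearest (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(features, centroids[lbl]))
```

With a library available, the classifier step would simply be replaced by fitting a linear SVM on the same feature tuples.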

Keywords: Clarke’s error grid, electrical impedance of skin, linear SVM, nonlinear regression, non-invasive blood glucose monitoring, screening device for diabetes

Procedia PDF Downloads 305
315 Exploration Tools for Tantalum-Bearing Pegmatites along Kibara Belt, Central and Southwestern Uganda

Authors: Sadat Sembatya

Abstract:

Tantalum metal is used to address the capacitance challenge posed by 21st-century technology growth. Tantalum is rarely found in its elemental form; hence, it is often found with niobium and the radioactive elements thorium and uranium, and industrial processes are required to extract pure tantalum. Its deposits are mainly oxide-associated and exist in Ta-Nb oxides such as tapiolite, wodginite, ixiolite and rutile, while pyrochlore-supergroup minerals are of minor importance. The stability and chemical inertness of tantalum make it a valuable substance for laboratory equipment and a substitute for platinum. Each period of tantalum ore formation is characterized by specific mineralogical and geochemical features. Compositions of Columbite-Group Minerals (CGM) are variable: Fe-rich types predominate in the Man Shield (Sierra Leone), the Congo Craton (DR Congo), the Kamativi Belt (Zimbabwe) and the Jos Plateau (Nigeria), whereas Mn-rich columbite-tantalite is typical of the Alto Ligonha Province (Mozambique), the Arabian-Nubian Shield (Egypt, Ethiopia) and the Tantalite Valley pegmatites (southern Namibia). There are large compositional variations through Fe-Mn fractionation, followed by Nb-Ta fractionation. These are typical for pegmatites usually associated with very coarse quartz-feldspar-mica granites, such as the young granitic systems of the Kibara Belt of Central Africa and the Older Granites of Nigeria. Unlike 'simple' Be-pegmatites, most Ta-Nb-rich pegmatites have the most complex zoning; hence, systematic exploration tools are needed to find and rapidly assess the potential of different pegmatites. The pegmatites exist as known deposits (e.g., abandoned mines) and as exposed or buried pegmatites.
We investigate rocks and minerals to trace the possible effects of hydrothermal alteration, mainly for exposed pegmatites; conduct mineralogical studies to prove evidence of gradual replacement; and use geochemistry to report the availability of trace elements that are good indicators of mineralisation. Pegmatites are not good geophysical responders, which leads to the exclusion of the geophysics option. For more advanced prospecting, we first take bulk samples from different zones to establish their grades and characteristics, then build a pilot test plant (because of the large sample sizes) to aid the quantitative characterization of zones, and finally drill to reveal the distribution and extent of the different zones, but not necessarily the grade, due to the nugget effect. Rapid assessment tools are needed to assess grade and degree of fractionation in order to 'rule in' or 'rule out' a given pegmatite for future work. Pegmatite exploration is also unique, high-risk and expensive; hence, a sound traceability system and certification for the 3Ts are highly needed.
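Degree of fractionation in columbite-group minerals is conventionally tracked with the Mn/(Mn+Fe) and Ta/(Ta+Nb) ratios (the axes of the classic columbite quadrilateral). A trivial helper for such a rapid assessment; the input proportions and any cut-off values are illustrative, not taken from this paper:

```python
def fractionation_indices(fe, mn, nb, ta):
    """Return (Mn#, Ta#) for a columbite-group mineral analysis.

    Mn# = Mn/(Mn+Fe) tracks Fe-Mn fractionation; Ta# = Ta/(Ta+Nb) tracks
    Nb-Ta fractionation. Inputs are atomic (or consistent) proportions.
    Higher values of both indices indicate a more fractionated,
    more Ta-prospective pegmatite."""
    mn_number = mn / (mn + fe)
    ta_number = ta / (ta + nb)
    return mn_number, ta_number
```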

Keywords: exploration, mineralogy, pegmatites, tantalum

Procedia PDF Downloads 116
314 Characteristics of Bio-hybrid Hydrogel Materials with Prolonged Release of the Model Active Substance as Potential Wound Dressings

Authors: Katarzyna Bialik-Wąs, Klaudia Pluta, Dagmara Malina, Małgorzata Miastkowska

Abstract:

In recent years, biocompatible hydrogels have been used more and more in medical applications, especially as modern dressings and drug delivery systems. The main goal of this research was to characterize bio-hybrid hydrogel materials incorporating a nanocarrier-drug system, which enables release in a gradual and prolonged manner, up to 7 days. The use of such a combination will provide protection against mechanical damage and adequate hydration. The proposed bio-hybrid hydrogels are characterized by transparency, biocompatibility, good mechanical strength, and a dual release system, which allows for gradual delivery of the active substance, even up to 7 days. Bio-hybrid hydrogels based on sodium alginate (SA), poly(vinyl alcohol) (PVA), glycerine, and Aloe vera solution (AV) were obtained through the chemical crosslinking method using poly(ethylene glycol) diacrylate as a crosslinking agent. Additionally, a nanocarrier-drug system was incorporated into the SA/PVA/AV hydrogel matrix. Here, the studies were focused on the release profiles of active substances from bio-hybrid hydrogels using the USP4 method (DZF II Flow-Through System, Erweka GmbH, Langen, Germany). The equipment incorporated seven in-line flow-through diffusion cells. The membrane was placed over a support with an orifice 1.5 cm in diameter (diffusional area 1.766 cm²). All the cells were placed in a cell warmer connected to the Erweka DH 2000i heater and the Erweka HKP 720 piston pump. The piston pump transports the receptor fluid via seven channels to the flow-through cells and automatically adapts the setting of the flow rate. All volumes were measured gravimetrically by filling the chambers with Milli-Q water and assuming a density of 1 g/ml. All determinations were made in triplicate for each cell.
The release study of the model active substance was carried out using a regenerated cellulose membrane (Spectra/Por® Dialysis Membrane, MWCO 6-8,000, Carl Roth®). The tests were conducted in buffer solution (PBS, pH 7.4). A receptor fluid flow rate of about 4 ml/min was selected. The experiments were carried out for 7 days at a temperature of 37°C. The released concentration of the model drug in the receptor solution was analyzed using UV-Vis spectroscopy (PerkinElmer). Additionally, the physicochemical, structural (FT-IR analysis) and morphological (SEM analysis) properties of the modified materials were studied. Finally, in vitro cytotoxicity tests were conducted. The obtained results showed that the dual release system allows for the gradual and prolonged delivery of the active substances, even up to 7 days.
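In an open-loop flow-through configuration like the one described, the cumulative release curve is obtained by summing, fraction by fraction, the drug mass carried out of the cell: (fraction concentration) × (flow rate) × (sampling interval). A sketch of that bookkeeping; the concentrations and intervals below are illustrative, not the study's data:

```python
def cumulative_release(fraction_concentrations, flow_rate_ml_per_min, interval_min):
    """Cumulative released mass (mg) after each sampling interval in an
    open-loop USP Apparatus 4 run.

    fraction_concentrations: drug concentration (mg/ml) measured by UV-Vis
        in each collected fraction, in time order
    flow_rate_ml_per_min: receptor fluid flow rate (e.g. ~4 ml/min)
    interval_min: length of each sampling interval in minutes
    """
    released, total = [], 0.0
    for c in fraction_concentrations:
        total += c * flow_rate_ml_per_min * interval_min
        released.append(total)
    return released
```

Dividing each cumulative value by the drug loading of the sample then gives the percentage-release profile that is usually plotted over the 7 days.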

Keywords: wound dressings, SA/PVA hydrogels, nanocarrier-drug system, USP4 method

Procedia PDF Downloads 123
313 Possibilities of Psychodiagnostics in the Context of Highly Challenging Situations in Military Leadership

Authors: Markéta Chmelíková, David Ullrich, Iva Burešová

Abstract:

The paper maps the possibilities and limits of diagnosing selected personality and performance characteristics of military leadership and psychology students in the context of coping with challenging situations. Individuals vary greatly inter-individually in their ability to effectively manage extreme situations, yet existing diagnostic tools are often criticized mainly for their low predictive power. Nowadays, every modern army focuses primarily on the systematic minimization of potential risks, including the prediction of desirable forms of behavior and performance of military commanders. The context of military leadership is well known for its life-threatening nature; therefore, it is crucial to research stress load in this specific context for the purpose of anticipating human failure in managing extreme situations. The aim of the submitted pilot study, using an experiment of 24 hours' duration, is to verify the ability of a specific combination of psychodiagnostic methods to identify people who are well equipped for coping with increased stress load. In our pilot study, we conducted a 24-hour experiment with an experimental group (N=13) in a bomb shelter and a control group (N=11) in a classroom. Both groups consisted of military leadership students (N=11) and psychology students (N=13) and were equalized in terms of study type and gender. Participants were administered the following test battery of personality characteristics: Big Five Inventory 2 (BFI-2), Short Dark Triad (SD-3), Emotion Regulation Questionnaire (ERQ), Fatigue Severity Scale (FSS), and Impulsive Behavior Scale (UPPS-P). This test battery was administered only once, at the beginning of the experiment. Along with this, they were administered a test battery consisting of the Test of Attention (d2) and the Bourdon test four times overall, at 6-hour intervals.
To better simulate an extreme situation, we tried to induce sleep deprivation: participants were required to try not to fall asleep throughout the experiment. Despite the assumption that a stay in an underground bomb shelter would manifest in impaired cognitive performance, this expectation was significantly confirmed in only one measurement, which can be interpreted as marginal in the context of multiple testing. This finding is a fundamental insight into the issue of stress management in extreme situations, which is crucial for effective military leadership. The results suggest that a 24-hour stay in a shelter, together with sleep deprivation, does not seem to simulate sufficient stress for an individual that would be reflected in the level of cognitive performance. In the context of these findings, it would be interesting in future to extend the diagnostic battery with physiological indicators of stress, such as heart rate, stress score, physical stress, and mental stress.
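The "marginal in the context of multiple testing" caveat above can be made concrete: with four testing occasions (and several cognitive indices), a single nominally significant p-value should survive a multiple-comparison correction before being interpreted. A Bonferroni sketch with hypothetical p-values, not the study's actual statistics:

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction: each of the m tests is significant only if
    its p-value is at or below alpha / m. Returns one boolean per test."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]
```

With four measurements, a single p = 0.04 fails the corrected threshold of 0.0125, which is exactly the sense in which one isolated significant result is "marginal".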

Keywords: bomb shelter, extreme situation, military leadership, psychodiagnostic

Procedia PDF Downloads 71
312 How Can Food Retailing Benefit from Neuromarketing Research: The Influence of Traditional and Innovative Tools of In-Store Communication on Consumer Reactions

Authors: Jakub Berčík, Elena Horská, Ľudmila Nagyová

Abstract:

Nowadays, the point of sale remains one of the few channels of communication which is not yet oversaturated and has great potential for the future. The fact that purchasing decisions are significantly affected by emotions, while up to 75% of them are made at the point of sale, only demonstrates its importance. The share of impulsive purchases is about 60-75%, depending on the particular product category. Nevertheless, habits above all predetermine the content of the shopping cart, and hence the role of in-store communication is to disrupt the routine and compel the customer to try something new. This is the reason why it is essential to know how to work with this relatively young branch of marketing communication as efficiently as possible. A new global trend in this discipline is evaluating the effectiveness of particular tools of in-store communication. To increase efficiency, it is necessary to become familiar with the factors affecting the customer both consciously and unconsciously, and that is a task for neuromarketing and sensory marketing. It is generally known that customers remember negative experiences much longer and more intensely than positive ones; therefore, it is essential for marketers to avoid such negative experiences. The final effect of POP (Point of Purchase) or POS (Point of Sale) tools is conditional not only on their quality and design, but also on their location at the point of sale, which contributes to the overall positive atmosphere in the store. Therefore, in-store advertising is increasingly in the center of attention, and companies are willing to spend even a third of their marketing communication budget on it. The paper deals with a comprehensive, interdisciplinary research study of the impact of traditional as well as innovative tools of in-store communication on the attention and emotional state (valence and arousal) of consumers on the food market.
The research integrates measurements with an eye camera (eye tracker) and an electroencephalograph (EEG) in real grocery stores as well as in laboratory conditions, with the purpose of recognizing attention and emotional response among respondents under the influence of selected tools of in-store communication. The object of the research includes traditional (e.g. wobblers, stoppers, floor graphics) and innovative (e.g. displays, wobblers with LED elements, interactive floor graphics) tools of in-store communication in the fresh unpackaged food segment. Using a mobile 16-channel EPOC electroencephalograph (EEG) from the company Emotiv, a mobile eye camera (eye tracker) from the company Tobii and a stationary eye camera (eye tracker) from the company Gazepoint, we observe attention and emotional state (valence and arousal) to reveal true consumer preferences for traditional and new, unusual communication tools at the point of sale of the selected foodstuffs. The paper concludes by suggesting possibilities for a rational, effective and energy-efficient combination of in-store communication tools, by which the retailer can accomplish not only a captivating and attractive presentation of displayed goods, but ultimately also an increase in the retail sales of the store.

Keywords: electroencephalograph (EEG), emotion, eye tracker, in-store communication

Procedia PDF Downloads 370
311 A Comparative Analysis on Survival in Patients with Node Positive Cutaneous Head and Neck Squamous Cell Carcinoma as per TNM 7th and TNM 8th Editions

Authors: Petr Daniel Edward Kovarik, Malcolm Jackson, Charles Kelly, Rahul Patil, Shahid Iqbal

Abstract:

Introduction: Recognition of the presence of extracapsular spread (ECS) has been a major change in the TNM 8th edition, published by the American Joint Committee on Cancer in 2018. Irrespective of the size or number of lymph nodes, the presence of ECS makes the disease N3b, and hence stage IV. The objective of this retrospective observational study was to conduct a comparative analysis of survival outcomes in patients with lymph-node-positive cutaneous head and neck squamous cell carcinoma (CHNSCC) based on the TNM 7th and TNM 8th edition classifications. Materials and Methods: From January 2010 to December 2020, 71 patients with CHNSCC who were treated with radical surgery and adjuvant radiotherapy were identified from our centre's database. All histopathological reports were reviewed, and comprehensive nodal mapping was performed. The data were collected retrospectively, and survival outcomes were compared using the TNM 7th and 8th editions. Results: The median age of the group of 71 patients was 78 years (range 54–94 years); 63 were male and 8 female. In total, 2246 lymph nodes were analysed, of which 195 were positive for cancer. ECS was present in 130 lymph nodes, which led to a change in TNM staging. The N-stage distribution as per the TNM 7th edition was as follows: pN1 = 23, pN2a = 14, pN2b = 32, pN2c = 0, pN3 = 2. After incorporating the TNM 8th edition criterion (presence of ECS), the N-stage distribution was as follows: pN1 = 6, pN2a = 5, pN2b = 3, pN2c = 0, pN3a = 0, pN3b = 57. This showed an increase in overall stage. According to the TNM 7th edition, 23 patients had stage III disease and the remaining 48 patients stage IV. As per the TNM 8th edition, only 6 patients had stage III disease, compared to 65 patients with stage IV. For all patients, the 2-year disease-specific survival (DSS) and overall survival (OS) rates were 70% and 46%, and the 5-year DSS and OS rates were 66% and 20%, respectively.
Comparing the survival between stage III and stage IV of the two cohorts using both the TNM 7th and 8th editions, there is a clearly greater survival difference between the stages when TNM 8th staging is used. However, meaningful statistics were not possible, as the majority of patients (n = 65) had stage IV disease and only 6 patients had stage III disease in the TNM 8th cohort. Conclusion: Our study provides a comprehensive analysis of lymph node data mapping in this specific patient population. It shows a better differentiation between stage III and stage IV in the TNM 8th edition as compared to the TNM 7th; however, meaningful statistics were not possible due to the imbalance of patients in the sub-cohorts of the groups.
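The restaging rule the study applies (any ECS upstages nodal disease to pN3b, and pN3b implies stage IV) can be sketched as follows. The overall-stage mapping is deliberately simplified to the N-driven pattern seen in this cohort (pN1 → stage III, pN2+/pN3 → stage IV) and ignores T-stage contributions:

```python
def n_stage_tnm8(n_stage_tnm7, ecs_present):
    """Restage nodal disease from TNM 7th to 8th edition: the presence of
    extracapsular spread (ECS) makes the disease pN3b irrespective of
    lymph node size or number; otherwise the N category is unchanged."""
    return "pN3b" if ecs_present else n_stage_tnm7

def overall_stage(n_stage):
    """Simplified overall stage for node-positive patients in this cohort:
    pN2 and pN3 categories imply stage IV, pN1 implies stage III.
    (A full TNM grouping would also use the T category.)"""
    stage_iv = {"pN2a", "pN2b", "pN2c", "pN3", "pN3a", "pN3b"}
    return "IV" if n_stage in stage_iv else "III"
```

Applied to the cohort above, the 17 pN1 patients with ECS move from stage III to stage IV, which is exactly why the stage III group shrinks from 23 to 6.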

Keywords: cutaneous head and neck squamous cell carcinoma, extra capsular spread, neck lymphadenopathy, TNM 7th and 8th editions

Procedia PDF Downloads 75
310 Entrants’ Knowledge of the Host Country’s Institutional Environments: A Critical Success Factor of International Projects in Emerging Least Developed Countries

Authors: Rameshwar Dahal, S. Ping Ho

Abstract:

Although the demand for infrastructure development forms a promising market opportunity for international firms, given the dominance of informal institutions over formal ones, investors face extraordinary institutional challenges when investing in emerging Least Developed Countries (LDCs). We believe that, in emerging LDCs, project performance heavily depends on how well the entrants respond to the challenges exerted by the host institutional environments, which primarily depends on how much they learn about the host institutions and what strategy they apply in response. In Nepal, almost all international or global infrastructure projects are financed by international financers, so the procurement process of infrastructure projects financed by foreign agencies is guided by the policies and regulations of the financer. Because of limited resources and the financers' demands, contractors and consultants are procured internationally. Moreover, resources, including but not limited to construction materials, manpower, and equipment, also need to be imported. Therefore, the involvement of international companies as entrants in the global infrastructure projects of LDCs is obvious. In a global project (GP), participants from different geographical and institutional environments hold different beliefs and have disparate interests; the entrants therefore face the challenges exerted by the host institutional environments. The entrants must either adapt to the institutions prevailing in the environment or resist the institutional pressures. It is hypothesized that, in emerging LDCs, project performance heavily depends on how much the entrants learn about the host institutional knowledge and how well they respond to the institutional environments.
While it is impossible to generalize the phenomenon and contextual conditions because of their vast diversity, this study has answered why and how participants' level of institutional knowledge impacts a project's implementation performance. To draw that conclusion, we first explored two typical GPs from Nepal. For this study, data were collected by conducting interviews and examining secondary data, such as the project reports published by the financers, project data provided by interviewees, and news reports. In an event analysis, we first identify the sources, causes, or nature of the institutional challenges; second, we analyze the entrant's responses to the exerted challenges and evaluate the impacts of the responses on overall project performance. In this study, events that occurred during the project implementation process and that have a causal link with the local institutions demanding the entrants' response were first extracted. Second, each event was scrutinized as a critical success factor of the case project. Finally, it was critically examined whether, and what, institutional knowledge played a critical role in project success or failure in these events. The results also provide insights into the crucial institutional knowledge in LDCs and the subsequent strategy implications for undertaking projects in LDCs.

Keywords: emerging countries, LDC, project management, project performance, institutional knowledge, institutional theory

Procedia PDF Downloads 40
309 The Study of Intangible Assets at Various Firm States

Authors: Gulnara Galeeva, Yulia Kasperskaya

Abstract:

The study deals with a relevant problem related to the formation of an efficient investment portfolio of an enterprise. The structure of the investment portfolio is connected to the degree of influence of intangible assets on the enterprise's income, which determines the importance of research on the content of intangible assets. However, studies of intangible assets do not take into consideration how the state of the enterprise can affect the content and importance of intangible assets for the enterprise's income, which affects the accuracy of the calculations. In order to study this problem, the research was divided into several stages. In the first stage, intangible assets were classified based on their synergies as underlying intangibles and additional intangibles. In the second stage, this classification was applied. It showed that the lifecycle model and the theory of abrupt development of the enterprise, which are taken into account while designing investment projects, constitute limit cases of a more general theory of bifurcations. The research identified that the qualitative content of intangible assets depends significantly on how close the enterprise is to crisis. In the third stage, the authors developed and applied the Wide Pairwise Comparison Matrix method. This made it possible to establish that the ratio of the standard deviation to the mean value of the elements of the priority vector of intangible assets can be used to estimate the probability of a full-blown crisis of the enterprise. The authors identified a criterion which allows fundamental decisions on investment feasibility to be made. The study also developed an additional rapid method of assessing the overall status of the enterprise, based on a questionnaire survey of its director. The questionnaire consists of only two questions.
The research specifically focused on the fundamental role of stochastic resonance in the emergence of bifurcation (crisis) in the economic development of the enterprise. The synergetic approach made it possible to describe the mechanism of crisis onset in detail and also to identify a range of universal ways of overcoming the crisis. It was shown that the structure of intangible assets transforms into a more organized state, with strengthened synchronization of all processes, as a result of the impact of sporadic (white) noise. The obtained results offer managers and business owners a simple and affordable method of investment portfolio optimization which takes into account how close the enterprise is to a state of full-blown crisis.
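The criterion described above, the ratio of the standard deviation to the mean of the intangible-asset priority vector, can be sketched with the standard row-geometric-mean approximation used in analytic-hierarchy-process calculations. The pairwise judgments below are hypothetical, and no interpretation threshold is reproduced here since the abstract does not state one:

```python
import math

def priority_vector(pairwise):
    """Approximate priority vector of a pairwise comparison matrix via
    the row geometric-mean method, normalized to sum to 1."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

def crisis_indicator(priorities):
    """Ratio of the (population) standard deviation to the mean of the
    priority-vector elements; per the abstract, this ratio is used to
    estimate how close the enterprise is to a full-blown crisis."""
    n = len(priorities)
    mean = sum(priorities) / n
    var = sum((p - mean) ** 2 for p in priorities) / n
    return math.sqrt(var) / mean
```

A perfectly even priority vector gives an indicator of 0; the more the weight concentrates on a few intangibles, the larger the ratio becomes.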

Keywords: analytic hierarchy process, bifurcation, investment portfolio, intangible assets, wide matrix

Procedia PDF Downloads 186
308 Experience of Two Major Research Centers in the Diagnosis of Cardiac Amyloidosis from Transthyretin

Authors: Ioannis Panagiotopoulos, Aristidis Anastasakis, Konstantinos Toutouzas, Ioannis Iakovou, Charalampos Vlachopoulos, Vasilis Voudris, Georgios Tziomalos, Konstantinos Tsioufis, Efstathios Kastritis, Alexandros Briassoulis, Kimon Stamatelopoulos, Alexios Antonopoulos, Paraskevi Exadaktylou, Evanthia Giannoula, Anastasia Katinioti, Maria Kalantzi, Evangelos Leontiadis, Eftychia Smparouni, Ioannis Malakos, Nikolaos Aravanis, Argyrios Doumas, Maria Koutelou

Abstract:

Introduction: Cardiac amyloidosis from transthyretin (ATTR-CA) is an infiltrative disease characterized by the deposition of pathological transthyretin complexes in the myocardium. This study describes the characteristics of patients diagnosed with ATTR-CA from 2019 to the present at the Nuclear Medicine Department of Onassis Cardiac Surgery Center and AHEPA Hospital. These centers have extensive experience in amyloidosis and modern technological equipment for its diagnosis. Materials and Methods: Records of consecutive patients (N=73) diagnosed with any type of amyloidosis were collected, analyzed, and prospectively followed. The diagnosis of amyloidosis was made using specific myocardial scintigraphy with Tc-99m DPD. Demographic characteristics, including age, gender, marital status, height, and weight, were collected in a database. Clinical characteristics, such as amyloidosis type (ATTR and AL), serum biomarkers (BNP, troponin), electrocardiographic findings, ultrasound findings, NYHA class, aortic valve replacement, device implants, and medication history, were also collected. Some of the most significant results are presented. Results: A total of 73 cases (86% male) were diagnosed with amyloidosis over four years. The mean age at diagnosis was 82 years, and the main symptom was dyspnea. Most patients suffered from ATTR-CA (65 vs. 8 with AL). Of the ATTR-CA patients, 61 were diagnosed with the wild-type form and 2 with two rare mutations. Twenty-eight patients had systemic amyloidosis with extracardiac involvement, and 32 patients had a history of bilateral carpal tunnel syndrome. Four patients had already developed polyneuropathy, and the diagnosis was confirmed by DPD scintigraphy, which is known for its high sensitivity. Among patients with isolated cardiac involvement, only 6 had a left ventricular ejection fraction below 40%. The majority of ATTR patients started tafamidis treatment immediately after diagnosis.
Conclusion: The experiences shared by the two centers and the continuous exchange of information provide valuable insights into the diagnosis and management of cardiac amyloidosis. Clinical suspicion of amyloidosis and an early diagnostic approach are crucial, given the availability of non-invasive techniques. Cardiac scintigraphy with DPD can confirm the presence of the disease without the need for a biopsy. The ultimate goal remains the continuous education and awareness of clinical cardiologists, so that this systemic and treatable disease can be diagnosed and confirmed promptly and treatment can begin as soon as possible.

Keywords: amyloidosis, diagnosis, myocardial scintigraphy, Tc-99m DPD, transthyretin

Procedia PDF Downloads 51
307 Petrology and Petrochemistry of Basement Rocks in Ila Orangun Area, Southwestern Nigeria

Authors: Jayeola A. O., Ayodele O. S., Olususi J. I.

Abstract:

From field studies, six (6) lithological units were identified as common in the study area: quartzites, granites, granite gneiss, porphyritic granites, amphibolite and pegmatites. Petrographical analysis was carried out to establish the major mineral assemblages and accessory minerals present in selected rock samples representing the major rock types in the area. For the purpose of this study, twenty (20) pulverized rock samples were taken to the laboratory for geochemical analysis, with the results used to classify the rocks as well as suggest their geochemical attributes. Petrographical studies of the rocks under both plane and cross polarized light revealed the major minerals in thin section to include quartz, feldspar, biotite, hornblende, plagioclase and muscovite, together with opaque minerals; accessory minerals include actinolite, spinel and myrmekite. Geochemical results, interpreted using various geochemical discrimination plots, classified the rocks in the area as belonging to both the peralkaline/metaluminous and peraluminous types. Results for the major oxide ratios Na₂O/K₂O, Al₂O₃/(Na₂O + CaO + K₂O) and (Na₂O + CaO + K₂O)/Al₂O₃ show an excess of alumina (Al₂O₃) over the alkalis (Na₂O + CaO + K₂O), suggesting peraluminous rocks, while an excess of the alkalis over alumina suggests the peralkaline/metaluminous rock type. Strong positive correlation coefficients indicate that the elements share the same geogenic source, while negative correlation coefficients indicate heterogeneous geogenic sources. From factor analysis, five component groups were identified. Group I consists of the Ag-Cr-Ni elemental association, suggesting Ag, Cr and Ni mineralization and predicting the possibility of sulphide mineralization in the study area. 
Groups II and III consist of the As-Ni-Hg-Fe-Sn-Co-Pb element association, which are pathfinder elements for gold mineralization. Groups IV and V consist of Cd-Cu-Ag-Co-Zn, whose concentrations are significant for elemental associations and mineralization. In conclusion, from the potassium radiometric anomaly map produced, the eastern section (northeastern and southeastern) is observed to be the hotspot and mineralization zone of the study area.
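The alumina-saturation classification used above can be sketched numerically. The minimal Python sketch below computes the molar Shand indices A/CNK and A/NK from major-oxide weight percentages and assigns a peraluminous / metaluminous / peralkaline label. The oxide values are illustrative, not data from this study, and the thresholds follow the standard textbook convention.

```python
# Hedged sketch: Shand-index classification from major-oxide wt%.
# The sample values below are invented for illustration.

MOLAR_MASS = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def shand_index(wt):
    """Return molar (A/CNK, A/NK) ratios from an oxide wt% dict."""
    mol = {ox: wt[ox] / MOLAR_MASS[ox] for ox in MOLAR_MASS}
    acnk = mol["Al2O3"] / (mol["CaO"] + mol["Na2O"] + mol["K2O"])
    ank = mol["Al2O3"] / (mol["Na2O"] + mol["K2O"])
    return acnk, ank

def classify(wt):
    """Standard convention: A/NK < 1 peralkaline; A/CNK > 1 peraluminous."""
    acnk, ank = shand_index(wt)
    if ank < 1.0:
        return "peralkaline"
    return "peraluminous" if acnk > 1.0 else "metaluminous"

sample = {"Al2O3": 14.8, "CaO": 1.9, "Na2O": 3.4, "K2O": 4.6}
print(classify(sample))  # excess molar alumina over alkalis -> peraluminous
```

The same molar-ratio arithmetic underlies the Na₂O + CaO + K₂O versus Al₂O₃ comparison described in the abstract.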

Keywords: petrography, Ila Orangun, petrochemistry, pegmatites, peraluminous

Procedia PDF Downloads 34
306 Generation and Migration of Carbon Dioxide in the Lower Cretaceous Bahi Sandstone Reservoir within the En-Naga Sub-Basin, Sirte Basin, Libya

Authors: Moaawia Abdulgader Gdara

Abstract:

The En-Naga sub-basin is considered the most southerly of the Sirte Basin concessions operated by HOO. CO₂ accumulations in the sub-basin have likely been point-sourced during the last 7 million years from local satellite intrusives associated with the Haruj Al Aswad igneous complex; igneous extrusives pierced in the subsurface are exposed at the surface. The Lower Cretaceous Bahi Sandstone facies recognized in the En-Naga sub-basin result from the influence of paleotopography on the processes associated with continental deposition over the Sirt Unconformity and the Cenomanian marine transgression. Trapped carbon dioxide has been proven in the Lower Cretaceous Bahi Sandstones within the En-Naga sub-basin, making the area unusual in providing an abundance of CO₂ gas reservoirs with almost pure magmatic CO₂ that can be easily sampled. Huge amounts of CO₂ exist in the Lower Cretaceous Bahi Sandstones in the En-Naga sub-basin, where the economic value of CO₂ is related to its use for enhanced oil recovery (EOR). Based on production tests of the drilled wells, the Lower Cretaceous Bahi sandstones are the principal reservoir rocks for CO₂: large volumes of CO₂ gas have been discovered in the Bahi Formation on and near EPSA 120/136 (En-Naga sub-basin). The Bahi sandstones are generally described as a good reservoir rock; intergranular porosities and permeabilities are highly variable and can exceed 25% and 100 mD. In the En-Naga sub-basin, three main developed structures (Barrut I, En Naga A and En Naga O) are thought to be prospective for the Lower Cretaceous Bahi sandstone reservoir, and they represent a good example of the deep overpressure potential in the sub-basin. 
The very high pressures assumed to be associated with local igneous intrusives may account for the abnormally high Bahi (and Lidam) reservoir pressures. The best gas tests from this facies are at F1-72 on the Barrut I structure, from part of an overpressured section of more than 458 feet with CO₂ content estimated as high as 98%. Prospectivity is thought to be excellent in the central to western areas: at U1-72 (En Naga O structure), a significant CO₂ gas kick occurred at 11,971 feet and quickly led to blowout conditions due to uncontrollable leaks in the surface equipment, reflecting better reservoir-quality sandstones associated with paleostructural highs. Condensate and gas prospectivity increases to the east, while CO₂ prospectivity decreases with distance from the Al Haruj Al Aswad igneous complex. To date, it has not been possible to accurately determine the volume of these strategically valuable reserves, although there are positive indications that they are very large.

Keywords: En Naga sub-basin, Al Harouge Al Aswad igneous complex, Lower Cretaceous Bahi reservoir, CO₂ generation and migration to the Bahi sandstone reservoir

Procedia PDF Downloads 42
305 Using Computer Vision and Machine Learning to Improve Facility Design for Healthcare Facility Worker Safety

Authors: Hengameh Hosseini

Abstract:

Design of large healthcare facilities, such as hospitals, multi-service-line clinics, and nursing facilities, that can accommodate patients with wide-ranging disabilities is a challenging endeavor and one that is poorly understood among healthcare facility managers, administrators, and executives. An even less understood extension of this problem is the implications of weakly or insufficiently accommodative facility design for healthcare workers in physically intensive jobs who may themselves have a range of disabilities and who are therefore at increased risk of workplace accident and injury. Combine this reality with the vast range of facility types, ages, and designs, and the problem of universal accommodation becomes even more daunting and complex. In this study, we focus on the implications of facility design for healthcare workers with low vision who also have physically active jobs. The points of difficulty are myriad: health service infrastructure, the equipment used in health facilities, and transport to and from appointments and other services can all pose a barrier to health care if they are inaccessible, less accessible, or simply less comfortable for people with various disabilities. We conduct a series of surveys and interviews with employees and administrators of seven facilities of a range of sizes and ownership models in the Northeastern United States, and combine that corpus with in-facility observations and data collection to identify five major points of failure common to all the facilities that, we conclude, could pose safety threats, ranging from very minor to severe, to employees with vision impairments. We determine that lack of design empathy is a major commonality among facility management and ownership. 
We subsequently propose three methods for remedying this lack of empathy-informed design and the dangers it poses to employees: the use of an existing open-source augmented reality application to simulate the low-vision experience for designers and managers; the use of a machine learning model we develop to automatically infer facility shortcomings from large datasets of recorded patient and employee reviews and feedback; and the use of a computer vision model fine-tuned on images of each facility to infer and predict facility features, locations, and workflows that could pose meaningful dangers to visually impaired employees. After conducting a series of real-world comparative experiments with each of these approaches, we conclude that each is a viable solution under particular sets of conditions, and finally characterize the range of facility types, workforce composition profiles, and working conditions under which each method would be most apt and successful.
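The second proposed method, inferring facility shortcomings from review text, can be illustrated with a toy pipeline. The sketch below is not the authors' model: it substitutes a hand-made keyword lexicon for a trained classifier, purely to show the shape of mining free-text feedback for hazard-category mentions. All category names, terms, and review strings are invented.

```python
# Illustrative sketch: flagging low-vision hazard categories in reviews.
# A real system would train a text classifier; a lexicon stands in here.
from collections import Counter
import re

HAZARD_LEXICON = {
    "lighting": {"dim", "glare", "dark", "lighting"},
    "signage": {"sign", "signage", "label", "font"},
    "flooring": {"slippery", "uneven", "step", "threshold"},
    "layout": {"cluttered", "narrow", "obstacle", "corridor"},
    "contrast": {"contrast", "color", "edge", "visibility"},
}

def flag_hazards(reviews):
    """Count reviews mentioning each hazard category (one count per review)."""
    counts = Counter()
    for text in reviews:
        tokens = set(re.findall(r"[a-z]+", text.lower()))
        for category, terms in HAZARD_LEXICON.items():
            if tokens & terms:
                counts[category] += 1
    return counts

reviews = [
    "The corridor near radiology is narrow and cluttered with carts.",
    "Signage fonts are too small to read from a distance.",
    "Glare from the atrium windows makes the ramp hard to see.",
]
print(flag_hazards(reviews).most_common())
```

Aggregated counts like these could then be ranked per facility to prioritize the design failures surfaced by the surveys.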

Keywords: artificial intelligence, healthcare workers, facility design, disability, visually impaired, workplace safety

Procedia PDF Downloads 76
304 An EEG-Based Scale for Comatose Patients' Vigilance State

Authors: Bechir Hbibi, Lamine Mili

Abstract:

Understanding the condition of comatose patients can be difficult, but it is crucial to their optimal treatment. Consequently, numerous scoring systems have been developed around the world to categorize patient states based on physiological assessments. Although validated and widely adopted by medical communities, these scores still present numerous limitations and obstacles. Even with additional tests and extensions, these scoring systems have not been able to overcome certain limitations, and it appears unlikely that they will be able to do so in the future. On the other hand, physiological tests are not the only way to gain insight into the state of comatose patients. EEG signal analysis has contributed extensively to the understanding of the human brain and human consciousness and has been used by researchers in the classification of different levels of disease. The use of EEG in the ICU has become an urgent matter in several cases and has been recommended by medical organizations. In this field, the EEG is used to investigate epilepsy, dementia, brain injuries, and many other neurological disorders. It has recently also been used to detect pain activity in some regions of the brain, to detect stress levels, and to evaluate sleep quality. In our recent work, our aim was to use multifractal analysis, a very successful method for handling multifractal signals and extracting features, to establish a state-of-awareness scale for comatose patients based on their electrical brain activity. The results show that this score can be computed instantaneously and can overcome many of the limitations from which the physiological scales suffer. Indeed, multifractal analysis stands out as a highly effective tool for characterizing non-stationary and self-similar signals, demonstrating strong performance in extracting the properties of fractal and multifractal data, including signals and images. 
As such, we leverage this method, along with other features derived from EEG recordings of comatose patients, to develop a scale that aims to accurately depict the vigilance state of patients in intensive care units and to address many of the limitations inherent in physiological scales such as the Glasgow Coma Scale (GCS) and the FOUR score. The results of applying version V0 of this approach to 30 patients with known GCS showed that the EEG-based score describes the states of vigilance similarly, but also distinguishes between the states of 8 sedated patients to whom the GCS could not be applied. Therefore, our approach could show promising results for patients with disabilities, patients administered painkillers, and other categories to which physiological scores cannot be applied.
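A minimal version of the multifractal feature extraction described above can be sketched with multifractal detrended fluctuation analysis (MFDFA), which estimates the generalized Hurst exponent h(q) from the scaling of detrended fluctuations. This is a generic illustration of the technique, not the authors' pipeline; the signal, scales, and q values are arbitrary assumptions, and white noise is used so the expected h(2) ≈ 0.5 is known.

```python
# Generic MFDFA sketch with linear (order-1) detrending; q must avoid 0.
import math
import random

def mfdfa_hq(x, scales, qs):
    """Estimate generalized Hurst exponents h(q) for a 1-D signal."""
    mean = sum(x) / len(x)
    profile, acc = [], 0.0          # profile: cumsum of mean-centered signal
    for v in x:
        acc += v - mean
        profile.append(acc)
    hq = {}
    for q in qs:
        log_s, log_f = [], []
        for s in scales:
            n_seg = len(profile) // s
            t_bar = (s - 1) / 2.0
            den = sum((t - t_bar) ** 2 for t in range(s))
            f2 = []                  # per-segment variance of detrended profile
            for i in range(n_seg):
                seg = profile[i * s:(i + 1) * s]
                s_bar = sum(seg) / s
                slope = sum((t - t_bar) * (y - s_bar)
                            for t, y in enumerate(seg)) / den
                intercept = s_bar - slope * t_bar
                f2.append(sum((y - (intercept + slope * t)) ** 2
                              for t, y in enumerate(seg)) / s)
            # q-th order fluctuation function F_q(s)
            fq = (sum(f ** (q / 2.0) for f in f2) / n_seg) ** (1.0 / q)
            log_s.append(math.log(s))
            log_f.append(math.log(fq))
        # h(q) is the slope of log F_q(s) versus log s
        n = len(log_s)
        xb, yb = sum(log_s) / n, sum(log_f) / n
        hq[q] = (sum((u - xb) * (v - yb) for u, v in zip(log_s, log_f))
                 / sum((u - xb) ** 2 for u in log_s))
    return hq

random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(4096)]
hq = mfdfa_hq(noise, scales=[16, 32, 64, 128, 256], qs=[-2, 2])
print(f"h(2) = {hq[2]:.2f}; width h(-2) - h(2) = {hq[-2] - hq[2]:.2f}")
```

For an EEG channel, the spread of h(q) across q (the multifractal width) is the kind of feature such a vigilance score could be built from.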

Keywords: coma, vigilance state, EEG, multifractal analysis, feature extraction

Procedia PDF Downloads 30
303 Visual Design of Walkable City as Sidewalk Integration with Dukuh Atas MRT Station in Jakarta

Authors: Nadia E. Christiana, Azzahra A. N. Ginting, Ardhito Nurcahya, Havisa P. Novira

Abstract:

One of the quickest ways to make a short trip in urban areas is by walking, whether individually, in couples, or in groups. Walkability has become one of the parameters used to measure the quality of an urban neighborhood. As a central business district and public transport transit hub, the Dukuh Atas area sees one of the highest numbers of commuters passing through and interchanging between transportation modes daily. Thus, as a public transport hub, investment should be focused on speeding up the development of the area to support transit activity between transportation modes, one element of which is revitalizing pedestrian walkways. The purpose of this research is to formulate a visual design concept for a 'walkable city' based on the results of observation and a series of rankings. To achieve this objective, several stages of research were carried out: (1) identifying the pedestrian path system in the Dukuh Atas area using a descriptive qualitative method; (2) analyzing the sidewalk walkability rate, as perceived by pedestrians and non-pedestrians in the area, using Global Walkability Index analysis and Multicriteria Satisfaction Analysis; and (3) analyzing the factors that determine the integration of pedestrian walkways in the Dukuh Atas area using a descriptive qualitative method. The results show that the walkability level of the Dukuh Atas corridor is 44.45, which falls within the 25-49 classification, indicating that only some facilities can be reached on foot. Furthermore, based on the questionnaire, the satisfaction rate with the pedestrian walkways in the Dukuh Atas area is 64%, from which it is concluded that commuters are not yet fully satisfied with the condition of the sidewalks. 
Meanwhile, the factors influencing integration in the Dukuh Atas area are reasonable, supported by land use and by modes such as the KRL, Busway, and MRT. From the results of all the analyses conducted, a visual design applying the walkable city concept along the pedestrian corridor of the Dukuh Atas area is formulated. The study achieved 80% of its intended results, and further review of the analysis is needed. This research is expected to serve as a recommendation or input for the government in developing pedestrian paths to maximize the use of public transportation modes.
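A Global Walkability Index score like the 44.45 reported above is, in essence, a weighted average of parameter ratings rescaled to 0-100. The sketch below illustrates that arithmetic only; the parameter names, ratings, weights, and band labels are illustrative assumptions paraphrasing the GWI classes, not the study's exact instrument or data.

```python
# Hedged sketch of a GWI-style score: each parameter rated 1-5, weighted,
# then scaled to 0-100. All values below are invented for illustration.

PARAMETERS = [  # (parameter, rating 1-5, weight)
    ("walking path conflict", 3, 15),
    ("availability of crossings", 2, 15),
    ("crossing safety", 2, 10),
    ("motorist behaviour", 2, 10),
    ("amenities", 2, 10),
    ("disability infrastructure", 1, 10),
    ("obstructions", 3, 15),
    ("security from crime", 3, 15),
]

def gwi_score(params):
    """Weighted average rating, scaled so the maximum (all 5s) is 100."""
    total_w = sum(w for _, _, w in params)
    weighted = sum(r * w for _, r, w in params)
    return 100.0 * weighted / (5 * total_w)

def gwi_class(score):
    """Band labels paraphrase the GWI classification; 25-49 matches the study."""
    if score < 25:
        return "not walkable"
    if score < 50:
        return "some facilities reachable on foot"
    if score < 70:
        return "fairly walkable"
    return "highly walkable"

s = gwi_score(PARAMETERS)
print(f"{s:.1f} -> {gwi_class(s)}")
```

With these invented ratings the corridor lands in the same 25-49 band as the study's 44.45.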

Keywords: design, global walkability index, mass rapid transit, walkable city

Procedia PDF Downloads 166
302 Diversity and Distribution Ecology of Coprophilous Mushrooms of Family Psathyrellaceae from Punjab, India

Authors: Amandeep Kaur, N. S. Atri, Munruchi Kaur

Abstract:

Mushrooms have shaped our environment in ways that we are only beginning to understand. The weather patterns, topography, flora, and fauna of Punjab state in India create favorable growing conditions for thousands of species of mushrooms, but the region was unexplored with respect to coprophilous mushrooms growing on herbivore dung. Coprophilous mushrooms are among the most ecologically specialized fungi, germinating and growing directly on different types of animal dung or on manured soil. In the present work, the diversity of coprophilous mushrooms of the family Psathyrellaceae of the order Agaricales is explored, their relationship to the human world is sketched out, and their significance to life on this planet is highlighted. During the investigation, dung localities in 16 districts of Punjab state were explored for the collection of material. The macroscopic features of the collected mushrooms were documented on a field key. Hand-cut sections of the various parts of the carpophore, such as the pileus, gills, and stipe, along with basidiospore details, were studied microscopically under different magnifications. Various authentic publications were consulted for the identification of the investigated taxa. The classification, authentic names, and synonyms of the investigated taxa follow the latest edition of the Dictionary of the Fungi and MycoBank. The present work deals with the taxonomy of 81 collections belonging to 39 species spread over five coprophilous genera, namely Psathyrella, Panaeolus, Parasola, Coprinopsis, and Coprinellus of the family Psathyrellaceae. In the text, the investigated taxa are arranged as they appear in the key to the genera and species investigated. All collections have been thoroughly examined for their macroscopic, microscopic, ecological, and chemical-reaction details, with an indication of their ecology and the dung types on which they can be found. 
Each taxon is accompanied by a detailed listing of its prominent features and is illustrated with habitat photographs and line drawings of morphological and anatomical features. Taxa are organized according to their position in the keys, which allows easy recognition, and all taxa are compared with similar taxa. The study has shown that dung is an important substrate that serves as a favorable niche for the growth of a variety of mushrooms. This paper offers an insight into what short-lived coprophilous mushrooms can teach us about sustaining life on earth.

Keywords: abundance, basidiomycota, biodiversity, seasonal availability, systematics

Procedia PDF Downloads 39
301 Knowledge Management Barriers: A Statistical Study of Hardware Development Engineering Teams within Restricted Environments

Authors: Nicholas S. Norbert Jr., John E. Bischoff, Christopher J. Willy

Abstract:

Knowledge Management (KM) is globally recognized as a crucial element in securing competitive advantage through building and maintaining organizational memory, codifying and protecting intellectual capital and business intelligence, and providing mechanisms for collaboration and innovation. KM frameworks and approaches have been developed and defined, identifying critical success factors for conducting KM within numerous industries ranging from scientific to business, and for organizations ranging in scale from small groups to large enterprises. However, engineering and technical teams operating within restricted environments are subject to unique barriers and KM challenges which cannot be directly treated using the approaches and tools prescribed for other industries. This research identifies barriers to conducting KM within Hardware Development Engineering (HDE) teams and statistically compares the significance of barriers to upholding the four KM pillars of organization, technology, leadership, and learning for HDE teams. HDE teams suffer from restrictions on knowledge sharing (KS) due to classification of information (national security risks), customer proprietary restrictions (non-disclosure agreements covering designs), the types of knowledge involved, the complexity of the knowledge to be shared, and knowledge-seeker expertise. As KM has evolved, leveraging information technology (IT) and web-based tools and approaches from Web 1.0 to Enterprise 2.0, KM may also seek to leverage emergent tools and analytics, including expert locators and hybrid recommender systems, to enable KS across the barriers of these technical teams. The research statistically tests the hypothesis that KM barriers for HDE teams affect the general set of expected benefits of a KM system identified through previous research. If correlations are identified, generalizations of success factors and approaches may also be garnered for HDE teams. 
Expert elicitation will be conducted using a questionnaire hosted on the internet and delivered to a panel of experts including engineering managers, principal and lead engineers, senior systems engineers, and knowledge management experts. The feedback to the questionnaire will be processed using analysis of variance (ANOVA) to identify and rank statistically significant barriers of HDE teams within the four KM pillars. Subsequently, KM approaches will be recommended for upholding the KM pillars within restricted environments of HDE teams.
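The planned ANOVA step can be sketched in a few lines: a one-way F statistic comparing the mean rating of one KM barrier across expert groups. The group names and Likert ratings below are fabricated placeholders for illustration, not elicited data.

```python
# Sketch of a one-way ANOVA F statistic on questionnaire ratings.
import statistics

def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a list of groups of ratings."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# fabricated Likert ratings (1-5) of one barrier's severity, by expert group
ratings = {
    "engineering managers": [4, 5, 4, 4, 5],
    "lead engineers":       [3, 3, 4, 2, 3],
    "KM experts":           [5, 4, 5, 5, 4],
}
f_stat, df_b, df_w = one_way_anova_f(list(ratings.values()))
print(f"F({df_b},{df_w}) = {f_stat:.2f}")
```

An F statistic this large relative to the F(2,12) critical value would mark the barrier's perceived severity as differing significantly between groups, feeding the subsequent ranking of significant barriers within the four pillars.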

Keywords: engineering management, knowledge barriers, knowledge management, knowledge sharing

Procedia PDF Downloads 246
300 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior

Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli

Abstract:

The refurbishment of public buildings is one of the key factors of the energy efficiency policies of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice in high-performance, low- and zero-carbon design, and for becoming exemplar cases within the community. In this context, this paper discusses the critical issues in the energy refurbishment of a university building in the heating-dominated climate of southern Italy. More specifically, the importance of using validated models is examined exhaustively through an analysis of the uncertainties due to modelling assumptions, mainly the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Today, most commercial tools provide designers with a library of predefined schedules with which thermal zones can be described. Very often, users do not take care to differentiate thermal zones or to modify and adapt the predefined profiles, and design results are affected, positively or negatively, without any warning. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest sources of variability in energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized, conventional schedules, with important consequences for the prediction of energy consumption. The problem is difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This is a typical uncertainty for a case study such as the one presented here, where there is no regulation system for the HVAC system, so occupants cannot interact with it. 
More specifically, starting from the adopted schedules, which were created according to questionnaire responses and allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: first, the reference building is compared with these scenarios in terms of percentage difference in projected total electric energy demand and natural gas demand; then the different consumption components are analyzed, and for the more interesting cases the calibration indices are also compared. Moreover, the same simulations are carried out for the optimal refurbishment solution, and the resulting variation in predicted energy savings and global cost reduction is shown. This parametric study aims to underline the effect of the modelling assumptions made when describing thermal zones on the evaluation of performance indices.
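The kind of schedule sensitivity test described above can be sketched as follows: perturb a deterministic hourly occupancy profile and measure the percentage change in annual internal heat gains. The occupant count, per-person gain, profile shape, and perturbation spread below are illustrative assumptions, not values from the case study.

```python
# Toy sensitivity sketch: deterministic vs randomly perturbed occupancy.
import random

PEOPLE = 120      # assumed zone peak occupancy (illustrative)
GAIN_W = 75       # sensible heat gain per person in W (typical office value)

# deterministic weekday profile: fraction of peak occupancy for hours 0-23
BASE = [0.0] * 8 + [0.6, 0.9, 0.9, 0.9, 0.5, 0.9, 0.9, 0.8, 0.6, 0.3] + [0.0] * 6

def annual_gain_kwh(profile, weekdays=261):
    """Annual occupant heat gain in kWh for one repeated weekday profile."""
    return sum(profile) * PEOPLE * GAIN_W * weekdays / 1000.0

def perturbed(profile, rng, spread=0.2):
    """Scale each occupied hour by a random factor in [1-spread, 1+spread]."""
    return [f * (1.0 + rng.uniform(-spread, spread)) if f else 0.0
            for f in profile]

rng = random.Random(42)
base = annual_gain_kwh(BASE)
diffs = [100.0 * (annual_gain_kwh(perturbed(BASE, rng)) - base) / base
         for _ in range(1000)]
print(f"base = {base:.0f} kWh/yr; deviations span "
      f"{min(diffs):+.1f}% to {max(diffs):+.1f}%")
```

In a full study, each perturbed profile would drive a building simulation rather than this closed-form gain total, but the spread of the resulting percentage differences is the quantity the sensitivity analysis reports.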

Keywords: energy simulation, modelling calibration, occupant behavior, university building

Procedia PDF Downloads 116
299 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that does not represent an allele of a potential contributor and is considered an artefact presumed to arise from miscopying or slippage during PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, calculated relative to the corresponding parent allele height. The analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts such as stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and make significantly greater use of continuous peak height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology for distinguishing between stutters and real alleles is essential for the accuracy of the interpretation, and any such method has to be able to model stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data-analysis tools in the clustering and classification of data and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is represented by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult, even though the calculations are relatively simple. 
Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the choice of the number of components. The Chinese restaurant process (CRP), the stick-breaking process, and the Pólya urn scheme are frequently used as representations of Dirichlet priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling the stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
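The Chinese restaurant process mentioned above can be sketched directly: customer n joins an existing table (cluster) with probability proportional to its occupancy, or opens a new table with probability proportional to the concentration parameter alpha. The sketch below is a generic illustration of the prior with arbitrary parameter values, not the authors' modified model.

```python
# Generic CRP sketch: sequential seating induces a random clustering
# whose number of clusters is not fixed in advance.
import random

def crp_assignments(n, alpha, rng):
    """Sample cluster labels for n observations from a CRP(alpha) prior."""
    tables = []        # tables[k] = number of customers seated at table k
    labels = []
    for i in range(n):
        # existing tables weighted by occupancy; a new table weighted by alpha;
        # total weight at step i is i + alpha
        r = rng.uniform(0.0, i + alpha)
        acc, choice = 0.0, len(tables)   # default: open a new table
        for k, size in enumerate(tables):
            acc += size
            if r < acc:
                choice = k
                break
        if choice == len(tables):
            tables.append(1)
        else:
            tables[choice] += 1
        labels.append(choice)
    return labels, tables

rng = random.Random(7)
labels, tables = crp_assignments(200, alpha=1.5, rng=rng)
print(f"{len(tables)} clusters; sizes: {sorted(tables, reverse=True)}")
```

The expected number of occupied tables grows only logarithmically with n (roughly alpha times log(1 + n/alpha)), which is what lets the infinite mixture infer the number of regression components from the data.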

Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter

Procedia PDF Downloads 306