Search results for: classical methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15608

11918 Disentangling the Sources and Context of Daily Work Stress: Study Protocol of a Comprehensive Real-Time Modelling Study Using Portable Devices

Authors: Larissa Bolliger, Junoš Lukan, Mitja Lustrek, Dirk De Bacquer, Els Clays

Abstract:

Introduction and Aim: Chronic workplace stress and its health-related consequences like mental and cardiovascular diseases have been widely investigated. This project focuses on the sources and context of psychosocial daily workplace stress in a real-world setting. The main objective is to analyze and model real-time relationships between (1) psychosocial stress experiences within the natural work environment, (2) micro-level work activities and events, and (3) physiological signals and behaviors in office workers. Methods: An Ecological Momentary Assessment (EMA) protocol has been developed, partly building on machine learning techniques. Empatica® wristbands will be used for real-life detection of stress from physiological signals; micro-level activities and events at work will be based on smartphone registrations, further processed according to an automated computer algorithm. A field study including 100 office-based workers with high-level problem-solving tasks like managers and researchers will be implemented in Slovenia and Belgium (50 in each country). Data mining and state-of-the-art statistical methods – mainly multilevel statistical modelling for repeated data – will be used. Expected Results and Impact: The project findings will provide novel contributions to the field of occupational health research. While traditional assessments provide information about global perceived state of chronic stress exposure, the EMA approach is expected to bring new insights about daily fluctuating work stress experiences, especially micro-level events and activities at work that induce acute physiological stress responses. The project is therefore likely to generate further evidence on relevant stressors in a real-time working environment and hence make it possible to advise on workplace procedures and policies for reducing stress.
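As a rough illustration of why the protocol's repeated momentary ratings call for multilevel modelling, the following sketch decomposes EMA-style stress ratings into between-person and within-person variance, the split that a random-intercept model formalizes. The data are synthetic; the worker count, rating scale, and variance parameters are illustrative and not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic EMA data: 50 workers, 20 momentary stress ratings each.
n_workers, n_obs = 50, 20
person_level = rng.normal(5.0, 1.0, n_workers)      # stable between-person differences
ratings = person_level[:, None] + rng.normal(0.0, 0.5, (n_workers, n_obs))

# Variance decomposition underlying a random-intercept multilevel model.
between_var = ratings.mean(axis=1).var(ddof=1)      # variance of person means
within_var = ratings.var(axis=1, ddof=1).mean()     # average momentary fluctuation
icc = between_var / (between_var + within_var)      # intraclass correlation
print(f"ICC = {icc:.2f}")
```

A high intraclass correlation means most variance lies between workers; the EMA design targets the within-person share that one-shot questionnaires on globally perceived stress cannot capture.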

Keywords: ecological momentary assessment, real-time, stress, work

Procedia PDF Downloads 142
11917 A Review of Applying Serious Games on Learning

Authors: Carlos Oliveira, Ulrick Pimentel

Abstract:

Digital games have conquered a growing space in the lives of children, adolescents, and adults. From this perspective, the use of this resource has shown itself to be an important strategy for facilitating the learning process. This research is a literature review on the use of serious games in teaching; it presents the characteristics of these games, the benefits and possible harms that this resource can produce, and possible methods for evaluating its effectiveness in teaching. The results indicate that serious games have significant potential as a tool for instruction. However, their effectiveness in terms of learning outcomes is still poorly studied, mainly due to the complexity involved in evaluating intangible measures.

Keywords: serious games, learning, application, literature review

Procedia PDF Downloads 294
11916 Efficacy of Coconut Shell Pyrolytic Oil Distillate in Protecting Wood Against Bio-Deterioration

Authors: K. S. Shiny, R. Sundararaj

Abstract:

Coconut trees (Cocos nucifera L.) are grown in many parts of India and the world because of their multiple uses. During pyrolysis, coconut shells yield an oil, which is a dark, thick liquid. Upon simple distillation, it produces a more or less colourless liquid, termed coconut shell pyrolytic oil distillate (CSPOD). This manuscript reports and discusses the use of coconut shell pyrolytic oil distillate as a potential wood protectant against bio-deterioration. Since botanical products are being tested worldwide as eco-friendly wood protectants, the utilization of CSPOD as a wood protectant is of great importance. The efficacy of CSPOD as a wood protectant was evaluated as per the Bureau of Indian Standards (BIS) in terms of its antifungal, anti-borer, and termiticidal activities. Specimens of rubber wood (Hevea brasiliensis), in six replicates each for two treatment methods, namely spraying and dipping (48 h), were employed. CSPOD was found to impart total protection against termites for six months compared to control under field conditions. For assessing the efficacy of CSPOD against fungi, the treated blocks were subjected to attack by two white rot fungi, Tyromyces versicolor (L.) Fr. and Polyporus sanguineus (L.) G. Mey, and two brown rot fungi, Polyporus meliae (Undrew.) Murrill. and Oligoporus placenta (Fr.) Gilb. & Ryvarden. Results indicated that treatment with CSPOD significantly protected wood from the damage caused by the decay fungi. The efficacy of CSPOD against the wood borer Lyctus africanus Lesne was assessed using six pairs of male and female beetles, and it gave promising results in protecting the treated wood blocks when compared to control blocks. As far as the treatment methods were concerned, dip treatment was found to be more effective than spraying. The results of the present investigation indicate that CSPOD is a promising botanical compound with the potential to replace synthetic wood protectants. As coconut shell pyrolytic oil is a waste byproduct of the coconut shell charcoal industry, its utilization as a wood preservative will expand the economic returns from such industries.

Keywords: coconut shell pyrolytic oil distillate, eco-friendly wood protection, termites, wood borers, wood decay fungi

Procedia PDF Downloads 356
11915 Experimental Study of the Behavior of Elongated Non-spherical Particles in Wall-Bounded Turbulent Flows

Authors: Manuel Alejandro Taborda Ceballos, Martin Sommerfeld

Abstract:

Transport and dispersion of non-spherical particles in turbulent flows are found everywhere in industrial applications and processes. Powder handling, pollution control, pneumatic transport, and particle separation are just some examples where the particles encountered are not only spherical. These types of multiphase flows are wall-bounded and mostly highly turbulent. The particles found in these processes are rarely spherical but may have various shapes (e.g., fibers and rods). Although research on the behavior of regular non-spherical particles in turbulent flows has been carried out for many years, it is still necessary to refine models, especially near walls, where fiber-wall interaction completely changes the particle behavior. Imaging-based experimental studies on dispersed particle-laden flows have been applied for many decades for detailed experimental analysis. These techniques have the advantage that they provide field information in two or three dimensions, but they have a lower temporal resolution compared to point-wise techniques such as PDA (phase-Doppler anemometry) and derivations therefrom. The imaging techniques applied to dispersed two-phase flows are extensions of classical PIV (particle image velocimetry) and PTV (particle tracking velocimetry), and the main emphasis has been the simultaneous measurement of the velocity fields of both phases. In a similar way, such data should also provide adequate information for validating the proposed models. Available experimental studies on the behavior of non-spherical particles are uncommon and mostly based on planar light-sheet measurements. Especially for elongated non-spherical particles, however, three-dimensional measurements are needed to fully describe their motion and to provide sufficient information for validation of numerical computations.
To provide detailed experimental results allowing validation of numerical calculations of non-spherical particle dispersion in turbulent flows, a test facility was built around a horizontal closed water channel. Into this horizontal main flow, a small cross-jet laden with fiber-like particles, driven solely by gravity, was injected. The dispersion of the fibers was measured by applying imaging techniques based on an LED array for backlighting and high-speed cameras. For obtaining the fluid velocity fields, almost neutrally buoyant tracers were used. The discrimination between tracers and fibers was based on image size, which was also the basis for determining fiber orientation with respect to the inertial coordinate system. The synchronous measurement of fluid velocity and fiber properties also allows the collection of statistics on fiber orientation, the velocity fields of tracers and fibers, the angular velocity of the fibers, and the orientation between the fiber and the instantaneous relative velocity. Consequently, an experimental study of the behavior of elongated non-spherical particles in wall-bounded turbulent flows was achieved. A comprehensive analysis was developed, especially for the near-wall region, where hydrodynamic wall-interaction effects (e.g., collision or lubrication) and abrupt changes of particle rotational velocity occur. This subsequently allowed the behavior of non-spherical particles to be predicted numerically within the frame of the Euler/Lagrange approach, where the particles are treated as "point-particles".
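The size-based discrimination between tracers and fibers, and the image-moment route to a fiber's in-plane orientation, can be sketched as follows. The blob coordinates are synthetic and the classification threshold is illustrative, not the study's actual value.

```python
import numpy as np

def blob_properties(xs, ys):
    """Pixel count, aspect ratio, and in-plane orientation (degrees, mod 180) of a blob."""
    pts = np.stack([xs, ys]).astype(float)
    cov = np.cov(pts)
    evals, evecs = np.linalg.eigh(cov)                 # eigenvalues in ascending order
    aspect = np.sqrt(evals[1] / max(evals[0], 1e-12))  # elongation measure
    major = evecs[:, 1]                                # principal axis of the blob
    angle = np.degrees(np.arctan2(major[1], major[0])) % 180.0
    return pts.shape[1], aspect, angle

# Synthetic elongated 'fiber' oriented at 45 degrees.
t = np.linspace(-10.0, 10.0, 200)
xs, ys = t * np.cos(np.pi / 4), t * np.sin(np.pi / 4)
area, aspect, angle = blob_properties(xs, ys)
label = "fiber" if aspect > 3.0 else "tracer"          # illustrative size/shape threshold
```

Compact, near-isotropic blobs (low aspect ratio, small area) would be kept as tracers for the fluid velocity field, while elongated blobs yield both a fiber detection and its orientation angle.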

Keywords: crossflow, non-spherical particles, particle tracking velocimetry, PIV

Procedia PDF Downloads 76
11914 Nose Macroneedling Tie Suture Hidden Technique

Authors: Mohamed Ghoz, Hala Alsabeh

Abstract:

Context: Macroscopic Nose Macroneedling (MNM) is a new non-surgical procedure for lifting and tightening the nose. It is a minimally invasive technique that uses a needle to create micro-injuries in the skin. These injuries stimulate the production of collagen and elastin, which results in the tightening and lifting of the skin. Research Aim: The aim of this study was to investigate the efficacy and safety of MNM for the treatment of nasal deformities. Methodology: A total of 100 patients with nasal deformities were included in this study. The patients were randomly assigned to either the MNM group or the control group. The MNM group received a single treatment of MNM, while the control group received no treatment. The patients were evaluated at baseline and at 6 and 12 months after treatment. Findings: The results of this study showed that MNM was effective in improving the appearance of the nose in patients with nasal deformities. At 6 months after treatment, the patients in the MNM group had significantly improved nasal tip projection, nasal bridge height, and nasal width compared to the patients in the control group. The improvements in nasal appearance were maintained at 12 months after treatment. Theoretical Importance: The findings of this study support the use of MNM as a safe and effective treatment for nasal deformities. MNM is a non-surgical procedure associated with minimal downtime and no risk of scarring, which makes it an attractive option for patients looking for a minimally invasive treatment for their nasal deformities. Data Collection: Data were collected from the patients using a variety of methods, including clinical assessments, photographic assessments, and patient-reported outcome measures. Analysis Procedures: The data were analyzed using a variety of statistical methods, including descriptive statistics, inferential statistics, and meta-analysis.
Question Addressed: The research question addressed in this study was whether MNM is an effective and safe treatment for nasal deformities. Conclusion: The findings of this study suggest that MNM is an effective and safe treatment for nasal deformities. As a non-surgical procedure with minimal downtime and no risk of scarring, it is an attractive option for patients seeking a minimally invasive treatment for their nasal deformities.

Keywords: nose, surgery, tie, suture

Procedia PDF Downloads 61
11913 Long-Term Results of Surgical Treatment of Atrial Fibrillation in Patients with Coronary Heart Disease: One Center Experience

Authors: Emil Sakharov, Alex Zotov, Ilkin Osmanov, Oleg Shelest, Aleksander Troitskiy, Robert Khabazov

Abstract:

Objective: Since 2015, our center has been actively implementing methods of surgical correction of atrial fibrillation, in particular in patients with coronary heart disease. The study presents a comparative analysis of the late postoperative period in patients undergoing coronary artery bypass grafting with atrial fibrillation. Methods: The study included 150 patients with ischemic heart disease and atrial fibrillation treated between 2015 and 2021. Patients were divided into 2 groups. The first group comprised patients with ischemic heart disease and atrial fibrillation who underwent coronary bypass surgery and surgical correction of atrial fibrillation (N=50). The second group comprised patients with ischemic heart disease and atrial fibrillation who underwent only myocardial revascularization (N=100). Patients were comparable in age, gender, and initial severity of the condition. In group 1, 82% of patients were men, compared with 75% in group 2. In the first group, 36% had persistent atrial fibrillation and 20% had long-standing persistent atrial fibrillation; in the second group, the figures were 10% and 17%, respectively. Results: Average follow-up for groups 1 and 2 amounted to 47 months. There were no complications, such as bleeding or stroke, in group 1. Only 1 patient in group 1 died from cardiovascular disease. Freedom from atrial fibrillation was achieved in 82% without antiarrhythmic drug (AAD) therapy. In group 2, 8 patients died from cardiovascular diseases, and total freedom from atrial fibrillation was achieved in 35% of patients, of whom 42.8% received additional AAD therapy. Follow-up data are presented in Table 2. Progression of heart failure was observed in 3% in group 1 and 7% in group 2.
Combined endpoints (recurrence of AF, stroke, progression of heart failure, myocardial infarction) were reached in 16% in group 1 and 34% in group 2, respectively. Freedom from atrial fibrillation without antiarrhythmic therapy was 82% for group 1 and 35% for group 2. The first group showed a more pronounced decrease in heart failure rates. Deaths from cardiovascular causes were recorded in 2% for group 1 and 7% for group 2. Conclusion: Surgical treatment of atrial fibrillation helps to reduce adverse complications in the late postoperative period and contributes to the regression of heart failure.

Keywords: atrial fibrillation, coronary artery bypass grafting, ischaemic heart disease, heart failure

Procedia PDF Downloads 107
11912 Quo Vadis, European Football: An Analysis of the Impact of Over-The-Top Services in the Sports Rights Market

Authors: Farangiz Davranbekova

Abstract:

Subject: The study explores the impact of Over-the-Top (OTT) services in the sports rights market, focusing on football games. This impact is analysed in the big five European football markets. The research examines how the pay-TV market is combating the disruptors' entry, how fans are adjusting to these changes, and how leagues and football clubs are orienting themselves in a transitional period of greater choice. Aims and methods: The research aims to offer a general overview of the impact of OTT players in the football rights market. A theoretical framework of Jenkins' five layers of convergence is applied to analyse the transition the sports rights market is witnessing from various angles. The empirical analysis consists of secondary research data and seven expert interviews from three different clusters. The findings draw on the combination of the two methods, offering general statements. Findings: The combined secondary data and expert interviews, organised along the five layers of convergence, found: 1. Technological convergence shows that football content is accessible through various devices with innovative digital features, unlike the traditional TV set-top box. 2. Social convergence demonstrates that football fans multitask, using various devices on social media when watching the games. These activities are complementary to traditional TV viewing. 3. Cultural convergence points out that football fans have a new layer of fan engagement with leagues, clubs, and other fans through social media. Additionally, the lines between production and consumption are blurred. 4. Economic convergence finds that content distribution is diversifying and/or eroding. Consumers now have more choices, albeit this can be harmful to them. Entry barriers are lowered, and bigger clubs feel more powerful. 5. Global convergence shows that football fans are engaging not only with local fans but with fans around the world, which social media sites enable.
Recommendation: A study of smaller markets such as Belgium or the Netherlands would complement this study of the impact of OTT. Additionally, examination of other sports would shed light on this matter. Lastly, once the direct-to-consumer model has fully taken off in Europe, it will be important to examine the impact of such a transformation on the market.

Keywords: sports rights, OTT, pay TV, football

Procedia PDF Downloads 143
11911 Development of a Value Evaluation Model of Highway Box-Girder Bridge

Authors: Hao Hsi Tseng

Abstract:

Taiwan’s infrastructure is gradually deteriorating, while resources for maintenance and replacement are increasingly limited, raising the urgent need for methods for maintaining existing infrastructure within constrained budgets. Infrastructure value evaluation is used to enhance the efficiency of infrastructure maintenance work, allowing administrators to quickly assess the maintenance needs and performance by observing variation in infrastructure value. This research establishes a value evaluation model for Taiwan’s highway box girder bridges. The operating mechanism and process of the model are illustrated in a practical case.

Keywords: box girder bridge, deterioration, infrastructure, maintenance, value evaluation

Procedia PDF Downloads 176
11910 Symbolic Computation and Abundant Travelling Wave Solutions to Modified Burgers' Equation

Authors: Muhammad Younis

Abstract:

In this article, the novel (G′/G)-expansion method is successfully applied to construct abundant travelling wave solutions to the modified Burgers' equation with the aid of symbolic computation. The method is reliable and useful, and it gives more general exact travelling wave solutions than the existing methods. The obtained solutions take the form of hyperbolic, trigonometric, and rational functions, including solitary, singular, and periodic solutions, which have many potential applications in physical science and engineering. Some of these solutions are new, and some have already been constructed. Additionally, the constraint conditions for the existence of the solutions are also listed.
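For reference, a sketch of the standard (G′/G)-expansion ansatz follows; this is the general form of the method, not this paper's specific derivation for the modified Burgers' equation.

```latex
% Travelling-wave reduction u(x,t) = U(\xi), \xi = x - ct, turns the PDE into an ODE in U.
% The (G'/G)-expansion method then seeks solutions of the form
U(\xi) = \sum_{i=0}^{m} a_i \left(\frac{G'(\xi)}{G(\xi)}\right)^{i}, \qquad a_m \neq 0,
% where G satisfies the auxiliary linear equation
G''(\xi) + \lambda\, G'(\xi) + \mu\, G(\xi) = 0,
% and m is fixed by balancing the highest-order derivative against the strongest nonlinearity.
```

Depending on the sign of the discriminant λ² − 4μ, the ratio G′/G is hyperbolic, trigonometric, or rational, which is how the three solution families listed in the abstract arise.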

Keywords: traveling wave solutions, NLPDE, computation, integrability

Procedia PDF Downloads 420
11909 Understanding Help Seeking among Black Women with Clinically Significant Posttraumatic Stress Symptoms

Authors: Glenda Wrenn, Juliet Muzere, Meldra Hall, Allyson Belton, Kisha Holden, Chanita Hughes-Halbert, Martha Kent, Bekh Bradley

Abstract:

Understanding the help-seeking decision-making process and experiences of health-disparity populations with posttraumatic stress disorder (PTSD) is central to the development of trauma-informed, culturally centered, and patient-focused services. Yet, little is known about the decision-making process among adult Black women who are non-treatment seekers, as they are, by definition, not engaged in services. Methods: Audiotaped interviews were conducted with 30 African American adult women with clinically significant PTSD symptoms who were engaged in primary care but not in treatment for PTSD despite symptom burden. A qualitative interview guide was used to elucidate key themes. Independent coding of themes mapped to theory and identification of emergent themes were conducted using qualitative methods. An existing quantitative dataset was analyzed to contextualize responses and provide a descriptive summary of the sample. Results: Emergent themes revealed active mental avoidance, the intermittent nature of distress, ambivalence, and self-identified resilience as factors undermining help-seeking decisions. Participants were stuck within the help-seeking phase of 'recognition' of illness and retained a sense of "it is my decision" despite endorsing significant negative social and environmental influences. Participants distinguished 'help acceptance' from 'help seeking', with greater willingness to accept help and importance placed on being of help to others. Conclusions: Elucidation of the decision-making process from the perspective of non-treatment seekers has implications for outreach and treatment within models of integrated and specialty systems care. The salience of responses to trauma symptoms and stagnation in the help-seeking recognition phase are findings relevant to integrated care service design and community engagement.

Keywords: culture, help-seeking, integrated care, PTSD

Procedia PDF Downloads 224
11908 Use of Magnesium as a Renewable Energy Source

Authors: Rafayel K. Kostanyan

Abstract:

The opportunities for using metallic magnesium as a generator of hydrogen gas, as well as of thermal and electric energy, are presented in this paper. Various schemes of magnesium application are discussed, and the power characteristics of the corresponding devices are presented. An economic estimation of the price of hydrogen obtained by different methods is made, including the use of magnesium as a source of hydrogen for transportation in comparison with gasoline. Details and prospects of our new, inexpensive technology for magnesium production from magnesium hydroxide and magnesium-bearing rocks (which are available worldwide and in Armenia) are analyzed. The threshold cost of Mg production at which the application of this metal in power engineering is economically justified is estimated.
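A back-of-the-envelope check of magnesium's hydrogen yield, based only on the standard hydrolysis reaction; the figures below come from basic stoichiometry, not from the paper's economic analysis.

```python
# Stoichiometry of magnesium hydrolysis: Mg + 2 H2O -> Mg(OH)2 + H2
M_MG, M_H2 = 24.305, 2.016           # molar masses in g/mol
mol_mg = 1000.0 / M_MG               # moles of Mg in 1 kg; each mole yields 1 mol H2
kg_h2_per_kg_mg = mol_mg * M_H2 / 1000.0
print(f"{kg_h2_per_kg_mg:.4f} kg H2 per kg Mg")   # roughly 0.083 kg
```

This roughly 8% hydrogen yield by mass is why the threshold cost of Mg production, rather than the chemistry, decides whether such schemes compete with gasoline.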

Keywords: energy, electrodialysis, magnesium, new technology

Procedia PDF Downloads 257
11907 Improved Approach to the Treatment of Resistant Breast Cancer

Authors: Lola T. Alimkhodjaeva, Lola T. Zakirova, Soniya S. Ziyavidenova

Abstract:

Background: Breast cancer (BC) remains one of the most urgent problems in oncology. A major obstacle to the full implementation of antitumor therapy is the development of drug resistance. Taking into account the fact that chemotherapy is the main antitumor treatment in BC patients, an important task is to improve treatment results. Certain success in overcoming this situation has been associated with the use of methods of extracorporeal blood treatment (ECBT), such as plasmapheresis. Materials and Methods: We examined 129 women with resistant BC of stages 3-4, aged between 56 and 62 years, who had previously received 2 courses of CAF chemotherapy. All patients additionally underwent 2 courses of CAF chemotherapy, but against the background of ECBT with ultrasonic exposure. We studied the following parameters: 1. Peripheral blood parameters before and after therapy. 2. The state of cellular immunity; identification of the activation markers CD23+, CD25+, CD38+, and CD95+ on lymphocytes was performed using monoclonal antibodies. Humoral immunity was evaluated by the levels of the main classes of immunoglobulins, IgG, IgA, and IgM, in serum. 3. The degree of tumor regression was assessed by the 4 gradations recommended by the WHO (complete response: 100%; partial response: more than 50% of the initial size; process stabilization: regression of less than 50% of the initial size; and tumor progression). 4. Medical pathomorphism in the tumor was determined according to Lavnikova. 5. Immediate and remote results were studied, up to 3 years and more. Results and Discussion: After extracorporeal blood treatment, anemia occurred in 38.9%, leukopenia in 36.8%, thrombocytopenia in 34.6%, and hypolymphemia in 26.8%. Studies of immunoglobulin fractions in blood serum established a certain relationship between the immunoglobulin classes A, G, and M and their functions. The results showed that after treatment, the values of the main immunoglobulins in the patients' serum approximated normal.
Analysis of the expression of the activation markers CD25+ (cells bearing receptors for IL-2, the IL-2Rα chain) and CD95+ (lymphocytes mediating physiological apoptosis) showed a tendency to increase, which was apparently due to activation of cellular immunity by cytokines released under ultrasonic treatment. Carrying out ECBT against the background of ultrasonic treatment improved the parameters of the immune system, expressed in the stimulation of cellular immunity and the correction of imbalances in humoral immunity. The key indicator of the efficiency of the conducted treatment is the immediate result, measured by the degree of tumor regression. After ECBT, complete regression was observed in 10.3%, partial response in 55.5%, and process stabilization in 34.5%; no tumor progression was observed. Morphological investigations of the tumor determined therapeutic pathomorphism of grade 2 in 15%, grade 3 in 25%, and grade 4 in 60% of patients. One of the main criteria for the effect of the conducted treatment is the study of remission terms in the postoperative period (up to 3 years or more). Remission for up to 3 years with ECBT was achieved in 34.5%, and 5-year survival was 54%. The research carried out suggests that a comprehensive study of the immunological and clinical course of breast cancer allows a differentiated approach to the choice of methods for effective treatment.

Keywords: breast cancer, immunoglobulins, extracorporeal blood treatment, chemotherapy

Procedia PDF Downloads 262
11906 Next Generation Radiation Risk Assessment and Prediction Tools Generation Applying AI-Machine (Deep) Learning Algorithms

Authors: Selim M. Khan

Abstract:

Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Indeed, exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor air radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon levels precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI-machine (deep) learning to simultaneously analyze multiple quantitative and qualitative features makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor air radon exposure data, we are using artificial deep neural network models with random weights, together with polynomial statistical models in MATLAB, to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built-environment metrics. Our initial artificial neural network with random weights, run with sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss their strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose an artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.
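A minimal sketch of a neural network with random weights of the kind described: a fixed random sigmoid hidden layer whose output weights alone are fitted by least squares. The data below are synthetic stand-ins for the radon predictors, the layer sizes are illustrative, and this is NumPy rather than the authors' MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-in for mixed radon predictors and a nonlinear response.
n, d, hidden = 500, 6, 200
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=n)

# Random-weights network: the hidden layer is drawn once and frozen;
# only the linear output layer is trained (here by least squares).
W = rng.normal(size=(d, hidden))
b = rng.normal(size=hidden)
H = np.c_[sigmoid(X @ W + b), np.ones(n)]       # random sigmoid features + bias column
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ beta
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Because only the output layer is fitted, training reduces to a single linear solve, which is part of the appeal of random-weights networks for heterogeneous tabular predictors.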

Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America

Procedia PDF Downloads 85
11905 Model Averaging for Poisson Regression

Authors: Zhou Jianhong

Abstract:

Model averaging is a desirable approach to dealing with model uncertainty which, however, has rarely been explored for Poisson regression. In this paper, we propose a model averaging procedure based on an unbiased estimator of the expected Kullback-Leibler distance for Poisson regression. A simulation study shows that the proposed model average estimator outperforms some other commonly used model selection and model average estimators in some situations. The proposed methods are further applied to a real data example, and the advantage of the method is demonstrated again.
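The paper's exact criterion is not reproduced here, but the flavor of KL-motivated model averaging for Poisson regression can be sketched with Akaike weights, a common surrogate for expected Kullback-Leibler loss. The data and candidate models below are illustrative.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(2)

def fit_poisson(X, y, iters=30):
    """Poisson regression with log link, fitted by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        beta += np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))
    return beta

def poisson_loglik(y, mu):
    return float(np.sum(y * np.log(mu) - mu) - sum(lgamma(v + 1.0) for v in y))

# Synthetic counts: x1 matters, x2 is pure noise.
n = 400
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = rng.poisson(np.exp(0.3 + 0.7 * x1))

designs = [np.c_[np.ones(n)], np.c_[np.ones(n), x1], np.c_[np.ones(n), x1, x2]]
aic, preds = [], []
for X in designs:
    mu = np.exp(X @ fit_poisson(X, y))
    aic.append(2 * X.shape[1] - 2 * poisson_loglik(y, mu))
    preds.append(mu)

# Akaike weights approximate relative expected KL loss; average the fitted means.
delta = np.array(aic) - min(aic)
w = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()
mu_avg = sum(wi * p for wi, p in zip(w, preds))
```

The averaged prediction `mu_avg` downweights the clearly misspecified intercept-only model while hedging between the two models that contain the relevant covariate, rather than committing to a single selected model.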

Keywords: model averaging, Poisson regression, Kullback-Leibler distance, statistics

Procedia PDF Downloads 504
11904 Single Cell Analysis of Circulating Monocytes in Prostate Cancer Patients

Authors: Leander Van Neste, Kirk Wojno

Abstract:

The innate immune system reacts to foreign insult in several unique ways, one of which is phagocytosis of perceived threats such as cancer, bacteria, and viruses. The goal of this study was to look for evidence of phagocytosed RNA from tumor cells in circulating monocytes. While all monocytes possess phagocytic capabilities, the non-classical CD14+/FCGR3A+ monocytes and the intermediate CD14++/FCGR3A+ monocytes most actively remove threatening 'external' cellular materials. Purified CD14-positive monocyte samples from fourteen patients recently diagnosed with clinically localized prostate cancer (PCa) were investigated by single-cell RNA sequencing using the 10X Genomics protocol, followed by paired-end sequencing on Illumina's NovaSeq. Similarly processed samples were used as controls: one patient who underwent biopsy but was found not to harbor prostate cancer (benign), three young healthy men, and three men previously diagnosed with prostate cancer who had recently undergone (curative) radical prostatectomy (post-RP). Sequencing data were mapped using 10X Genomics' CellRanger software, and viable cells were subsequently identified using CellBender, removing technical artifacts such as doublets and non-cellular RNA. Next, data analysis was performed in R using the Seurat package. Because the main goal was to identify differences between PCa patients and 'control' patients, rather than to explore differences between individual subjects, the individual Seurat objects of all 21 patients were merged into one Seurat object, per Seurat's recommendation. Finally, the single-cell dataset was normalized as a whole prior to further analysis. Cell identity was assessed using the SingleR and celldex packages. The Monaco Immune Data was selected as the reference dataset, consisting of bulk RNA-seq data of sorted human immune cells.
The Monaco classification was supplemented with normalized PCa data obtained from The Cancer Genome Atlas (TCGA), which consists of bulk RNA sequencing data from 499 prostate tumor tissues (including 1 metastatic) and 52 (adjacent) normal prostate tissues. SingleR was subsequently run on the combined immune cell and PCa datasets. As expected, the vast majority of cells were labeled as having a monocytic origin (~90%), with the most noticeable difference being the larger proportion of intermediate monocytes in the PCa patients (13.6% versus 7.1%; p<.001). In men harboring PCa, 0.60% of all purified monocytes were classified as harboring PCa signals when the TCGA data were included. This was 3-fold, 7.5-fold, and 4-fold higher compared to the post-RP, benign, and young men, respectively (all p<.001). In addition, at 7.91%, the number of unclassified cells, i.e., cells with pruned labels due to high uncertainty of the assigned label, was also highest in men with PCa, compared to 3.51%, 2.67%, and 5.51% of cells in the post-RP, benign, and young men, respectively (all p<.001). It can be postulated that actively phagocytosing cells are the hardest to classify due to their dual immune-cell and foreign-cell nature. Hence, the higher numbers of unclassified cells and intermediate monocytes in PCa patients might reflect higher phagocytic activity due to tumor burden. This also illustrates that the small fraction (~1%) of circulating peripheral blood monocytes that have interacted with tumor cells might still possess detectable phagocytosed tumor RNA.
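A toy sketch of the correlation-based label assignment that reference-based classifiers like SingleR perform: each cell is assigned to the reference profile with which it has the highest Spearman correlation. This is a simplified nearest-reference classifier on synthetic profiles, not the actual SingleR implementation or the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)

def ranks(a):
    """Row-wise ranks (0..n-1), assuming no ties."""
    return np.argsort(np.argsort(a, axis=1), axis=1).astype(float)

def assign_labels(cells, refs, labels):
    """Assign each cell to the reference profile with the highest Spearman correlation."""
    zc, zr = ranks(cells), ranks(refs)          # rank-transform -> Spearman via Pearson
    zc = (zc - zc.mean(1, keepdims=True)) / zc.std(1, keepdims=True)
    zr = (zr - zr.mean(1, keepdims=True)) / zr.std(1, keepdims=True)
    corr = zc @ zr.T / zc.shape[1]              # cells x references correlation matrix
    return [labels[i] for i in corr.argmax(1)], corr.max(1)

# Three synthetic reference profiles over 300 genes, and noisy cells drawn from them.
n_genes = 300
refs = rng.normal(size=(3, n_genes))
labels = ["classical_mono", "intermediate_mono", "tumor"]
cells = refs[[0, 2, 1]] + 0.3 * rng.normal(size=(3, n_genes))
assigned, scores = assign_labels(cells, refs, labels)
```

A pruning step of the kind SingleR applies would mark cells whose best score is too low or too ambiguous as unclassified; that mechanism underlies the pruned labels discussed in the abstract.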

Keywords: circulating monocytes, phagocytic cells, prostate cancer, tumor immune response

Procedia PDF Downloads 154
11903 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction

Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach

Abstract:

X-ray phase contrast imaging has been shown to yield better contrast than conventional attenuation X-ray imaging, especially for soft tissues in the medical imaging energy range, which can potentially lead to better diagnoses. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires highly coherent X-rays. Many research teams have demonstrated that it is also feasible with a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement for fine gratings and high-precision stepping motors when using a laboratory source has prevented it from being widely adopted. Recently, a random phase object has been proposed as an analyzer; this method requires a far less demanding experimental setup. However, previous studies used a particular X-ray source (a liquid-metal-jet micro-focus source) or high-precision stepping motors. We have been working on a much simpler setup requiring only a small modification of a commercial bench-top micro-CT (computed tomography) scanner: a piece of sandpaper is introduced as the phase analyzer in front of the X-ray source. This setup needs suitable algorithms for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle-tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging, a dynamic phase contrast micro-CT with high temporal resolution is particularly challenging. Different reconstruction methods, including neural-network-based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT. 
A Monte Carlo ray-tracing simulation (McXtrace) was used to generate a large dataset for training the neural network, addressing the fact that neural networks require large amounts of training data to produce high-quality reconstructions.
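Speckle tracking recovers the local displacement of the random pattern between a reference image and a sample image. A minimal 1-D sketch, using synthetic data rather than the project's algorithm, finds the integer shift that maximizes the cross-correlation:

```python
import random

def best_shift(ref, sig, max_shift):
    """Integer displacement maximizing the cross-correlation between a
    reference speckle trace and a displaced trace, over the valid overlap."""
    def score(s):
        pairs = [(ref[i], sig[i + s]) for i in range(len(ref))
                 if 0 <= i + s < len(sig)]
        return sum(a * b for a, b in pairs)
    return max(range(-max_shift, max_shift + 1), key=score)

random.seed(1)
ref = [random.random() for _ in range(500)]   # synthetic speckle trace
sig = [0.0] * 3 + ref[:-3]                    # same pattern displaced by +3 samples
print(best_shift(ref, sig, 10))               # → 3
```

In the real 2-D setting, this search is done per analysis window and with subpixel refinement; the recovered displacement field is proportional to the refraction angle and hence to the phase gradient.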

Keywords: micro-ct, neural networks, reconstruction, speckle-based x-ray phase contrast

Procedia PDF Downloads 244
11902 Turning Points in the Development of Translator Training in the West from the 1980s to the Present

Authors: B. Sayaheen

Abstract:

The translator’s competence is one of the topics that has received a great deal of attention in translation studies, because such competencies are still debatable and not yet agreed upon, and scholars tackle the topic from different points of view. Approaches to teaching these competencies have gone through several developments. This paper aims at investigating these developments, exploring the major turning points and shifts in teaching methods in translator training. The significance of these turning points and their external or internal causes will also be discussed. Based on the past and present status of teaching approaches in translator training, this paper tries to predict the future of these approaches. The paper is mainly concerned with developments in the West from the 1980s to the present. This specific period was chosen not because translator training started in the 1980s but because most criticism of the teacher-centered approach started at that time. The implications of this research stem from the fact that it identifies the turning points and the causes that led teachers to adopt student-centered rather than teacher-centered approaches and then to incorporate technology and the Internet in translator training; these causes are classified as external or internal. Translation programs in the West and in other cultures can benefit from this study. Programs in the West can note that teaching translation is geared toward incorporating more technologies; if these programs already use technology and the Internet to teach translation, they might benefit from the assumed future direction of teaching translation. On the other hand, some non-Western countries, and to be specific some professors, are still applying the teacher-centered approach. 
Moreover, these programs should include technology and the Internet in their teaching approaches to meet the drastic changes in the translation process, which increasingly relies on software and technologies to accomplish the translator’s tasks. Finally, translator training has borrowed many of its approaches from other disciplines, mainly language teaching. Teaching approaches in translator training have developed from teacher-centered to student-centered and then toward the integration of technologies and the Internet, with both internal and external causes playing a crucial role in these developments. These borrowed approaches should be comprehensively evaluated in order to see whether they achieve the goals of translator training. Such evaluation may lead to new teaching approaches developed specifically for translator training. While considering these methods and designing new approaches, we need to keep an eye on the future needs of the market.

Keywords: turning points, developments, translator training, market, the West

Procedia PDF Downloads 104
11901 Computational Study of Composite Films

Authors: Rudolf Hrach, Stanislav Novak, Vera Hrachova

Abstract:

Composite and nanocomposite films represent a class of promising materials and are often studied for their mechanical, electrical and other properties. Probably the most interesting are composite metal/dielectric structures consisting of a metal component embedded in an oxide or polymer matrix. The behaviour of composite films varies with the amount of the metal component, described by the filling factor. For small filling factors, the structures contain individual metal particles or nanoparticles completely insulated by the dielectric matrix, and the films have essentially dielectric properties. The conductivity of the films increases with increasing filling factor until a transition into a metallic state occurs. The behaviour of composite films near the percolation threshold, where the charge transport mechanism changes from thermally-activated tunnelling between individual metal objects to ohmic conductivity, is especially important. Physical properties of composite films are determined not only by the concentration of the metal component but also by the spatial and size distributions of the metal objects, which are influenced by the technology used. In our contribution, composite structures were studied with the methods of computational physics. The study consists of two parts: -Generation of simulated composite and nanocomposite films, using techniques based on hard-sphere or soft-sphere models as well as on atomistic modelling, followed by characterization of the prepared composite structures by image analysis of their sections or projections. Various morphological methods must be assessed here, as the standard algorithms based on the theory of mathematical morphology lose their sensitivity when applied to composite films. 
-The charge transport in the composites was studied by the kinetic Monte Carlo method, as there is a close connection between the structural and electric properties of composite and nanocomposite films. It was found that near the percolation threshold the paths of tunnel current form so-called fuzzy clusters. The main aim of the present study was to establish the correlation between the morphological properties of composites/nanocomposites and the structure of the conducting paths in them, in dependence on the technology of composite film preparation.
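The percolation transition described above can be illustrated with a deliberately simplified 2-D site-percolation model (the real films are 3-D and generated by sphere models): sites are occupied with probability equal to the filling factor, and union-find tests whether a connected metal path spans the sample.

```python
import random

def percolates(n, fill, seed=0):
    """2-D site percolation: does a cluster of occupied (metal) sites
    connect the top row to the bottom row of an n x n lattice?"""
    rng = random.Random(seed)
    occ = [[rng.random() < fill for _ in range(n)] for _ in range(n)]
    parent = list(range(n * n + 2))         # plus two virtual nodes: top, bottom
    TOP, BOT = n * n, n * n + 1

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for r in range(n):
        for c in range(n):
            if not occ[r][c]:
                continue
            idx = r * n + c
            if r == 0: union(idx, TOP)
            if r == n - 1: union(idx, BOT)
            if r > 0 and occ[r - 1][c]: union(idx, idx - n)
            if c > 0 and occ[r][c - 1]: union(idx, idx - 1)
    return find(TOP) == find(BOT)

# Spanning paths appear near the 2-D site threshold (~0.593 on a square lattice)
print(percolates(50, 0.30))   # → False (well below threshold)
print(percolates(50, 0.80))   # → True (well above threshold)
```

The kinetic Monte Carlo study of tunnel currents operates on clusters of exactly this kind, with tunnelling bridging near-touching clusters below the geometric threshold.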

Keywords: composite films, computer modelling, image analysis, nanocomposite films

Procedia PDF Downloads 379
11900 Optimized Parameters for Simultaneous Detection of Cd²⁺, Pb²⁺ and Co²⁺ Ions in Water Using Square Wave Voltammetry on an Unmodified Glassy Carbon Electrode

Authors: K. Sruthi, Sai Snehitha Yadavalli, Swathi Gosh Acharyya

Abstract:

Water is the most crucial element for sustaining life on earth, and increasing water pollution directly or indirectly harms human life. Most heavy metals are harmful in their cationic form. These heavy metal ions are released by various activities such as the disposal of batteries, industrial wastes, automobile emissions, and soil contamination. Ions such as Pb²⁺, Co²⁺ and Cd²⁺ are carcinogenic and show many harmful effects when consumed above the limits proposed by the WHO. The simultaneous detection of these highly toxic heavy metal ions is reported in this study. There are many analytical methods for their quantification, but electrochemical techniques are given high priority because of their sensitivity and ability to detect lower concentrations. Among electrochemical methods, square wave voltammetry was preferred because background (capacitive) currents, a major source of interference, are largely suppressed. Square wave voltammetry was performed on a glassy carbon electrode (GCE) for the quantitative detection of the ions. A three-electrode system was used, consisting of a glassy carbon working electrode (3 mm diameter), an Ag/AgCl reference electrode, and a platinum wire counter electrode. Detection was optimized with respect to the experimental parameters pH, scan rate, and temperature. Under the optimized conditions, square wave voltammetry was performed for simultaneous detection. Scan rates were varied from 5 mV/s to 100 mV/s, and at 25 mV/s all three ions were detected simultaneously with well-defined peaks at their particular stripping potentials. The pH was varied from 3 to 8, and pH 5 was taken as the optimum for all three ions: peak currents decreased below pH 5 because of hydrogen gas evolution, and decreased again above pH 5 because of hydroxide formation on the surface of the working electrode (GCE). 
The temperature was varied from 25˚C to 45˚C, and the optimum temperature for all three ions was taken as 35˚C. Deposition and stripping potentials of -1.5 V and +1.5 V were applied, with a resting time of 150 seconds. The three ions were detected at stripping potentials of -0.84 V (Cd²⁺), -0.54 V (Pb²⁺), and -0.44 V (Co²⁺). The detection parameters were thus optimized on a glassy carbon electrode for the simultaneous detection of the ions at low concentrations by square wave voltammetry.
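Peak assignment in anodic stripping voltammetry amounts to locating local current maxima at characteristic potentials. The sketch below uses a synthetic voltammogram with Gaussian peaks placed at the stripping potentials reported above (peak heights and widths are illustrative assumptions, not measured data):

```python
from math import exp

def peaks(E, I, min_height):
    """Indices of local maxima in a current trace above a height threshold."""
    return [k for k in range(1, len(I) - 1)
            if I[k] > I[k - 1] and I[k] > I[k + 1] and I[k] >= min_height]

# Synthetic stripping voltammogram: Gaussian peaks at the reported potentials
E = [-1.5 + 0.005 * k for k in range(281)]          # sweep from -1.5 V to -0.1 V
centers = {-0.84: 6.0, -0.54: 9.0, -0.44: 4.0}      # potential (V) -> peak current
I = [sum(h * exp(-((e - c) / 0.03) ** 2) for c, h in centers.items()) for e in E]

found = [round(E[k], 2) for k in peaks(E, I, 1.0)]
print(found)   # → [-0.84, -0.54, -0.44]
```

Calibration would then relate each peak height to the corresponding ion concentration.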

Keywords: cadmium, cobalt, lead, glassy carbon electrode, square wave anodic stripping voltammetry

Procedia PDF Downloads 99
11899 Advancing Customer Service Management Platform: Case Study of Social Media Applications

Authors: Iseoluwa Bukunmi Kolawole, Omowunmi Precious Isreal

Abstract:

Social media has completely revolutionized the way communication took place even a decade ago. It makes use of computer-mediated technologies that help in the creation and sharing of information. Social media may be defined as the production, consumption and exchange of information across platforms for social interaction. It has become a forum in which customers look for information about companies to do business with and request answers to questions about their products and services. Customer service may be defined as the process of ensuring customer satisfaction by meeting and exceeding customers’ wants. In delivering excellent customer service, knowing customers’ expectations and where they are reaching out is important. Facebook is one of the most used social media platforms, alongside Twitter, Instagram, WhatsApp and LinkedIn. Customers are spending more time on social media platforms, which calls for improvement in customer service delivery over social media pages. Millions of people channel their issues, complaints, compliments and inquiries through social media. This study identifies what social media customers want, their expectations, and how they want brands and companies to respond to them. The applied research methodology was a mixed-methods approach. The authors used qualitative methods, gathering critical views of experts on social media and customer relationship management through interviews to analyse the impact of social media on customer satisfaction. They also used quantitative methods, such as online surveys, to address issues at different stages and to gain insight into different aspects of the platforms, i.e., customers’ and companies’ perceptions of the effects of social media. 
The study thereby explores and provides a better understanding of how brands use social media as a customer relationship management tool. An exploratory research strategy was applied to analyse how companies need to create good customer support using social media in order to improve customer service delivery, customer retention and referrals. Many companies have come to prefer social media platforms as a medium for handling customer queries and ensuring satisfaction, because social media tools are considered more transparent and effective for customer relationship management.

Keywords: brands, customer service, information, social media

Procedia PDF Downloads 250
11898 An Object-Based Image Resizing Approach

Authors: Chin-Chen Chang, I-Ta Lee, Tsung-Ta Ke, Wen-Kai Tai

Abstract:

Common methods for resizing images include scaling and cropping. However, these two approaches introduce quality problems in the reduced images. In this paper, we propose an image resizing algorithm that separates the main objects from the background. First, we extract two feature maps, namely an enhanced visual saliency map and an improved gradient map, from an input image. We then integrate these two feature maps into an importance map. Finally, we generate the target image using the importance map. The proposed approach obtains the desired results for a wide range of images.
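The seam-carving keyword suggests how the importance map can drive resizing: a dynamic program finds the connected vertical seam of minimum total importance, which can then be removed. A minimal sketch on an assumed toy importance map:

```python
def min_vertical_seam(imp):
    """Column index per row of the minimum-importance 8-connected vertical seam."""
    h, w = len(imp), len(imp[0])
    cost = [imp[0][:]]                      # cumulative seam cost, row by row
    for r in range(1, h):
        prev = cost[-1]
        cost.append([imp[r][c] + min(prev[max(c - 1, 0):min(c + 2, w)])
                     for c in range(w)])
    # backtrack from the cheapest bottom cell through minimal predecessors
    seam = [min(range(w), key=lambda c: cost[-1][c])]
    for r in range(h - 2, -1, -1):
        c = seam[-1]
        seam.append(min(range(max(c - 1, 0), min(c + 2, w)),
                        key=lambda cc: cost[r][cc]))
    return seam[::-1]

# Toy 4x4 importance map: high values mark the "object", low values background
imp = [[9, 9, 1, 9],
       [9, 9, 2, 9],
       [9, 1, 9, 9],
       [9, 2, 9, 9]]
print(min_vertical_seam(imp))   # → [2, 2, 1, 1]
```

Repeatedly removing such seams shrinks the image width while leaving high-importance (object) pixels untouched.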

Keywords: energy map, visual saliency, gradient map, seam carving

Procedia PDF Downloads 470
11897 Development of a Context Specific Planning Model for Achieving a Sustainable Urban City

Authors: Jothilakshmy Nagammal

Abstract:

This research paper deals with different case studies in which Form-Based Codes are adopted, and discusses the different implementation methods in particular, in order to develop a method for formulating a new planning model. The organizing principle of Form-Based Codes, the transect, is used to zone the city into various context-specific transects. An approach is adopted to develop the new planning model, the City Specific Planning Model (CSPM), as a tool to achieve sustainability for any city in general. A case-study comparison of the planning tools used, the code process adopted and the various control regulations implemented in thirty-two different cities is carried out. The analysis shows that there are a variety of ways to implement form-based zoning concepts: specific plans, a parallel or optional form-based code, a transect-based code / smart code, required form-based standards, or design guidelines. The case studies describe the positive and negative results from form-based zoning where it is implemented. From the different case studies on the method of the FBC, it is understood that the scale of a Form-Based Code varies from parts of the city to the whole city. In most cases, the regulating plan is prepared with the transect as the organizing principle. The implementation methods adopted in these case studies for the formulation of Form-Based Codes are special districts like Transit Oriented Development (TOD), Traditional Neighbourhood Development (TND), specific plans, and street-based codes. The implementation methods vary from mandatory to integrated and floating. To attain sustainability, the research develops a regulating plan using the transect as the organizing principle for the entire area of the city in general, while formulating street-based Form-Based Codes for the selected special districts in the study area in specific. 
Planning is most powerful when it is embedded in the broader context of systemic change and improvement. Systemic is best thought of as holistic, contextualized and stakeholder-owned, while systematic can be thought of as more linear, generalisable, and typically top-down or expert-driven. The systemic approach is a process based on system theory and system design principles, which are too often ill understood by the general population and policy makers. System theory embraces the importance of a global perspective, multiple components, interdependencies and interconnections in any system. In addition, the recognition that a change in one part of a system necessarily alters the rest of the system is a cornerstone of system theory. The proposed regulating plan, taking the transect as the organizing principle and using Form-Based Codes to achieve sustainability for the city, has to be a hybrid code integrated within the existing system: a systemic approach with a systematic process. This approach of introducing a few form-based zones into a conventional code could be effective in the phased replacement of an existing code. It could also be an effective way of responding to near-term pressure for physical change in “sensitive” areas of the community. With this approach and method, the new Context Specific Planning Model for achieving sustainability is created and explained in detail in this research paper.

Keywords: context based planning model, form based code, transect, systemic approach

Procedia PDF Downloads 323
11896 Assessment and Optimisation of Building Services Electrical Loads for Off-Grid or Hybrid Operation

Authors: Desmond Young

Abstract:

In building services electrical design, a key element of any project is assessing the electrical load requirements. This needs to be done early in the design process to allow the selection of the infrastructure required to meet the electrical needs of the type of building. The type of building defines the type of assessment made and the values applied in defining the maximum demand for the building, and ultimately the size of supply or infrastructure required, as well as the application that needs to be made to the distribution network operator or, alternatively, to an independent network operator. The fact that this assessment needs to be undertaken early in the design process limits the type of assessment that can be used, as different methods require different types of information, and sometimes this information is not available until the later stages of a project. A common method applied in the earlier design stages of a project, typically during stages 1, 2 and 3, is the use of benchmarks. Some of the benchmarks applied may be excessive in relation to the loads that exist in a modern installation, as they are based on information that does not correspond to the actual equipment loads in use. This includes lighting and small power loads, where the use of more efficient equipment and lighting has reduced the maximum demand required. The electrical load can be used as part of the process to assess the heat generated by the equipment, which, together with the heat gains from other sources, feeds into the sizing of the infrastructure required to cool the building. Any overestimation of the loads would therefore also increase the design load for the heating and ventilation systems. Finally, with the new policies driving the industry to decarbonise buildings, a prime example being the recently introduced London Plan, loads are potentially going to increase. 
In addition, with the changes to working practices brought by the pandemic and the adoption of electric heating and vehicles, a better understanding of the loads that should be applied will help ensure that infrastructure is neither oversized, at a cost to the client, nor undersized, to the detriment of the building. More accurate benchmarks and methods will also allow assessments to be made for the incorporation of energy storage and renewable technologies as these become more common in new or refurbished buildings.
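A benchmark-based assessment of the kind described can be sketched as floor area multiplied by a W/m² allowance per load category, with a diversity factor applied; the allowance values below are illustrative assumptions, not published benchmarks.

```python
def maximum_demand(area_m2, benchmarks_w_per_m2, diversity=0.8):
    """Benchmark maximum demand estimate in kW: sum the per-category
    connected load (area x W/m² allowance) and apply a diversity factor."""
    connected_w = sum(area_m2 * w for w in benchmarks_w_per_m2.values())
    return connected_w * diversity / 1000.0

# Illustrative allowances only; real values come from project-specific guidance
loads = {"lighting": 8.0, "small_power": 25.0, "mechanical": 30.0}
print(maximum_demand(2000, loads))   # → 100.8 (kW)
```

The paper's point is precisely that allowances like the 25 W/m² small-power figure above may overstate modern equipment loads, inflating both the electrical supply and the cooling plant sized from it.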

Keywords: energy, ADMD, electrical load assessment, energy benchmarks

Procedia PDF Downloads 99
11895 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for a Class-Unbalanced Data of Diabetes Risk Groups

Authors: Lily Ingsrisawang, Tasanee Nacharoen

Abstract:

Introduction: Unbalanced data sets commonly appear in real-world applications. Due to unequal class distribution, many studies have found that the performance of existing classifiers tends to be biased towards the majority class. The k-nearest neighbors nonparametric discriminant analysis is one method that has been proposed for classifying unbalanced classes with good performance. Hence, the methods of discriminant analysis are of interest in investigating misclassification error rates for class-imbalanced data of three diabetes risk groups. Objective: The purpose of this study was to compare the classification performance of parametric and nonparametric discriminant analysis in a three-class classification application to class-imbalanced data of diabetes risk groups. Methods: Data on 599 staff members from a health project in a government hospital in Bangkok were obtained for the classification problem. The staff members were diagnosed into one of three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, with the variables diabetes risk group, age, gender, cholesterol, and BMI, were analyzed and bootstrapped into 50 and 100 samples of 599 observations each for additional estimation of the misclassification error rate. Each data set was explored for departure from multivariate normality and for equality of the covariance matrices of the three risk groups; both the original data and the bootstrap samples showed non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors discriminant function were fitted over the 50 and 100 bootstrap samples and applied to the original data. 
In finding the optimal classification rule, the prior probabilities were set either to equal proportions (0.33:0.33:0.33) or to unequal proportions with three choices: (0.90:0.05:0.05), (0.80:0.10:0.10), or (0.70:0.15:0.15). Results: The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k = 3 or k = 4 and prior probabilities of {non-risk:risk:diabetic} as {0.90:0.05:0.05} or {0.80:0.10:0.10} gave the smallest misclassification error rate. Conclusion: The k-nearest neighbors approach is suggested for classifying three-class-imbalanced data of diabetes risk groups.
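In discriminant terms, the kNN rule with priors scores each class by prior × (neighbors from that class ÷ class size) and picks the maximum. A toy pure-Python sketch under those assumptions shows how the choice of priors can flip the decision for a minority class:

```python
def knn_discriminant(x, data, labels, priors, k=3):
    """Classify x by maximizing prior_c * (k_c / n_c), where k_c counts
    class-c points among the k nearest neighbors and n_c is the class size."""
    order = sorted(range(len(data)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(x, data[i])))
    nearest = [labels[i] for i in order[:k]]
    n = {c: labels.count(c) for c in priors}
    score = {c: priors[c] * nearest.count(c) / n[c] for c in priors}
    return max(score, key=score.get)

# Toy 2-D data: a large "non" (non-risk) class and a small "risk" class
data = [(1, 1), (1, 2), (2, 1), (2, 2), (3, 3), (8, 8), (9, 8)]
labels = ["non"] * 5 + ["risk"] * 2
x = (7, 7)
print(knn_discriminant(x, data, labels, {"non": 0.5, "risk": 0.5}))   # → risk
print(knn_discriminant(x, data, labels, {"non": 0.9, "risk": 0.1}))   # → non
```

This is why the study's search over prior-probability choices matters: on imbalanced data, the priors directly trade minority-class sensitivity against the overall error rate.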

Keywords: error rate, bootstrap, diabetes risk groups, k-nearest neighbors

Procedia PDF Downloads 425
11894 Protection of the Object of the Critical Infrastructure in the Czech Republic

Authors: Michaela Vašková

Abstract:

With the increasing dependence of countries on critical infrastructure, their vulnerability increases. The biggest threats lie primarily in the human factor (personnel of the critical infrastructure) and in terrorist attacks. This emphasizes the need to develop methodologies for identifying weak points and subsequently eliminating them. This article discusses methods for analysing the safety of objects of critical infrastructure. It also contains a proposed methodology for training employees of security services at objects of critical infrastructure and for developing scenarios of attacks on selected objects.

Keywords: critical infrastructure, object of critical infrastructure, protection, safety, security, security audit

Procedia PDF Downloads 329
11893 Primes as Sums and Differences of Two Binomial Coefficients and Two Powersums

Authors: Benjamin Lee Warren

Abstract:

Many problems in additive number theory ask which primes are the sum of two elements from a given single-variable polynomial sequence, and most of them are unattackable in the present day. Here, we determine solutions to this problem for a few particular sequences (certain binomial coefficients and power sums) using only elementary algebra and some algebraic factoring methods (as well as Euclid’s Lemma and Faulhaber’s Formula). In particular, we show that there are finitely many primes expressible as sums of two of these types of elements. Several cases are fully illustrated, and bounds are presented for the cases not fully illustrated.
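One fully illustrated flavor of the argument: the same-index sum C(n,2) + C(n,2) = n(n-1) factors algebraically, so it can be prime only for n = 2, where it equals 2. The check below is an illustrative sketch of that single case, not the paper's general proof:

```python
def is_prime(m):
    """Trial-division primality test, sufficient for this small check."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# C(n,2) + C(n,2) = n*(n-1): the factorization forces compositeness for n >= 3
prime_values = [n * (n - 1) for n in range(2, 10000) if is_prime(n * (n - 1))]
print(prime_values)   # → [2]
```

The algebraic factoring methods mentioned in the abstract generalize this idea to the other binomial-coefficient and power-sum cases.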

Keywords: binomial coefficients, power sums, primes, algebra

Procedia PDF Downloads 88
11892 Gender Identification Using Digital Forensics

Authors: Vinod C. Nayak

Abstract:

In day-to-day forensic practice, identification is always a difficult task. The availability of antemortem and postmortem records plays a major role in facilitating it. The advent of digital forensics is a boon for forensic experts. This study made use of digital forensics to establish identity from the radiological dimensions of the maxillary sinus using workstation software. The findings suggest a significant association between maxillary sinus dimensions and gender. The author will discuss the methods and results of the study in this e-poster.

Keywords: digital forensics, identification, maxillary sinus, radiology

Procedia PDF Downloads 405
11891 Low-Surface Roughness and High Optical Quality CdS Thin Film Deposited on Heated Substrate Using Room-Temperature Chemical Solution

Authors: A. Elsayed, M. H. Dewaidar, M. Ghali, M. Elkemary

Abstract:

The high production cost of conventional solar cells motivates the search for economical methods of solar energy conversion. Cadmium sulfide (CdS) is one of the most important semiconductors used in photovoltaics, especially in large-area solar cells, and can be prepared in thin-film form by a wide variety of deposition techniques, including vacuum evaporation, sputtering and molecular beam epitaxy. Other techniques, based on chemical solutions, are also used for depositing CdS films at dramatically lower cost than vacuum-based methods. Although this technique has been widely used in recent decades owing to its simplicity and low deposition temperature (~100°C), there is still a strong need for more information on the growth process and its relation to the quality of the deposited films. Here, we report the deposition of high-quality CdS thin films, with low surface roughness ( < 3.0 nm) and a sharp optical absorption edge, on low-temperature (70°C) glass substrates using a new method based on a room-temperature chemical solution. In this method, a mixed solution of cadmium acetate and thiourea at room temperature was used under special growth conditions. X-ray diffraction (XRD) measurements were used to examine the crystal structure of the deposited CdS films. In addition, UV-VIS transmittance and low-temperature (4K) photoluminescence (PL) measurements were performed to quantify the optical properties of the films. The deposited films show high optical quality, as confirmed by both a sharp edge in the transmittance spectra and strong PL intensity at room temperature. Furthermore, we found a strong effect of the growth conditions on the optical band gap of the deposited films, where a remarkable red-shift of the absorption edge with temperature is clearly seen in both the transmission and PL spectra. 
Such tuning of the optical band gap of the deposited CdS films can be utilized for tuning the electronic band alignment between CdS and other light-harvesting materials, like CuInGaSe or CdTe, for a potential improvement in the efficiency of solar cell devices based on these heterostructures.

Keywords: chemical deposition, CdS, optical properties, surface, thin film

Procedia PDF Downloads 152
11890 Application of Harris Hawks Optimization Metaheuristic Algorithm and Random Forest Machine Learning Method for Long-Term Production Scheduling Problem under Uncertainty in Open-Pit Mines

Authors: Kamyar Tolouei, Ehsan Moosavi

Abstract:

In open-pit mines, the long-term production scheduling optimization problem (LTPSOP) is a complicated problem involving many constraints, large datasets, and uncertainties. Uncertainty in the output is caused by several geological, economic, or technical factors. Due to its dimensions and NP-hard nature, it is usually difficult to find an ideal solution to the LTPSOP. The optimal schedule generally constrains the ore, metal, and waste tonnages, average grades, and cash flows of each period. Past decades have seen substantial work on long-term production scheduling and optimization algorithms as researchers have become highly cognizant of the issue; in fact, the LTPSOP cannot yet be considered a well-solved problem. Traditional production scheduling methods in open-pit mines apply a single estimated orebody model to produce optimal schedules. The smoothing effect of some geostatistical estimation procedures causes most mine schedules and production predictions to be unrealistic and imperfect. With the expansion of simulation procedures, the risks from grade uncertainty in ore reserves can be evaluated and organized through a set of equally probable orebody realizations. In this paper, to incorporate grade uncertainty into the strategic mine schedule, a stochastic integer programming framework is presented for the LTPSOP. The objective function of the model is to maximize the net present value and minimize the risk of deviation from the production targets, considering grade uncertainty, while satisfying all technical constraints and operational requirements. Instead of applying one estimated orebody model as input to optimize the production schedule, a set of equally probable orebody realizations is applied, producing a more profitable and risk-based production schedule. 
A mixture of metaheuristic procedures and mathematical methods paves the way to an appropriate solution. This paper introduces a hybrid model between the augmented Lagrangian relaxation (ALR) method and a metaheuristic algorithm, the Harris Hawks optimization (HHO), to solve the LTPSOP under grade uncertainty. In this study, the HHO is used to update the Lagrange multipliers. In addition, a machine learning method called Random Forest is applied to estimate the gold grade in a mineral deposit. The Monte Carlo method is used as the simulation method, with 20 realizations. The results indicate that the proposed versions are considerably improved in comparison with the traditional methods; the outcomes were also compared with the ALR-genetic algorithm and ALR-subgradient approaches. To demonstrate the applicability of the model, a case study on an open-pit gold mining operation is implemented. The framework shows the capability to minimize risk and to improve the expected net present value and financial profitability of the LTPSOP, and it controls geological risk more effectively than the traditional procedure considering grade uncertainty.
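The two-part objective, expected discounted value minus a penalty on deviations from production targets, averaged over equally probable orebody realizations, can be sketched as follows (the discount rate, penalty weight, and the two tiny realizations are illustrative assumptions, not the case-study data):

```python
def risk_adjusted_npv(realizations, targets, discount=0.1, penalty=2.0):
    """Average discounted cash flow over orebody realizations, minus a
    penalty proportional to the absolute deviation from period targets."""
    total = 0.0
    for cashflows, production in realizations:
        npv = sum(cf / (1 + discount) ** (t + 1) for t, cf in enumerate(cashflows))
        dev = sum(abs(p - tgt) for p, tgt in zip(production, targets))
        total += npv - penalty * dev
    return total / len(realizations)

# Two hypothetical realizations: (cash flows per period, metal produced per period)
reals = [([120.0, 100.0], [10.0, 9.0]),
         ([110.0, 105.0], [11.0, 10.0])]
print(round(risk_adjusted_npv(reals, targets=[10.0, 10.0]), 2))   # → 187.26
```

In the full stochastic integer program this evaluation sits inside the ALR scheme, with the HHO metaheuristic adjusting the multipliers attached to the relaxed constraints.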

Keywords: grade uncertainty, metaheuristic algorithms, open-pit mine, production scheduling optimization

Procedia PDF Downloads 91
11889 Feigenbaum Universality, Chaos and Fractal Dimensions in Discrete Dynamical Systems

Authors: T. K. Dutta, K. K. Das, N. Dutta

Abstract:

This paper is primarily concerned with Ricker’s population model, f(x)=x e^(r(1-x/k)), where r is the control parameter and k is the carrying capacity, and some fruitful results are obtained with the following objectives: 1) determination of the bifurcation values leading to a chaotic region, 2) development of the statistical methods and analysis required for the measurement of fractal dimensions, and 3) calculation of various fractal dimensions. The invariant probability distribution on the attractor, when it exists, provides detailed information about the long-term behavior of the dynamical system. At the end, some open problems are posed for further research.
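For the Ricker map f(x) = x e^(r(1-x/k)) one has f'(x) = e^(r(1-x/k))(1 - rx/k), so the Lyapunov exponent, whose sign separates stable from chaotic parameter regions, can be estimated as the orbit average of log|f'(x)|. A sketch with illustrative parameter values (k = 1):

```python
from math import exp, log

def lyapunov(r, k=1.0, x0=0.5, n=20000, burn=1000):
    """Estimate the Lyapunov exponent of the Ricker map x -> x*exp(r*(1 - x/k))
    as the orbit average of log|f'(x)|, after discarding a transient."""
    x, total = x0, 0.0
    for i in range(n + burn):
        fp = exp(r * (1 - x / k)) * (1 - r * x / k)   # f'(x)
        if i >= burn:
            total += log(abs(fp))
        x = x * exp(r * (1 - x / k))
    return total / n

print(lyapunov(1.5))   # stable fixed point: negative, close to log(0.5)
print(lyapunov(3.0))   # beyond the period-doubling cascade: positive (chaos)
```

At the stable fixed point x* = k, f'(x*) = 1 - r, so for r = 1.5 the exponent converges to log 0.5 ≈ -0.693; positive values past the accumulation of bifurcations mark the chaotic region whose fractal dimensions the paper measures.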

Keywords: Feigenbaum universality, chaos, Lyapunov exponent, fractal dimensions

Procedia PDF Downloads 292