Search results for: approximate bayesian computation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1154

224 Microwave Single Photon Source Using Landau-Zener Transitions

Authors: Siddhi Khaire, Samarth Hawaldar, Baladitya Suri

Abstract:

As efforts towards quantum communication advance, the need for single photon sources becomes pressing. Due to the extremely low energy of a single microwave photon, efforts to build single photon sources and detectors in the microwave range are relatively recent. We plan to use a Cooper Pair Box (CPB) that has a ‘sweet spot’ where the two energy levels have minimal separation. Moreover, these qubits have fairly large anharmonicity, making them close to ideal two-level systems. If the external gate voltage of these qubits is varied rapidly while passing through the sweet spot, the qubit can be excited almost deterministically due to the Landau-Zener effect. The rapid change of the gate control voltage through the sweet spot induces a non-adiabatic population transfer from the ground to the excited state. The qubit eventually decays into the emission line, emitting a single photon. The advantage of this setup is that the qubit can be excited without any coherent microwave excitation, thereby effectively increasing the usable source efficiency due to the absence of control-pulse microwave photons. Since the probability of a Landau-Zener transition can be made close to unity by appropriate design of parameters, this source behaves as an on-demand source of single microwave photons. The large anharmonicity of the CPB also ensures that only one excited state is involved in the transition, and multiple-photon output is highly improbable. Such a system has so far not been implemented and would find many applications in the areas of quantum optics, quantum computation, and quantum communication.
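
A minimal sketch of the design consideration above (parameter values are illustrative, not taken from the abstract): under one common convention, the probability of a diabatic Landau-Zener transition for a two-level system with minimum gap 2Δ swept at energy rate v is P = exp(-2πΔ²/(ħv)), so a faster sweep through the sweet spot drives the excitation probability toward unity.

```python
import numpy as np

hbar = 1.054571817e-34            # J*s

def lz_transition_probability(delta_j, sweep_rate_j_per_s):
    """Probability of the non-adiabatic (diabatic) Landau-Zener transition."""
    return np.exp(-2 * np.pi * delta_j**2 / (hbar * sweep_rate_j_per_s))

# A faster sweep through the sweet spot pushes P_LZ toward unity:
delta = 1e-25                      # half the minimum gap in joules (illustrative)
for v in (1e-15, 1e-13, 1e-11):    # energy sweep rates in J/s (illustrative)
    print(f"v = {v:.0e} J/s -> P_LZ = {lz_transition_probability(delta, v):.4f}")
```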

Keywords: quantum computing, quantum communication, quantum optics, superconducting qubits, flux qubit, charge qubit, microwave single photon source, quantum information processing

Procedia PDF Downloads 98
223 Simulating Human Behavior in (Un)Built Environments: Using an Actor Profiling Method

Authors: Hadas Sopher, Davide Schaumann, Yehuda E. Kalay

Abstract:

This paper addresses the shortcomings of architectural computation tools in representing human behavior in built environments prior to construction and occupancy of those environments. Evaluating whether a design fits the needs of its future users is currently done solely post-construction, or is based on the knowledge and intuition of the designer. This issue is of high importance when designing complex buildings such as hospitals, where the quality of treatment as well as patient and staff satisfaction are of major concern. Existing computational pre-occupancy human behavior evaluation methods are geared mainly to testing ergonomic issues, such as wheelchair accessibility, emergency egress, etc. As such, they rely on Agent Based Modeling (ABM) techniques, which emphasize the individual user. Yet we know that most human activities are social and involve a number of actors working together, which ABM methods cannot handle. Therefore, we present an event-based model that manages the interaction between multiple Actors, Spaces, and Activities, to describe dynamically how people use spaces. This approach requires expanding the computational representation of Actors beyond their physical description, to include psychological, social, cultural, and other parameters. The model presented in this paper includes cognitive abilities and rules that describe the response of actors to their physical and social surroundings, based on the actors’ internal status. The model has been applied in a simulation of hospital wards and showed adaptability to a wide variety of situated behaviors and interactions.
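
To make the event-based idea concrete, here is a minimal sketch (hypothetical class and field names, not the authors' implementation) in which an Event binds Actors, a Space, and an Activity, and fires only when the actors' internal status and the space's capacity allow it:

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    name: str
    status: dict = field(default_factory=dict)   # e.g. {"fatigue": 0.2}

@dataclass
class Space:
    name: str
    capacity: int

@dataclass
class Event:
    activity: str
    space: Space
    actors: list

    def can_fire(self):
        # a social activity needs room for all participants and able actors
        return (len(self.actors) <= self.space.capacity and
                all(a.status.get("fatigue", 0) < 0.8 for a in self.actors))

    def fire(self, log):
        if self.can_fire():
            log.append((self.activity, self.space.name,
                        [a.name for a in self.actors]))

log = []
ward = Space("ward", capacity=3)
Event("morning_round", ward,
      [Actor("nurse"), Actor("doctor"), Actor("patient")]).fire(log)
print(log)   # [('morning_round', 'ward', ['nurse', 'doctor', 'patient'])]
```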

Keywords: agent based modeling, architectural design evaluation, event modeling, human behavior simulation, spatial cognition

Procedia PDF Downloads 264
222 Waste Water Treatment by Moringa oleifera Seed Powder in Historical Jalmahal Lake Located in Semi-Arid Monsoon Zone of India

Authors: Pomila Sharma

Abstract:

The rapid urbanization of India was not accompanied by the establishment of waste water treatment facilities at a similar pace. The inland fresh water ecosystem is increasingly subjected to great stress from various human activities. Jalmahal Lake is located in Jaipur city of Rajasthan state; the lake was constructed about 400 years ago and is surrounded by hills. The lake was approximately 139 hectares at full spread and has a catchment area of 23.5 sq. kilometers, of which approximately 40% falls inside the dense urban area of Jaipur city. During showers, treated and untreated waste waters and runoff waters mix and enter the lake through the various influx channels, and the lake water quality is affected by this inflow of waste water. The main objective of this work was to use Moringa oleifera seeds as a natural adsorbent for the treatment of wastewater in the lake. Moringa oleifera is a tropical, multipurpose tree whose seeds contain high-quality edible oil (40% by weight) and a water-soluble, non-toxic protein that acts as an effective coagulant for the removal of organic matter in water and waste water treatment. The laboratory jar test procedure was used for coagulation studies, with experiments run on lake water. Water extracts/powder of Moringa seed were applied to treat the polluted lake water. In the present study, various doses of Moringa oleifera seed coagulant (100 mg/L, 200 mg/L, and 400 mg/L) were tested for their efficiency on treated and untreated polluted water. Turbidity and color removal are important steps in waste water treatment processes, and the results indicate a significant reduction in both turbidity and color. Standard plate counts and fecal coliform levels were also significantly reduced. All measured parameters were further reduced with increased doses of Moringa oleifera. It is clear from the study that Moringa oleifera seed is a potential bio-coagulant for the treatment of sewage-laden polluted water in the lake.

Keywords: coagulant, Moringa oleifera, plate count, turbidity, wastewater

Procedia PDF Downloads 410
221 A Coupled Stiffened Skin-Rib Fully Gradient Based Optimization Approach for a Wing Box Made of Blended Composite Materials

Authors: F. Farzan Nasab, H. J. M. Geijselaers, I. Baran, A. De Boer

Abstract:

A method is introduced for the coupled skin-rib optimization of a wing box where mass minimization is the objective and local buckling is the constraint. The structure is made of composite materials, where continuity of plies in multiple adjacent panels (blending) has to be satisfied. Blending guarantees the manufacturability of the structure; however, it is a highly challenging constraint to treat and has been under debate in recent research in this area. To fulfill design guidelines with respect to symmetry, balance, contiguity, disorientation, and the percentage rule of the layup, a reference for the stacking sequences (stacking sequence table, or SST) is generated first. Then, an innovative fully gradient-based optimization approach in relation to a specific SST is introduced to obtain the optimum thickness distribution over the whole structure while blending is fulfilled. The proposed optimization approach aims to turn the discrete optimization problem associated with the integer number of plies into a continuous one. As a result of wing box deflection, a rib is subjected to load values which vary nonlinearly with the amount of deflection. The bending stiffness of a skin affects the wing box deflection and thus affects the load applied to a rib. This indicates the necessity of a coupled skin-rib optimization approach for a more realistic optimized design. The proposed method is examined with the optimization of the layup of a composite stiffened skin and rib of a wing torsion box subjected to in-plane normal and shear loads. Results show that the method can successfully prescribe a valid design at a significantly low computational cost.
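
As an illustration of the relaxation idea (a continuous thickness variable standing in for the integer ply count), the sketch below uses an isotropic simply supported plate with assumed material numbers rather than the paper's blended-composite SST formulation: mass is minimized by a gradient-based solver subject to a local buckling constraint, and the result is snapped back to an integer number of plies.

```python
import numpy as np
from scipy.optimize import minimize

rho, E, nu = 1600.0, 60e9, 0.3        # density, modulus, Poisson ratio (assumed)
a, b = 0.5, 0.4                        # panel dimensions in m (assumed)
N_applied = 2.0e5                      # in-plane compressive load, N/m (assumed)

def mass(t):                           # panel mass in kg
    return rho * a * b * t[0]

def buckling_margin(t):                # simply supported plate, k = 4
    D = E * t[0]**3 / (12 * (1 - nu**2))
    N_cr = 4 * np.pi**2 * D / b**2
    return N_cr - N_applied            # constraint: must stay >= 0

res = minimize(mass, x0=[5e-3], method="SLSQP",
               bounds=[(1e-3, 20e-3)],
               constraints=[{"type": "ineq", "fun": buckling_margin}])
t_opt = res.x[0]
print(f"optimal thickness {t_opt*1e3:.2f} mm -> "
      f"{round(t_opt / 0.184e-3)} plies of 0.184 mm")  # snap to integer plies
```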

Keywords: blending, buckling optimization, composite panels, wing torsion box

Procedia PDF Downloads 409
220 The Maps of Meaning (MoM) Consciousness Theory

Authors: Scott Andersen

Abstract:

Perhaps simply and rather unadornedly, consciousness is having multiple goals for action and the continuous adjudication of such goals to implement action, referred to as the Maps of Meaning (MoM) Consciousness Theory. The MoM theory triangulates through three parallel corollaries: action (behavior), mechanism (morphology/pathophysiology), and goals (teleology). (1) An organism’s consciousness contains fluid, nested goals. These goals are not intentionality but intersectionality, embodiment meeting the world, i.e., Darwinian inclusive fitness or randomization, then survival of the fittest. These goals form via gradual descent under inclusive fitness, the goals being the abstraction of a ‘match’ between the evolutionary environment and the organism. Human consciousness implements the brain efficiency hypothesis: genetics, epigenetics, and experience crystallize efficiencies, necessitating not what is best or objective but fitness, i.e., perceived efficiency based on one’s adaptive environment. These efficiencies are objectively arbitrary but determine the operation and level of one’s consciousness, termed extreme thrownness. Since inclusive fitness drives efficiencies in physiological mechanism, morphology, and behavior (action) and originates one’s goals, embodiment is necessarily entangled with human consciousness, as it is the intersection of mechanism or action (both necessitating embodiment) occurring in the world that determines fitness. Perception is the operant process of consciousness and is the consciousness’s de facto goal adjudication process. Goal operationalization is fundamentally efficiency-based via one’s unique neuronal mapping as a byproduct of genetics, epigenetics, and experience. Perception involves information intake and information discrimination, equally underpinned by efficiencies of inclusive fitness via extreme thrownness. Perception is not a ‘frame rate’ but Bayesian priors of efficiency based on one’s extreme thrownness. Consciousness, including human consciousness, is modular (i.e., a scalar level of richness, which builds up like building blocks) and dimensionalized (i.e., cognitive abilities become possibilities as emergent phenomena at various modularities, like stratified factors in factor analysis). The meta-dimensions of human consciousness seemingly include intelligence quotient, personality (five-factor model), richness of perception intake, and richness of perception discrimination, among other potentialities. Future consciousness research should utilize factor analysis to parse the modularities and dimensions of human consciousness, in both humans and animal models.

Keywords: consciousness, perception, prospection, embodiment

Procedia PDF Downloads 59
219 Faster Pedestrian Recognition Using Deformable Part Models

Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia

Abstract:

Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach there is no need to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
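
A minimal sketch of the frequency-domain trick mentioned above (array sizes are illustrative; DPM actually correlates HOG features with part filters, shown here as a plain 2-D convolution): multiplying spectra replaces the sliding-window sum.

```python
import numpy as np
from numpy.fft import rfft2, irfft2

rng = np.random.default_rng(0)
feat = rng.standard_normal((128, 256))     # one feature channel (illustrative)
filt = rng.standard_normal((8, 8))         # one DPM part filter (illustrative)

# zero-pad the filter to the feature-map size, multiply spectra, invert
F = rfft2(feat)
H = rfft2(filt, feat.shape)
score = irfft2(F * H, feat.shape)          # valid region starts at score[7:, 7:]

# direct check of one output position against the sliding-window sum
direct = np.sum(feat[:8, :8] * filt[::-1, ::-1])
assert np.isclose(score[7, 7], direct)
```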

Keywords: autonomous vehicles, deformable part model, DPM, pedestrian detection, real time

Procedia PDF Downloads 280
218 An Extensive Review of Drought Indices

Authors: Shamsulhaq Amin

Abstract:

Drought can arise from several hydrometeorological phenomena that result in insufficient precipitation, soil moisture, and surface and groundwater flow, leading to conditions that are considerably drier than the usual water content or availability. Drought is often assessed using indices that are associated with meteorological, agricultural, and hydrological phenomena. In order to handle drought disasters effectively, it is essential to accurately determine the kind, intensity, and extent of the drought through drought characterization. This information is critical for managing the drought before, during, and after the rehabilitation process. Over a hundred drought indices have been proposed in the literature to evaluate drought disasters, encompassing a range of factors and variables. Some utilise solely hydrometeorological drivers, others employ remote sensing technology, and some incorporate a combination of both. Comprehending the entire notion of drought, and taking into account drought indices along with their calculation processes, is crucial for researchers in this discipline. Examining several drought metrics across different studies requires additional time and concentration. Hence, it is crucial to conduct a thorough examination of the approaches used in drought indices, in order to identify the most straightforward approach and to avoid discrepancies across scientific studies. For practical application in the real world, categorizing indices by their usage in meteorological, agricultural, and hydrological phenomena can help researchers maximize their efficiency. Users can explore different indices side by side, compare their convenience of use, and evaluate the benefits and drawbacks of each. Moreover, certain indices exhibit interdependence, which enhances comprehension of their connections and assists in making informed decisions about their suitability in various scenarios. This study provides a comprehensive assessment of various drought indices, analysing their types and computation methodologies in a detailed and systematic manner.
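
As one concrete example of a meteorological index and its computation, here is a minimal sketch (synthetic data) of the Standardized Precipitation Index (SPI): fit a gamma distribution to accumulated precipitation and transform the cumulative probabilities to standard-normal z-scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
monthly_precip = rng.gamma(shape=2.0, scale=30.0, size=360)  # 30 years, mm

def spi(precip):
    a, loc, scale = stats.gamma.fit(precip, floc=0)    # fit gamma, loc pinned at 0
    cdf = stats.gamma.cdf(precip, a, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)                         # equiprobability transform

z = spi(monthly_precip)
print(f"driest month SPI = {z.min():.2f}, wettest = {z.max():.2f}")
# SPI <= -2 is commonly read as extreme drought, SPI >= 2 as extremely wet
```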

Keywords: drought classification, drought severity, drought indices, agriculture, hydrological

Procedia PDF Downloads 41
217 ChaQra: A Cellular Unit of the Indian Quantum Network

Authors: Shashank Gupta, Iteash Agarwal, Vijayalaxmi Mogiligidda, Rajesh Kumar Krishnan, Sruthi Chennuri, Deepika Aggarwal, Anwesha Hoodati, Sheroy Cooper, Ranjan, Mohammad Bilal Sheik, Bhavya K. M., Manasa Hegde, M. Naveen Krishna, Amit Kumar Chauhan, Mallikarjun Korrapati, Sumit Singh, J. B. Singh, Sunil Sud, Sunil Gupta, Sidhartha Pant, Sankar, Neha Agrawal, Ashish Ranjan, Piyush Mohapatra, Roopak T., Arsh Ahmad, Nanjunda M., Dilip Singh

Abstract:

Major research interests in quantum key distribution (QKD) are primarily focussed on increasing (1) the point-to-point transmission distance (1,000 km), (2) the secure key rate (Mbps), and (3) the security of the quantum layer (device independence). It is great to push the boundaries on these fronts, but these isolated approaches are neither scalable nor cost-effective due to the requirements of specialised hardware and different infrastructure. Current and future QKD networks require addressing different sets of challenges apart from distance, key rate, and quantum security. In this regard, we present ChaQra, a sub-quantum network with the following core features: (1) crypto agility (integration into the already deployed telecommunication fibres), (2) software-defined networking (the SDN paradigm for routing between different nodes), (3) reliability (addressing denial of service with hybrid quantum-safe cryptography), (4) upgradability (module upgrades based on scientific and technological advancements), and (5) beyond QKD (using the QKD network for distributed computing, multi-party computation, etc.). Our results demonstrate a clear path to creating and accelerating a quantum-secure Indian subcontinent under the national quantum mission.

Keywords: quantum network, quantum key distribution, quantum security, quantum information

Procedia PDF Downloads 56
216 Human Identification Using Local Roughness Patterns in Heartbeat Signal

Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori

Abstract:

Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential in human recognition due to its unique rhythms characterizing the variability of human heart structures (chest geometry, sizes, and positions). Moreover, ECG has a real-time vitality characteristic that signifies live signs, ensuring that a legitimate individual is identified. However, the detection accuracy of current ECG-based methods is not sufficient due to the high variability of an individual’s heartbeats at different instances of time. These variations may occur due to muscle flexure, changes of mental or emotional state, and changes of sensor position or long-term baseline shift during the recording of the ECG signal. In this study, a new method is proposed for human identification, based on the extraction of the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by applying a moving neighborhood window along the ECG signal. At each instant of the ECG signal, the pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Then, binary weights are multiplied with the pattern to arrive at the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of the individual subjects in the database. One advantage of the proposed feature is that it does not depend on the accuracy of detecting the QRS complex, unlike conventional methods. Supervised recognition methods are then designed using minimum-distance-to-mean and Bayesian classifiers to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects from the National Metrology Institute of Germany (PTB) database showed that the proposed method is promising compared to a conventional interval- and amplitude-feature-based method.
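
A minimal sketch (window layout assumed, not the authors' exact configuration) of the descriptor described above: a one-dimensional local binary pattern compares the neighbors to the central sample, weights the bits, and histograms the resulting codes over the heartbeat signal.

```python
import numpy as np

def lbp_1d_histogram(signal, radius=4):
    n_bits = 2 * radius
    weights = 2 ** np.arange(n_bits)              # binary weights
    codes = []
    for t in range(radius, len(signal) - radius):
        center = signal[t]
        neighbors = np.concatenate((signal[t - radius:t],
                                    signal[t + 1:t + radius + 1]))
        bits = (neighbors >= center).astype(int)  # compare to central intensity
        codes.append(np.dot(bits, weights))       # weighted binary pattern
    hist, _ = np.histogram(codes, bins=2 ** n_bits, range=(0, 2 ** n_bits))
    return hist / hist.sum()                      # normalized roughness descriptor

rng = np.random.default_rng(2)
ecg = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.05 * rng.standard_normal(2000)
h = lbp_1d_histogram(ecg)
print(h.nonzero()[0][:10])                        # the patterns that occur
```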

Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification

Procedia PDF Downloads 404
215 Prescription of Lubricating Eye Drops in the Emergency Eye Department: A Quality Improvement Project

Authors: Noorulain Khalid, Unsaar Hayat, Muhammad Chaudhary, Christos Iosifidis, Felipe Dhawahir-Scala, Fiona Carley

Abstract:

Dry eye disease (DED) is a common condition seen in the emergency eye department (EED) at Manchester Royal Eye Hospital (MREH). However, there is variability in the prescription of lubricating eye drops among different healthcare providers. The aim of this study was to develop an up-to-date, standardized algorithm for the prescription of lubricating eye drops in the EED at MREH based on international and national guidelines. The study also aimed to assess the impact of implementing the guideline on the rate of inappropriate lubricant prescriptions. The impact was assessed primarily in terms of the appropriateness of prescriptions for patients’ DED, and secondarily through analysis of the cost to the hospital. Data from 845 patients who attended the EED over a 3-month period were analyzed, and 157 patients met the inclusion and exclusion criteria. After conducting a review of the literature and collaborating with the corneal team, an algorithm for the prescription of lubricants in the EED was developed. Three plan-do-study-act (PDSA) cycles were conducted, with interventions such as emails, posters, in-person reminders, and education for incoming trainees. The appropriateness of prescriptions was evaluated against the guidelines. Data were collected from patient records and analyzed using statistical methods. The appropriateness of prescriptions was assessed by comparing them to the guidelines and by clinical correlation with a specialist registrar. The study found a substantial improvement in the number of appropriate prescriptions, with an increase from 55% to 93% over the three PDSA cycles. There was additionally a 51% reduction in expenditure on lubricant prescriptions, resulting in cost savings for the hospital (an approximate saving of £50/week). Theoretical importance: Appropriate prescription of lubricating eye drops improves disease management for patients and reduces costs for the hospital. The development and implementation of a standardized guideline facilitates the achievement of these goals. Conclusion: This study highlights the inconsistent management of DED in the EED and a potential lack of training in this area for healthcare providers. The implementation of a standardized, easy-to-follow guideline for lubricating eye drops can help to improve disease management while also resulting in cost savings for the hospital.

Keywords: lubrication, dry eye disease, guideline, prescription

Procedia PDF Downloads 72
214 Biomechanical Performance of the Synovial Capsule of the Glenohumeral Joint with a BANKART Lesion through Finite Element Analysis

Authors: Duvert A. Puentes T., Javier A. Maldonado E., Ivan Quintero., Diego F. Villegas

Abstract:

Mechanical computation is a great tool to study the performance of complex models, an example being the study of the human body structure. This paper took advantage of different types of software to build a 3D model of the glenohumeral joint and apply finite element analysis. The main objective was to study the change in the biomechanical properties of the joint when it presents an injury, specifically a BANKART lesion, which consists of the detachment of the anteroinferior labrum from the glenoid. Stress and strain distribution of the soft tissues were the focus of this study. First, a 3D model was made of a joint without any pathology, as a control sample, using segmentation software for the bones with the support of medical imagery and a cadaveric model to represent the soft tissue. The joint was built to simulate a compression and external rotation test, using CAD to prepare the model in the adequate position. When the healthy model was finished, it was submitted to finite element analysis, and the results were validated against experimental model data. With the validated model, a mesh sensitivity analysis was performed to obtain the best mesh size. Finally, the geometry of the 3D model was changed to imitate a BANKART lesion: the contact zone of the glenoid with the labrum was slightly separated, simulating tissue detachment. With this new geometry, the finite element analysis was applied again, and the results were compared with the control sample created initially. With the data gathered, this study can be used to improve understanding of labrum tears. Nevertheless, it is important to remember that computational analyses are approximations and that the initial data were taken from an in vitro assay.

Keywords: biomechanics, computational model, finite elements, glenohumeral joint, bankart lesion, labrum

Procedia PDF Downloads 161
213 Effect of Varying Zener-Hollomon Parameter (Temperature and Flow Stress) and Stress Relaxation on Creep Response of Hot Deformed AA3104 Can Body Stock

Authors: Oyindamola Kayode, Sarah George, Roberto Borrageiro, Mike Shirran

Abstract:

A phenomenon identified by our industrial partner is that AA3104 can body stock (CBS) transfer bar experiences sag during transportation of the slab from the breakdown mill to the finishing mill. Excessive sag results in bottom scuffing of the slab on the roller table, causing surface defects on the final product. It has been found that increasing the strain rate on the breakdown mill final pass results in a slab resistant to sag. The creep response of materials hot deformed at different Zener-Hollomon parameter values needs to be evaluated experimentally to gain a better understanding of the operating mechanism. This study investigates the identified phenomenon through laboratory simulation of the breakdown mill conditions for various strain rates, utilizing the Gleeble at the UCT Centre for Materials Engineering. The experiments will determine the creep response for a range of conditions as well as quantify the associated material microstructure (sub-grain size, grain structure, etc.). The experimental matrices were determined from conditions approximating industrial hot breakdown rolling and were carried out on the Gleeble 3800 at the Centre for Materials Engineering, University of Cape Town. Plane strain compression samples were used for this series of tests, at an applied load that allows for better contact and exaggerated creep displacement. A tantalum barrier layer was used for increased conductivity and decreased risk of anvil welding. One set of tests with no in-situ hold time was performed, in which the samples were quenched after deformation. The samples were retained for microstructural analysis: micrographs from light microscopy (LM), quantitative data and images from scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX), and sub-grain size and grain structure from electron backscatter diffraction (EBSD).
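
For reference, the governing quantity is Z = ε̇·exp(Q/(RT)). The sketch below uses a typical literature activation energy for aluminium alloys (an assumption, not a value from this study) to show how raising the final-pass strain rate at fixed temperature raises Z.

```python
import math

R = 8.314            # gas constant, J/(mol*K)
Q = 156e3            # J/mol, assumed hot-working activation energy for Al alloys

def zener_hollomon(strain_rate, temp_c):
    """Z = eps_dot * exp(Q / (R * T)), with T in kelvin."""
    return strain_rate * math.exp(Q / (R * (temp_c + 273.15)))

# raising the final-pass strain rate at fixed temperature raises Z
for eps_dot in (1.0, 5.0, 20.0):     # 1/s, illustrative breakdown-mill rates
    print(f"eps_dot = {eps_dot:5.1f} /s, T = 480 C -> Z = "
          f"{zener_hollomon(eps_dot, 480.0):.3e}")
```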

Keywords: aluminium alloy, can-body stock, hot rolling, creep response, Zener-Hollomon parameter

Procedia PDF Downloads 86
212 Application of Rapidly Exploring Random Tree Star-Smart and G2 Quintic Pythagorean Hodograph Curves to the UAV Path Planning Problem

Authors: Luiz G. Véras, Felipe L. Medeiros, Lamartine F. Guimarães

Abstract:

This work approaches the automatic planning of paths for Unmanned Aerial Vehicles (UAVs) through the application of the Rapidly Exploring Random Tree Star-Smart (RRT*-Smart) algorithm. RRT*-Smart samples positions of a navigation environment through a tree-type graph. The algorithm randomly expands a tree from an initial position (root node) until one of its branches reaches the final position of the path to be planned. The algorithm ensures the planning of the shortest path as the number of iterations tends to infinity. When a new node is inserted into the tree, each neighbor node of the new node is connected to it if and only if the extension of the path between the root node and that neighbor node, with this new connection, is less than the current extension of the path between those two nodes. RRT*-Smart uses an intelligent sampling strategy to plan less extensive routes while spending a smaller number of iterations. This strategy is based on the creation of samples/nodes near the convex vertices of the obstacles in the navigation environment. The planned paths are smoothed through the application of quintic Pythagorean hodograph curves. The smoothing process converts a route into a dynamically viable one based on the kinematic constraints of the vehicle. This smoothing method models the hodograph components of a curve with polynomials that obey the Pythagorean Theorem. Its advantage is that the obtained structure allows computation of the curve length in an exact way, without the need for quadrature techniques for the resolution of integrals.
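
The rewiring rule described above can be sketched in a few lines (hypothetical node structure, with planning details omitted): a neighbor is re-parented through the new node only when that shortens its path from the root.

```python
import math

class Node:
    def __init__(self, x, y, parent=None, cost=0.0):
        self.x, self.y, self.parent, self.cost = x, y, parent, cost

def dist(a, b):
    return math.hypot(a.x - b.x, a.y - b.y)

def rewire(new_node, neighbors):
    for nb in neighbors:
        cost_via_new = new_node.cost + dist(new_node, nb)
        if cost_via_new < nb.cost:        # shorter path root -> neighbor?
            nb.parent, nb.cost = new_node, cost_via_new

root = Node(0, 0)
a = Node(1, 0, parent=root, cost=1.0)
b = Node(2, 0, parent=a, cost=2.5)        # deliberately suboptimal cost
new = Node(1.5, 0, parent=a, cost=1.5)
rewire(new, [b])
print(b.cost)                             # 2.0: b now routes through the new node
```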

Keywords: path planning, path smoothing, Pythagorean hodograph curve, RRT*-Smart

Procedia PDF Downloads 167
211 Computation of Residual Stresses in Human Face Due to Growth

Authors: M. A. Askari, M. A. Nazari, P. Perrier, Y. Payan

Abstract:

Growth and remodeling of biological structures have gained considerable attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields, such as the design of prosthetics and optimized surgical operations. It is a well-known fact that biological structures are never stress-free, even when externally unloaded. The exact origin of these residual stresses is not clear, but theoretically growth and remodeling is one of the main sources. Extracting body organs from medical imaging does not produce any information regarding the existing residual stresses in those organs. The simplest cause of such stresses is gravity, since an organ grows under its influence from birth. Ignoring such residual stresses might cause erroneous results in numerical simulations. Accounting for residual stresses due to tissue growth can improve the accuracy of mechanical analysis results. In this paper, we have implemented a computational framework based on fixed-point iteration to determine the residual stresses due to growth. Using nonlinear continuum mechanics and the concept of a fictitious configuration, we find the unknown stress-free reference configuration which is necessary for mechanical analysis. To illustrate the method, we apply it to a finite element model of a healthy human face whose geometry has been extracted from medical images. We have computed the distribution of residual stress in facial tissues, which can overcome the effect of gravity and keep the tissues firm. Tissue wrinkles caused by aging could be a consequence of decreasing residual stress no longer counteracting gravity. Considering these stresses has important applications in maxillofacial surgery, helping surgeons to predict the changes after surgical operations and their consequences.
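
A minimal sketch of the fixed-point idea, with a one-dimensional stand-in for the finite element forward solve (an assumption for illustration only): seek the unloaded reference configuration X such that forward loading maps it onto the imaged geometry x_target.

```python
def forward(X, g=0.1):
    # placeholder forward solve: loading "sags" the configuration
    return X + g * X**2            # hypothetical nonlinear gravity response

def find_reference(x_target, tol=1e-10, max_iter=100):
    X = x_target                   # start from the imaged (loaded) geometry
    for _ in range(max_iter):
        residual = forward(X) - x_target
        if abs(residual) < tol:
            break
        X -= residual              # fixed-point update of the reference shape
    return X

X0 = find_reference(1.0)
print(X0, forward(X0))             # forward(X0) recovers the target geometry
```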

Keywords: growth, soft tissue, residual stress, finite element method

Procedia PDF Downloads 354
210 Computational Fluid Dynamics Simulation of Turbulent Convective Heat Transfer in Rectangular Mini-Channels for Rocket Cooling Applications

Authors: O. Anwar Beg, Armghan Zubair, Sireetorn Kuharat, Meisam Babaie

Abstract:

In this work, motivated by rocket channel cooling applications, we describe recent CFD simulations of turbulent convective heat transfer in mini-channels at different aspect ratios. ANSYS FLUENT software has been employed, achieving a mean average error of 5.97% relative to Forrest’s MIT cooling channel study (2014) at a Reynolds number of 50,443 and a Prandtl number of 3.01. This suggests that the simulation model created for turbulent flow was a suitable foundation for the study of different aspect ratios in the channel. Multiple aspect ratios were considered to understand the influence of high aspect ratios and to identify the best-performing cooling channel, which proved to be the highest-aspect-ratio channel: the approximately 28:1 aspect ratio provided the best characteristics to ensure effective cooling. A mesh convergence study was performed to assess the optimum mesh density for collecting accurate results; for this study an element size of 0.05 mm was used, generating 579,120 elements for proper turbulent flow simulation. Deploying a greater bias factor would increase the mesh density toward the far edges of the channel, which would be useful if the focus of the study were just a single side of the wall. Since a bulk temperature is involved in the calculations, it is essential to use a suitable bias factor to ensure the reliability of the results. Hence, in this study we opted for a bias factor of 5 to allow greater mesh density at both edges of the channel. However, the limitations on mesh density and hardware have curtailed the sophistication achievable for the turbulence characteristics. Also, only straight rectangular channels were considered, i.e., curvature was ignored. Furthermore, we only considered conventional water coolant. From this CFD study, the variation of aspect ratio provided a deeper appreciation of the effect of small to high aspect ratios on cooling channels. Hence, when considering an application for the channel, the geometry of the aspect ratio must play a crucial role in optimizing cooling performance.

Keywords: rocket channel cooling, ANSYS FLUENT CFD, turbulence, convection heat transfer

Procedia PDF Downloads 150
209 On the Solution of Boundary Value Problems Blended with Hybrid Block Methods

Authors: Kizito Ugochukwu Nwajeri

Abstract:

This paper explores the application of hybrid block methods for solving boundary value problems (BVPs), which are prevalent in various fields such as science, engineering, and applied mathematics. Traditional numerical approaches, such as finite difference and shooting methods, often encounter challenges related to stability and convergence, particularly in the context of complex and nonlinear BVPs. To address these challenges, we propose a hybrid block method that integrates features from both single-step and multi-step techniques. This method allows for the simultaneous computation of multiple solution points while maintaining high accuracy. Specifically, we employ a combination of polynomial interpolation and collocation strategies to derive a system of equations that captures the behavior of the solution across the entire domain. By directly incorporating boundary conditions into the formulation, we enhance the stability and convergence properties of the numerical solution. Furthermore, we introduce an adaptive step-size mechanism to optimize performance based on the local behavior of the solution. This adjustment allows the method to respond effectively to variations in solution behavior, improving both accuracy and computational efficiency. Numerical tests on a variety of boundary value problems demonstrate the effectiveness of the hybrid block methods. These tests showcase significant improvements in accuracy and computational efficiency compared to conventional methods, indicating that our approach is robust and versatile. The results suggest that this hybrid block method is suitable for a wide range of applications to real-world problems, offering a promising alternative to existing numerical techniques.
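
For orientation only, the sketch below uses SciPy's collocation-based solver rather than the authors' hybrid block method; it shows the same ingredients named above (polynomial collocation with boundary conditions built into the formulation) on y'' = -y, y(0) = 0, y(π/2) = 1, whose exact solution is sin(x).

```python
import numpy as np
from scipy.integrate import solve_bvp

def odes(x, y):                      # first-order system: y0' = y1, y1' = -y0
    return np.vstack((y[1], -y[0]))

def bc(ya, yb):                      # boundary residuals at both ends
    return np.array([ya[0], yb[0] - 1.0])

x = np.linspace(0, np.pi / 2, 11)    # initial mesh
y0 = np.zeros((2, x.size))           # initial guess
sol = solve_bvp(odes, bc, x, y0)
print(abs(sol.sol(np.pi / 4)[0] - np.sin(np.pi / 4)))   # small collocation error
```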

Keywords: hybrid block methods, boundary value problem, polynomial interpolation, adaptive step-size control, collocation methods

Procedia PDF Downloads 31
208 Application of the Best Technique for Estimating the Rest-Activity Rhythm Period in Shift Workers

Authors: Rakesh Kumar Soni

Abstract:

Under free-living conditions, human biological clocks show a periodicity of 24 hours for numerous physiological, behavioral, and biochemical variables. However, this period is not the original period; rather, it merely exhibits synchronization with the solar clock. It is, therefore, most important to investigate the characteristics of the human circadian clock, especially in shift workers, who routinely confront contrasting social clocks. The aim of the present study was to investigate the rest-activity rhythm and to identify the best technique for the computation of periods in this rhythm in subjects randomly selected from different groups of shift workers. The rest-activity rhythm was studied in forty-eight shift workers from three different organizations, namely Newspaper Printing Press (NPP), Chhattisgarh State Electricity Board (CSEB) and Raipur Alloys (RA). Shift workers of NPP (N = 20) were working on a permanent night shift schedule (NS; 20:00-04:00). However, in CSEB (N = 14) and RA (N = 14), shift workers were working in a 3-shift system comprising rotations from night (NS; 22:00-06:00) to afternoon (AS; 14:00-22:00) and to morning shift (MS; 06:00-14:00). Each subject wore an Actiwatch (AW64, Mini Mitter Co. Inc., USA) for 7 and/or 21 consecutive days, only after furnishing a certificate of consent. A one-minute epoch length was chosen for the collection of wrist activity data. The period was determined using Actiware sleep software (periodogram), the Lomb-Scargle periodogram (LSP), and spectral analysis software (Spectre). Other statistical techniques, such as ANOVA and Duncan’s multiple-range test, were also used wherever required. A statistically significant circadian rhythm in rest-activity, gauged by cosinor analysis, was documented in all shift workers, irrespective of shift schedule. Results indicate that the efficiency of a technique in determining the period (τ) depends upon the clipping limits of the τs. It appears that the Spectre technique is the more reliable.
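
A minimal sketch (synthetic actigraphy, one-minute epochs as in the study) of the Lomb-Scargle approach to period estimation: scan candidate periods around 24 h and take the peak of the periodogram.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
t = np.arange(0, 7 * 24 * 60).astype(float)    # one-minute epochs over 7 days
true_period = 24.5 * 60                         # minutes, illustrative tau
activity = (50 + 40 * np.cos(2 * np.pi * t / true_period)
            + 10 * rng.standard_normal(t.size))

periods = np.linspace(20 * 60, 28 * 60, 2000)   # 20 h to 28 h search window
omega = 2 * np.pi / periods                     # angular frequencies
power = lombscargle(t, activity - activity.mean(), omega)
print(f"estimated tau = {periods[power.argmax()] / 60:.2f} h")   # ~24.5
```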

Keywords: biological clock, rest activity rhythm, spectre, periodogram

Procedia PDF Downloads 163
207 A Geometrical Multiscale Approach to Blood Flow Simulation: Coupling 2-D Navier-Stokes and 0-D Lumped Parameter Models

Authors: Azadeh Jafari, Robert G. Owens

Abstract:

In this study, a geometrical multiscale approach, meaning the coupling together of the 2-D Navier-Stokes equations, constitutive equations, and 0-D lumped parameter models, is investigated. A multiscale approach suggests a natural way of coupling detailed local models (in the flow domain) with coarser models able to describe the dynamics over a large part, or even the whole, of the cardiovascular system at acceptable computational cost. In this study we introduce a new velocity correction scheme to decouple the velocity computation from the pressure computation. To evaluate the capability of our new scheme, a comparison has been performed between the results obtained with Neumann outflow boundary conditions on the velocity and Dirichlet outflow boundary conditions on the pressure, and those obtained using coupling with the lumped parameter model. Comprehensive studies have been carried out on the sensitivity of the numerical scheme to the initial conditions, elasticity, and number of spectral modes. Improvement of the computational algorithm with stable convergence has been demonstrated for at least moderate Weissenberg number. We comment on the mathematical properties of the reduced model, its limitations in yielding realistic and accurate numerical simulations, and its contribution to a better understanding of microvascular blood flow. We discuss the sophistication and reliability of multiscale models for computing correct boundary conditions at the outflow boundaries of a section of the cardiovascular system of interest. In this respect the geometrical multiscale approach can be regarded as a new method for solving a class of biofluids problems, whose application goes significantly beyond the one addressed in this work.
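
The 0-D side of such a coupling can be illustrated with a two-element Windkessel model (illustrative, nondimensional parameters): C·dP/dt = Q_in(t) − P/R, integrated in time to supply an outflow pressure that a 2-D solver could use as a boundary condition.

```python
import numpy as np

R, C = 1.0, 1.5                 # peripheral resistance, compliance (nondimensional)
dt, T = 1e-3, 5.0
t = np.arange(0, T, dt)
Q_in = 1.0 + 0.5 * np.sin(2 * np.pi * t)    # pulsatile inflow from the 2-D domain

P = np.empty_like(t)
P[0] = 1.0
for n in range(t.size - 1):     # explicit Euler update of the lumped ODE
    P[n + 1] = P[n] + dt * (Q_in[n] - P[n] / R) / C
print(f"mean outflow pressure over the last cycle: {P[t >= T - 1.0].mean():.3f}")
```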

Keywords: geometrical multiscale models, haemorheology model, coupled 2-D navier-stokes 0-D lumped parameter modeling, computational fluid dynamics

Procedia PDF Downloads 361
206 Quantitative Analysis of Nutrient Inflow from River and Groundwater to Imazu Bay in Fukuoka, Japan

Authors: Keisuke Konishi, Yoshinari Hiroshiro, Kento Terashima, Atsushi Tsutsumi

Abstract:

Imazu Bay plays an important role for endangered species such as horseshoe crabs and black-faced spoonbills that stay in the bay for spawning or the passing of winter. However, the bay is semi-enclosed with slow water exchange, which could lead to eutrophication under conditions of excess nutrient inflow. Therefore, quantification of nutrient inflow is of great importance. Generally, analysis of nutrient inflow to bays takes into consideration nutrient inflow from the river only, but inflow from groundwater should not be ignored if more accurate results are desired. The main objective of this study is to estimate the amounts of nutrient inflow from river and groundwater to Imazu Bay by analyzing the water budget in the Zuibaiji River Basin and the loads of T-N, T-P, NO₃-N and NH₄-N. The water budget computation in the basin is performed using a groundwater recharge model and a quasi three-dimensional two-phase groundwater flow model, and multiplying the measured nutrient concentrations by the computed discharge gives the total amount of nutrient inflow to the bay. In addition, in order to evaluate nutrient inflow to the bay, the result is compared with nutrient inflow from geologically similar river basins. The result shows that the discharge is 3.50×10⁷ m³/year from the river and 1.04×10⁷ m³/year from groundwater. The submarine groundwater discharge accounts for approximately 23% of the total discharge, which is large compared to the other river basins. It is also revealed that the total nutrient inflow is not particularly large. The sum of NO₃-N and NH₄-N loadings from groundwater is less than 10% of that from the river because of denitrification in groundwater. The Shin Seibu Sewage Treatment Plant located below the observation points discharges treated water at 15,400 m³/day and plans to increase this. However, the loads of T-N and T-P from the treatment plant are 3.9 mg/L and 0.19 mg/L, so it does not contribute greatly to eutrophication.

Keywords: eutrophication, groundwater recharge model, nutrient inflow, quasi three-dimensional two-phase groundwater flow model, submarine groundwater discharge

Procedia PDF Downloads 454
205 On the Solution of Fractional-Order Dynamical Systems Endowed with Block Hybrid Methods

Authors: Kizito Ugochukwu Nwajeri

Abstract:

This paper presents a distinct approach to solving fractional dynamical systems using hybrid block methods (HBMs). Fractional calculus extends the concept of derivatives and integrals to non-integer orders and finds increasing application in fields such as physics, engineering, and finance. However, traditional numerical techniques often struggle to accurately capture the complex behaviors exhibited by these systems. To address this challenge, we develop HBMs that integrate single-step and multi-step methods, enabling the simultaneous computation of multiple solution points while maintaining high accuracy. Our approach employs polynomial interpolation and collocation techniques to derive a system of equations that effectively models the dynamics of fractional systems. We also directly incorporate boundary and initial conditions into the formulation, enhancing the stability and convergence properties of the numerical solution. An adaptive step-size mechanism is introduced to optimize performance based on the local behavior of the solution. Extensive numerical simulations are conducted to evaluate the proposed methods, demonstrating significant improvements in accuracy and efficiency compared to traditional numerical approaches. The results indicate that our hybrid block methods are robust and versatile, making them suitable for a wide range of applications involving fractional dynamical systems. This work contributes to the existing literature by providing an effective numerical framework for analyzing complex behaviors in fractional systems, thereby opening new avenues for research and practical implementation across various disciplines.
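
For context, a standard discretization of a fractional derivative (the Grünwald-Letnikov form, shown here for orientation rather than the authors' hybrid block scheme) is D^α f(t_n) ≈ h^(−α) Σ_k w_k f(t_{n−k}) with w_0 = 1 and w_k = w_{k−1}(1 − (α+1)/k):

```python
import numpy as np
from math import gamma

def gl_fractional_derivative(f_vals, alpha, h):
    n = len(f_vals)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                 # recursive binomial weights
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.array([np.dot(w[:m + 1], f_vals[m::-1]) for m in range(n)])
    return out / h**alpha

h = 1e-3
t = np.arange(0.0, 1.0, h)
approx = gl_fractional_derivative(t**2, 0.5, h)
exact = 2.0 / gamma(2.5) * t**1.5         # known half-derivative of t^2
print(np.max(np.abs(approx - exact)))     # small discretization error
```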

Keywords: fractional calculus, numerical simulation, stability and convergence, adaptive step-size mechanism, collocation methods

Procedia PDF Downloads 43
204 An Unbiased Profiling of Immune Repertoire via Sequencing and Analyzing T-Cell Receptor Genes

Authors: Yi-Lin Chen, Sheng-Jou Hung, Tsunglin Liu

Abstract:

The adaptive immune system recognizes a wide range of antigens by expressing a large number of structurally distinct T cell and B cell receptor genes. The distinct receptor genes arise from complex rearrangements called V(D)J recombination and constitute the immune repertoire. A common method of profiling the immune repertoire is to amplify recombined receptor genes using multiple primers and high-throughput sequencing. This multiplex-PCR approach is efficient; however, the resulting repertoire can be distorted because of primer bias. To eliminate primer bias, 5’ RACE is an alternative amplification approach. However, the application of the RACE approach is limited by its low efficiency (i.e., the majority of the data are non-regular receptor sequences, e.g., sequences containing intronic segments) and the lack of convenient tools for analysis. We propose a computational tool that can correctly identify non-regular receptor sequences in RACE data by aligning receptor sequences against the whole gene instead of only the exon regions, as done in all other tools. Using our tool, the remaining regular data allow for accurate profiling of the immune repertoire. In addition, a RACE approach is improved to yield a higher fraction of regular T-cell receptor sequences. Finally, we quantify the degree of primer bias of a multiplex-PCR approach by comparing it to the RACE approach. The results reveal significant differences in the frequency of VJ combinations between the two approaches. Together, we provide a new experimental and computational pipeline for unbiased profiling of the immune repertoire. As immune repertoire profiling has many applications, e.g., tracing bacterial and viral infection, detection of T cell lymphoma and minimal residual disease, monitoring cancer immunotherapy, etc., our work should benefit scientists who are interested in these applications.

Keywords: immune repertoire, T-cell receptor, 5' RACE, high-throughput sequencing, sequence alignment

Procedia PDF Downloads 194
203 The Impact of Passive Design Factors on House Energy Efficiency for New Cities in Egypt

Authors: Mahmoud Mourad, Ahmad Hamza H. Ali, S.Ookawara, Ali Kamel Abdel-Rahman, Nady M. Abdelkariem

Abstract:

The energy consumption of a house can be affected simultaneously by many building design factors related to its main architectural features, building elements, and materials. This study focuses on the impact of passive design factors on the annual energy consumption of a suggested prototype house for single-family detached houses of 240 m² on two floors of 120 m² each, in new Egyptian cities (Alexandria, Cairo, Siwa, Assuit, Aswan) representing five different climatic zones (northern coast, northern Upper Egypt, desert region, southern Upper Egypt, South Egypt), respectively. This study presents the passive design factors affecting building energy consumption: building orientation; building materials (walls, roof and slabs); building type (residential, educational, commercial); building occupancy (type, number and age of occupants); building landscape and site selection; building envelope and fenestration (glazing material, shading); and building plan form. This information can be used to estimate the approximate saving in energy consumption that would result from a change in the design datum for future housing development, and to identify the major design problems for energy efficiency. To achieve the above objective, this paper presents a study of the factors affecting building energy consumption in the hot arid areas of new Egyptian cities in five different climatic zones, followed by a definition of the energy needs for different uses in the suggested prototype house. Consequently, a detailed analysis of the available renewable energy technologies used in the suggested home, and a calculation of the yearly distribution of the energy required for this home, are presented. The results obtained from the building annual energy analyses show that architectural passive design factors save about 35% of the annual energy consumption. They also show that passive cooling techniques save about 45%, and renewable energy systems about 40%, of the annual energy needs of the proposed home, depending on the city’s location within the climatic zones.

Keywords: architecture passive design factors, energy efficient homes, Egypt new cites, renewable energy technologies

Procedia PDF Downloads 401
202 Lightweight and Seamless Distributed Scheme for the Smart Home

Authors: Muhammad Mehran Arshad Khan, Chengliang Wang, Zou Minhui, Danyal Badar Soomro

Abstract:

Security of the smart home in terms of behavior activity pattern recognition is a totally dissimilar and unique issue as compared to the security issues of other scenarios. Sensor devices (low capacity and high capacity) interact and negotiate with each other by detecting the daily behavior activities of individuals to execute common tasks. Once a device (e.g., surveillance camera, smartphone, light detection sensor, etc.) is compromised, an adversary can gain access to a specific device and can damage daily behavior activity by altering data and commands. In this scenario, a group of common instruction processes may become involved and generate deadlock. Therefore, an effective and suitable security solution is required for smart home architecture. This paper proposes a seamless distributed scheme which fortifies low-computation wireless devices for secure communication. The proposed scheme is based on a lightweight key-session process that upholds a cryptic link for the trajectory by recognizing individuals’ behavior activity patterns. Every device and service provider unit (low capacity sensors (LCS) and high capacity sensors (HCS)) uses an authentication token and originates a secure trajectory connection in the network. Analysis of experiments reveals that the proposed scheme strengthens the devices against device seizure attacks by recognizing daily behavior activities, minimizes memory utilization of LCS, and keeps the network free from deadlock. Additionally, the results of a comparison with other schemes indicate that the scheme is efficient in terms of computation and communication.

Keywords: authentication, key-session, security, wireless sensors

Procedia PDF Downloads 318
201 Economical Transformer Selection Implementing Service Lifetime Cost

Authors: Bonginkosi A. Thango, Jacobus A. Jordaan, Agha F. Nnachi

Abstract:

In this day and age, there is widespread concern among governments across the globe to shield the environment from greenhouse gases, which absorb infrared radiation. As a result, solar photovoltaic (PV) electricity has been a rapidly growing renewable energy source and will eventually play a prominent role in global energy generation. The selection and purchasing of energy-efficient transformers that meet the operational requirements of solar photovoltaic energy generation plants then becomes part of the Independent Power Producers’ (IPPs’) investment plan of action. Taking this into account, this paper proposes a procedure that puts into effect the intricate financial analysis necessitated to precisely evaluate the transformer service lifetime no-load and load loss factors. This procedure correctly sets forth the transformer service lifetime loss factors arising from a solar PV plant’s sporadic generation profile, and incorporates the related levelized costs of electricity into the computation of the transformer’s total ownership cost. The results are then critically compared with the conventional transformer total ownership cost unaccompanied by emission costs, and demonstrate the significance of the sporadic energy generation nature of the solar PV plant for the total ownership cost. The findings indicate that the latter plays a crucial role for developers and Independent Power Producers (IPPs) in making the purchase decision during a tender bid, where competing offers from different transformer manufacturers are evaluated. Additionally, a sensitivity analysis of the different factors involved in the transformer service lifetime cost is carried out; the factors examined include the levelized cost of electricity, the solar PV plant’s generation modes, and the loading profile.
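
The conventional capitalization formula that the paper extends is TOC = purchase price + A·(no-load loss) + B·(load loss), where A and B capitalize the lifetime cost of one watt of loss. The sketch below (all figures illustrative) shows how a small load-loss factor B, as for a sporadically loaded solar PV plant transformer, can decide the ranking of competing bids.

```python
def total_ownership_cost(price, p_nll_w, p_ll_w, a_per_w, b_per_w):
    """TOC = price + A * no-load loss + B * load loss."""
    return price + a_per_w * p_nll_w + b_per_w * p_ll_w

# Two competing bids: price, no-load loss (W), load loss (W) - illustrative
bids = {"bid_1": (900_000, 10_000, 60_000),
        "bid_2": (950_000, 8_000, 50_000)}
A, B_pv = 6.0, 1.2      # $/W capitalized loss factors; B is small for a PV plant
for name, (price, nll, ll) in bids.items():
    print(name, f"TOC = ${total_ownership_cost(price, nll, ll, A, B_pv):,.0f}")
```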

Keywords: solar photovoltaic plant, transformer, total ownership cost, loss factors

Procedia PDF Downloads 130
200 Modelling Agricultural Commodity Price Volatility with Markov-Switching Regression, Single Regime GARCH and Markov-Switching GARCH Models: Empirical Evidence from South Africa

Authors: Yegnanew A. Shiferaw

Abstract:

Background: Commodity price volatility originating from excessive commodity price fluctuation has been a global problem, especially after the recent financial crises. Volatility is a measure of risk or uncertainty in financial analysis. It plays a vital role in risk management, portfolio management, and equity pricing. Objectives: The core objective of this paper is to examine the relationship between the prices of agricultural commodities and the oil price, gas price, coal price, and exchange rate (USD/Rand). In addition, the paper tries to fit an appropriate model that best describes the log-return price volatility and to estimate Value-at-Risk and expected shortfall. Data and methods: The data used in this study are the daily returns of agricultural commodity prices from 02 January 2007 to 31 October 2016. The data set consists of the daily returns of the prices of white maize, yellow maize, wheat, sunflower, soya, corn, and sorghum. The paper applies the three-state Markov-switching (MS) regression, the standard single-regime GARCH, and the two-regime Markov-switching GARCH (MS-GARCH) models. Results: To choose the best-fit model, the log-likelihood function, Akaike information criterion (AIC), Bayesian information criterion (BIC), and deviance information criterion (DIC) are employed under three distributions for the innovations. The results indicate that: (i) the prices of agricultural commodities were found to be significantly associated with the price of coal, the price of natural gas, the price of oil, and the exchange rate; (ii) for all agricultural commodities except sunflower, k=3 had higher log-likelihood values and lower AIC and BIC values, so the three-state MS regression model outperformed the two-state MS regression model; (iii) MS-GARCH(1,1) with generalized error distribution (ged) innovations performs best for white maize and yellow maize; MS-GARCH(1,1) with Student-t distribution (std) innovations performs better for sorghum; MS-gjrGARCH(1,1) with ged innovations performs better for wheat, sunflower, and soya; and MS-GARCH(1,1) with std innovations performs better for corn. In conclusion, this paper provides a practical guide for modelling agricultural commodity prices by MS regression and MS-GARCH processes, and can serve as a reference when facing agricultural commodity price modelling problems.
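
As a minimal illustration of one of the single-regime benchmarks above, the sketch below fits a GARCH(1,1) with Student-t innovations to synthetic returns using the Python arch package (the Markov-switching variants require dedicated MS-GARCH tooling and are not shown):

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(4)
returns = rng.standard_t(df=6, size=2500)        # toy daily returns, not real data

am = arch_model(returns, vol="GARCH", p=1, q=1, dist="t")
res = am.fit(disp="off")
print(res.params)                                # omega, alpha[1], beta[1], nu

var_1day = res.forecast(horizon=1).variance.iloc[-1, 0]
print(f"1-day-ahead variance forecast: {var_1day:.4f}")
```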

Keywords: commodity prices, MS-GARCH model, MS regression model, South Africa, volatility

Procedia PDF Downloads 202
199 Direct Approach in Modeling Particle Breakage Using Discrete Element Method

Authors: Ebrahim Ghasemi Ardi, Ai Bing Yu, Run Yu Yang

Abstract:

The current study aimed to develop an available in-house discrete element method (DEM) code and link it to a direct breakage event, making it possible to determine particle breakage, and the resulting fragment size distribution, simultaneously with the DEM simulation. The method applies particle breakage directly inside the DEM computation algorithm, and if any breakage happens the original particle is replaced with its daughter fragments. In this way, the calculation proceeds on an updated particle list, which closely resembles the real grinding environment. To validate the developed model, a grinding ball impacting an unconfined particle bed was simulated. Since considering an entire ball mill would be too computationally demanding, this method provided a simplified environment for testing the model. Accordingly, a representative volume of the ball mill was simulated inside a box, which could emulate media (ball)-powder bed impacts in a ball mill and during particle bed impact tests. Mono, binary and ternary particle beds were simulated to determine the effects of granular composition on breakage kinetics. The results obtained from the DEM simulations showed a reduction in the specific breakage rate for coarse particles in binary mixtures. The origin of this phenomenon, commonly known as cushioning or decelerated breakage in dry milling processes, was explained by the DEM simulations. Fine particles in a particle bed increase mechanical energy loss, and reduce and redistribute interparticle forces, thereby inhibiting the breakage of the coarse component. On the other hand, the specific breakage rate of fine particles increased due to contacts associated with coarse particles. This phenomenon, known as acceleration, was shown to be less significant, but should be considered in future attempts to accurately quantify non-linear breakage kinetics in the modeling of dry milling processes.

Keywords: particle bed, breakage models, breakage kinetic, discrete element method

Procedia PDF Downloads 199
198 Human Leukocyte Antigen Class 1 Phenotype Distribution and Analysis in Persons from Central Uganda with Active Tuberculosis and Latent Mycobacterium tuberculosis Infection

Authors: Helen K. Buteme, Rebecca Axelsson-Robertson, Moses L. Joloba, Henry W. Boom, Gunilla Kallenius, Markus Maeurer

Abstract:

Background: The Ugandan population is heavily affected by infectious diseases, and human leukocyte antigen (HLA) diversity plays a crucial role in the host-pathogen interaction, affecting rates of disease acquisition and outcome. Identifying HLA class 1 alleles and determining which alleles are associated with tuberculosis (TB) outcomes would help in screening individuals in TB-endemic areas for susceptibility to TB and in predicting resistance or progression to TB, leading to better clinical management of TB. Aims: To determine the HLA class 1 phenotype distribution in a Ugandan TB cohort and to establish the relationship between these phenotypes and active and latent TB. Methods: Blood samples were drawn from 32 HIV-negative individuals with active TB and 45 HIV-negative individuals with latent Mycobacterium tuberculosis (MTB) infection (LTBI). DNA was extracted from the blood samples, and the DNA samples were HLA typed by the polymerase chain reaction-sequence specific primer method. Allelic frequencies were determined by direct count. Results: HLA-A*02, A*01, A*74, A*30, B*15, B*58, C*07, C*03 and C*04 were the dominant phenotypes in this Ugandan cohort. There were differences in the distribution of HLA types between the individuals with active TB and those with LTBI, with only the HLA-A*03 allele showing a statistically significant difference (p=0.0136). However, after false discovery rate (FDR) computation, the corresponding q-value (0.2176) was above the expected proportion of false discoveries. Key findings: We identified a number of HLA class 1 alleles in a population from Central Uganda, which will enable us to carry out a functional characterization of CD8+ T-cell mediated immune responses to MTB. Our results also suggest a possible positive association between the HLA-A*03 allele and TB, implying that individuals carrying HLA-A*03 may be at higher risk of developing active TB.
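The statistical comparison described above, per-allele carrier frequencies by direct count, an exact test per allele, and FDR q-values, can be sketched as follows. The carrier counts and the choice of Fisher's exact test with Benjamini-Hochberg correction are illustrative assumptions for a dataset of this shape, not the study's reported data.

```python
# Sketch: allele-wise 2x2 tests across two groups, then BH FDR q-values.
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

n_tb, n_ltbi = 32, 45   # active TB and latent infection group sizes
carriers = {             # hypothetical carrier counts per HLA allele
    "A*02": (14, 20), "A*03": (8, 2), "B*15": (10, 13), "C*07": (12, 16),
}

alleles, pvals = [], []
for allele, (tb, ltbi) in carriers.items():
    table = [[tb, n_tb - tb], [ltbi, n_ltbi - ltbi]]
    _, p = fisher_exact(table)
    alleles.append(allele)
    pvals.append(p)

reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for a, p, q, r in zip(alleles, pvals, qvals, reject):
    print(f"{a}: p={p:.4f}  q={q:.4f}  significant after FDR: {r}")
```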

Keywords: HLA, phenotype, tuberculosis, Uganda

Procedia PDF Downloads 403
197 Epidemiological and Clinical Characteristics of Five Rare Pathological Subtypes of Hepatocellular Carcinoma

Authors: Xiaoyuan Chen

Abstract:

Background: This study aimed to characterize the epidemiological and clinical features of five rare subtypes of hepatocellular carcinoma (HCC) and to create a competing risk nomogram for predicting cancer-specific survival. Methods: This study used the Surveillance, Epidemiology, and End Results database to analyze the clinicopathological data of 50,218 patients with classic HCC and five rare subtypes (ICD-O-3 histology codes 8170/3-8175/3) between 2004 and 2018. The annual percent change (APC) was calculated using Joinpoint regression, and a nomogram was developed based on multivariable competing risk survival analyses. The prognostic performance of the nomogram was evaluated using the Akaike information criterion, Bayesian information criterion, C-index, calibration curve, and area under the receiver operating characteristic curve. Decision curve analysis was used to assess the clinical value of the models. Results: The incidence of scirrhous carcinoma showed a decreasing trend (APC=-6.8%, P=0.025), while the morbidity of the other rare subtypes remained stable from 2004 to 2018. Incidence-based mortality plateaued in all subtypes during the period. Clear cell carcinoma was the most common subtype (n=551, 1.1%), followed by fibrolamellar (n=241, 0.5%), scirrhous (n=82, 0.2%), spindle cell (n=61, 0.1%), and pleomorphic (n=17, ~0%) carcinomas. Patients with fibrolamellar carcinoma were younger and more likely to have a non-cirrhotic liver and better prognoses. Scirrhous carcinoma shared almost the same macroscopic clinical characteristics and outcomes as classic HCC. Clear cell carcinoma tended to occur in the elderly male Asia-Pacific population, and more than half of the cases were large HCC (size > 5 cm). Sarcomatoid (including spindle cell and pleomorphic) carcinoma was associated with larger tumor size, poorer differentiation, and worse prognoses. Pathological subtype, T stage, M stage, surgery, alpha-fetoprotein, and cancer history were identified as independent predictors in patients with rare subtypes. The nomogram showed good calibration, discrimination, and net benefit in clinical practice. Conclusion: The rare subtypes of HCC had distinct clinicopathological features and biological behaviors compared with classic HCC. Our findings could provide a valuable reference for clinicians, and the constructed nomogram could accurately predict prognoses, supporting individualized management.
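For readers unfamiliar with the APC statistic, a single-segment version of the Joinpoint computation reduces to an ordinary least-squares regression of log incidence rate on calendar year, with APC = 100*(exp(slope) - 1). The sketch below uses fabricated rates chosen only to mimic the reported scirrhous trend; full Joinpoint software additionally searches for changepoints between segments.

```python
# Simplified, single-segment APC sketch on made-up rates (per 1e6).
import numpy as np
import statsmodels.api as sm

years = np.arange(2004, 2019)
rates = 0.30 * np.exp(-0.068 * (years - 2004)) * np.exp(
    np.random.default_rng(1).normal(0, 0.05, years.size))

X = sm.add_constant(years)
fit = sm.OLS(np.log(rates), X).fit()
slope = fit.params[1]
apc = 100 * (np.exp(slope) - 1)
lo, hi = 100 * (np.exp(fit.conf_int()[1]) - 1)
print(f"APC = {apc:.1f}% (95% CI {lo:.1f}% to {hi:.1f}%), p = {fit.pvalues[1]:.3f}")
```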

Keywords: hepatocellular carcinoma, pathological subtype, fibrolamellar carcinoma, scirrhous carcinoma, clear cell carcinoma, spindle cell carcinoma, pleomorphic carcinoma

Procedia PDF Downloads 75
196 Simulation of Colombian Exchange Rate to Cover the Exchange Risk Using Financial Options Like Hedge Strategy

Authors: Natalia M. Acevedo, Luis M. Jimenez, Erick Lambis

Abstract:

Imperfections in the capital market are used to argue the relevance of the corporate risk management function. With corporate hedging, the value of the company is increased by reducing the volatility of the expected cash flows, lowering expected bankruptcy costs and the likelihood of financial distress without sacrificing the tax advantages of debt financing. To protect the cash flows of Colombian exporting firms from exchange rate shocks, this study uses financial options on the peso/dollar exchange rate to build a financial hedge. A hedging strategy is designed for an exporting company in Colombia with the objective of protecting against adverse movements: if the exchange rate falls, the company receives fewer Colombian pesos for its exports than agreed. The exchange rate of Colombia is measured by the TRM (Representative Market Rate), the number of Colombian pesos per US dollar. First, the TRM is modelled as a geometric Brownian motion; the exchange rate is then simulated with Monte Carlo methods, and the mean TRM is computed for three-, six- and twelve-month horizons. European-style currency options were used for the hedge: the six-month projection was covered with options with a strike price of $2,780.47 for each month, corresponding to the last value of the historical TRM. The monthly option payoff calculations accounted for the premium paid, computed with the Black-Scholes method for currency options. Finally, combining the price model with the Monte Carlo simulation, the effect of the option hedge on the exporting company was determined by comparing the estimated unit price at which dollars were exchanged in the hedged and unhedged scenarios. The scenarios indicate that the TRM follows an upward trend, benefiting the exporting firm, which receives more pesos per dollar. The results show that the financial options manage to reduce the exchange risk: the expected value with hedging is close to the expected value without hedging, but the 5th percentile with hedging is higher than without it. This indicates that in the worst scenarios exporting companies obtain better prices for their currency sales if they hedge.
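A compact version of the experiment can be sketched as follows: the TRM is simulated as a geometric Brownian motion, and a European put on the dollar (the right to sell USD at the strike) is priced with the Garman-Kohlhagen variant of Black-Scholes for currency options. All rate and volatility parameters below are illustrative assumptions, not the paper's calibrated values.

```python
# Sketch: GBM Monte Carlo of the TRM plus a currency-put hedge.
import numpy as np
from scipy.stats import norm

S0, K = 2780.47, 2780.47   # spot and strike TRM (pesos per USD)
rd, rf = 0.07, 0.02        # domestic (COP) and foreign (USD) rates, assumed
sigma, T = 0.15, 0.5       # annual volatility and 6-month horizon, assumed

# Garman-Kohlhagen premium for a European currency put
d1 = (np.log(S0 / K) + (rd - rf + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
put = K * np.exp(-rd * T) * norm.cdf(-d2) - S0 * np.exp(-rf * T) * norm.cdf(-d1)

# Terminal TRM under GBM, then hedged vs unhedged pesos received per dollar
rng = np.random.default_rng(42)
z = rng.standard_normal(100_000)
ST = S0 * np.exp((rd - rf - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

unhedged = ST
hedged = np.maximum(ST, K) - put * np.exp(rd * T)  # put floor net of premium

for name, x in [("unhedged", unhedged), ("hedged", hedged)]:
    print(f"{name}: mean={x.mean():.1f}  5th pct={np.percentile(x, 5):.1f}")
```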

Keywords: currency hedging, futures, geometric Brownian motion, options

Procedia PDF Downloads 130
195 Comparison of Modulus from Repeated Plate Load Test and Resonant Column Test for Compaction Control of Trackbed Foundation

Authors: JinWoog Lee, SeongHyeok Lee, ChanYong Choi, Yujin Lim, Hojin Cho

Abstract:

The primary function of the trackbed in a conventional railway track system is to reduce the stresses in the subgrade to an acceptable level. A properly designed trackbed layer performs this task adequately. Many design procedures rely on assumed or field-measured critical stiffness values of the layers to calculate an appropriate thickness for the sublayers of the trackbed foundation. However, those stiffness values do not account clearly and precisely for the strain levels in the layers. This study proposes a method of computing stiffness that accounts for the strain level in the layers of the trackbed foundation, in order to provide properly selected design values of layer stiffness. Because shear modulus depends on shear strain level, the strain levels generated in the subgrade under wheel loading and below the plate of the Repeated Plate Bearing Test (RPBT) are investigated with the finite element programs ABAQUS and PLAXIS. The strain levels generated in the subgrade by the RPBT are then compared with those from the resonant column (RC) test, taking strain level and stress state into account. To compare the shear modulus G obtained from RC tests with the stiffness modulus Ev2 obtained from the RPBT in the field, an extensive programme of mid-size RC tests in the laboratory and RPBT in the field was performed. A large difference in stiffness modulus was found when the converted Ev2 values were compared with the RC test values. The study verifies that precise, finely incremented loading steps are necessary to construct nonlinear curves from the RPBT in order to obtain correct Ev2 values at the appropriate strain levels.
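To make the strain-level dependence concrete, the sketch below evaluates a hyperbolic Hardin-Drnevich modulus-reduction curve, one common way to relate the small-strain shear modulus from an RC test to the modulus at the larger strains mobilised under a plate test, and converts shear to Young's modulus via E = 2G(1 + nu). The curve, parameter values and the conversion are illustrative assumptions, not the paper's procedure.

```python
# Sketch: strain-dependent shear modulus and the G -> E conversion.
G_max = 120.0       # MPa, small-strain modulus from an RC test (assumed)
gamma_ref = 3e-4    # reference strain of the hyperbolic model (assumed)
nu = 0.3            # Poisson's ratio of the subgrade (assumed)

def G(gamma):
    """Hardin-Drnevich reduction: G/G_max = 1 / (1 + gamma/gamma_ref)."""
    return G_max / (1.0 + gamma / gamma_ref)

# RC tests operate at very small strains; plate tests mobilise larger ones.
for gamma in (1e-6, 1e-5, 1e-4, 1e-3):
    E = 2.0 * G(gamma) * (1.0 + nu)
    print(f"gamma={gamma:.0e}: G={G(gamma):6.1f} MPa  E={E:6.1f} MPa")
```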

Keywords: modulus, plate load test, resonant column test, trackbed foundation

Procedia PDF Downloads 495