Search results for: rational ball curve
285 Urea and Starch Detection on a Paper-Based Microfluidic Device Enabled on a Smartphone
Authors: Shashank Kumar, Mansi Chandra, Ujjawal Singh, Parth Gupta, Rishi Ram, Arnab Sarkar
Abstract:
Milk is a basic and primary source of food and energy, consumed from birth onwards. Checking its quality and purity and the concentration of its constituents is therefore a necessary step. Considering the importance of milk purity for human health, the following study was carried out to simultaneously detect and quantify different adulterants, such as urea and starch, in milk with the help of a paper-based microfluidic device integrated with a smartphone. The detection of the concentration of urea and starch is based on the principle of colorimetry, while the fluid flow in the device relies on the capillary action of the porous medium. The microfluidic channel proposed in the study is equipped with a specialized detection zone and employs a colorimetric indicator that undergoes a visible color change when the milk reacts with a set of reagents, which confirms the presence of the different adulterants in the milk. In the proposed work, iodine is used to detect the percentage of starch in the milk, whereas p-dimethylaminobenzaldehyde (p-DMAB) is used for urea. A direct correlation was found between the intensity of the color change and the concentration of adulterants. A calibration curve was constructed to relate color intensity to the corresponding starch and urea concentrations. The device's low-cost production and easy disposability make it highly suitable for widespread adoption, especially in resource-constrained settings. Moreover, a smartphone application has been developed to detect, capture, and analyze the change in color intensity due to the presence of adulterants in the milk. The low-cost nature of the paper-based sensor, coupled with its smartphone integration, makes it an attractive solution for widespread use.
They are affordable, simple to use, and do not require specialized training, making them ideal tools for regulatory bodies and concerned consumers.
Keywords: paper based microfluidic device, milk adulteration, urea detection, starch detection, smartphone application
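The calibration step described above can be sketched in a few lines: a least-squares line is fitted with color intensity as predictor and concentration as response, so an unknown sample's measured intensity maps directly to an estimated concentration. All numbers below are hypothetical placeholders, not measurements from the study, and the linear form is an assumption for illustration.

```python
# Sketch of a colorimetric calibration curve (hypothetical values).

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Hypothetical calibration points: mean color-channel intensity vs urea concentration (% w/v)
intensity = [10.0, 30.0, 50.0, 70.0, 90.0]
urea_pct = [0.0, 0.5, 1.0, 1.5, 2.0]

slope, intercept = fit_line(intensity, urea_pct)

def estimate_concentration(i):
    """Read an unknown sample's concentration off the calibration curve."""
    return slope * i + intercept

unknown = estimate_concentration(60.0)  # -> 1.25 for this synthetic calibration
```

In practice the app would extract the mean intensity of the detection zone from the camera image before applying such a curve; each adulterant (urea, starch) would get its own calibration.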
Procedia PDF Downloads 65
284 A Content Analysis of the Introduction to the Philosophy of Religion Literature Published in the West between 1950-2010 in Terms of Definition, Method and Subjects
Authors: Fatih Topaloğlu
Abstract:
Although philosophy is inherently a theoretical and intellectual activity, it should not be denied that environmental conditions influence the formation and shaping of philosophical thought. In this context, it should be noted that the Philosophy of Religion has been influential in debates in the West, especially since the beginning of the 20th century, and that this influence has dimensions that cannot be limited to academic or intellectual fields. The issues and problems that fall within the field of interest of Philosophy of Religion are followed with interest by a significant proportion of society through popular publications. Philosophy of Religion has its share in many social, economic, cultural, scientific, political and ethical developments. Philosophy of Religion, in the most general sense, can be defined as a philosophical approach to religion or a philosophical way of thinking and discussing religion. It tries to explain the epistemological foundations of concepts such as belief and faith that shape religious life by revealing their meaning for the individual. Thus, it tries to evaluate the effect of beliefs on the individual's values, judgments and behaviours with a comprehensive and critical eye. The Philosophy of Religion, which tries to create new solutions and perspectives by applying the methods of philosophy to religious problems, addresses these problems not by referring to the holy book or religious teachings but by logical proofs obtained through reason and evidence subjected to critical scrutiny. Although there is no standard method for doing Philosophy of Religion, it can be said that an approach that can be expressed as thinking about religion in a rational, objective, and consistent way is generally accepted. The evaluations made within the scope of Philosophy of Religion have two stages: the first is the definition stage, and the second is the evaluation stage.
In the first stage, the data of different scientific disciplines, especially the other religious sciences, are utilized to define the issues objectively. In the second stage, philosophical evaluations are made on this foundation. During these evaluations, the question of how the relationship between religion and philosophy should be established is extremely sensitive. The main thesis of this paper is that the Philosophy of Religion, as a branch of philosophy, has been affected by the conditions produced by the historical experience through which it has passed and, under the influence of these conditions, has differentiated over time both its subjects and the methods it uses to carry out its philosophical work. This study will attempt to evaluate the validity of this thesis based on the "Introduction to Philosophy of Religion" literature, which we assume reflects this differentiation. This examination will aim to reach some factual conclusions about the nature of both philosophical and religious thought, to determine the phases that the Philosophy of Religion as a discipline has gone through since it emerged, and to investigate the possibilities of a holistic view of the field.
Keywords: content analysis, culture, history, philosophy of religion, method
Procedia PDF Downloads 57
283 Shear Strength Parameters of an Unsaturated Lateritic Soil
Authors: Jeferson Brito Fernades, Breno Padovezi Rocha, Roger Augusto Rodrigues, Heraldo Luiz Giacheti
Abstract:
Geotechnical projects demand appropriate knowledge of soil characteristics and parameters. Geotechnical soil parameters can be determined by means of laboratory or in situ tests. In countries with a tropical climate, like Brazil, unsaturated soils are very common. In these soils, soil suction has been recognized as an important stress state variable, which governs the geo-mechanical behavior. Triaxial and direct shear tests on saturated soil samples determine only the minimum soil shear strength, in other words, with no suction contribution. This paper briefly describes the triaxial test with controlled suction and discusses the influence of suction on the shear strength parameters of a lateritic tropical sandy soil from a Brazilian research site. At this site, a sample pit was excavated to retrieve disturbed and undisturbed soil blocks. The samples extracted from these blocks were tested in the laboratory to represent the soil from 1.5, 3.0 and 5.0 m depth. The stress curves and shear strength envelopes determined by triaxial tests with varying suction and confining pressure are presented and discussed. The water retention characteristics of this soil complement the analysis. In situ CPT tests were also carried out at this site in different seasons of the year. In this case, the soil suction profile was determined by means of the soil water retention curve. This extra information allowed assessing how soil suction also affected the CPT data and the shear strength parameters estimated via correlation. The major conclusions of this paper are: the undisturbed soil samples contracted before shearing and the soil shear strength increased hyperbolically with suction; and it was possible to assess how soil suction also influenced CPT test data based on the water content profile of the soil as well as the water retention curve.
This study contributes to a better understanding of the shear strength parameters and the soil variability of a typical unsaturated tropical soil.
Keywords: site characterization, triaxial test, CPT, suction, variability
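The reported hyperbolic increase of shear strength with suction can be illustrated with a simple functional form in which the strength gain saturates at high suction. The equation shape and all parameter values below are assumptions for illustration only, not the fitted envelope from the triaxial tests.

```python
# Sketch: hyperbolic growth of shear strength with matric suction,
# tau(s) = tau_sat + s / (A + B*s). Parameters are hypothetical.

def shear_strength(suction_kpa, tau_sat=30.0, A=2.0, B=0.01):
    """Shear strength (kPa) as a hyperbolic function of suction (kPa)."""
    return tau_sat + suction_kpa / (A + B * suction_kpa)

# The suction-induced gain tends to an asymptote of 1/B = 100 kPa:
gain_low = shear_strength(50.0) - shear_strength(0.0)     # steep initial gain
gain_high = shear_strength(5000.0) - shear_strength(0.0)  # approaches 100 kPa
```

The hyperbolic shape captures the qualitative behavior described in the abstract: strength rises quickly at low suctions and flattens toward a limiting value as the soil dries.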
Procedia PDF Downloads 416
282 Mathematical Modelling of Drying Kinetics of Cantaloupe in a Solar Assisted Dryer
Authors: Melike Sultan Karasu Asnaz, Ayse Ozdogan Dolcek
Abstract:
Crop drying, which aims to reduce the moisture content to a certain level, is a method used to extend shelf life and prevent spoiling. One of the oldest food preservation techniques is open sun or shade drying. Even though this technique is the most affordable of all drying methods, it has some drawbacks, such as contamination by insects, environmental pollution, windborne dust, and direct exposure to weather conditions such as wind, rain, and hail. However, solar dryers that provide a hygienic and controllable environment to preserve food and extend its shelf life have been developed and used to dry agricultural products. Thus, foods can be dried quickly without being affected by weather variables, and quality products can be obtained. This research is mainly devoted to investigating the modelling of the drying kinetics of cantaloupe in a forced convection solar dryer. Mathematical models of the drying process should be defined to simulate the drying behavior of the foodstuff, which will greatly contribute to the development of solar dryer designs. Thus, drying experiments were conducted and replicated five times, and various data such as temperature, relative humidity, solar irradiation, drying air speed, and weight were continuously monitored and recorded. The moisture content of sliced and pretreated cantaloupe was converted into moisture ratio and then fitted against drying time to construct drying curves. Then, 10 quasi-theoretical and empirical drying models were applied to find the best drying curve equation according to the Levenberg-Marquardt nonlinear optimization method. The best fitted mathematical drying model was selected according to the highest coefficient of determination (R²) and the lowest mean square of the deviations (χ²) and root mean square error (RMSE) criteria.
The best fitted model was utilized to simulate thin-layer solar drying of cantaloupe, and the simulation results were compared with the experimental data for validation purposes.
Keywords: solar dryer, mathematical modelling, drying kinetics, cantaloupe drying
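The study fits its candidate models with Levenberg-Marquardt nonlinear optimization (the default algorithm in, e.g., scipy.optimize.curve_fit). As a dependency-free sketch, one common thin-layer model, the Page model MR = exp(-k·t^n), can also be fitted by linearization, ln(-ln MR) = ln k + n·ln t. The data points below are synthetic, generated from assumed parameters rather than taken from the experiments.

```python
import math

# Sketch: fitting the Page thin-layer drying model by linearization.
# Synthetic "measurements" generated with k = 0.1, n = 1.2 (assumed values).
times = [10.0, 30.0, 60.0, 120.0, 240.0]           # drying time, min
MR = [math.exp(-0.1 * t ** 1.2) for t in times]    # moisture ratio

# Linearize: y = ln(-ln MR) = ln k + n * ln t, then least-squares for n and ln k.
xs = [math.log(t) for t in times]
ys = [math.log(-math.log(m)) for m in MR]

n_pts = len(xs)
mx, my = sum(xs) / n_pts, sum(ys) / n_pts
n_hat = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
k_hat = math.exp(my - n_hat * mx)  # recovers k and n from the synthetic data
```

Model selection in the study then compares R², χ², and RMSE across the ten candidate equations; the same residual-based statistics can be computed from the fitted curve and the measured moisture ratios.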
Procedia PDF Downloads 126
281 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach
Authors: James Ladzekpo
Abstract:
Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We have implemented recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and Multilayer Perceptron Neural Network, to evaluate their performance. Findings: The Gradient Boosting Classifier excelled, achieving a median recall of 92.17% and metrics such as area under the receiver operating characteristic curve (AUC) with a median of 68%, alongside median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction.
We recommend that future investigations incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
Keywords: diabetes, machine learning, prediction, biomarkers
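The recall, precision, and accuracy figures quoted above follow the standard confusion-matrix definitions, sketched here with invented counts (not the study's data) to make the relationships explicit.

```python
# Sketch: classification metrics from a confusion matrix (illustrative counts).

def metrics(tp, fp, fn, tn):
    """Recall, precision, and accuracy from true/false positives/negatives."""
    recall = tp / (tp + fn)                    # fraction of diabetics caught
    precision = tp / (tp + fp)                 # fraction of flagged cases correct
    accuracy = (tp + tn) / (tp + fp + fn + tn) # overall fraction correct
    return recall, precision, accuracy

# Hypothetical test set of 100 subjects, 50 of them positive:
r, p, a = metrics(tp=46, fp=14, fn=4, tn=36)   # recall 0.92, precision ~0.77
```

Note that a high recall with a lower precision, as reported for the Gradient Boosting Classifier, means few positives are missed at the cost of more false alarms; which trade-off is preferable depends on the screening context.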
Procedia PDF Downloads 55
280 Comparison of Stereotactic Body Radiation Therapy Virtual Treatment Plans Obtained With Different Collimators in the Cyberknife System in Partial Breast Irradiation: A Retrospective Study
Authors: Öznur Saribaş, Si̇bel Kahraman Çeti̇ntaş
Abstract:
This study aims to compare target volume and critical organ doses in accelerated partial breast irradiation (APBI) with CyberKnife (CK) in patients with early-stage breast cancer. Three different virtual plans were made, for the Iris, fixed and multi-leaf collimator (MLC), for 5 patients who received radiotherapy in the CyberKnife system. The CyberKnife virtual plans were created with 6 Gy per day, totaling 30 Gy. Dosimetric parameters for the three collimators were analyzed according to the restrictions in the NSABP-39/RTOG 0413 protocol. The plans ensured that critical organs were protected and that the GTV received 95% of the prescribed dose. The prescribed dose was defined by a minimum 80% isodose curve. The homogeneity index (HI), conformity index (CI), treatment time (min), monitor units (MU) and doses received by critical organs were compared. The comparison of the plans showed a significant difference in treatment time and MU, but no significant difference in HI and CI. The V30 and V15 values of the ipsilateral breast were lowest for the MLC. There was no significant difference between Dmax values for the lung and heart; however, the mean MU and treatment time were also lowest for the MLC. As a result, the target volume received the desired dose with each collimator. The contralateral breast and contralateral lung doses were lowest with the Iris, and the fixed collimator was found to be more suitable for cardiac doses, but these differences were not significant. The use of fixed collimators may cause difficulties in clinical applications due to the long treatment time. The choice of collimator in breast SBRT applications with CyberKnife may therefore vary depending on tumor size, proximity to critical organs and tumor localization.
Keywords: APBI, CyberKnife, early stage breast cancer, radiotherapy
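The homogeneity and conformity indices compared in the study have several competing definitions in the radiotherapy literature; a minimal sketch using the common RTOG-style forms, with entirely hypothetical plan values, is:

```python
# Sketch: RTOG-style plan-quality indices (one common convention among several).
# All volumes and doses are hypothetical, not from the compared plans.

def conformity_index(v_prescription_isodose_cc, target_volume_cc):
    """CI: volume enclosed by the prescription isodose / target volume.
    Values near 1 indicate a dose cloud tightly conforming to the target."""
    return v_prescription_isodose_cc / target_volume_cc

def homogeneity_index(d_max_gy, d_prescription_gy):
    """HI: maximum dose / prescription dose within the target."""
    return d_max_gy / d_prescription_gy

# Hypothetical plan: 30 Gy prescribed to the 80% isodose line -> Dmax = 37.5 Gy
ci = conformity_index(66.0, 55.0)
hi = homogeneity_index(37.5, 30.0)
```

Prescribing to an 80% isodose, as in this study, fixes an HI of 1.25 under this definition; comparisons between collimators then hinge mainly on CI, treatment time, MU, and organ-at-risk doses.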
Procedia PDF Downloads 117
279 A Remote Sensing Approach to Estimate the Paleo-Discharge of the Lost Saraswati River of North-West India
Authors: Zafar Beg, Kumar Gaurav
Abstract:
The lost Saraswati is described as a large perennial river which was 'lost' in the desert towards the end of the Indus-Saraswati civilisation. It has been proposed earlier that the lost Saraswati flowed in the Sutlej-Yamuna interfluve, parallel to the present-day Indus River. It is believed that one of the earliest known ancient civilizations, the 'Indus-Saraswati civilization', prospered along the course of the Saraswati River, and the demise of the civilization is considered to be due to the desiccation of the river. Today, in the Sutlej-Yamuna interfluve, we observe an ephemeral river known as the Ghaggar. It is believed that, along with the Ghaggar River, two other Himalayan rivers, the Sutlej and the Yamuna, were tributaries of the lost Saraswati and made a significant contribution to its discharge. The presence of a large number of archaeological sites and the occurrence of thick fluvial sand bodies in the subsurface of the Sutlej-Yamuna interfluve have been used to suggest that the Saraswati River was a large perennial river. Further, the wide course of about 4-7 km recognized from satellite imagery of the Ghaggar-Hakra belt between Suratgarh and Anupgarh strengthens this hypothesis. Here we develop a methodology to estimate the paleo discharge and paleo width of the lost Saraswati River. In doing so, we rely on the hypothesis that the ancient Saraswati River used to carry the combined flow, or some part of it, of the Yamuna, Sutlej and Ghaggar catchments. We first established regime relationships between drainage area and channel width, and between catchment area and discharge, for 29 different rivers presently flowing on the Himalayan Foreland, from the Indus in the west to the Brahmaputra in the east. We found that the width and discharge of all the Himalayan rivers scale in a similar way when plotted against their corresponding catchment areas.
Using these regime curves, we calculate the width and discharge of the paleochannels originating from the Sutlej, Yamuna and Ghaggar rivers by measuring their corresponding catchment areas from satellite images. Finally, we add the width and discharge obtained from each of the individual catchments to estimate the paleo width and paleo discharge, respectively, of the Saraswati River. Our regime curves provide a first-order estimate of the paleo discharge of the lost Saraswati.
Keywords: Indus civilization, palaeochannel, regime curve, Saraswati River
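The regime-curve approach reduces to two power-law scalings and a summation over contributing catchments. The power-law form is standard in regime analysis, but the coefficients, exponents, and catchment areas below are hypothetical placeholders, not the values fitted to the 29 Himalayan rivers.

```python
# Sketch: regime-curve scaling of width and discharge with catchment area,
# then summation over tributary catchments (all numbers are placeholders).

def regime_width(area_km2, a=0.01, b=0.5):
    """Channel width (km) from catchment area via W = a * A**b."""
    return a * area_km2 ** b

def regime_discharge(area_km2, c=0.5, d=0.75):
    """Discharge (m^3/s) from catchment area via Q = c * A**d."""
    return c * area_km2 ** d

# Hypothetical contributing catchment areas (km^2):
catchments_km2 = {"Sutlej": 50000.0, "Yamuna": 10000.0, "Ghaggar": 4000.0}

paleo_discharge = sum(regime_discharge(a) for a in catchments_km2.values())
paleo_width = sum(regime_width(a) for a in catchments_km2.values())
```

Because the exponents are below 1, the combined estimate is dominated by, but exceeds, the contribution of the largest catchment, which is the first-order behavior the abstract relies on.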
Procedia PDF Downloads 179
278 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks
Authors: Ahmed Abdullah Ahmed
Abstract:
The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although a great effort has been made by previous studies to come up with various methods, their performances, especially in terms of accuracy, fall short, and considerable room for improvement remains. The proposed technique employs optimal codebook based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most of the classical codebook based approaches, which segment the writing into graphemes, this study is based on fragmenting a particular area of the writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to divide the beginning and ending zones of the handwriting into small fragments. The similar fragments of beginning strokes are grouped together to create a Beginning cluster, and similarly, the ending strokes are grouped to create an Ending cluster. These two clusters lead to the development of two codebooks (beginning and ending) by choosing the center of every group of similar fragments. Writings under study are then represented by computing the probability of occurrence of the codebook patterns, and this probability distribution is used to characterize each writer. Two writings are then compared by computing distances between their respective probability distributions. The evaluations were carried out on the standard ICFHR dataset of 206 writers using the Beginning and Ending codebooks separately.
Finally, the Ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.
Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments
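The final comparison step, matching two writings by the distance between their codebook-pattern occurrence distributions, can be sketched directly. The abstract does not name the distance measure, so the chi-square distance used below is an assumed (and common) choice for histogram comparison, and the pattern counts are invented.

```python
# Sketch: writer comparison via distributions of codebook-pattern occurrences.
# Counts are invented; chi-square distance is an assumed choice of measure.

def to_distribution(counts):
    """Normalize raw pattern-occurrence counts into a probability distribution."""
    total = sum(counts)
    return [c / total for c in counts]

def chi2_distance(p, q):
    """Chi-square distance between two histograms (smaller = more similar)."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(p, q) if a + b > 0)

writing_a = to_distribution([5, 3, 2, 0])    # occurrences of 4 codebook patterns
writing_b = to_distribution([4, 4, 1, 1])    # a different writer's profile
same_hand = to_distribution([10, 6, 4, 0])   # same proportions as writing_a

d_between = chi2_distance(writing_a, writing_b)  # positive: different profiles
d_within = chi2_distance(writing_a, same_hand)   # zero: identical proportions
```

Identification then amounts to assigning a query writing to the enrolled writer with the smallest distance, which is how the 98.23% rate on the 206-writer set would be scored.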
Procedia PDF Downloads 512
277 Study of the Relationship between the Civil Engineering Parameters and the Floating of a Buoy Model Made from Expanded Polystyrene-Mortar
Authors: Panarat Saengpanya
Abstract:
This study had five objectives: the study of housing types for water environments, the physical and mechanical properties of the buoy material, the mechanical properties of the buoy models, the floating of the buoy models, and the relationship between the civil engineering parameters and the floating of the buoy. The buoy specimens were made from expanded polystyrene (EPS) covered by 5 mm of mortar, with equal thickness on each side. Specimens are 0.05 m cubes tested at a displacement rate of 0.005 m/min. The existing test method used to assess the parameter relationships is ASTM C 109, which provides comparative results. Three types of housing for water environments were identified: stilt houses, boat houses, and floating houses. EPS is a lightweight material that has been used in engineering applications since at least the 1950s. Its density is about a hundredth of that of mortar, while the strength of mortar was found to be 72 times that of EPS. One advantage of a composite is that two or more materials can be combined to take advantage of the good characteristics of each material. The strength of the buoy is governed by the mortar, while the floating is governed by the EPS. Results showed that the buoy specimens compressed under loading. The stress-strain curve showed a high secant modulus before reaching the peak value. Failure occurred within 10% strain, after which the strength decreased as the strain continued. It was observed that the failure strength decreased with increasing total specimen volume. For buoy specimens with the same area, an increase in failure strength was found when the height was increased. The results showed the relationship between five parameters: the floating level, the bearing capacity, the volume, the height, and the unit weight. The study found that increases in buoy height lead to corresponding decreases in both modulus and compressive strength.
The total volume and the unit weight were related to the bearing capacity of the buoy.
Keywords: floating house, buoy, floating structure, EPS
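The floating behavior of such a mortar-coated EPS cube follows directly from Archimedes' principle: the cube floats if the draft needed to balance its weight is less than its height. The densities below are rough textbook-order values (EPS ≈ 20 kg/m³, mortar ≈ 2000 kg/m³) assumed for illustration, not measured properties from the study.

```python
# Sketch: floating check for a mortar-coated EPS cube via Archimedes' principle.
# Densities are assumed illustrative values, not the study's measurements.

RHO_WATER = 1000.0  # kg/m^3

def draft(side_m, shell_m, rho_eps=20.0, rho_mortar=2000.0):
    """Submerged depth (m) at which buoyancy balances the cube's weight."""
    v_total = side_m ** 3
    v_core = (side_m - 2 * shell_m) ** 3        # EPS core inside the mortar shell
    mass = rho_eps * v_core + rho_mortar * (v_total - v_core)
    return mass / (RHO_WATER * side_m ** 2)      # displaced depth = mass / (rho_w * A)

d = draft(side_m=0.05, shell_m=0.005)            # the 0.05 m cube with 5 mm shell
floats = d < 0.05                                # floats if draft < cube height
```

With these assumed densities the 5 mm shell leaves only a sliver of freeboard, which illustrates why the unit weight (shell thickness relative to volume) controls the floating level reported in the study.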
Procedia PDF Downloads 146
276 Experimental Investigation on the Effect of Cross Flow on Discharge Coefficient of an Orifice
Authors: Mathew Saxon A, Aneeh Rajan, Sajeev P
Abstract:
Many fluid flow applications employ different types of orifices to control the flow rate or to reduce the pressure. Discharge coefficients generally vary from 0.6 to 0.95 depending on the type of orifice. The tabulated values of discharge coefficients available for various types of orifices can be used in most common applications. The upstream and downstream flow conditions of an orifice are hardly considered while choosing its discharge coefficient, but the literature shows that the discharge coefficient can be affected by the presence of cross flow. Cross flow is defined as the condition wherein a fluid is injected nearly perpendicular to a flowing fluid. Most researchers have worked on water being injected into a cross-flow of water; the present work deals with water-to-gas systems, in which water is injected in a normal direction into a flowing stream of gas. The test article used in the current work is called a thermal regulator, which is used in a liquid rocket engine to reduce the temperature of hot gas tapped from the gas generator by injecting water into the hot gas so that a cooler gas can be supplied to the turbine. In a thermal regulator, water is injected through an orifice in a normal direction into the hot gas stream. However, the injection orifice had been calibrated under backpressure by maintaining a stagnant gas medium at the downstream side. The motivation for the present study arose from the observation of a lower Cd of the orifice in flight compared to the calibrated Cd. A systematic experimental investigation is carried out in this paper to study the effect of cross flow on the discharge coefficient of an orifice in a water-to-gas system. The study reveals that there is an appreciable reduction in the discharge coefficient with cross flow compared to that without cross flow. It is found that the discharge coefficient greatly depends on the ratio of the momentum of the injected water to the momentum of the gas cross flow.
The effective discharge coefficients of different orifices were normalized using the discharge coefficient without cross flow, and it is observed that the normalized curves of effective discharge coefficient versus momentum ratio for the different orifices collapse into a single curve. Further, an equation is formulated using the test data to predict the effective discharge coefficient with cross flow from the calibrated Cd value without cross flow.
Keywords: cross flow, discharge coefficient, orifice, momentum ratio
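The normalization described above can be sketched as follows. The momentum-ratio definition is the standard momentum-flux ratio; the collapse-curve form and its constant are assumptions chosen for illustration, not the equation fitted to the test data.

```python
# Sketch: momentum-ratio normalization of an orifice discharge coefficient.
# The collapse-curve form Cd_eff/Cd0 = J/(J + k) and k are assumptions.

def momentum_ratio(rho_w, v_w, rho_g, v_g):
    """Ratio of injected-water momentum flux to gas cross-flow momentum flux."""
    return (rho_w * v_w ** 2) / (rho_g * v_g ** 2)

def cd_effective(cd_no_crossflow, J, k=0.15):
    """Assumed single collapse curve: Cd_eff -> Cd0 as J grows large."""
    return cd_no_crossflow * J / (J + k)

# Hypothetical operating point: water jet into a fast gas stream
J = momentum_ratio(rho_w=1000.0, v_w=10.0, rho_g=5.0, v_g=100.0)  # -> 2.0
cd = cd_effective(0.8, J)  # below the calibrated 0.8, as observed in flight
```

The qualitative behavior matches the abstract: a strong gas cross flow (small J) pulls the effective Cd well below the backpressure-calibrated value, while a dominant water jet (large J) recovers it.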
Procedia PDF Downloads 142
275 Microscopic Analysis of Interfacial Transition Zone of Cementitious Composites Prepared by Various Mixing Procedures
Authors: Josef Fládr, Jiří Němeček, Veronika Koudelková, Petr Bílý
Abstract:
Mechanical parameters of cementitious composites differ quite significantly based on the composition of the cement matrix. They are also influenced by mixing times and procedure. The research presented in this paper was aimed at identification of differences in the microstructure of normal strength (NSC) and differently mixed high strength (HSC) cementitious composites. Scanning electron microscopy (SEM) investigation together with energy dispersive X-ray spectroscopy (EDX) phase analysis of NSC and HSC samples was conducted. Evaluation of the interfacial transition zone (ITZ) between the aggregate and the cement matrix was performed, and the volume share, thickness, porosity and composition of the ITZ were studied. In the case of HSC, samples obtained by several different mixing procedures were compared in order to find the most suitable procedure. In the case of NSC, an ITZ was identified around 40-50% of aggregate grains, and its thickness typically ranged between 10 and 40 µm. Higher porosity and a lower share of clinker were observed in this area as a result of the increased water-to-cement ratio (w/c) and the lack of fine particles improving the grading curve of the aggregate. A typical ITZ with lower content of Ca was observed in only one HSC sample, where it developed around less than 15% of aggregate grains. The typical thickness of the ITZ in this sample was similar to the ITZ in NSC (between 5 and 40 µm). In the remaining four HSC samples, no ITZ was observed. In general, the share of ITZ in the HSC samples was found to be significantly smaller than in the NSC samples. As the ITZ is the weakest part of the material, this result explains to a large extent the improved mechanical properties of HSC compared to NSC.
Based on the comparison of the characteristics of the ITZ in HSC samples prepared by different mixing procedures, the most suitable mixing procedure from the point of view of the properties of the ITZ was identified.
Keywords: electron diffraction spectroscopy, high strength concrete, interfacial transition zone, normal strength concrete, scanning electron microscopy
Procedia PDF Downloads 292
274 An Experimental Investigation of Rehabilitation and Strengthening of Reinforced Concrete T-Beams Under Static Monotonic Increasing Loading
Authors: Salem Alsanusi, Abdulla Alakad
Abstract:
An experimental investigation was carried out to study the flexural behaviour of reinforced concrete T-beams. The beams were loaded to pre-designated stress levels, expressed as percentages of the calculated collapse loads, and then repaired by either a reinforced concrete jacket or externally bolted steel plates. Twelve full-scale beams were tested in this experimental program. Eight of the twelve beams were loaded to different load levels, and tests were performed on these beams before and after repair with a reinforced concrete jacket (RCJ). The applied load levels were 60%, 77% and 100% of the calculated collapse loads. The remaining four beams were tested before and after repair with bolted steel plates (BSP); of these four beams, two were loaded to 100% of the calculated failure load and the remaining two were not subjected to any load. The eight beams assigned to the RCJ test were repaired using a reinforced concrete jacket, and the four beams assigned to the BSP test were all repaired using a steel plate at the bottom. All the strengthened beams were gradually loaded until failure occurred. In each loading case, the behaviour of the beams before and after strengthening was studied through close inspection of the crack propagation and by carrying out extensive measurements of deformations and strength. The stress-strain curve for the reinforcing steel and the failure strains measured in the tests were utilized in the calculation of the failure loads of the beams before and after strengthening. The calculated failure loads were close to the actual failure loads in the case of beams before repair, ranging from 85% to 90%, as well as in the case of beams repaired by the reinforced concrete jacket, ranging from 70% to 85%; for beams repaired by bolted steel plates, the results ranged from 50% to 85%.
It was observed that both the jacketing and the bolted steel plate methods could effectively restore the full flexural capacity of the damaged beams. The reinforced concrete jacket increased the failure load by about 67%, whereas the bolted steel plates recovered the failure load.
Keywords: rehabilitation, strengthening, reinforced concrete, beams deflection, bending stresses
Procedia PDF Downloads 306
273 Performance of Reinforced Concrete Beams under Different Fire Durations
Authors: Arifuzzaman Nayeem, Tafannum Torsha, Tanvir Manzur, Shaurav Alam
Abstract:
Performance evaluation of reinforced concrete (RC) beams subjected to accidental fire is significant for post-fire capacity measurement. The mechanical properties of any RC beam degrade due to heating, since the strength and modulus of concrete and reinforcement suffer considerable reduction under elevated temperatures. Moreover, fire-induced thermal dilation and shrinkage cause internal stresses within the concrete and eventually result in cracking, spalling, and loss of stiffness, which ultimately leads to lower service life. However, conducting a full-scale comprehensive experimental investigation of RC beams exposed to fire is difficult and cost-intensive, and finite element (FE) based numerical study can provide an economical alternative for evaluating the post-fire capacity of RC beams. In this study, an attempt has been made to study the fire behavior of RC beams using the FE software package ABAQUS under different durations of fire. The concrete damaged plasticity model in ABAQUS was used to simulate the behavior of the RC beams. The effect of temperature on the strength and modulus of concrete and steel was simulated following the relevant Eurocodes. Initially, the results of the FE models were validated using several experimental results from available scholarly articles. It was found that the response of the developed FE models matched quite well with the experimental outcomes for beams without heat. The FE analysis of beams subjected to fire showed some deviation from the experimental results, particularly in terms of stiffness degradation; however, the ultimate strength and deflection of the FE models were similar to the experimental values. The developed FE models thus exhibited good potential to predict the fire behavior of RC beams. Once validated, the FE models were then used to analyze several RC beams with different strengths (ranging between 20 MPa and 50 MPa) exposed to the standard fire curve (ASTM E119) for different durations.
The post-fire performance of the RC beams was investigated in terms of load-deflection behavior, flexural strength, and deflection characteristics.
Keywords: fire durations, flexural strength, post fire capacity, reinforced concrete beam, standard fire
Procedia PDF Downloads 139
272 Spin Rate Decaying Law of Projectile with Hemispherical Head in Exterior Trajectory
Authors: Quan Wen, Tianxiao Chang, Shaolu Shi, Yushi Wang, Guangyu Wang
Abstract:
As part of the working environment of the fuze, the spin-rate decay law of a projectile in exterior trajectory is of great value in the design of rotation-count fixed-distance fuzes. In addition, it is significant in the field of devices for simulation tests of the fuze exterior ballistic environment, the flight stability and dispersion accuracy of gun projectiles, and the opening and scattering design of submunitions and illuminating cartridges. Besides, the self-destruction mechanism of the fuze in small-caliber projectiles often works by utilizing the attenuation of the centrifugal force. In the theory of projectile aerodynamics and fuze design, there are many formulas describing the decay of projectile angular velocity in exterior ballistics, such as the Roggla formula and exponential and power function formulas. However, these formulas are mostly semi-empirical due to the poor test conditions and insufficient test data at the time they were developed, and they are difficult to apply to the design of modern fuzes because they are not accurate enough and have a narrow range of applications. In order to provide more accurate ballistic environment parameters for the design of a hemispherical-head projectile fuze, the projectile's spin-rate decay law in exterior trajectory under the effect of air resistance was studied. In the analysis, the projectile shape was simplified as a hemispherical head, a cylindrical part, a rotating band, and an anti-truncated conical tail.
The main assumptions are as follows: a) The shape and mass are symmetrical about the longitudinal axis, b) There is a smooth transition between the ball head and the cylindrical part, c) The air flow on the outer surface is treated as flat-plate flow with the same area as the expanded outer surface of the projectile, and the boundary layer is turbulent, d) The polar damping moment attributed to the wrench hole and rifling marks on the projectile is not considered, e) The groove of the rifling on the rotating band is uniform, smooth, and regular. The contributions of the four parts to the aerodynamic moment resisting projectile rotation were obtained from aerodynamic theory. The surface friction stress of the projectile, the polar damping moment formed by the head of the projectile, and the surface friction moments formed by the cylindrical part, the rotating band, and the anti-truncated conical tail were obtained by mathematical derivation. After that, the mathematical model of spin rate attenuation was established. Over the whole trajectory at the maximum range angle (38°), the error between the polar damping torque coefficient obtained by simulation and the coefficient calculated by the mathematical model established in this paper is not more than 7%. Therefore, the credibility of the mathematical model was verified. The mathematical model can be described as a first-order nonlinear differential equation, which has no analytical solution. The solution can only be obtained numerically by coupling the model with the projectile mass motion equations of exterior ballistics.
Keywords: ammunition engineering, fuze technology, spin rate, numerical simulation
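Because the spin-decay model is a first-order ODE with no analytical solution, it must be stepped numerically. The sketch below is a hedged illustration only: it integrates a generic roll-damping law of the assumed form I_x dω/dt = −c·ρ·v·d⁴·ω with a classic RK4 stepper at a fixed flight speed. The damping form and every parameter value are our assumptions, not the paper's derived coefficients.

```python
# Minimal sketch: RK4 integration of a first-order spin-decay model.
# The damping moment M_d = c * rho * v * d**4 * omega and all parameter
# values below are illustrative assumptions, not fitted data.
import math

def spin_history(omega0, v, steps=1000, dt=0.01,
                 c=2.0e-3, rho=1.225, d=0.03, I_x=2.0e-5):
    """Classic RK4 on d(omega)/dt = -(c * rho * v * d**4 / I_x) * omega."""
    k = c * rho * v * d ** 4 / I_x
    f = lambda w: -k * w
    w, out = omega0, [omega0]
    for _ in range(steps):
        k1 = f(w)
        k2 = f(w + 0.5 * dt * k1)
        k3 = f(w + 0.5 * dt * k2)
        k4 = f(w + dt * k3)
        w += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        out.append(w)
    return out

hist = spin_history(omega0=2000.0, v=800.0)  # rad/s, m/s
```

In a full exterior-ballistics solution, the velocity v (and hence the damping coefficient) would itself vary along the trajectory, which is why the model must be coupled to the mass motion equations rather than solved in closed form.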
Procedia PDF Downloads 144
271 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives
Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes
Abstract:
This article describes the regulatory impact analysis (RIA) of the contracting efficiency of Brazilian transmission system usage. This contracting is made by users connected to the main transmission network and is used to guide the investments necessary to supply the electrical energy demand. Therefore, inefficient contracting of this energy amount distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. In order to provide this efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) homologated Normative Resolution (NR) No. 666, of July 23rd, 2015, which consolidated the procedures for the contracting of transmission system usage and the verification of contracting efficiency. Aiming for more efficient and rational transmission system contracting, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first one, IIE, is applied when the contracted demand exceeds the established regulatory limit; it applies to consumer units, generators, and distribution companies. The second one, IIOC, is applied when the distributors over-contract their demand. Thus, the establishment of the inefficiency installments IIE and IIOC intends to prevent agents from contracting less energy than necessary or more than is needed. Knowing that an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results from the application of the above-mentioned normative resolution to the Brazilian transmission sector were analyzed through indicators created for this RIA to evaluate the contracting efficiency of transmission system usage, using real data from before and after the homologation of the normative resolution in 2015.
For this, indicators such as the efficiency contracting indicator (ECI), the excess of demand indicator (EDI), and the over-contracting of demand indicator (ODI) were used. The results demonstrated, through the ECI analysis, a decrease in contracting efficiency, a behaviour that was already occurring before the normative resolution of 2015. On the other hand, the EDI showed a considerable decrease in the amount of excess for the distributors and a small reduction for the generators; moreover, the ODI notably decreased, which optimizes the usage of the transmission installations. Hence, from the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, indicating to the agents that their contracting values are not adequate to sustain the service provided to their users. The IIOC also has its relevance, to the point that it shows the distributors that their contracting values are overestimated.
Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system
Procedia PDF Downloads 121
270 A Prospective Study of a Clinically Significant Anatomical Change in Head and Neck Intensity-Modulated Radiation Therapy Using Transit Electronic Portal Imaging Device Images
Authors: Wilai Masanga, Chirapha Tannanonta, Sangutid Thongsawad, Sasikarn Chamchod, Todsaporn Fuangrod
Abstract:
The major factors in radiotherapy for head and neck (HN) cancers include the patient’s anatomical changes and tumour shrinkage. These changes can significantly affect the planned dose distribution and cause treatment plan deterioration. Comparison of measured transit EPID images against predicted EPID images using gamma analysis has been clinically implemented to verify dose accuracy as part of an adaptive radiotherapy protocol. However, a global gamma analysis is not sensitive to some critical organ changes, as the entire treatment field is compared. The objective of this feasibility study is to evaluate the dosimetric response to patient anatomical changes during the treatment course in HN IMRT (Head and Neck Intensity-Modulated Radiation Therapy) using a novel comparison method: organ-of-interest gamma analysis. This method is more sensitive in detecting specific organ changes. Five randomly selected replanned HN IMRT patients, whose tumour shrinkage and weight loss critically affected parotid size, were selected, and their transit dosimetry was evaluated. A comprehensive physics-based model was used to generate a series of predicted transit EPID images for each gantry angle from the original computed tomography (CT) and replan CT datasets. The patient structures, including the left and right parotid, spinal cord, and planning target volume (PTV56), were projected to the EPID level. The agreement between the transit images generated from the original CT and the replanned CT was quantified using gamma analysis with 3%/3 mm criteria. Moreover, the gamma pass-rate is calculated only within each projected structure. The gamma pass-rates in the right parotid and PTV56 between the predicted transit images of the original CT and the replan CT were 42.8% (±17.2%) and 54.7% (±21.5%), respectively. The gamma pass-rates for the other projected organs were greater than 80%.
Additionally, the results of the organ-of-interest gamma analysis were compared with 3-dimensional cone-beam computed tomography (3D-CBCT) and the rationale for replanning given by the radiation oncologists. This showed that registration of 3D-CBCT to the original CT alone does not capture the dosimetric impact of anatomical changes. Using transit EPID images with organ-of-interest gamma analysis can provide additional information for assessing treatment plan suitability.
Keywords: re-plan, anatomical change, transit electronic portal imaging device, EPID, head and neck
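The organ-of-interest gamma analysis described above can be sketched in miniature: compute a gamma map (a global 3%/3 mm criterion here) and report the pass-rate only inside a projected-structure mask. The brute-force implementation and the tiny synthetic grids below are ours, not the authors' clinical pipeline, which operates on full EPID images with real pixel spacing.

```python
# Sketch of an organ-of-interest gamma pass-rate: brute-force 2D gamma
# index (global 3%/3mm) evaluated only inside a structure mask.
# Doses, grid, and mask are synthetic illustrations.
import math

def gamma_map(ref, ev, spacing_mm, dose_pct=3.0, dta_mm=3.0):
    """Brute-force gamma index of 'ev' against 'ref' (2D lists, same shape)."""
    dmax = max(max(row) for row in ref)
    dose_tol = dose_pct / 100.0 * dmax          # global dose criterion
    ny, nx = len(ref), len(ref[0])
    out = [[0.0] * nx for _ in range(ny)]
    for i in range(ny):
        for j in range(nx):
            best = float("inf")
            for k in range(ny):
                for l in range(nx):
                    dist2 = ((i - k) ** 2 + (j - l) ** 2) * spacing_mm ** 2
                    ddose = ev[i][j] - ref[k][l]
                    g2 = dist2 / dta_mm ** 2 + (ddose / dose_tol) ** 2
                    best = min(best, g2)
            out[i][j] = math.sqrt(best)
    return out

def pass_rate_in_structure(gmap, mask):
    """Percent of pixels with gamma <= 1 where the structure mask is True."""
    vals = [gmap[i][j] for i in range(len(mask))
            for j in range(len(mask[0])) if mask[i][j]]
    return 100.0 * sum(v <= 1.0 for v in vals) / len(vals)

ref = [[1.0, 2.0], [2.0, 4.0]]
ev = [[1.0, 2.0], [2.0, 3.0]]          # one pixel is 25% low
mask = [[True, True], [True, True]]
rate = pass_rate_in_structure(gamma_map(ref, ev, spacing_mm=1.0), mask)
```

Restricting `mask` to a single projected organ (e.g., a parotid) is what turns a global pass-rate into the organ-of-interest statistic the abstract reports.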
Procedia PDF Downloads 216
269 Microstructure of Virgin and Aged Asphalts by Small-Angle X-Ray Scattering
Authors: Dong Tang, Yongli Zhao
Abstract:
The study of the microstructure of asphalt is of great importance for the analysis of its macroscopic properties. However, the peculiarities of the chemical composition of asphalt itself and the limitations of existing direct imaging techniques have caused researchers to face many obstacles in studying its microstructure. The advantage of small-angle X-ray scattering (SAXS) is that it allows quantitative determination of the internal structure of opaque materials and is suitable for analyzing the microstructure of materials. Therefore, the SAXS technique was used to study the evolution of microstructures on the nanoscale during asphalt aging, and the reasons for the change in scattering contrast during aging were explained with the help of Fourier transform infrared spectroscopy (FTIR). The SAXS experimental results show that the SAXS curves of asphalt are similar to the scattering curves of objects with two-level structures. The Porod curve for asphalt shows that there is no obvious interface between the micelles and the surrounding medium, only a fluctuation of electron density between the two. Fitting the SAXS patterns with the Beaucage model shows that the scattering exponent P of the asphaltene clusters, as well as the size of the micelles, gradually increases with the aging of the asphalt. Furthermore, aggregation exists between the micelles of asphalt and becomes more pronounced with increasing aging. During asphalt aging, the electron density difference between the micelles and the surrounding medium gradually increases, leading to an increase in the scattering contrast of the asphalt. Under long-term aging conditions, due to the gradual transition from maltenes to asphaltenes, the electron density difference between the micelles and the surrounding medium decreases, resulting in a decrease in the scattering contrast of asphalt SAXS.
Finally, this paper correlates the macroscopic properties of asphalt with microstructural parameters, and the results show that the high-temperature rutting resistance of asphalt is enhanced and the low-temperature cracking resistance decreases due to the aggregation of micelles and the generation of new micelles. These results are useful for understanding the relationship between changes in microstructure and changes in properties during asphalt aging and provide theoretical guidance for the regeneration of aged asphalt.
Keywords: asphalt, Beaucage model, microstructure, SAXS
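For reference, the one-level Beaucage unified function that is typically fitted to such curves combines a Guinier term with an error-function-damped power law. The form below and all parameter values are a generic sketch, not the paper's fitted asphalt parameters.

```python
# Sketch of the one-level Beaucage unified scattering function used to
# extract the Guinier radius Rg and the power-law exponent P from SAXS
# curves; G, Rg, B, and P below are illustrative values.
import math

def beaucage(q, G, Rg, B, P):
    """One-level unified fit: Guinier term plus a power-law term whose
    low-q limit is damped by the error-function factor."""
    guinier = G * math.exp(-(q * Rg) ** 2 / 3.0)
    q_star = q / math.erf(q * Rg / math.sqrt(6.0)) ** 3
    return guinier + B / q_star ** P

curve = [beaucage(q, G=100.0, Rg=5.0, B=1e-3, P=3.5)
         for q in (0.01, 0.05, 0.1, 0.5, 1.0)]
```

In practice, the parameters would be obtained by least-squares fitting the measured intensity I(q); the fitted exponent P and radius Rg are the quantities the abstract reports as increasing with aging.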
Procedia PDF Downloads 80
268 Hypertension and Obesity: A Cross-National Comparison of BMI and Waist-Height Ratio
Authors: Adam M. Yates, Julie E. Byles
Abstract:
Hypertension has been identified as a prominent co-morbidity of obesity. To improve clinical intervention of hypertension, it is critical to identify metrics that most accurately reflect risk for increased morbidity. Two of the most relevant and accurate measures for increased risk of hypertension due to excess adipose tissue are Body Mass Index (BMI) and Waist-Height Ratio (WHtR). Previous research has examined these measures in cross-national and cross-ethnic studies, but has most often relied on secondary means such as meta-analysis to identify and evaluate the efficacy of individual body mass measures. In this study, we instead use cross-sectional analysis to assess the cross-ethnic discriminative power of BMI and WHtR to predict risk of hypertension. Using the WHO SAGE survey, which collected anthropometric and biometric data from respondents in six middle-income countries (China, Ghana, India, Mexico, Russia, South Africa), we implement logistic regression to examine the discriminative power of measured BMI and WHtR with a known population of hypertensive and non-hypertensive respondents. We control for gender and age to identify whether optimum cut-off points that are adequately sensitive as tests for risk of hypertension may be different between groups. We report results for OR, RR, and ROC curves for each of the six SAGE countries. As seen in existing literature, results demonstrate that both WHtR and BMI are significant predictors of hypertension (p < .01). For these six countries, we find that cut-off points for WHtR may be dependent upon gender, age and ethnicity. While an optimum omnibus cut-point for WHtR may be 0.55, results also suggest that the gender and age relationship with WHtR may warrant the development of individual cut-offs to optimize health outcomes. Trends through multiple countries show that the optimum cut-point for WHtR increases with age while the area under the curve (AUROC) decreases for both men and women. 
Comparison between BMI and WHtR indicates that BMI may remain more robust than WHtR. Implications for public health policy are discussed.
Keywords: hypertension, obesity, waist-height ratio, SAGE
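The discriminative-power comparison above rests on the area under the ROC curve. A minimal rank-based AUROC (the Mann-Whitney formulation) can be sketched as follows; the WHtR values and hypertension labels are made up for illustration, not SAGE data.

```python
# Sketch: rank-based AUROC for how well a continuous measure such as
# WHtR or BMI discriminates hypertensive from non-hypertensive
# respondents. Data below are hypothetical.
def auroc(scores, labels):
    """Probability that a randomly chosen positive scores higher than a
    randomly chosen negative; ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

whtr = [0.62, 0.58, 0.49, 0.50, 0.45, 0.51]   # hypothetical WHtR values
htn = [1, 1, 0, 1, 0, 0]                      # 1 = hypertensive
auc = auroc(whtr, htn)
```

Sweeping a threshold over the same scores and recording sensitivity/specificity pairs would give the full ROC curve from which per-group optimum cut-points (such as the 0.55 omnibus value discussed above) are chosen.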
Procedia PDF Downloads 478
267 The Optimal Order Policy for the Newsvendor Model under Worker Learning
Authors: Sunantha Teyarachakul
Abstract:
We consider the worker-learning Newsvendor Model, under the case of lost sales for unmet demand, with the research objective of proposing the cost-minimizing order policy and lot size, scheduled to arrive at the beginning of the selling period. In general, the Newsvendor Model is used to find the optimal order quantity for perishable items such as fashionable products or those with seasonal demand or short life cycles. Technically, it is used when the product demand is stochastic and available for a single selling season, and when there is only a one-time opportunity for the vendor to purchase, possibly with long ordering lead-times. Our work differs from the classical Newsvendor Model in that we incorporate the human factor (specifically worker learning) and its influence on the costs of processing units into the model. We describe this by using the well-known Wright’s Learning Curve. Most of the assumptions of the classical Newsvendor Model are still maintained in our work, such as the constant per-unit cost of leftover and shortage, the zero initial inventory, as well as continuous time. Our problem is challenging in that the best order quantity in the classical model, which balances the over-stocking and under-stocking costs, is no longer optimal. Specifically, when adding the cost saving from worker learning to the expected total cost, the convexity of the cost function will likely not be maintained. This calls for a new way of determining the optimal order policy. In response to such challenges, we found a number of characteristics related to the expected cost function and its derivatives, which we then used in formulating the optimal ordering policy.
Examples of such characteristics are: the optimal order quantity exists and is unique if the demand follows a Uniform Distribution; if the demand follows the Beta Distribution with some specific properties of its parameters, the second derivative of the expected cost function has at most two roots; and there exists a specific level of lot size that satisfies the first-order condition. Our research results could be helpful for the analysis of supply chain coordination and of the periodic review system for similar problems.
Keywords: inventory management, Newsvendor model, order policy, worker learning
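The interplay described above, overage and underage costs plus a learning-curve processing cost, can be illustrated with a small numeric search. This is a hedged sketch, not the paper's derivation: the Uniform(0, b) demand, the Wright's-curve exponent, and all cost parameters are illustrative assumptions, and the optimum is found by brute force rather than from the structural characteristics the authors derive.

```python
# Sketch of a worker-learning newsvendor: the per-unit processing cost
# follows Wright's learning curve c(n) = c1 * n**(log2(r)), demand is
# Uniform(0, b), and unmet demand is lost. All parameters are made up.
import math

def expected_cost(Q, c1=5.0, r=0.85, b=100.0, h=1.0, p=8.0):
    """Processing cost with learning + expected leftover + lost-sales cost."""
    slope = math.log2(r)                      # Wright's curve exponent (< 0)
    processing = sum(c1 * n ** slope for n in range(1, Q + 1))
    leftover = h * Q * Q / (2.0 * b)          # h * E[(Q - D)+] for Uniform(0,b)
    shortage = p * (b - Q) ** 2 / (2.0 * b)   # p * E[(D - Q)+] for Uniform(0,b)
    return processing + leftover + shortage

best_Q = min(range(0, 101), key=expected_cost)
```

Because learning lowers the marginal processing cost as the lot grows, the total cost need not stay convex in Q, which is exactly why the paper replaces the classical critical-fractile argument with an analysis of the cost function's derivatives.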
Procedia PDF Downloads 416
266 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments
Authors: Rahul Paul, Peter Mctaggart, Luke Skinner
Abstract:
Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection And Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially-distributed assets at scale, often captured using high powered LiDAR scanners attached to fixed wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine-learning approaches are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It proposes 3D catenary models for individual clusters fitted to the captured LiDAR data points using a least square method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry
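The catenary-fitting step can be illustrated as follows. This is a simplified sketch under strong assumptions: a symmetric span (x0 = 0), clean synthetic 2D points, and a grid search over the curvature parameter rather than the paper's iterative least-squares on noisy 3D clusters. The `fit_catenary` helper and all values are ours.

```python
# Minimal sketch of fitting a catenary z = c + a*cosh(x / a) to
# conductor points by least squares on the vertical residuals. For a
# fixed curvature 'a', the offset 'c' has a closed-form optimum (the
# mean residual), so only 'a' is searched.
import math

def fit_catenary(points, a_grid):
    """Return (a, c) minimising the sum of squared vertical residuals."""
    best = None
    for a in a_grid:
        c = sum(z - a * math.cosh(x / a) for x, z in points) / len(points)
        sse = sum((c + a * math.cosh(x / a) - z) ** 2 for x, z in points)
        if best is None or sse < best[0]:
            best = (sse, a, c)
    return best[1], best[2]

true_a, true_c = 80.0, -60.0                    # synthetic conductor
pts = [(x, true_c + true_a * math.cosh(x / true_a)) for x in range(-40, 41, 5)]
a_hat, c_hat = fit_catenary(pts, a_grid=[60 + 0.5 * i for i in range(81)])
```

A production pipeline would first project each clustered conductor onto its vertical span plane and also estimate the horizontal offset x0, typically with a damped iterative solver rather than a grid.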
Procedia PDF Downloads 99
265 A Flexible Piezoelectric - Polymer Composite for Non-Invasive Detection of Multiple Vital Signs of Human
Authors: Sarah Pasala, Elizabeth Zacharias
Abstract:
Vital sign monitoring is crucial for both everyday health and medical diagnosis. A significant factor in assessing a human's health is their vital signs, which include heart rate, breathing rate, blood pressure, and electrocardiogram (ECG) readings. Vital sign monitoring has been the focus of many system and method innovations recently. Piezoelectrics are materials that convert mechanical energy into electrical energy and can be used for vital sign monitoring. Piezoelectric energy harvesters that are stretchable and flexible can detect very low frequencies like airflow, heartbeat, etc. Current advancements in piezoelectric materials and flexible sensors have made it possible to create wearable and implantable medical devices that can continuously monitor physiological signals in humans. However, because of their non-biocompatible nature, such devices also produce a large amount of e-waste and require another surgery to remove the implant. This paper presents a biocompatible and flexible piezoelectric composite material for wearable and implantable devices that offers a high-performance platform for seamless and continuous monitoring of human physiological signals and tactile stimuli. It also addresses the issue of e-waste and secondary surgery. A Lead-free piezoelectric, SrBi4Ti4O15, is found to be suitable for this application because its properties can be tailored by suitable substitutions and also by varying the synthesis temperature protocols. In the present work, SrBi4Ti4O15 modified by rare-earth substitution has been synthesized and studied. Coupling factors are calculated from the resonant (fr) and anti-resonant frequencies (fa). It is observed that Samarium substitution in SBT has increased the Curie temperature, dielectric, and piezoelectric properties. From impedance spectroscopy studies, relaxation and non-Debye-type behaviour are observed.
The composite of bioresorbable poly(l-lactide) and Lead-free rare earth modified Bismuth Layered Ferroelectrics leads to a flexible piezoelectric device for non-invasive measurement of vital signs, such as heart rate, breathing rate, blood pressure, and electrocardiogram (ECG) readings, as well as artery pulse signals in near-surface arteries. These composites are suitable for detecting slight movements of the muscles and joints. This Lead-free rare earth modified Bismuth Layered Ferroelectric-polymer composite is synthesized using a ball mill and the solid-state double sintering method. XRD studies indicated two phases in the composite. SEM studies revealed the grain size to be uniform and in the range of 100 nm. The electromechanical coupling factor is improved. The elastic constants were calculated, and the mechanical flexibility is found to be improved compared to the single-phase rare earth modified Bismuth Layered piezoelectric. The results indicate that this composite is suitable for the non-invasive detection of multiple vital signs of humans.
Keywords: composites, flexible, non-invasive, piezoelectric
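The coupling-factor calculation from resonant and anti-resonant frequencies mentioned above is commonly done with the effective coupling formula k_eff² = (fa² − fr²) / fa². Whether the paper used exactly this variant is an assumption on our part, and the frequencies below are invented purely for illustration.

```python
# Sketch: effective electromechanical coupling factor from the resonant
# (fr) and anti-resonant (fa) frequencies of an impedance sweep.
import math

def k_eff(fr_hz, fa_hz):
    """Effective coupling factor of a piezoelectric resonator:
    k_eff = sqrt((fa^2 - fr^2) / fa^2)."""
    return math.sqrt((fa_hz ** 2 - fr_hz ** 2) / fa_hz ** 2)

k = k_eff(fr_hz=2.00e6, fa_hz=2.10e6)  # hypothetical frequencies
```

A larger separation between fr and fa directly yields a larger k_eff, which is why the improved coupling factor reported above can be read straight off the impedance spectrum.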
Procedia PDF Downloads 37
264 Economic Decision Making under Cognitive Load: The Role of Numeracy and Financial Literacy
Authors: Vânia Costa, Nuno De Sá Teixeira, Ana C. Santos, Eduardo Santos
Abstract:
Financial literacy and numeracy have been regarded as paramount for rational household decision making given the increasing complexity of financial markets. However, financial decisions are often made under sub-optimal circumstances, including cognitive overload. The present study aims to clarify how financial literacy and numeracy, taken as relevant expert knowledge for financial decision-making, modulate possible effects of cognitive load. Participants were required to choose between a sure loss and a gamble pertaining to a financial investment, either with or without a competing memory task. Two experiments were conducted, varying only the content of the competing task. In the first, the financial choice task was performed while maintaining a list of five random letters in working memory. In the second, cognitive load was based upon the retention of six random digits. In both experiments, one of the items in the list had to be recalled given its serial position. Outcomes of the first experiment revealed no significant main effect or interactions involving the cognitive load manipulation and numeracy and financial literacy skills, strongly suggesting that retaining a list of random letters did not interfere with the cognitive abilities required for financial decision making. Conversely, in the second experiment, a significant interaction between the competing memory task and the level of financial literacy (but not numeracy) was found for the frequency of choosing the gambling option. Overall, in the control condition, participants with high financial literacy and those with high numeracy were more prone to choose the gambling option. However, when under cognitive load, participants with high financial literacy were as likely as their less literate counterparts to choose the gambling option.
This outcome is interpreted as evidence that financial literacy prevents intuitive risk-aversion reasoning only under highly favourable conditions, as is the case when no other task is competing for cognitive resources. In contrast, participants with higher levels of numeracy were consistently more prone to choose the gambling option in both experimental conditions. These results are discussed in the light of the opposition between classical dual-process theories and fuzzy-trace theories for intuitive decision making, suggesting that while some instances of expertise (as numeracy) are prone to support easily accessible gist representations, other expert skills (as financial literacy) depend upon deliberative processes. It is furthermore suggested that this dissociation between types of expert knowledge might depend on the degree to which they are generalizable across disparate settings. Finally, applied implications of the present study are discussed with a focus on how it informs financial regulators and the importance and limits of promoting financial literacy and general numeracy.
Keywords: decision making, cognitive load, financial literacy, numeracy
Procedia PDF Downloads 182
263 Targeting Mre11 Nuclease Overcomes Platinum Resistance and Induces Synthetic Lethality in Platinum Sensitive XRCC1 Deficient Epithelial Ovarian Cancers
Authors: Adel Alblihy, Reem Ali, Mashael Algethami, Ahmed Shoqafi, Michael S. Toss, Juliette Brownlie, Natalie J. Tatum, Ian Hickson, Paloma Ordonez Moran, Anna Grabowska, Jennie N. Jeyapalan, Nigel P. Mongan, Emad A. Rakha, Srinivasan Madhusudan
Abstract:
Platinum resistance is a clinical challenge in ovarian cancer. Platinating agents induce DNA damage, which activates Mre11 nuclease directed DNA damage signalling and response (DDR). Upregulation of DDR may promote chemotherapy resistance. Here we have comprehensively evaluated Mre11 in epithelial ovarian cancers. In a clinical cohort that received platinum-based chemotherapy (n=331), Mre11 protein overexpression was associated with an aggressive phenotype and poor progression free survival (PFS) (p=0.002). In The Cancer Genome Atlas (TCGA) ovarian cancer cohort (n=498), Mre11 gene amplification was observed in a subset of serous tumours (5%), which correlated highly with Mre11 mRNA levels (p<0.0001). Altered Mre11 levels were linked with genome-wide alterations that can influence platinum sensitivity. At the transcriptomic level (n=1259), Mre11 overexpression was associated with poor PFS (p=0.003). ROC analysis showed an area under the curve (AUC) of 0.642 for response to platinum-based chemotherapy. Pre-clinically, Mre11 depletion by gene knockdown or blockade by a small molecule inhibitor (Mirin) reversed platinum resistance in ovarian cancer cells and in 3D spheroid models. Importantly, Mre11 inhibition was synthetically lethal in platinum sensitive XRCC1 deficient ovarian cancer cells and 3D spheroids. Selective cytotoxicity was associated with DNA double strand break (DSB) accumulation, S-phase cell cycle arrest, and increased apoptosis. We conclude that pharmaceutical development of Mre11 inhibitors is a viable clinical strategy for platinum sensitization and synthetic lethality in ovarian cancer.
Keywords: MRE11, XRCC1, ovarian cancer, platinum sensitization, synthetic lethality
Procedia PDF Downloads 129
262 Predictive Value Modified Sick Neonatal Score (MSNS) On Critically Ill Neonates Outcome Treated in Neonatal Intensive Care Unit (NICU)
Authors: Oktavian Prasetia Wardana, Martono Tri Utomo, Risa Etika, Kartika Darma Handayani, Dina Angelika, Wurry Ayuningtyas
Abstract:
Background: Critically ill neonates are newborn babies with high-risk factors that potentially cause disability and/or death. Scoring systems for determining disease severity have been widely developed, including some designed for use in neonates. The SNAPPE-II method, which has been used as a mortality predictor scoring system in several referral centers, was found to be slow in assessing the outcome of critically ill neonates in the Neonatal Intensive Care Unit (NICU). Objective: To analyze the predictive value of the MSNS for the outcome of critically ill neonates from the time of arrival up to 24 hours after admission to the NICU. Methods: A longitudinal observational analytic study based on medical record data was conducted from January to August 2022. For each subject, medical record data were collected, including gestational age, mode of delivery, APGAR score at birth, resuscitation measures at birth, duration of resuscitation, post-resuscitation ventilation, physical examination at birth (including vital signs and any congenital abnormalities), the results of routine laboratory examinations, and the neonatal outcome. Results: This study involved 105 critically ill neonates who were admitted to the NICU. Of these, 50 (47.6%) neonates died, and 55 (52.4%) survived. There were more males than females (61% vs. 39%). The mean gestational age of the subjects was 33.8 ± 4.28 weeks, and the mean birth weight was 1820.31 ± 33.18 g. The mean MSNS score of neonates who died was lower than that of those who survived. The ROC curve with an MSNS cut-off score of <10.5 gave an AUC of 93.5% (95% CI: 88.3-98.6), with a sensitivity of 84% (95% CI: 80.5-94.9), specificity of 80% (95% CI: 88.3-98.6), positive predictive value (PPV) of 79.2%, negative predictive value (NPV) of 84.6%, and risk ratio (RR) of 5.14, with Hosmer & Lemeshow test p>0.05.
Conclusion: The MSNS score has good predictive value and good calibration for the outcomes of critically ill neonates admitted to the NICU.
Keywords: critically ill neonate, outcome, MSNS, NICU, predictive value
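The reported predictive values follow from a 2x2 table of the MSNS cut-off against the observed outcome. In the sketch below, the counts are not taken from the paper; they are inferred for illustration from the reported group sizes (50 died, 55 survived) and the reported sensitivity and specificity.

```python
# Sketch: sensitivity, specificity, PPV, and NPV from a 2x2 table of a
# score cut-off (e.g., MSNS < 10.5 flagging a fatal outcome) against
# the observed outcome. Counts are illustrative, back-derived from the
# abstract's reported group sizes and sensitivity/specificity.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

m = diagnostic_metrics(tp=42, fp=11, fn=8, tn=44)
```

With these counts, the sketch reproduces the abstract's figures (sensitivity 84%, specificity 80%, PPV 79.2%, NPV 84.6%), which illustrates how the four reported metrics are mutually constrained by one 2x2 table.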
Procedia PDF Downloads 69
261 Gene Expressions in Left Ventricle Heart Tissue of Rat after 150 MeV Proton Irradiation
Abstract:
Introduction: In mediastinal radiotherapy, and to a lesser extent also in total-body irradiation (TBI), radiation exposure may lead to the development of cardiac diseases. Radiation-induced heart disease is dose-dependent and is characterized by a loss of cardiac function associated with progressive degeneration of heart cells. We aimed to determine the in-vivo radiation effects on fibronectin, ColaA1, ColaA2, galectin, and TGFb1 gene expression levels in the left ventricle heart tissue of rats after irradiation. Material and method: Four untreated adult Wistar rats were selected as the control group (group A). In group B, 4 adult Wistar rats were locally irradiated in the heart only with a single 20 Gy dose of a 150 MeV proton beam. In the heart-plus-lung irradiation group (group C), 4 adult rats were irradiated laterally over 50% of the lung in addition to the heart irradiation described for group B. At 8 weeks after radiation, the animals were sacrificed, and the left ventricle was dropped into liquid nitrogen for RNA extraction with the Absolutely RNA® Miniprep Kit (Stratagene, Cat no. 400800). cDNA was synthesized using M-MLV reverse transcriptase (Life Technologies, Cat no. 28025-013). We used a Bio-Rad machine (Bio-Rad iQ5 Real-Time PCR) for qPCR testing by the relative standard curve method. Results: We found that the gene expression of fibronectin in group C significantly increased compared to the control group, but it did not show a significant change in group B compared to group A. The mRNA expression levels of Cola1 and Cola2 did not show any significant changes between the normal and radiation groups. The expression of the galectin target significantly increased only in group C compared to group A. TGFb1 expression showed a significant enhancement compared to group A, more so in group C than in group B.
Conclusion: In summary, we can say that 20 Gy of proton exposure of heart tissue may lead to detectable damage in heart cells and may disturb their function as components of the heart tissue structure at the molecular level.
Keywords: gene expression, heart damage, proton irradiation, radiotherapy
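The relative standard curve method mentioned for the qPCR analysis can be sketched as follows: fit Ct against log10(input) for a dilution series, then interpolate unknowns from the fitted line and compare quantities. All dilution and Ct values below are synthetic illustrations, not the study's measurements.

```python
# Sketch of the qPCR relative standard curve method: ordinary least
# squares on Ct = slope * log10(quantity) + intercept for a dilution
# series, then interpolation of unknown samples from the curve.
import math

def fit_standard_curve(log10_qty, ct):
    """OLS fit of Ct against log10 of the input quantity."""
    n = len(ct)
    mx, my = sum(log10_qty) / n, sum(ct) / n
    sxx = sum((x - mx) ** 2 for x in log10_qty)
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_qty, ct))
    slope = sxy / sxx
    return slope, my - slope * mx

def quantity(ct, slope, intercept):
    """Interpolate an unknown's input quantity from its Ct value."""
    return 10.0 ** ((ct - intercept) / slope)

# Ideal 10-fold dilution series: Ct drops by log2(10) ~ 3.32 per decade
# (i.e., 100% amplification efficiency).
logs = [0.0, 1.0, 2.0, 3.0, 4.0]
cts = [30.0 - math.log2(10.0) * x for x in logs]
slope, intercept = fit_standard_curve(logs, cts)

# Relative expression: hypothetical target (Ct 25) vs reference (Ct 27).
rel = quantity(25.0, slope, intercept) / quantity(27.0, slope, intercept)
```

At ideal efficiency, a 2-cycle Ct difference corresponds to a 4-fold expression difference, which is what the ratio above recovers.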
Procedia PDF Downloads 489
260 Lying in a Sender-Receiver Deception Game: Effects of Gender and Motivation to Deceive
Authors: Eitan Elaad, Yeela Gal-Gonen
Abstract:
Two studies examined gender differences in lying when the truth-telling bias prevailed and when lying and distrust were encouraged. The first study used 156 participants from the community (78 pairs). First, participants completed the Narcissistic Personality Inventory, the Lie- and Truth Ability Assessment Scale (LTAAS), and the Rational-Experiential Inventory. Then, they participated in a deception game where they performed as senders and receivers of true and false communications. Their goal was to retain as many points as possible according to a payoff matrix that specified the reward they would gain for any possible outcome. Results indicated that males in the sender position lied more and were more successful tellers of lies and truths than females. On the other hand, males, as receivers, trusted less than females but were not better at detecting lies and truths. We explained the results by: a) males' high perceived lie-telling ability. We observed that confidence in telling lies guided participants to increase their use of lies. Males' lie-telling confidence corresponded to earlier accounts that showed a consistent association between high self-assessed lying ability, reports of frequent lying, and predictions of actual lying in experimental settings; b) males' narcissistic features. Earlier accounts described positive relations between narcissism and reported lying or unethical behavior in everyday life situations. Predictions about the association between narcissism and frequent lying received support in the present study. Furthermore, males scored higher than females on the narcissism scale; and c) males' experiential thinking style. We observed that males scored higher than females on the experiential thinking style scale. We further hypothesized that the experiential thinking style predicts frequent lying in the deception game. Results confirmed the hypothesis. The second study used one hundred volunteers (40 females) who underwent the same procedure.
However, the payoff matrix encouraged lying and distrust. Results showed that male participants lied more than females. We found no gender differences in trust. Males and females did not differ in their success at telling and detecting lies and truths. Participants also completed the LTAAS questionnaire. Males assessed their lie-telling ability higher than females did, but the ability assessment did not predict lying frequency. A final note: the present design is limited to low stakes. Participants knew that they were playing a game and would not experience any consequences from their deception in the game. Therefore, we advise caution when applying the present results to lying under high stakes.
Keywords: gender, lying, detection of deception, information processing style, self-assessed lying ability
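The deception game described above can be sketched as a simple two-player payoff lookup. This is a minimal illustration only: the abstract does not report the study's actual point values, so the numbers below are hypothetical, chosen so that a successful lie rewards the sender and a detected lie rewards the receiver.

```python
# Hypothetical payoff matrix for one round of the deception game.
# (sender_message, receiver_response) -> (sender_points, receiver_points)
PAYOFFS = {
    ("truth", "trust"):    (1, 1),   # honest exchange: both gain
    ("truth", "distrust"): (0, 0),   # needless suspicion: neither gains
    ("lie",   "trust"):    (2, -1),  # successful lie: sender profits
    ("lie",   "distrust"): (-1, 1),  # detected lie: receiver profits
}

def play_round(sender_message, receiver_response):
    """Return the (sender, receiver) point change for one round."""
    return PAYOFFS[(sender_message, receiver_response)]

sender_pts, receiver_pts = play_round("lie", "trust")
```

Under a matrix like this, each player's incentive to lie or distrust can be tuned by changing the point values, which is how the second study's variant (encouraging lying and distrust) would differ from the first.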
Procedia PDF Downloads 148
259 On the Question of Ideology: Criticism of the Enlightenment Approach and Theory of Ideology as Objective Force in Gramsci and Althusser
Authors: Edoardo Schinco
Abstract:
Studying the Marxist intellectual tradition, it is possible to verify that there have been numerous cases of philosophical regression, in which the achievements of detailed studies were replaced by naïve ideas and earlier misunderstandings: one of the most important examples of this tendency concerns the question of ideology. According to a common Enlightenment approach, ideology is essentially not a reality, i.e., not a factor capable of having an effect on reality itself; in other words, ideology is a mere error without specific historical meaning, due only to the ignorance or inability of subjects to understand the truth. From this point of view, the consequent and immediate practice against every form of ideology is rational dialogue, reasoning based on common sense, in order to dispel the obscurity of ignorance through the light of pure reason. The limits of this philosophical orientation are, however, both theoretical and practical: on the one hand, the Enlightenment criticism of ideology is not a historicist thought, since it cannot grasp the inner connection that ties a historical context and its peculiar ideology together; on the other hand, when the Enlightenment approach fails to release people from their illusions (e.g., when the ideology persists despite the explanation of its illusoriness), it usually becomes a racist or elitist thought. Unlike this first conception of ideology, Gramsci attempts to recover Marx's original thought and to valorize its dialectical methodology with respect to the reality of ideology. As Marx suggests, ideology, in its negative sense, is surely an error, a misleading knowledge that aims to defend the current state of things and to conceal social, political, or moral contradictions; but that is precisely why the ideological error is not casual: every ideology is mediately rooted in a particular material context, from which it takes its reason for being.
Gramsci, however, avoids any mechanistic interpretation of Marx and, for this reason, underlines the dialectical relation that exists between the material base and the ideological superstructure; in this way, a specific ideology is not only a passive product of the base but also an active factor that reacts on the base itself and modifies it. There is therefore a considerable revaluation of ideology's role in the maintenance of the status quo and the consequent thematization both of ideology as an objective force, active in history, and of ideology as the cultural hegemony of the ruling class over subordinate groups. Among the Marxists, the French philosopher Louis Althusser also contributes to this crucial question; as a follower of Gramsci's thought, he develops the idea of ideology as an objective force through the notions of the Repressive State Apparatus (RSA) and the Ideological State Apparatuses (ISAs). In addition, his philosophy is characterized by the presence of structuralist elements, which must be studied, since they deeply change the theoretical foundation of his Marxist thought.
Keywords: Althusser, enlightenment, Gramsci, ideology
Procedia PDF Downloads 199
258 Development of Ketorolac Tromethamine Encapsulated Stealth Liposomes: Pharmacokinetics and Bio Distribution
Authors: Yasmin Begum Mohammed
Abstract:
Ketorolac tromethamine (KTM) is a non-steroidal anti-inflammatory drug with potent analgesic and anti-inflammatory activity due to its prostaglandin-related inhibitory effect. It is a non-selective cyclo-oxygenase inhibitor. The drug is currently used orally and intramuscularly in multiple divided doses, clinically, for the management of arthritis, cancer pain, post-surgical pain, and migraine pain. KTM has a short biological half-life of 4 to 6 hours, which necessitates frequent dosing to retain its action. The frequent occurrence of gastrointestinal bleeding, perforation, peptic ulceration, and renal failure has led to the development of other drug delivery strategies for the appropriate delivery of KTM. The ideal solution would be to target the drug only to the cells or tissues affected by the disease. Drug targeting can be achieved effectively with liposomes, which are biocompatible and biodegradable. The aim of the study was to develop a parenteral liposome formulation of KTM with improved efficacy and reduced side effects by targeting the inflammation due to arthritis. PEG-anchored (stealth) and non-PEG-anchored liposomes were prepared by the thin film hydration technique followed by an extrusion cycle and characterized in vitro and in vivo. Stealth liposomes (SLs) exhibited a high encapsulation efficiency (94%) and 52% drug retention during release studies over 24 h, with good stability for a period of 1 month at -20°C and 4°C. SLs showed a maximum of about 55% edema inhibition with a significant analgesic effect. SLs produced marked differences over non-SL formulations, with an increase in the area under the plasma concentration-time curve, t₁/₂, and mean residence time, and reduced clearance. 0.3% of the drug was detected in the arthritis-induced paw, with significantly reduced drug localization in the liver, spleen, and kidney for SLs compared to other conventional liposomes.
Thus, SLs help to increase the therapeutic efficacy of KTM by increasing its targeting potential at the inflammatory region.
Keywords: biodistribution, ketorolac tromethamine, stealth liposomes, thin film hydration technique
Procedia PDF Downloads 295
257 Building Education Leader Capacity through an Integrated Information and Communication Technology Leadership Model and Tool
Authors: Sousan Arafeh
Abstract:
Educational systems and schools worldwide are increasingly reliant on information and communication technology (ICT). Unfortunately, most educational leadership development programs do not offer formal curricular and/or field experiences that prepare students for managing ICT resources, personnel, and processes. The result is a steep learning curve for the leader and his/her staff and dissipated organizational energy that compromises desired outcomes. To address this gap in education leaders' development, Arafeh's Integrated Technology Leadership (AITL) Model was created. It is a conceptual model and tool that educational leadership students can use to better understand the ICT ecology that exists within their schools. The AITL Model consists of six 'infrastructure types' where ICT activity takes place: technical infrastructure, communications infrastructure, core business infrastructure, context infrastructure, resources infrastructure, and human infrastructure. These six infrastructures are further divided into 16 key areas that need management attention. The AITL Model was created by critically analyzing existing technology/ICT leadership models and working to make something more authentic and comprehensive regarding school leaders' purview and experience. The AITL Model then served as a tool when it was distributed to over 150 educational leadership students who were asked to review it and qualitatively share their reactions. Students said the model presented crucial areas of consideration that they had not been exposed to before and that the exercise of reviewing and discussing the AITL Model as a group was useful for identifying areas of growth that they could pursue in the leadership development program and in their professional settings. While development in all infrastructures and key areas was important for students' understanding of ICT, they noted that they were least aware of the importance of the intangible area of the resources infrastructure.
The AITL Model will be presented, and session participants will have an opportunity to review and reflect on its impact and utility. Ultimately, the AITL Model is one that could have significant policy and practice implications. At the very least, it might help shape ICT content in educational leadership development programs through curricular and pedagogical updates.
Keywords: education leadership, information and communications technology, ICT, leadership capacity building, leadership development
Procedia PDF Downloads 116
256 Identifying and Quantifying Factors Affecting Traffic Crash Severity under Heterogeneous Traffic Flow
Authors: Praveen Vayalamkuzhi, Veeraragavan Amirthalingam
Abstract:
Studies on highway safety are becoming the need of the hour, as over 400 lives are lost every day in India due to road crashes. To evaluate the factors that lead to different levels of crash severity, it is necessary to investigate the level of safety of highways and its relation to crashes. In the present study, an attempt is made to identify the factors that contribute to road crashes and to quantify their effect on crash severity. The study was carried out on a four-lane divided rural highway in India. The variables considered in the analysis include components of the highway's horizontal alignment (straight or curved section), time of day, driveway density, presence of a median, median openings, gradient, operating speed, and annual average daily traffic. These variables were selected after a preliminary analysis. The major complexities in the study are the heterogeneous traffic and the speed variation between different classes of vehicles along the highway. To quantify the impact of each of these factors, statistical analyses were carried out using a logit model and negative binomial regression. The outputs from the statistical models showed that the horizontal alignment components, driveway density, time of day, operating speed, and annual average daily traffic have a significant relation with the severity of crashes, i.e., fatal as well as injury crashes. Further, annual average daily traffic has a greater effect on severity than the other variables. The contribution of the horizontal alignment components to crash severity is also significant. The logit models predicted crashes better than the negative binomial regression models.
The results of the study will help transport planners to consider these aspects at the planning stage itself for highways operated under heterogeneous traffic flow conditions.
Keywords: geometric design, heterogeneous traffic, road crash, statistical analysis, level of safety
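The binary logit model used in the study above can be sketched as follows. This is a minimal illustration, not the authors' actual analysis: the crash records are simulated, the coefficient values are invented, and the fitting routine is a plain Newton-Raphson implementation of logistic regression rather than whatever statistical package the study used.

```python
import numpy as np

# Hypothetical crash records for a divided rural highway: operating speed
# (km/h), driveway density (driveways/km), AADT (thousands of veh/day),
# and a night-time indicator. All values are simulated for illustration.
rng = np.random.default_rng(7)
n = 800
speed = rng.uniform(40, 100, n)
driveways = rng.uniform(0, 10, n)
aadt = rng.uniform(5, 40, n)
night = rng.integers(0, 2, n).astype(float)

# Simulated outcome: 1 = fatal/injury crash, 0 = property-damage-only,
# with speed and AADT pushing the odds toward severe crashes.
true_beta = np.array([-6.0, 0.04, 0.10, 0.05, 0.30])
X = np.column_stack([np.ones(n), speed, driveways, aadt, night])
p_true = 1.0 / (1.0 + np.exp(-np.clip(X @ true_beta, -30, 30)))
y = rng.binomial(1, p_true).astype(float)

def fit_logit(X, y, iters=25):
    """Fit a binary logit model by Newton-Raphson on the log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = np.clip(X @ beta, -30, 30)       # linear predictor, clipped
        p = 1.0 / (1.0 + np.exp(-eta))         # P(severe crash)
        W = p * (1.0 - p)                      # Bernoulli variance weights
        grad = X.T @ (y - p)                   # score vector
        hess = X.T @ (X * W[:, None])          # observed information
        beta += np.linalg.solve(hess, grad)    # Newton step
    return beta

beta_hat = fit_logit(X, y)
```

The fitted coefficients give each factor's effect on the log-odds of a severe (fatal or injury) crash; a positive coefficient on operating speed or AADT means those factors raise the probability of severity, which mirrors the direction of the findings reported in the abstract.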
Procedia PDF Downloads 302