Search results for: classical physics
135 Greek Tragedy on the American Stage until the First Half of the 20th Century: Identities and Intersections between Greek, Italian and Jewish Community Theatre
Authors: Papazafeiropoulou Olga
Abstract:
The purpose of this paper is to explore the emergence of Greek tragedy on the American stage until the first half of the 20th century through the intellectual processes and contributions of Greek, Italian and Jewish community theatre. Drawing on a wide range of sources, we trace Greek tragedy on the American stage, exploring the intricate processes behind community theatre identities. The announcement aims to analyze the distinct yet related efforts of early Americans to intersect with Greek tragedy, while searching for the identities of immigrants. Ultimately, ancient drama became a vehicle not only for great developments in the American theater but also for the formation of immigrant identities. In 1903, the Greek actor Dionysios Taboularis arrived in America, while the immigrant stream from Greece to America brought his artistic heritage, presenting the play Return at Hull House in Chicago. In 1906, in New York, an amateur group presented the play The Alosi of Messolonghi, and the next year in Chicago, an attempt was noted with a dramatic romance. In the decade 1907-1917, Nikolaos Matsoukas founded and directed the “Arbe theater”, while Petros Kotopoulis formed a troupe. In 1930, one of the greatest Greek theatrical events was the arrival of Marika Kotopouli. Also, members of Vrysoula Pantopoulou’s company formed the “Athenian Operetta”, with a positive influence on Greek American theatre. The Italian immigrant community was located in tenement “Little Italies” throughout the city, and amateur theatrical clubs soon evolved; the earliest was the “Circolo Filodrammatico Italo-Americano” in 1880. Fausto Malzone’s artistic direction paved the way for the professional Italian immigrant theatre. Immigrant audiences heard the plays of their homeland, representing a major transition for this ethnic theatre. By 1900, the community had produced the major forces that created the professional theatre, and by 1905 the Italian American theatre had become firmly rooted in its professional phase. Yiddish theater was both an import and a home-grown phenomenon. In 1878, The Sorceress was brought to America by Boris Thomashefsky. Between 1890 and 1940, many Yiddish theater companies appeared in America, presenting adaptations of classical plays. America’s first encounter with ancient texts was mostly academic. Tracing tragedy as a form and concept that follows the evolutionary course of domestic social, aesthetic, and political ferment, in line with international trends and currents, the paper draws conclusions about early Greek, Italian, and Jewish immigrant theatre in relation to the American scene until the first half of the 20th century. Presumably, community theater acquired its identity by intersecting with the spiritual reception of tragedy in America.
Keywords: American, community, Greek, Italian, identities, intersection, Jewish, theatre, tragedy
Procedia PDF Downloads 73
134 Building Exoskeletons for Seismic Retrofitting
Authors: Giuliana Scuderi, Patrick Teuffel
Abstract:
The proven vulnerability of the existing social housing building heritage to natural or induced earthquakes requires the development of new design concepts and methods to preserve materials and objects while providing new performances. An integrated intervention between civil engineering, building physics and architecture can convert social housing districts from a critical part of the city into a strategic resource of revitalization. Referring to bio-mimicry principles, the present research proposes an analogy with the exoskeleton of the insect: an external, light and resistant armour whose role is to protect the internal organs from potentially dangerous external inputs. In the same way, a “building exoskeleton”, acting from the outside of the building as an enclosing cage, can restore, protect and support the existing building, assuming a complex set of roles, from the structural to the thermal, from the aesthetic to the functional. This study evaluates the structural efficiency of shape memory alloy devices (SMADs) connecting the “building exoskeleton” with the existing structure to be rehabilitated, in order to prevent the out-of-plane collapse of walls and to passively dissipate seismic energy, with an operability calibrated to the intensity of the horizontal loads. Two case studies are considered, a masonry structure and a masonry structure with a concrete frame, and in each case a theoretical social housing building is exposed to earthquake forces to evaluate its structural response with and without SMADs. The two typologies are modelled in the finite element program SAP2000, defined respectively through a “frame model” and a “diagonal strut model”. In the same software, two types of SMADs, called the 00-10 SMAD and the 05-10 SMAD, are defined, and non-linear static and dynamic analyses, namely pushover analysis and time history analysis, are performed to evaluate the seismic response of the building. The effectiveness of the devices in limiting the control joint displacements proved higher in one direction, suggesting a possible calibrated use of the devices in the different walls of the building. The results also show a higher efficiency of the 00-10 SMADs in controlling the interstory drift, but at the same time the necessity to improve the hysteretic behaviour in order to maximise the passive dissipation of seismic energy.
Keywords: adaptive structure, biomimetic design, building exoskeleton, social housing, structural envelope, structural retrofitting
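As a rough illustration of the time history analysis described above, the sketch below integrates a single-degree-of-freedom wall model with and without an idealized SMAD. The device is reduced to a nonlinear-elastic link whose force saturates at a plateau value, a strong simplification of real flag-shaped SMA hysteresis, and every numerical value (mass, stiffness, ground motion) is an assumption for illustration, not data from the study's SAP2000 models.

```python
import numpy as np

m, k, zeta = 2.0e5, 8.0e6, 0.05            # wall mass [kg], frame stiffness [N/m], damping ratio (assumed)
c = 2.0 * zeta * np.sqrt(k * m)
k_d, F_y = 4.0e6, 1.5e5                    # assumed SMAD stiffness [N/m] and plateau force [N]

dt = 0.002
t = np.arange(0.0, 20.0, dt)
a_g = 3.0 * np.sin(2 * np.pi * 0.5 * t) * np.exp(-0.1 * t)   # synthetic ground motion [m/s2]

def peak_drift(use_smad):
    x = v = peak = 0.0
    for ag in a_g:
        f_dev = np.clip(k_d * x, -F_y, F_y) if use_smad else 0.0   # saturating device force
        a = (-m * ag - c * v - k * x - f_dev) / m                  # equation of motion
        v += a * dt                                                # semi-implicit Euler step
        x += v * dt
        peak = max(peak, abs(x))
    return peak

print(f"peak drift without SMAD: {peak_drift(False) * 1e3:.1f} mm")
print(f"peak drift with SMAD:    {peak_drift(True) * 1e3:.1f} mm")
```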
Procedia PDF Downloads 420
133 Bedouin Dispersion in Israel: Between Sustainable Development and Social Non-Recognition
Authors: Tamir Michal
Abstract:
The subject of Bedouin dispersion has accompanied the State of Israel from the day of its establishment. From a legal point of view, this subject has offered a launchpad for creative judicial decisions. Thus, for example, the first court decision in Israel to recognize affirmative action (Avitan) dealt with a petition submitted by a Jew appealing the refusal of the State to recognize the Petitioner’s entitlement to the long-term lease of a plot designated for Bedouins. The Supreme Court dismissed the petition, holding that there existed a public interest in assisting Bedouins to establish permanent urban settlements, an interest which justifies giving them preference by selling them plots at subsidized prices. In another case (The Forum for Coexistence in the Negev), the Supreme Court extended equitable relief for the purpose of constructing a bridge, even though the construction infringed the law, in order to allow the children of dispersed Bedouins to reach school. Against this background, the recent verdict, delivered during the Protective Edge military campaign, which dismissed a petition aimed at forcing the State to deploy protective structures in Bedouin villages in the Negev against the risk of being hit by missiles launched from Gaza (Abu Afash), is disappointing. Even if, in arguendo, no selective discrimination was involved in the State’s decision not to provide such protection, the decision, and its affirmation by the Court, is problematic when examined through the prism of the Theory of Recognition. The article analyses the issue with the tools of the Theory of Recognition, according to which people develop their identities through mutual relations of recognition in different fields. In the social context, the path to recognition is cognitive respect, which is provided by means of legal rights. By seeing other participants in society as bearers of rights and obligations, the individual develops an understanding of his legal condition as reflected in the attitude towards others. Consequently, even if the Court’s decision may be justified on strict legal grounds, the fact that Jewish settlements were protected during the military operation, whereas Bedouin villages were not, is a setback in the struggle to make the Bedouins citizens with equal rights in Israeli society. As the Court held, ‘Beyond their protective function, the Migunit [Protective Structures] may make a moral and psychological contribution that should not be undervalued’. This contribution is one that the Bedouins did not receive in the Abu Afash verdict. The basic thesis is that the Court’s verdict analyzed above clearly demonstrates that reliance on classical liberal instruments (e.g., equality) cannot secure full appreciation of all aspects of Bedouin life, and hence it can in fact prejudice them. Therefore, elements of the Theory of Recognition should be added, in order to find the channel for cognitive respect, thereby advancing the Bedouins’ ability to perceive themselves as equal human beings in Israeli society.
Keywords: bedouin dispersion, cognitive respect, recognition theory, sustainable development
Procedia PDF Downloads 350
132 Comparative Chromatographic Profiling of Wild and Cultivated Macrocybe Gigantea (Massee) Pegler & Lodge
Authors: Gagan Brar, Munruchi Kaur
Abstract:
Macrocybe gigantea was collected from the wild, growing as pure white, fleshy, robust fruit bodies in caespitose clusters. The local women who collected these fruiting bodies for cooking first indicated their edibility, which was later confirmed through classical and molecular taxonomy. A culture of this potentially edible wild taxon was raised with the aim of domesticating it. Various solid and liquid media were evaluated for vegetative growth, among which Malt Extract Agar was found to be the best solid medium and Glucose Peptone the best liquid medium. The effect of different temperatures and pH values on the vegetative growth of M. gigantea was also evaluated, and maximum vegetative growth was found at 30°C and pH 5. For spawn preparation, various grains, viz. wheat, jowar, bajra and maize, were evaluated, and wheat grains boiled for 30 minutes gave the maximum mycelial growth; mother spawn was thus prepared on wheat grains boiled for 30 minutes. For raising fruiting bodies, different locally available agro-wastes were tried, and paddy straw gave the best growth. Both wild and cultivated M. gigantea were compared through HPLC to evaluate their nutritional and nutraceutical values. For the evaluation of sugars, 15 sugars were taken for analysis: among these, Melezitose, Trehalose, Glucose, Xylose and Mannitol were found in the wild collection of M. gigantea, while Melezitose, Trehalose, Xylose and Dulcitol were detected in the cultivated sample. Among the 20 different amino acids, 18 were found, all except Asparagine and Glutamine, in both the wild and cultivated samples. Among the 37 tested fatty acids, only 6, namely Palmitic acid, Stearic acid, cis-9 Oleic acid, Linoleic acid, Gamma-Linolenic acid and Tricosanoic acid, were found in both wild and cultivated samples, although the concentration of these fatty acids was higher in the cultivated sample. Of the various vitamins tested, Vitamins C, D and E were present in both wild and cultivated samples. Both samples were also evaluated for the presence of phenols; for this purpose, eleven phenols were taken as standards in the HPLC analysis, and Gallic acid, Resorcinol, Ferulic acid and Pyrogallol were found in the wild sample, whereas Ferulic acid, Caffeic acid, Vanillic acid and Vanillin were present in the cultivated sample. The flavonoid analysis revealed the presence of Rutin, Naringin and Quercetin in wild M. gigantea, while Naringin, Catechol, Myricetin, Gossypin and Quercetin were found in the cultivated one. From the comparative chromatographic profiling of wild and cultivated M. gigantea, it is concluded that no nutrient loss occurred during cultivation; rather, an increase in the percentage of secondary metabolites (i.e., phenols and flavonoids) was found in the cultivated sample compared to wild M. gigantea. Thus, from a future perspective, cultivated M. gigantea can be recommended commercially as a good food supplement.
Keywords: culture, edible, fruit bodies, wild
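For readers unfamiliar with how such HPLC profiles translate into concentrations, the snippet below sketches a conventional external-standard calibration for a single analyte. All numbers (standard concentrations, peak areas) are invented for illustration and do not come from the study.

```python
import numpy as np

# Hypothetical external-standard calibration for one HPLC analyte (e.g., ferulic acid).
# Peak areas of known standard concentrations define a linear calibration curve,
# which is then used to quantify the analyte in wild and cultivated extracts.
std_conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])     # µg/mL, assumed standards
std_area = np.array([12.1, 24.6, 60.9, 121.8, 243.0])   # detector response (made up)

slope, intercept = np.polyfit(std_conc, std_area, 1)    # area = slope * conc + intercept

def quantify(peak_area):
    """Concentration in the injected extract from a measured peak area."""
    return (peak_area - intercept) / slope               # µg/mL

for label, area in {"wild": 88.4, "cultivated": 132.7}.items():
    print(f"{label}: {quantify(area):.1f} µg/mL")
```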
Procedia PDF Downloads 72
131 The Touch Sensation: Ageing and Gender Influences
Authors: A. Abdouni, C. Thieulin, M. Djaghloul, R. Vargiolu, H. Zahouani
Abstract:
A decline in the main sensory modalities (vision, hearing, taste, and smell) is well reported to occur with advancing age, and a similar change is expected to occur in touch sensation and perception. In this study, we have focused on touch sensation, highlighting ageing and gender influences with in vivo systems. The touch process can be divided into two main phases. The first phase is the initial contact between the finger and the object; during this contact, an adhesive force is created, which is the force that must be overcome to permit an initial movement of the finger. In the second phase, the mechanical properties of the finger together with its surface topography play an important role in the obtained sensation. In order to understand the effects of age and gender on the touch sense, we developed different ideas and systems for each phase. To better characterize the contact, the mechanical properties and the surface topography of the human finger, in vivo studies on the pulp of 40 subjects (20 of each gender) in four age groups of 26±3, 35±3, 45±2 and 58±6 years have been performed. To understand the first touch phase, a classical indentation system has been adapted to measure the finger contact properties. The normal force load, the indentation speed, the contact time, the penetration depth and the indenter geometry have been optimized. The penetration depth of a glass indenter is recorded as a function of the applied normal force. The main assessed parameter is the adhesive force F_ad. For the second phase, an innovative approach is first proposed to characterize the dynamic mechanical properties of the finger: a contactless indentation test inspired by techniques used in ophthalmology. The principle of the test is to apply an air blast to the finger and measure the resulting deformation with a linear laser. The advantage of this test is the direct observation of the skin's free return without any outside influence. The main obtained parameters are the wave propagation speed and the Young's modulus E. Second, negative silicone replicas of the subjects' fingerprints have been analyzed by laser defocusing. A laser diode transmits a light beam onto the surface to be measured, and the reflected signal is returned to a set of four photodiodes; this technology allows three-dimensional images to be reconstructed. In order to study the effects of age and gender on the roughness properties, a multi-scale characterization of roughness has been realized by applying the continuous wavelet transform. After determining the decomposition of the surface, the method consists of quantifying the arithmetic mean of the surface topography at each scale (SMA). Significant differences in the main parameters are shown with ageing and gender. The comparison between the men and women groups reveals that the adhesive force is higher for women. The results for the mechanical properties show a Young's modulus that is higher for women and increases with age. The roughness analysis shows a significant difference as a function of age and gender.
Keywords: ageing, finger, gender, touch
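The multi-scale roughness descriptor can be prototyped in a few lines: decompose a profile with a continuous wavelet transform and average the absolute coefficients at each scale. The sketch below applies the PyWavelets CWT to a synthetic fingerprint-like profile; the profile, the scales and the mother wavelet are assumptions, not the study's settings.

```python
import numpy as np
import pywt

# Synthetic surface profile standing in for a fingerprint replica trace:
# ridge-scale waviness plus fine random roughness (all values assumed).
x = np.linspace(0.0, 5.0, 2000)                       # mm along the replica
profile = (0.02 * np.sin(2 * np.pi * x / 0.45)
           + 0.005 * np.random.default_rng(0).standard_normal(x.size))

scales = np.arange(1, 65)
coeffs, _ = pywt.cwt(profile, scales, "mexh")          # Mexican-hat mother wavelet
sma = np.mean(np.abs(coeffs), axis=1)                  # one SMA value per scale

for s in (1, 8, 32, 64):
    print(f"scale {s:2d}: SMA = {sma[s - 1]:.5f} mm")
```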
Procedia PDF Downloads 265
130 Stability of a Natural Weak Rock Slope under Rapid Water Drawdowns: Interaction between Guadalfeo Viaduct and Rules Reservoir, Granada, Spain
Authors: Sonia Bautista Carrascosa, Carlos Renedo Sanchez
Abstract:
The effect of a rapid drawdown is a classical scenario to be considered for slope stability under submerged conditions. This situation arises when totally or partially submerged slopes experience a descent of the external water level, and it is a typical verification in the dam engineering discipline, as reservoir water levels commonly fluctuate noticeably between seasons and for operational reasons. Although the scenario is well known and generally predictable, site conditions can increase the complexity of its assessment, and unexpected external factors can cause a reduction in stability, or even failure, of a slope under a rapid drawdown situation. The present paper describes and discusses the interaction between two different infrastructures, a dam and a highway, and the impact of the rapid drawdown of the Rules Dam, in the province of Granada (south of Spain), on the stability of a natural rock slope overlaid by the north abutment of a viaduct of the A-44 Highway. In 2011, with both infrastructures, the A-44 Highway and the Rules Dam, already constructed, delivered and under operation, movements started to be recorded in the approach embankment and north abutment of the Guadalfeo Viaduct, which carries the highway across the tail of the reservoir. The embankment and abutment were founded on a low-angle natural rock slope formed by grey graphitic phyllites, distinctly weathered and intensely fractured, with pre-existing fault and weak planes. After the first filling of the reservoir to a relative level of 243 m, three consecutive drawdowns were recorded in the autumns of 2010, 2011 and 2012, to relative levels of 234 m, 232 m and 225 m. To understand the effect of these drawdowns on the strength of the weak rock mass and on its stability, a new geological model was developed after reviewing all the available ground investigations and updating the geological mapping of the area, supplemented with additional geotechnical and geophysical investigations. Together with all this information, rainfall and reservoir level evolution data have been reviewed in detail and incorporated into the monitoring interpretation. The analysis of the monitoring data and the new geological and geotechnical interpretation, supported by the limit equilibrium software Slide2, concludes that the movement follows the same direction as the schistosity of the phyllitic rock mass, coincident as well with the direction of the natural slope, indicating a deep-seated movement of the whole slope towards the reservoir. As part of these conclusions, the solutions considered to reinstate the highway infrastructure to the required factor of safety (FoS) will be described, and the geomechanical characterization of these weak rocks discussed, together with the influence of water level variations not only on the water pressure regime but also on the geotechnical behavior, through the modification of strength parameters and deformability.
Keywords: monitoring, rock slope stability, water drawdown, weak rock
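A hand-calculation analogue of the limit-equilibrium runs helps convey the mechanism: when the reservoir drops faster than the low-permeability phyllite can drain, pore pressures persist on the slip plane while the external water support disappears, and the factor of safety falls. The infinite-slope sketch below illustrates this; all strength and geometry values are assumed, not the calibrated Slide2 inputs.

```python
import numpy as np

g_w = 9.81                      # unit weight of water [kN/m3]
gamma = 22.0                    # unit weight of the rock mass [kN/m3], assumed
c, phi = 15.0, np.radians(24)   # effective cohesion [kPa] and friction angle, assumed
beta = np.radians(18)           # dip of schistosity / natural slope, assumed
z = 8.0                         # depth of the sliding plane [m], assumed

def factor_of_safety(h_w):
    """h_w: residual water height above the slip plane after drawdown [m]."""
    u = g_w * h_w * np.cos(beta) ** 2               # pore pressure on the plane [kPa]
    tau = gamma * z * np.sin(beta) * np.cos(beta)   # driving shear stress [kPa]
    sigma_n = gamma * z * np.cos(beta) ** 2         # total normal stress [kPa]
    return (c + (sigma_n - u) * np.tan(phi)) / tau

# FS degrades as undissipated pore pressure remains after the external level drops.
for h_w in (0.0, 4.0, 8.0):
    print(f"residual water height {h_w:.0f} m -> FS = {factor_of_safety(h_w):.2f}")
```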
Procedia PDF Downloads 160
129 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutron Sources
Authors: Mustafa Alhamdi
Abstract:
An industrial application to classify gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using a convolutional neural network and a recurrent neural network showed a significant improvement in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on the feature extraction methods, followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectrum energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the possibility of enhancing classification accuracy. Feature extraction by neural networks is nonlinear in nature, involving a variety of transformations and mathematical optimizations, while principal component analysis depends on linear transformations to extract features and subsequently improve the classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as a training dataset. The Fourier transform implementation used to extract the frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, managed to improve the classification accuracy of the neural networks. The ability to discriminate gamma and neutron events in a single prediction approach has shown high accuracy using deep learning. The paper's findings show that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep machine learning models by hyperparameter optimization enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification
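The preprocessing stage described here, a windowed Fourier transform turning a detector waveform into a time-frequency image for the classifier, can be sketched with SciPy. The waveform below is a synthetic pulse train with additive Gaussian readout noise; the sampling rate, pulse shape and window choice are illustrative assumptions, not the Geant4 simulation setup.

```python
import numpy as np
from scipy import signal

fs = 1.0e6                                   # sampling rate [Hz], assumed
t = np.arange(0, 0.05, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic detector trace: exponentially decaying pulses at random arrival times.
pulses = np.zeros_like(t)
for t0 in rng.uniform(0, 0.05, 40):
    pulses += np.exp(-np.clip(t - t0, 0, None) / 5e-6) * (t >= t0)
waveform = pulses + 0.02 * rng.standard_normal(t.size)   # Gaussian readout noise

f, tt, Sxx = signal.spectrogram(
    waveform, fs=fs, window="hann",          # the windowing function choice matters here
    nperseg=256, noverlap=192,
)
features = np.log10(Sxx + 1e-12)             # log-power image, shape (freq, time)
print(features.shape)                        # one time-frequency training sample for the CNN
```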
Procedia PDF Downloads 150
128 Storage of Organic Carbon in Chemical Fractions in Acid Soil as Influenced by Different Liming
Authors: Ieva Jokubauskaite, Alvyra Slepetiene, Danute Karcauskiene, Inga Liaudanskiene, Kristina Amaleviciute
Abstract:
Soil organic carbon (SOC) is a key indicator of soil quality and ecological stability; therefore, carbon accumulation in stable forms not only supports and increases the organic matter content of the soil, but also has a positive effect on the quality of the soil and of the whole ecosystem. Soil liming is one of the most common ways to improve carbon sequestration in the soil. Determining the optimum intensity and combinations of liming in order to ensure optimal quantitative and qualitative carbon parameters is one of the most important tasks of this work. The field experiments were carried out at the Vezaiciai Branch of the Lithuanian Research Centre for Agriculture and Forestry (LRCAF) during the 2011–2013 period. The effect of liming at different intensities (at a rate of 0.5 every 7 years and 2.0 every 3–4 years) was investigated in the topsoil of an acid moraine loam Bathygleyic Dystric Glossic Retisol. Chemical analyses were carried out at the Chemical Research Laboratory of the Institute of Agriculture, LRCAF. Soil samples for chemical analyses were taken from the topsoil after harvesting. SOC was determined by the Tyurin method modified by Nikitin, measured with a Cary 50 spectrometer (VARIAN) at 590 nm wavelength using glucose standards. The SOC fractional composition was determined by the Ponomareva and Plotnikova version of the classical Tyurin method. Dissolved organic carbon (DOC) was analyzed using a SKALAR ion chromatograph in a water extract at a soil-to-water ratio of 1:5. The spectral properties (E4/E6 ratio) of humic acids were determined by measuring the absorbance of humic and fulvic acid solutions at 465 and 665 nm. Our study showed a statistically significant negative effect of periodical liming (at the 0.5 and 2.0 liming rates) on the SOC content in the soil. The SOC content was 1.45% in the unlimed treatment, while in the treatment periodically limed at the 2.0 rate every 3–4 years it was approximately 0.18 percentage points lower. It was revealed that liming significantly decreased the DOC concentration in the soil; the lowest DOC concentration (0.156 g kg-1) was established in the most intensively limed treatment (2.0 liming rate every 3–4 years). Soil liming increased the content of all humic acid fractions and of the fulvic acid fraction bound with calcium in the topsoil, resulting in the accumulation of valuable humic acids. Due to the applied liming, the HR/FR ratio, indicating the quality of humus, increased to 1.08, compared with 0.81 in unlimed soil. Intensive soil liming promoted the formation of humic acids in which groups of carboxylic and phenolic compounds predominated. These humic acids are characterized by a higher degree of condensation of aromatic compounds and in this way determine the intensive organic matter humification processes in the soil. The results of this research provide clear information on the characteristics of SOC change, which could be very useful for guiding climate policy and sustainable soil management.
Keywords: acid soil, carbon sequestration, long-term liming, soil organic carbon
Procedia PDF Downloads 229
127 Tracking Patient Pathway for Assessing Public Health and Financial Burden to Community for Pulmonary Tuberculosis: Pointer from Central India
Authors: Ashish Sinha, Pushpend Agrawal
Abstract:
Background: Patients with undiagnosed pulmonary TB predominantly act as reservoirs for its transmission, causing 10-15 secondary infections over the next 1-5 years. Delays in diagnosis and treatment may worsen the disease and increase the risk of death. Identifying the factors responsible for such delays by tracking patient pathways to treatment may help in planning better interventions. The provision of ‘free diagnosis and treatment’ forms the cornerstone of the National Tuberculosis Elimination Programme (NTEP). Out-of-pocket expenditure (OOPE) is defined as the money spent by the patient during TB care other than at public health facilities. Free TB care at all health facilities could reduce out-of-pocket expenses to the minimum possible levels. Material and Methods: This cross-sectional study was conducted among 252 randomly selected TB patients from Nov to Oct 2022 through in-depth interviews following informed verbal consent. We documented their journey from initial symptoms until they reached the public health facility, along with their OOPE pertaining to TB care. Results: Total treatment delay was 91±72 days on average (median: 77 days, IQR: 45-104 days), while the isolated patient delay was 31±45 days (median: 15 days, IQR: 0-43 days); diagnostic delay was 57±60 days (median: 42 days, IQR: 14-78 days) and treatment delay 19±18 days (median: 15 days, IQR: 11-19 days). Patient delay (>30 days) was significantly associated with ignorance of the classic symptoms of pulmonary TB, self-medication, illiteracy, and middle and lower social class. Diagnostic delay was significantly higher among those who contacted private health facilities, were unaware of signs and symptoms, had >2 consultations, and did not get an appropriate referral for TB care. Most (97%) of the study participants interviewed claimed to have incurred some expenditure. Median total expenses were 6155 (IQR: 2625-15175) rupees. More than half of the study participants, 141 (56%), had expenses >5000 rupees. Median transport expenses were 525 (IQR: 200-1012) rupees, median consultation expenses 700 (IQR: 200-1600) rupees, median investigation expenses 1000 (IQR: 0-3025) rupees, and median medicine expenses 3350 (IQR: 1300-7525) rupees. OOPE for consultation, investigation, and medicine was observed to be significantly higher among patients who ignored the classical signs and symptoms of TB, made repeated visits to private health facilities, and practiced self-medication. Transport expenses and delays in seeking care at facilities were observed to have an upward trend with OOPE (r = 1). Conclusion: Delays in TB care due to low awareness of the signs and symptoms of TB, poor care-seeking, lack of proper consultation, and lack of appropriate referrals reported by the study subjects indicate the areas which need proper attention from programme managers. Despite a centrally sponsored programme, the financial burden on TB patients is still in an unacceptable range. OOPE could be reduced to as low as possible by addressing the factors linked to it.
Keywords: patient pathway, delay, pulmonary tuberculosis, out of pocket expenses
Procedia PDF Downloads 65
126 The Relevance of Personality Traits and Networking in New Ventures’ Success
Authors: Caterina Muzzi, Sergio Albertini, Davide Giacomini
Abstract:
The research aims to investigate the role of young entrepreneurs’ personality traits and their contextual background in the success of entrepreneurial initiatives. In the literature, the debate is still open about the main drivers for predicting entrepreneurial success. Classical theories focus on specific personality traits that could lead to successful start-up initiatives, while emerging approaches are more interested in young entrepreneurs’ contextual background (such as the family of origin, previous experience and the professional network). An online survey was submitted to the participants of an entrepreneurial training initiative organised by the Italian Young Entrepreneurs Association (Confindustria) at its Brescia headquarters (AIB). At the time the authors started data collection for this research, the third edition of the initiative had just concluded, involving a total of 37 young future entrepreneurs. In the literature, general self-efficacy (GSE) and, more specifically, entrepreneurial self-efficacy (ESE) have often been associated with positive performance, as they allow future entrepreneurs to cope effectively with entrepreneurial activities, both at an early stage and in new venture management. Counter-intuitively, optimism is not always associated with positive entrepreneurial results. Overly optimistic people risk taking hazardous decisions, and some authors suggest that moderately optimistic entrepreneurs achieve more positive results than over-optimistic ones; indeed, highly optimistic individuals often hold unrealistic expectations, discount negative information, and mentally reconstruct experiences so as to avoid contradictions. The importance of context has been increasingly considered in the entrepreneurship literature; its role emerges strongly from the earliest entrepreneurial stage and is crucial in transforming the ‘intention of entrepreneurship’ into an actual start-up. Furthermore, coherently with the ‘network approach to entrepreneurship’, context embeddedness allows future entrepreneurs to leverage relationships built through previous experiences and/or through belonging to families of entrepreneurs. For the purpose of this research, entrepreneurial success was measured by whether or not a new venture had been founded after the training initiative. The authors measured GSE, ESE and optimism using previously tested items that proved reliable in this case as well, and collected 36 completed questionnaires. A t-test for independent samples was run to measure significant differences in means between those who had already founded a new venture and those who had not. No significant differences emerged with respect to any of the tested personality traits, but a logistic regression analysis, run with the contextual variables as independent ones, showed that personal and professional networking, built both before and during the programme, is the most relevant variable in determining new venture success. These findings shed more light on the process of new venture foundation and could encourage national and local policy makers to invest in networking as one of the main drivers that could support the creation of new ventures.
Keywords: entrepreneurship, networking, new ventures, personality traits
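The two statistical steps named above, an independent-samples t-test on personality scores and a logistic regression of venture foundation on contextual variables, can be reproduced in outline as follows. All data are synthetic and the variable names are illustrative; the sketch mirrors the shape of the analysis, not the survey's actual items.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 36                                                    # sample size as in the study
founded = rng.integers(0, 2, n)                           # 1 = founded a new venture
ese = rng.normal(3.5, 0.6, n)                             # entrepreneurial self-efficacy score
networking = rng.normal(2.5, 0.8, n) + 0.9 * founded      # networking tracks success (by construction)

# Independent-samples t-test on a personality trait: founders vs non-founders.
t, p = stats.ttest_ind(ese[founded == 1], ese[founded == 0])
print(f"ESE t-test: t = {t:.2f}, p = {p:.3f}")            # likely non-significant here

# Logistic regression of venture foundation on the contextual variable.
model = LogisticRegression().fit(networking.reshape(-1, 1), founded)
print(f"networking coefficient: {model.coef_[0][0]:.2f}") # positive -> higher odds of founding
```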
Procedia PDF Downloads 144
125 Application of Neutron Stimulated Gamma Spectroscopy for Soil Elemental Analysis and Mapping
Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert
Abstract:
Determining soil elemental content and its distribution (mapping) within a field are key features of modern agricultural practice. While traditional chemical analysis is a time-consuming and labor-intensive multi-step process (e.g., sample collection, transport to the laboratory, physical preparation, and chemical analysis), neutron-gamma soil analysis can be performed in situ. This analysis is based on the registration of gamma rays emitted by nuclei upon interaction with neutrons. Soil elements such as Si, C, Fe, O, Al, K, and H (moisture) can be assessed with this method. Data received from the analysis can be used directly for creating soil elemental distribution maps (based on ArcGIS software) suitable for agricultural purposes. The neutron-gamma analysis system developed for field application consists of an MP320 Neutron Generator (Thermo Fisher Scientific, Inc.), 3 sodium iodide gamma detectors (SCIONIX, Inc.) with a total volume of 7 liters, ‘split electronics’ (XIA, LLC), a power system, and an operational computer. Paired with GPS, this system can be used in scanning mode to acquire gamma spectra while traversing a field. Using the acquired spectra, soil elemental content can be calculated, and these data can be combined with geographical coordinates in a geographical information system (i.e., ArcGIS) to produce elemental distribution maps suitable for agricultural purposes. Special software has been developed that acquires gamma spectra, processes and sorts data, calculates soil elemental content, and combines these data with measured geographic coordinates to create soil elemental distribution maps. For example, 5.5 hours were needed to acquire the data for creating a carbon distribution map of an 8.5 ha field. This paper will briefly describe the physics behind the neutron-gamma analysis method, the physical construction of the measurement system, and its main characteristics and modes of operation when conducting field surveys. Soil elemental distribution maps resulting from field surveys will be presented and discussed. These maps were similar to maps created on the basis of chemical analysis and of soil moisture measurements determined by soil electrical conductivity, and the maps created by neutron-gamma analysis were reproducible as well. Based on these facts, it can be asserted that neutron stimulated soil gamma spectroscopy paired with a GPS system is fully applicable to soil elemental agricultural field mapping.
Keywords: ArcGIS mapping, neutron gamma analysis, soil elemental content, soil gamma spectroscopy
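The mapping step, joining element estimates to GPS coordinates and interpolating them onto a grid, is performed in ArcGIS in the study; the sketch below reproduces the idea with SciPy and Matplotlib on synthetic carbon-content data. Coordinates, values and grid resolution are all assumptions.

```python
import numpy as np
from scipy.interpolate import griddata
import matplotlib.pyplot as plt

# Synthetic scan: 300 georeferenced carbon estimates scattered over a field.
rng = np.random.default_rng(7)
lon = rng.uniform(-85.50, -85.46, 300)        # assumed field coordinates
lat = rng.uniform(32.40, 32.43, 300)
carbon = 1.2 + 0.5 * np.sin(80 * lon) + 0.05 * rng.standard_normal(300)  # % C, made up

# Interpolate the point estimates onto a regular grid, as a GIS package would.
grid_lon, grid_lat = np.meshgrid(np.linspace(lon.min(), lon.max(), 200),
                                 np.linspace(lat.min(), lat.max(), 200))
grid_c = griddata((lon, lat), carbon, (grid_lon, grid_lat), method="cubic")

plt.pcolormesh(grid_lon, grid_lat, grid_c, shading="auto")
plt.colorbar(label="soil carbon [%]")
plt.xlabel("longitude")
plt.ylabel("latitude")
plt.title("Interpolated carbon distribution (synthetic)")
plt.savefig("carbon_map.png")
```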
Procedia PDF Downloads 134
124 The Ballistics Case Study of the Enrica Lexie Incident
Authors: Diego Abbo
Abstract:
On February 15, 2012, off the Indian coast of Kerala, in position 091702N-0760180E, bursts of 5.56x45 caliber shots were fired from Italian-made Beretta AR70 assault rifles aboard the oil tanker Enrica Lexie, flying the Italian flag, towards the Indian fishing boat St. Anthony. Six shots hit the St. Anthony, of which two killed the Indian fishermen Ajesh Pink and Valentine Jelestine. From the analysis of the kinematic engagement of the two ships and from the autopsy and ballistic results of the Indian judicial authorities, it is possible to reconstruct the trajectories of the six aforementioned shots. This essay reconstructs the trajectories of these six shots, which cannot have been fired directly but must have undergone a rebound on the water. The investigation scientifically demonstrates, as regards intermediate ballistics, the rebound of the shots on the water, the gyrostatic deviation due to the rebound, and the tumbling effect likewise due to the rebound. With regard to the four shots that directly impacted the fishing vessel, the present examination proves, with scientific value, that the trajectories could not be downwards, only upwards. Likewise, the trajectories of the two shots that fatally struck the two fishermen could not be downwards, only upwards. In fact, this paper demonstrates, with scientific value: the loss of speed of the projectiles due to the rebound on the water; the tumbling effect in the ballistic medium within the two victims; the permanent cavities, subject of injury ballistics, and the related ballistic trauma that prevented homeostasis, causing bleeding in one case; the thermo-hardening deformation of the bullet found in Valentine Jelestine's skull; and the upward, not downward, trajectories. The paper constitutes a tool in forensic ballistics in that it manages to reconstruct, from the final position of the projectiles fired, all phases of ballistics: the internal ballistics of the weapons fired, the intermediate, the terminal, and the penetrative structural phases. In general terms, the ballistic reconstruction is based on measurable parameters whose magnitude is bounded with certainty between a lower and an upper limit. Therefore, quantities that refer to the angles, speed, impact energy and firing position of the shooter can be identified within the aforementioned limits. Finally, the investigation of the internal bullet track, obtained from any autopsy examination, offers a significant ‘lesson learned’ and, above all, a starting point for containing or mitigating bleeding as a rescue from future gunshot wounds.
Keywords: impact physics, intermediate ballistics, terminal ballistics, tumbling effect
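The idea of bounding a reconstruction between lower and upper limits can be illustrated with elementary exterior ballistics: for assumed post-ricochet speeds and a range of upward departure angles, one can bracket the height at which a shot would strike a target at a given distance. Every number below is illustrative, not a finding of the case.

```python
import numpy as np

g = 9.81
range_to_target = 30.0                       # assumed ship-to-boat distance [m]

def height_at_target(v0, angle_deg, h0=0.0):
    """Strike height for a post-ricochet shot treated as a point-mass trajectory."""
    a = np.radians(angle_deg)
    t = range_to_target / (v0 * np.cos(a))   # time of flight to the target
    return h0 + v0 * np.sin(a) * t - 0.5 * g * t ** 2

# Post-rebound speeds chosen well below the roughly 950 m/s muzzle velocity typical
# of 5.56x45 ammunition, reflecting the energy lost in the water ricochet.
for v0 in (300.0, 600.0):
    for ang in (2.0, 5.0, 10.0):             # upward departure angles after the rebound
        h = height_at_target(v0, ang)
        print(f"v0 = {v0:4.0f} m/s, departure {ang:4.1f} deg -> strike height {h:5.2f} m")
```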
Procedia PDF Downloads 178
123 Holographic Art as an Approach to Enhance Visual Communication in Egyptian Community: Experimental Study
Authors: Diaa Ahmed Mohamed Ahmedien
Abstract:
Nowadays, it cannot be denied that the most important interactive art trends have appeared as a result of significant advances in the modern sciences, and holographic art is no exception; it is considered one of the major contemporary interactive trends in visual arts. The holographic technique emerged from applications of modern physics in the late 1940s, when Dennis Gabor sought to improve the quality of electron microscope images, and later arrived at Margaret Benyon's art exhibitions; over more than 70 years it has passed through many procedures that enhanced its quality and its artistic applications, technically and visually. As a modest extension of these great efforts, this research aimed to enroll a sample of ordinary people in the Egyptian community in a holographic recording program to record objects or antiques they appreciate, and thereby to examine their ability to interact with modern techniques in visual communication arts. This research thus tried to answer three main questions: can we use analog holographic techniques to unleash new theoretical and practical knowledge in interactive arts for the public in the Egyptian community? To what extent can holographic art become familiar to the public and enable them to produce interactive artistic samples? Is it possible to build a holographic interactive program for ordinary people that leads them to enhance their understanding of visual communication and to become aware of interactive art trends? The first part of this research depended on experimental methods; it was conducted in the laser lab at Cairo University, using a 532 nm Nd:YAG laser and a holographic optical layout, with selected samples of Egyptian people who were asked to record an object they appreciate after having learned the recording methods. The second part consisted of discussion panels conducted to discuss the results and how participants felt about their holographic artistic products, through surveys, questionnaires, note-taking and critiques of the holographic artworks. Our practical experiments and final discussions lead us to say that this experimental research enabled most participants to pass through a paradigm shift in their visual and conceptual experiences towards more interaction with contemporary visual art trends, as an attempt to emphasize the role of a mature relationship between art, science and technology, and to spread interactive arts in our community through the latest scientific and artistic developments around the world, particularly among those who have never been enrolled in practical arts programs before.
Keywords: Egyptian community, holographic art, laser art, visual art
Procedia PDF Downloads 479
122 Modeling Driving Distraction Considering Psychological-Physical Constraints
Authors: Yixin Zhu, Lishengsa Yue, Jian Sun, Lanyue Tang
Abstract:
Modeling driving distraction in microscopic traffic simulation is crucial for enhancing simulation accuracy. Current driving distraction models are mainly derived from the physical motion constraints of distracted states, in which distraction-related error terms are added to existing microscopic driver models. However, their accuracy is not very satisfying, due to a lack of modeling of the cognitive mechanism underlying distraction. This study models driving distraction based on the Queueing Network Human Processor model (QN-MHP) and utilizes the queueing structure of the model to perform task invocation and switching for the operation and control of the vehicle under driver distraction. Under the QN-MHP assumption about the cognitive sub-network, server F is a structural bottleneck: later information must wait for earlier information to leave server F before it can be processed there, so the waiting time for task switching needs to be calculated. Since the QN-MHP model has different information processing paths for auditory and visual information, this study divides driving distraction into two types: auditory distraction and visual distraction. For visual distraction, both the visual distraction task and the driving task must go through the visual perception sub-network, and their stimuli are asynchronous, a condition called stimulus onset asynchrony (SOA), which must be considered when calculating the waiting time for switching tasks. In the case of auditory distraction, the auditory distraction task and the driving task do not need to compete for the server resources of the perceptual sub-network, and their stimuli can be treated as synchronized, without considering the time difference in receiving the stimuli. Following the Theory of Planned Behavior (TPB) for drivers, this study uses risk entropy as the decision criterion for driver task switching. A logistic regression model with risk entropy as the independent variable is used to determine whether the driver performs a distraction task and to explain the relationship between perceived risk and distraction. Furthermore, to model a driver's perception characteristics, a neurophysiological model of visual distraction tasks is incorporated into the QN-MHP, which then executes the classical Intelligent Driver Model (IDM). The proposed driving distraction model integrates the psychological cognitive process of the driver with physical motion characteristics, achieving both high accuracy and interpretability. This paper uses 773 segments of distracted car-following from the Shanghai Naturalistic Driving Study data (SH-NDS) to classify the patterns of distracted behavior on different road facilities, obtaining three types of distraction patterns: numbness, delay, and aggressiveness. The model was calibrated and verified by simulation. The results indicate that it can effectively simulate the distracted car-following behavior of the different patterns on various roadway facilities, and its performance is better than the traditional IDM with distraction-related error terms. The proposed model overcomes the limitations of models based purely on physical constraints in replicating dangerous driving behaviors and the internal characteristics of an individual. Moreover, the model is demonstrated to effectively generate more dangerous distracted driving scenarios, which can be used to construct high-value automated driving test scenarios.
Keywords: computational cognitive model, driving distraction, microscopic traffic simulation, psychological-physical constraints
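Since the cognitive layer ultimately drives the classical IDM, a minimal sketch of that car-following core is given below, with a crude stand-in for distraction: the follower reacts to a 2-second-old observation of the gap and relative speed. The IDM parameters are typical textbook values, and the delay mechanism is an illustration, not the QN-MHP task-switching logic.

```python
import numpy as np

# Typical IDM parameters (assumed): desired speed, time headway, max acceleration,
# comfortable deceleration, jam distance, acceleration exponent.
v0, T, a_max, b, s0, delta = 33.3, 1.5, 1.0, 1.5, 2.0, 4.0

def idm_accel(v, dv, s):
    """v: follower speed, dv: v - v_lead (approach rate), s: gap [m]."""
    s_star = s0 + v * T + v * dv / (2.0 * np.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)

dt, delay_steps = 0.1, 20                  # 2 s perception delay while distracted
v_lead, v, s = 25.0, 25.0, 30.0
buffer = [(s, v - v_lead)] * delay_steps   # queue of stale observations

for step in range(600):
    if step == 100:
        v_lead = 15.0                      # leader suddenly slows down
    s_obs, dv_obs = buffer.pop(0)          # distracted driver sees the old state
    a = max(idm_accel(v, dv_obs, s_obs), -6.0)   # physical braking limit [m/s2]
    v = max(v + a * dt, 0.0)
    s += (v_lead - v) * dt
    if s <= 0.0:
        print(f"collision at t = {step * dt:.1f} s")  # a dangerous generated scenario
        break
    buffer.append((s, v - v_lead))
else:
    print(f"final gap: {s:.1f} m, follower speed: {v:.1f} m/s")
```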
Procedia PDF Downloads 91
121 Coupled Field Formulation – A Unified Method for Formulating Structural Mechanics Problems
Authors: Ramprasad Srinivasan
Abstract:
Engineers create inventions and put their ideas in concrete terms to design new products. Design drivers must be established, which requires, among other things, a complete understanding of the product design, load paths, etc. For aerospace vehicles, the weight/strength ratio, strength, stiffness and stability are the important design drivers. A complex built-up structure is made up of an assemblage of primitive structural forms of arbitrary shape, which include 1D structures like beams and frames, 2D structures like membranes, plates and shells, and 3D solid structures. Justification through simulation involves a check of all the quantities of interest, namely stresses, deformation, frequencies, and buckling loads, and is normally achieved through the finite element (FE) method. Over the past few decades, fiber-reinforced composites have been fast replacing traditional metallic structures in the weight-sensitive aerospace and aircraft industries due to their high specific strength, high specific stiffness, anisotropic properties, design freedom for tailoring, etc. Composite panel constructions are used in aircraft to design primary structural components like wings, empennage, ailerons, etc., while thin-walled composite beams (TWCB) are used to model slender structures like stiffened panels, helicopter and wind turbine rotor blades, etc. TWCBs demonstrate many non-classical effects like torsional and constrained warping, transverse shear, coupling effects, heterogeneity, etc., which makes the analysis of composite structures far more complex. Conventional FE formulations for 1D structures suffer from many limitations, such as shear locking (particularly in slender beams), lower convergence rates due to material coupling in composites, and the inability to satisfy equilibrium in the domain and natural boundary conditions (NBC). For 2D structures, the limitations of conventional displacement-based FE formulations include the inability to satisfy NBC explicitly and many pathological problems such as shear and membrane locking, spurious modes, stress oscillations, and lower convergence due to mesh distortion. This mandates frequent re-meshing to achieve even an acceptable mesh (satisfying stringent quality metrics) for analysis, leading to significant cycle time. Besides, there is currently a need for separate formulations (u/p) to model incompressible materials, and a single unified formulation is missing in the literature. Hence, the coupled field formulation (CFF) is a unified formulation proposed by the author for the solution of complex 1D and 2D structures, addressing the gaps in the literature mentioned above. The salient features of CFF and its many advantages over conventional methods shall be presented in this paper.
Keywords: coupled field formulation, kinematic and material coupling, natural boundary condition, locking free formulation
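Shear locking, the first limitation listed above, is easy to reproduce: mesh a slender Timoshenko cantilever with 2-node linear elements and integrate the shear term exactly, and the computed tip deflection collapses; under-integrate the shear term and the answer returns. The sketch below is a generic textbook demonstration with assumed section properties, not the author's CFF.

```python
import numpy as np

E, G, ks = 70e9, 26e9, 5.0 / 6.0           # Young's modulus, shear modulus, shear factor (assumed)
L, b, h = 1.0, 0.02, 0.005                 # a slender beam, hence locking-prone
I, A, P, n_el = b*h**3/12, b*h, 1.0, 10    # inertia, area, tip load [N], element count

def tip_deflection(n_gauss):
    gauss = {1: ([0.0], [2.0]), 2: ([-1/np.sqrt(3), 1/np.sqrt(3)], [1.0, 1.0])}
    pts, wts = gauss[n_gauss]
    le = L / n_el
    n_dof = 2 * (n_el + 1)                 # (w, theta) at every node
    K = np.zeros((n_dof, n_dof))
    for e in range(n_el):
        dofs = np.arange(2*e, 2*e + 4)
        Bb = np.array([0.0, -1/le, 0.0, 1/le])            # curvature (constant in element)
        Ke = E * I * le * np.outer(Bb, Bb)                 # bending stiffness (exact)
        for xi, wq in zip(pts, wts):                       # shear term via quadrature
            N1, N2 = (1 - xi)/2, (1 + xi)/2
            Bs = np.array([-1/le, -N1, 1/le, -N2])         # gamma = dw/dx - theta
            Ke += ks * G * A * wq * (le/2) * np.outer(Bs, Bs)
        K[np.ix_(dofs, dofs)] += Ke
    F = np.zeros(n_dof)
    F[2*n_el] = P                                          # transverse tip load
    free = np.arange(2, n_dof)                             # clamp w, theta at node 0
    u = np.linalg.solve(K[np.ix_(free, free)], F[free])
    return u[-2]                                           # tip transverse deflection

exact = P*L**3/(3*E*I) + P*L/(ks*G*A)                      # analytic Timoshenko result
print(f"exact tip deflection:     {exact*1e3:.3f} mm")
print(f"full integration (locks): {tip_deflection(2)*1e3:.3f} mm")
print(f"reduced integration:      {tip_deflection(1)*1e3:.3f} mm")
```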
Procedia PDF Downloads 66
120 Experimental Investigation of the Thermal Conductivity of Neodymium and Samarium Melts by a Laser Flash Technique
Authors: Igor V. Savchenko, Dmitrii A. Samoshkin
Abstract:
The active study of the properties of lanthanides began in the late 1950s, when methods for their purification were developed and metals with a relatively low content of impurities were obtained. Nevertheless, to date, many properties of the rare earth metals (REM) have not been experimentally investigated, or have been insufficiently studied. Currently, the thermal conductivity and thermal diffusivity of lanthanides have been studied most thoroughly in the low-temperature region and at moderate temperatures (near 293 K). In the high-temperature region corresponding to the solid phase, data on the thermophysical characteristics of the REM are fragmentary and in some cases contradictory. Analysis of the literature showed that the data on the thermal conductivity and thermal diffusivity of the light REM in the liquid state are few in number and not very informative (only one point corresponds to the liquid-state region), that they are contradictory (the nature of the change of thermal conductivity with temperature is not reproduced), and that the results of measurements diverge significantly beyond the limits of the total errors. Our experimental results fill this gap and clarify the existing information on the heat transfer coefficients of neodymium and samarium in a wide temperature range, from the melting point up to 1770 K. The thermal conductivity of the investigated metallic melts was measured by the laser flash technique on an automated LFA-427 experimental setup. A neodymium sample of brand NM-1 (99.21 wt % purity) and a samarium sample of brand SmM-1 (99.94 wt % purity) were cut from metal ingots and then annealed in a vacuum (1 mPa) at a temperature of 1400 K for 3 hours. Specially designed tantalum measuring cells were used for the experiments. The cell with a sample inside was sealed by argon-arc welding in the protective atmosphere of a glovebox. The glovebox was filled with argon of 99.998 vol. % purity; the argon was additionally purified by continuously running it through titanium sponge heated to 900–1000 K. The overall systematic error in determining the thermal conductivity of the investigated metallic melts was 2–5%. Approximating dependences and reference tables of the thermal conductivity and thermal diffusivity coefficients were developed. New reliable experimental data on the transport properties of the REM and their changes at phase transitions can serve as a scientific basis for optimizing the industrial processes of production and use of these materials, and are also of interest for the theory of the thermophysical properties of substances, the physics of metals and liquids, and phase transformations.
Keywords: high temperatures, laser flash technique, liquid state, metallic melt, rare earth metals, thermal conductivity, thermal diffusivity
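In the laser flash technique, the thermal diffusivity follows from the time the rear face of the sample takes to reach half of its temperature rise (Parker's relation), and the conductivity then follows from density and specific heat. The worked example below uses invented sample values, not the measured Nd/Sm data.

```python
# Parker's relation for the laser flash method: alpha = 0.1388 * L^2 / t_half.
# All sample values below are assumed for illustration only.
L_s = 2.0e-3               # sample thickness [m]
t_half = 0.045             # time to half of the rear-face temperature rise [s]
rho, c_p = 6900.0, 190.0   # density [kg/m3] and specific heat [J/(kg K)], assumed

alpha = 0.1388 * L_s ** 2 / t_half   # thermal diffusivity [m2/s]
k = alpha * rho * c_p                # thermal conductivity [W/(m K)]
print(f"alpha = {alpha:.3e} m^2/s, k = {k:.1f} W/(m K)")
```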
Procedia PDF Downloads 198
119 The Philosophical Hermeneutics Contribution to Form a Highly Qualified Judiciary in Brazil
Authors: Thiago R. Pereira
Abstract:
Philosophical hermeneutics is able to change the Brazilian judiciary because of its understanding of the characteristics of the human being. It is impossible for a human being invested with the function of judge to make absolutely neutral decisions, but philosophical hermeneutics can assist the judge in making impartial decisions based on the federal constitution. Normative legal positivism imagined a neutral judge, a judge able to try a case without any preconceived ideas, without allowing his or her background to influence him or her. When a judge arbitrates based on legal rules, the problem is smaller; but when there are no clear legal rules and the judge must decide based on principles, the risk is that the decision rests on what they personally believe in. Taken solipsistically, this issue gains a huge dimension. Today, the Brazilian judiciary is independent, but there must be a greater knowledge of philosophy and the philosophy of law, partly because the bigger problem is the unpredictability of decisions made by the judiciary. Currently, when a lawsuit is filed, the result of the judgment is absolutely unpredictable; it is almost a gamble. There must be at least a minimum of legal certainty and predictability of judicial decisions, so that people with similar cases do not receive opposite sentences. Relativism, since classical antiquity, believes in the possibility of multiple answers; from the Greeks in the sixth century before Christ, through the Germans in the eighteenth century, and even today, the constitution has been established as the great law, the Grundnorm, and thus the relativism of life can be greatly reduced when the hermeneut uses the Constitution as an interpretational north, where all interpretation must pass through the hermeneutic constitutional filter. For a current philosophy of law, inside a legal system with a Federal Constitution, there is a single correct answer to a specific case; the challenge is how to find this right answer. The only answer to this question is that we should use the constitutional principles. But in many cases a collision between principles will take place, and to resolve this issue the judge or the hermeneut may choose a solipsistic way, using what they personally believe to be the right one. For obvious reasons, that conduct is not safe. Thus, a theory of decision is necessary to seek justice, and hermeneutic philosophy and the linguistic turn will be necessary to find the right answer. In this difficult mission, philosophical hermeneutics is needed in order to find the right answer, which is the constitutionally most appropriate response. The constitutionally appropriate response will not always be the answer that individuals agree with, but we must put aside our preferences and defend the answer that the Constitution gives us. Therefore, hermeneutics applied to law, in search of the constitutionally appropriate response, should be the safest way to avoid individualistic judicial decisions. The aim of this paper is to present the science of law starting from the linguistic turn and philosophical hermeneutics, moving away from legal positivism. The methodology used in this paper is qualitative, academic and theoretical: philosophical hermeneutics, with the mission of proposing a new way of thinking about the science of law. The research sought to demonstrate the difficulty Brazilian courts have in departing from the centuries-old influence of legal positivism. Moreover, the research sought to demonstrate the need to think the science of law from a contemporary perspective, in which the linguistic turn and philosophical hermeneutics will be the surest way to conduct the science of law in the present century.
Keywords: hermeneutic, right answer, solipsism, Brazilian judiciary
Procedia PDF Downloads 350
118 Resonant Tunnelling Diode Output Characteristics Dependence on Structural Parameters: Simulations Based on Non-Equilibrium Green Functions
Authors: Saif Alomari
Abstract:
The paper aims at giving physical and mathematical descriptions of how the structural parameters of a resonant tunnelling diode (RTD) affect its output characteristics: specifically, the values of the peak voltage, peak current, peak-to-valley current ratio (PVCR), and the differences between peak and valley voltages and currents, ΔV and ΔI. A simulation-based approach using the Non-Equilibrium Green Function (NEGF) formalism, implemented in the Silvaco ATLAS simulator, is employed to conduct a series of designed experiments. These experiments show how the doping concentrations in the emitter and collector layers, their thicknesses, and the widths of the barriers and the quantum well influence the above-mentioned output characteristics. Each of these parameters was systematically changed while holding the others fixed in each set of experiments. Factorial experiments are outside the scope of this work and will be investigated in the future. The physics involved in the operation of the device is thoroughly explained, and mathematical models based on curve fitting and the underlying physical principles are deduced. The models can be used to design devices with predictable output characteristics; such models were found to be absent in the literature that the author scanned. Results show that the doping concentration in each region has an effect on the value of the peak voltage: increasing the carrier concentration in the collector region shifts the peak to lower values, whereas increasing it in the emitter shifts the peak to higher values. In the collector's case, the shift is controlled either by the built-in potential resulting from the concentration gradient or by the conductivity enhancement in the collector. The shift to higher voltages is also found to be related to the location of the Fermi level. The thicknesses of these layers play a role in the location of the peak as well: increasing the thickness of each region shifts the peak to higher values, up to a specific characteristic length after which the peak becomes independent of the thickness. Finally, it is shown that the thickness of the barriers can be optimized for a particular well width to produce the highest PVCR or the highest ΔV and ΔI. The location of the peak voltage is important in optoelectronic applications of RTDs, where the operating point of the device is usually the peak voltage point. Furthermore, the PVCR, ΔV, and ΔI are of great importance for building RTD-based oscillators, as they affect the frequency response and output power of the oscillator.
Keywords: peak to valley ratio, peak voltage shift, resonant tunneling diodes, structural parameters
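The figures of merit discussed here are straightforward to extract from any simulated I-V sweep. The sketch below does so on a synthetic N-shaped RTD characteristic; the analytic curve and the index windows are assumptions, not ATLAS output.

```python
import numpy as np

# Synthetic N-shaped I-V curve: a resonant peak plus a thermionic-like tail.
V = np.linspace(0.0, 1.2, 600)
I = 4.0 * V * np.exp(-((V - 0.35) / 0.12) ** 2) + 0.8 * V ** 3   # arbitrary units

i_peak = np.argmax(I[:300])                   # peak within the resonance region (V < 0.6)
i_valley = i_peak + np.argmin(I[i_peak:400])  # valley between peak and the rising tail
V_p, I_p = V[i_peak], I[i_peak]
V_v, I_v = V[i_valley], I[i_valley]

print(f"peak:   {V_p:.3f} V, {I_p:.3f}")
print(f"valley: {V_v:.3f} V, {I_v:.3f}")
print(f"PVCR = {I_p / I_v:.2f}, dV = {V_v - V_p:.3f} V, dI = {I_p - I_v:.3f}")
```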
Procedia PDF Downloads 142
117 Real-Time Neuroimaging for Rehabilitation of Stroke Patients
Authors: Gerhard Gritsch, Ana Skupch, Manfred Hartmann, Wolfgang Frühwirt, Hannes Perko, Dieter Grossegger, Tilmann Kluge
Abstract:
Rehabilitation of stroke patients is dominated by classical physiotherapy. A current field of research is the application of neurofeedback techniques to help stroke patients overcome their motor impairments. Especially if a certain limb is completely paralyzed, neurofeedback is often the last option to cure the patient. Certain exercises, like the imagination of the impaired motor function, have to be performed to stimulate the neuroplasticity of the brain, such that the corresponding activity takes place in the parts of the cortex neighboring the injured region. During the exercises, it is very important to keep the motivation of the patient at a high level. For this reason, the natural feedback missing due to the inability to move the affected limb may be replaced by a synthetic feedback based on motor-related brain function. To generate such a synthetic feedback, a system is needed that measures, detects, localizes, and visualizes the motor-related µ-rhythm. Fast therapeutic success can only be achieved if the feedback has high specificity and comes in real time without large delay. We describe such an approach, which offers a 3D visualization of µ-rhythms in real time with a delay of 500 ms. This is accomplished by combining smart EEG preprocessing in the frequency domain with source localization techniques. The algorithm first selects the EEG channel featuring the most prominent rhythm in the alpha frequency band from a so-called motor channel set (C4, CZ, C3; CP6, CP4, CP2, CP1, CP3, CP5). If the amplitude in the alpha frequency band of this electrode exceeds a threshold, a µ-rhythm is detected. To prevent detection of a mixture of posterior alpha activity and µ-activity, the amplitudes in the alpha band outside the motor channel set must not be in the same range as that of the main channel. The EEG signal of the main channel is used as a template for calculating the spatial distribution of the µ-rhythm over all electrodes. This spatial distribution is the input for an inverse method that provides the 3D distribution of the µ-activity within the brain, which is visualized in 3D as a color-coded activity map. This approach mitigates the influence of eyelid artifacts on the localization performance. The first results on several healthy subjects show that the system is capable of detecting and localizing the rarely appearing µ-rhythm; in most cases the results match findings from visual EEG analysis, and frequent eyelid artifacts have no influence on the system performance. Furthermore, the system will be able to run in real time; due to the design of the frequency transformation, the processing delay is 500 ms. First results are promising, and we plan to extend the test data set to further evaluate the performance of the system. The relevance of the system with respect to the therapy of stroke patients has to be shown in studies with real patients after CE certification of the system. This work was performed within the project 'LiveSolo' funded by the Austrian Research Promotion Agency (FFG) (project number: 853263).
Keywords: real-time EEG neuroimaging, neurofeedback, stroke, EEG signal processing, rehabilitation
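The channel-selection and thresholding step of the algorithm can be sketched as follows. The filter order, threshold, and posterior margin are illustrative assumptions rather than the authors' published parameters, and `detect_mu` is a hypothetical name.

```python
# Minimal sketch of the mu-rhythm detection step, assuming eeg is a
# (channels x samples) array and channel names follow the 10-10 system.
import numpy as np
from scipy.signal import butter, sosfiltfilt

MOTOR_SET = ["C4", "CZ", "C3", "CP6", "CP4", "CP2", "CP1", "CP3", "CP5"]

def alpha_amplitude(eeg, fs):
    """RMS amplitude per channel in the 8-13 Hz (alpha / mu) band."""
    sos = butter(4, [8.0, 13.0], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, eeg, axis=1)
    return np.sqrt(np.mean(filtered**2, axis=1))

def detect_mu(eeg, fs, channels, threshold=2.0, posterior_margin=0.8):
    amps = alpha_amplitude(eeg, fs)
    motor_idx = [i for i, ch in enumerate(channels) if ch in MOTOR_SET]
    other_idx = [i for i, ch in enumerate(channels) if ch not in MOTOR_SET]
    main = max(motor_idx, key=lambda i: amps[i])  # most prominent motor channel
    # Reject if alpha outside the motor set is in the same range as the main
    # channel, to avoid mistaking posterior alpha for a genuine mu-rhythm.
    if amps[main] > threshold and amps[main] * posterior_margin > max(amps[j] for j in other_idx):
        return channels[main]  # template channel for the source localization step
    return None
```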
Procedia PDF Downloads 387
116 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data
Authors: Kai Warsoenke, Maik Mackiewicz
Abstract:
To achieve a high quality of assembled car body structures, tolerancing is used to ensure the geometric accuracy of the single car body parts. There are two main techniques to determine the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances needed to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example in the manufacturing of forming tools as operating equipment or at the higher level of car body assembly. As part of the metrological process monitoring, manufactured individual parts and assemblies are recorded and the measurement results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes of the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not yet used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data; they offer potential to extend the tolerancing methods through data analysis and machine learning models. The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. For this reason, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. Subsequently, a database is created that is suitable for developing machine learning models. The objective is to create an intelligent way to determine the position and number of measurement points as well as the local tolerance range; for this, a number of different model types are compared and evaluated (a sketch of one such formulation follows below). The models with the best results are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, there are areas of the car body parts that behave more sensitively than the part overall, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently.
Keywords: automotive production, machine learning, process optimization, smart tolerancing
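As a sketch of one such formulation, local tolerance assignment can be framed as a supervised regression problem: geometric features of a measurement point predict the deviation spread observed in historical data. The features, the synthetic data, and the choice of a random forest are assumptions for illustration; the paper compares several model types and does not publish this exact pipeline.

```python
# Illustrative sketch only: framing local tolerance assignment as regression.
# Features, targets, and model choice are assumptions, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical training data: one row per measurement point on historical parts.
# Columns: local curvature, distance to nearest flange, sheet thickness.
X = rng.normal(size=(500, 3))
# Target: observed deviation spread (e.g., 6*sigma of historical measurements).
y = 0.5 + 0.3 * np.abs(X[:, 0]) + 0.1 * X[:, 2] + rng.normal(0.0, 0.05, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")

model.fit(X, y)
# Predicted spread at new points becomes the basis for a local tolerance range.
new_points = rng.normal(size=(5, 3))
print("suggested local tolerance ranges:", np.round(model.predict(new_points), 3))
```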
Procedia PDF Downloads 116
115 Self-Organizing Maps for Exploration of Partially Observed Data and Imputation of Missing Values in the Context of the Manufacture of Aircraft Engines
Authors: Sara Rejeb, Catherine Duveau, Tabea Rebafka
Abstract:
To monitor the production process of turbofan aircraft engines, multiple measurements of various geometrical parameters are systematically recorded on manufactured parts. Engine parts are subject to extremely high standards, as they can impact the performance of the engine. Therefore, it is essential to analyze these databases to better understand the influence of the different parameters on the engine's performance. Self-organizing maps are unsupervised neural networks that achieve two tasks simultaneously: they visualize high-dimensional data by projection onto a 2-dimensional map and they provide a clustering of the data. This technique has become very popular for data exploration since it provides easily interpretable results and a meaningful global view of the data. As such, self-organizing maps are commonly applied to aircraft engine condition monitoring. As databases in this field are huge and complex, they naturally contain multiple missing entries for various reasons. The classical Kohonen algorithm for computing self-organizing maps is conceived for complete data only. A naive approach to dealing with partially observed data consists in deleting items or variables with missing entries. However, this requires a sufficient number of complete individuals to be fairly representative of the population; otherwise, deletion leads to a considerable loss of information and can also induce bias in the analysis results. Alternatively, one can first apply a common imputation method to create a complete dataset and then apply the Kohonen algorithm; however, the choice of the imputation method may have a strong impact on the resulting self-organizing map. Our approach is to address simultaneously the two problems of computing a self-organizing map and imputing missing values, as these tasks are not independent. In this work, we propose an extension of self-organizing maps for partially observed data, referred to as missSOM. First, we introduce a criterion to be optimized that aims at defining simultaneously the best self-organizing map and the best imputations for the missing entries; as such, missSOM is also an imputation method for missing values. To minimize the criterion, we propose an iterative algorithm that alternates the learning of a self-organizing map and the imputation of missing values. Moreover, we develop an accelerated version of the algorithm by entwining the iterations of the Kohonen algorithm with the updates of the imputed values. This method is efficiently implemented in R and will soon be released on CRAN. Compared to the standard Kohonen algorithm, it does not come with any additional cost in terms of computing time. Numerical experiments illustrate that missSOM performs well in terms of both clustering and imputation compared to the state of the art. In particular, it turns out that missSOM is robust to the missingness mechanism, in contrast to many imputation methods that are appropriate for only a single mechanism. This is an important property of missSOM, as in practice the missingness mechanism is often unknown. An application to measurements on one type of part is also provided and shows the practical interest of missSOM.
Keywords: imputation method of missing data, partially observed data, robustness to missingness mechanism, self-organizing maps
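The alternating structure of the iterative algorithm can be illustrated with a conceptual sketch. The authors' efficient implementation is in R; this Python version only shows the loop, omits the SOM neighborhood function over the 2D grid for brevity, and uses hypothetical names and parameters throughout.

```python
# Conceptual sketch of the alternating idea behind missSOM, not the released
# R implementation; the neighborhood function of a full SOM is omitted.
import numpy as np

def train_missing_som(X, n_units=25, n_iter=50, lr=0.5):
    """X: (n_samples, n_features) array with np.nan marking missing entries."""
    mask = np.isnan(X)
    X_imp = np.where(mask, np.nanmean(X, axis=0), X)  # crude initial imputation
    rng = np.random.default_rng(0)
    W = X_imp[rng.choice(len(X_imp), n_units)]        # prototype initialization
    for t in range(n_iter):
        # (1) Kohonen-style update on the currently completed data
        for x in X_imp:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))
            W[bmu] += lr * (1 - t / n_iter) * (x - W[bmu])
        # (2) re-impute missing entries from each sample's best-matching unit
        for i, x in enumerate(X_imp):
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))
            X_imp[i, mask[i]] = W[bmu, mask[i]]
    return W, X_imp
```

Entwining steps (1) and (2) inside a single pass, rather than alternating full sweeps, corresponds to the accelerated variant described above.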
Procedia PDF Downloads 151
114 Motivation and Multiglossia: Exploring the Diversity of Interests, Attitudes, and Engagement of Arabic Learners
Authors: Anna-Maria Ramezanzadeh
Abstract:
Demand for the Arabic language is growing worldwide, driven by increased interest in the multifarious purposes the language serves, both for the population of heritage learners and for those studying Arabic as a foreign language. The diglossic, or indeed multiglossic, nature of the language as used in Arabic-speaking communities, however, is seldom represented in the content of classroom courses. This mismatch between the nature of provision and students' expectations can severely impact their engagement with course material and their motivation to either commence or continue learning the language. The nature of motivation and its relationship to multiglossia is sparsely explored in the current literature on Arabic. The theoretical framework proposed here aims to address this gap by presenting a model and instruments for the measurement of Arabic learners' motivation in relation to the multiple strands of the language. It adopts and develops the Second Language Motivation Self-System model (L2MSS), originally proposed by Zoltan Dörnyei, which measures motivation as the desire to reduce the discrepancy between learners' current and future self-concepts in terms of the second language (L2). The tripartite structure incorporates measures of the Current L2 Self, the Future L2 Self (consisting of an Ideal L2 Self and an Ought-To Self), and the L2 Learning Experience. The strength of the self-concepts is measured across three different domains of Arabic: Classical, Modern Standard, and Colloquial. The focus on learners' self-concepts allows for an exploration of the effect of multiple factors on motivation towards Arabic, including religion. The relationship between Islam and Arabic is often given as a prominent reason behind some students' desire to learn the language, yet exactly how and why this factor features in learners' L2 self-concepts has not yet been explored. Specially designed surveys and interview protocols are proposed to facilitate the exploration of these constructs. The L2 Learning Experience component of the model is operationalized as learners' task-based engagement. Engagement is conceptualised as multi-dimensional and malleable. In this model, situation-specific measures of the cognitive, behavioural, and affective components of engagement are collected via specially designed repeated post-task self-report surveys on Personal Digital Assistants over multiple Arabic lessons; tasks are categorised according to language-learning skill. Given the domain-specific uses of the different varieties of Arabic, the relationship between learners' engagement with different types of tasks and their overall motivational profiles will be examined to determine the extent of the interaction between the two constructs. A framework for this data analysis is proposed and hypotheses are discussed. The unique combination of situation-specific measures of engagement and a person-oriented approach to measuring motivation allows for a macro- and micro-analysis of the interaction between learners and the Arabic learning process. By combining cross-sectional and longitudinal elements with a mixed-methods design, the model proposed offers the potential for capturing a comprehensive and detailed picture of the motivation and engagement of Arabic learners. The application of this framework offers a number of potential pedagogical and research implications, which are also discussed.
Keywords: Arabic, diglossia, engagement, motivation, multiglossia, sociolinguistics
Procedia PDF Downloads 166
113 Carbon Nanotubes Functionalization via Ullmann-Type Reactions Yielding C-C, C-O and C-N Bonds
Authors: Anna Kolanowska, Anna Kuziel, Sławomir Boncel
Abstract:
Carbon nanotubes (CNTs) represent a combination of lightness and nanoscopic size with high tensile strength and excellent thermal and electrical conductivity. To date, CNTs have been used as a support in heterogeneous catalysis (CuCl anchored to pre-functionalized CNTs) in Ullmann-type couplings with aryl halides toward the formation of C-N and C-O bonds; the results indicated that the stability of the catalyst was much improved and that the elaborated catalytic system was efficient and recyclable. However, CNTs have not been considered as the substrate itself in Ullmann-type reactions. If successful, this functionalization would open new areas of CNT chemistry, leading to enhanced in-solvent/matrix nanotube individualization. The copper-catalyzed Ullmann-type reaction is an attractive method for the formation of carbon-heteroatom and carbon-carbon bonds in organic synthesis. This condensation reaction is usually conducted at temperatures as high as 200 °C, often in the presence of stoichiometric amounts of a copper reagent and with activated aryl halides. However, a small amount of an organic additive (e.g., diamines, amino acids, diols, 1,10-phenanthroline) can be applied in order to increase the solubility and stability of the copper catalyst and, at the same time, to allow performing the reaction under mild conditions. The copper (pre-)catalyst is prepared by in situ mixing of a copper salt and the appropriate chelator. Our research is focused on the application of the Ullmann-type reaction to the covalent functionalization of CNTs. Firstly, CNTs were chlorinated using iodine trichloride (ICl3) in carbon tetrachloride (CCl4). This method involves the formation of several chemical species (ICl, Cl2, and I2Cl6), of which the dimer is the most reactive. The fact that the dimer is the main species in CCl4 is the reason for the high reactivity and the possibly high functionalization levels of CNTs; indeed, this method introduced a notable amount of chlorine onto the MWCNT surface. The next step was the reaction of CNT-Cl with three substrates, aniline, iodobenzene, and phenol, for the formation of C-N, C-C, and C-O bonds, respectively, in the presence of 1,10-phenanthroline and cesium carbonate (Cs2CO3) as a base. As the CNT substrates, two multi-wall CNT (MWCNT) types were used: commercially available Nanocyl NC7000™ (9.6 nm diameter, 1.5 µm length, 90% purity) and thicker MWCNTs synthesized in-house using catalytic chemical vapour deposition (c-CVD). The in-house CNTs had diameters ranging between 60-70 nm and lengths up to 300 µm. Since the classical Ullmann reaction was found to suffer from poor yields, we have investigated the effect of various solvents (toluene, acetonitrile, dimethyl sulfoxide, and N,N-dimethylformamide) on the coupling of the substrates. Because aryl halides show the reactivity order I>Br>Cl>F, we have also investigated the effect of the presence of iodine on the CNT surface on the reaction yield; in this case, in the first step we used iodine monochloride instead of iodine trichloride. Finally, we used the optimized reaction conditions with p-bromophenol and 1,2,4-trihydroxybenzene for the control of CNT dispersion.
Keywords: carbon nanotubes, coupling reaction, functionalization, Ullmann reaction
Procedia PDF Downloads 168
112 Connecting MRI Physics to Glioma Microenvironment: Comparing Simulated T2-Weighted MRI Models of Fixed and Expanding Extracellular Space
Authors: Pamela R. Jackson, Andrea Hawkins-Daarud, Cassandra R. Rickertsen, Kamala Clark-Swanson, Scott A. Whitmire, Kristin R. Swanson
Abstract:
Glioblastoma Multiforme (GBM), the most common primary brain tumor, often presents with hyperintensity on T2-weighted or T2-weighted fluid-attenuated inversion recovery (T2/FLAIR) magnetic resonance imaging (MRI). This hyperintensity corresponds to vasogenic edema; however, there are likely many infiltrating tumor cells within the hyperintensity as well. While MRIs do not directly indicate tumor cells, they do reflect the microenvironmental water abnormalities caused by the presence of tumor cells and edema. The inherent heterogeneity of GBMs and the resulting MRI features complicate assessing disease response. To understand how hyperintensity on T2/FLAIR MRI may correlate with edema in the extracellular space (ECS), we explored a multi-compartmental MRI signal equation that takes into account tissue compartments and their associated volumes, with input coming from a mathematical model of glioma growth that incorporates edema formation. The reasonableness of two possible extracellular space schemes was evaluated by varying the T2 of the edema compartment and calculating the possible resulting T2 values in tumor and peripheral edema. In the mathematical model, gliomas were comprised of vasculature and three tumor cellular phenotypes: normoxic, hypoxic, and necrotic. Edema was characterized as fluid leaking from abnormal tumor vessels. Spatial maps of tumor cell density and edema for virtual tumors were simulated with different rates of proliferation and invasion and various ECS expansion schemes. These spatial maps were then passed into a multi-compartmental MRI signal model to generate simulated T2/FLAIR MR images. The T2 values of the individual compartments in the signal equation were either taken from the literature or estimated, and the T2 for edema specifically was varied over a wide range (200 ms - 9200 ms). T2 maps were calculated from the simulated images, and T2 values were evaluated for regions of interest (ROIs) in normal-appearing white matter, tumor, and peripheral edema, then compared to T2 values reported in the literature. The expanding extracellular space scheme produced T2 values similar to the values calculated from the literature. The static extracellular space scheme had much lower T2 values, and no matter what T2 was associated with edema, the intensities did not come close to the literature values. Expanding the extracellular space is therefore necessary to achieve simulated edema intensities commensurate with acquired MRIs.
Keywords: extracellular space, glioblastoma multiforme, magnetic resonance imaging, mathematical modeling
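A common form of such a multi-compartment signal equation is S(TE) = PD * sum_i v_i * exp(-TE / T2_i), with the fractional volumes v_i summing to one. The sketch below uses this form with assumed compartment volumes and T2 values in the ranges discussed; it illustrates the idea and is not the study's fitted parameter set.

```python
# Sketch of a multi-compartment T2 signal model and the effective T2 that a
# two-echo T2 map would recover; all volumes and T2 values are assumptions.
import numpy as np

def t2w_signal(te, volumes, t2s, proton_density=1.0):
    """S(TE) = PD * sum_i v_i * exp(-TE / T2_i), with sum_i v_i = 1."""
    volumes = np.asarray(volumes, dtype=float)
    t2s = np.asarray(t2s, dtype=float)
    return proton_density * np.sum(volumes * np.exp(-te / t2s))

# Voxel with tumor cells, normal tissue, and an expanded extracellular/edema space
volumes = [0.3, 0.3, 0.4]        # fractional volumes (must sum to 1)
t2s_ms = [80.0, 70.0, 1000.0]    # per-compartment T2 (ms); edema T2 is varied widely

# Effective mono-exponential T2 recovered from two echoes, as in a T2 map
te1, te2 = 30.0, 100.0           # echo times (ms)
s1 = t2w_signal(te1, volumes, t2s_ms)
s2 = t2w_signal(te2, volumes, t2s_ms)
t2_effective = (te2 - te1) / np.log(s1 / s2)
print(f"effective T2 of the voxel: {t2_effective:.1f} ms")
```

Raising the edema volume fraction or its T2 in this toy voxel raises the effective T2, which is the mechanism by which the expanding-ECS scheme pushes simulated intensities toward the values reported for peritumoral edema.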
Procedia PDF Downloads 235
111 Astronomy in the Education Area: A Narrative Review
Authors: Isabella Lima Leite de Freitas
Abstract:
The importance of astronomy for humanity is unquestionable. Besides being a robust science, capable of bringing new discoveries every day and quickly increasing researchers' ability to understand the universe more deeply, scientific research in this area can also help in various applications outside the domain of astronomy. The objective of this study was to review and conduct a descriptive analysis of published studies that present the importance of astronomy in the area of education. A narrative review of the literature was performed, considering the articles published in the last five years. As astronomy involves the study of physics, chemistry, biology, mathematics, and technology, one of the studies evaluated presented astronomy as the gateway to science, demonstrating the presence of astronomy in 52 school curricula in 37 countries, with celestial movement the dominant content area. Another intervention study, evaluating individuals aged 4-5 years, demonstrated that the attribution of personal characteristics to cosmic bodies, in addition to the use of comprehensive astronomy concepts, favored the learning of science in preschool-age children, through the use of practical observation activities and free drawing. Aiming to measure scientific literacy, another study, developed in Turkey, motivated the authorities of that country to change the teaching materials and curriculum of secondary schools after the term "astronomy" appeared as one of the most attractive subjects for young people aged 15 to 24. There are also reports in the literature of the use of pedagogical tools such as the representation of the Solar System on a human scale, where students can walk along the orbits of the planets while studying the laws of dynamics. The use of this tool favored the teaching of the relationship between distance, duration, and speed over the orbital periods of the planets, in addition to improving the motivation and well-being of students aged 14-16. An important impact of astronomy on education was demonstrated in a study that evaluated the participation of high school students in astronomical olympiads and the International Astronomy Olympiad; it concluded that these olympiads have considerable influence on students who later pursue a career in teaching or research, many of them in astronomy itself. In addition, the literature indicates that the teaching of astronomy in the digital age has facilitated the availability of data for researchers, but also for the general population. This fact can further increase the curiosity that astronomy has always instilled in people and promote the dissemination of knowledge on an expanded scale. Currently, astronomy is considered an important ally in strengthening the school curricula of children, adolescents, and young adults. It is used as a teaching tool and is extremely useful for scientific literacy, being increasingly employed in the area of education.
Keywords: astronomy, education area, teaching, review
Procedia PDF Downloads 103
110 Experimental Studies of the Reverse Load-Unloading Effect on the Mechanical, Linear and Nonlinear Elastic Properties of n-AMg6/C60 Nanocomposite
Authors: Aleksandr I. Korobov, Natalia V. Shirgina, Aleksey I. Kokshaiskiy, Vyacheslav M. Prokhorov
Abstract:
The paper presents the results of an experimental study of the effect of reverse mechanical loading-unloading on the mechanical, linear, and nonlinear elastic properties of the n-AMg6/C60 nanocomposite. Samples of the n-AMg6/C60 nanocomposite were obtained by grinding polycrystalline AMg6 alloy with 0.3 wt% of C60 fullerite in a planetary mill in an argon atmosphere. The resulting product consisted of 200-500 µm agglomerates of nanoparticles. X-ray coherent scattering analysis showed that the average nanoparticle size is 40-60 nm. The resulting preform was extruded at high temperature; the C60 fullerite modification interferes with the recrystallization process at the grain boundaries. For the n-AMg6/C60 nanocomposite samples, the load curve was measured: the dependence of the mechanical stress σ on the strain ε of the sample under a multi-cycle load-unload process up to destruction. A hysteresis dependence σ = σ(ε) was observed, and an insignificant residual strain ε < 0.005 was recorded. At σ ≈ 500 MPa and ε ≈ 0.025, the sample was destroyed; the destruction was brittle. Microhardness was measured before and after the destruction of the sample, and it was found that the load-unload process led to an increase in microhardness. The effect of reversible mechanical stress on the linear and nonlinear elastic properties of the n-AMg6/C60 nanocomposite was studied experimentally by an ultrasonic method on the automated Ritec RAM-5000 SNAP SYSTEM. The velocities of the longitudinal and shear bulk waves were measured with the pulse method, and all the second-order elasticity coefficients and their dependence on the magnitude of the reversible mechanical stress applied to the sample were calculated. The nonlinear elastic properties of the n-AMg6/C60 nanocomposite under reversible load-unload of the sample were studied with the spectral method. At arbitrary values of the strain of the sample (up to its breakage), the dependence of the amplitude of the second longitudinal acoustic harmonic at a frequency of 2f = 10 MHz on the amplitude of the first harmonic at a frequency f = 5 MHz was measured. Based on these measurements, the values of the nonlinear acoustic parameter in the n-AMg6/C60 nanocomposite sample at different mechanical stresses were determined. The obtained results can be used in solid-state physics and materials science, and for the development of new techniques for nondestructive testing of structural materials using methods of nonlinear acoustic diagnostics. This study was supported by the Russian Science Foundation (project No. 14-22-00042).
Keywords: nanocomposite, generation of acoustic harmonics, nonlinear acoustic parameter, hysteresis
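For a longitudinal wave in a quadratically nonlinear solid, the second-harmonic amplitude grows as A2 = beta * k^2 * x * A1^2 / 8, so a nonlinear parameter can be estimated from the slope of A2 versus A1^2. The sketch below assumes this standard relation with made-up amplitudes and material values; the authors' exact definition of their nonlinear acoustic parameter may differ.

```python
# Sketch of the standard second-harmonic estimate of the quadratic nonlinearity
# parameter, beta = 8*A2 / (k^2 * x * A1^2); all numbers below are assumptions.
import numpy as np

f = 5e6                  # fundamental frequency (Hz), as in the experiment
c = 6000.0               # assumed longitudinal velocity (m/s)
x = 0.02                 # assumed propagation distance (m)
k = 2 * np.pi * f / c    # wavenumber of the fundamental

# Hypothetical measured amplitudes (m): fundamental A1 and second harmonic A2
A1 = np.array([1.0e-9, 2.0e-9, 3.0e-9, 4.0e-9])
A2 = np.array([0.8e-12, 3.1e-12, 7.2e-12, 12.9e-12])

# beta follows from the slope of A2 versus A1^2 (least squares through origin)
slope = np.sum(A2 * A1**2) / np.sum(A1**4)
beta = 8 * slope / (k**2 * x)
print(f"estimated nonlinear acoustic parameter beta: {beta:.2f}")
```

Tracking how this estimate changes between load-unload cycles is what reveals the stress dependence of the nonlinear parameter reported above.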
Procedia PDF Downloads 151
109 A Concept in Addressing the Singularity of the Emerging Universe
Authors: Mahmoud Reza Hosseini
Abstract:
The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. By extrapolating back from its current state, the universe at its early times has been studied, leading to what is known as the big bang theory. According to this theory, moments after its creation the universe was an extremely hot and dense environment; its rapid expansion led to a reduction in its temperature and density, as evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating back further from this early state reaches a singularity that cannot be explained by modern physics, where the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion, yet highly accurate measurements reveal an equal temperature mapping across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements that formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows a uniform distribution of energy, so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a "neutral state", with an energy level referred to as "base energy", capable of converting into other states. Although it follows the same principles, the unique quantum state of the base energy allows it to be distinguished from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, to establish a complete picture the origin of the base energy should also be identified. This matter is the subject of the first study in the series, "A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing", where it is discussed in detail. The concept proposed in this research series thus provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy being one of the main building blocks of this universe.
Keywords: big bang, cosmic inflation, birth of universe, energy creation
Procedia PDF Downloads 89
108 Nanofluidic Cell for Resolution Improvement of Liquid Transmission Electron Microscopy
Authors: Deybith Venegas-Rojas, Sercan Keskin, Svenja Riekeberg, Sana Azim, Stephanie Manz, R. J. Dwayne Miller, Hoc Khiem Trieu
Abstract:
Liquid Transmission Electron Microscopy (TEM) is a growing area with a broad range of applications from physics and chemistry to materials engineering and biology, in which it is possible to image previously unseen phenomena in situ. For this, a nanofluidic device is used to insert the nanoflow with the sample inside the microscope, keeping the liquid encapsulated against the high vacuum. In recent years, Si3N4 windows have been widely used because of their mechanical stability and low imaging contrast. Nevertheless, the pressure difference between the fluid inside and the vacuum in the TEM causes the windows to bulge. This increases the imaged fluid volume, which decreases the signal-to-noise ratio (SNR) and limits the achievable spatial resolution. In the proposed device, the membrane is fortified with a microstructure capable of withstanding higher pressure differences and almost completely eliminating the bulging. A theoretical study is presented with Finite Element Method (FEM) simulations, which provide a deep understanding of the mechanical conditions of the membrane and prove the effectiveness of this novel concept. Bulging and von Mises stress were studied for different membrane dimensions, geometries, materials, and thicknesses. The device was microfabricated from a thin wafer coated with thin layers of SiO2 and Si3N4. After the lithography process, these layers were etched by reactive ion etching and buffered oxide etch (BOE), respectively. The microstructure was then etched by deep reactive ion etching, after which the backside SiO2 was etched (BOE) and the array of free-standing micro-windows was obtained. Additionally, a Pyrex wafer was patterned with windows and inlets/outlets and bonded (anodic bonding) to the Si side to facilitate handling of the thin wafer. Finally, a thin spacer was sputtered and patterned with microchannels and trenches to guide the nanoflow with the samples. This approach considerably reduces the common bulging problem of the windows, improving the SNR, contrast, and spatial resolution, and substantially increases the mechanical stability of the windows, allowing a larger viewing area. These developments lead to a wider range of applications of liquid TEM, expanding the spectrum of possible experiments in the field.
Keywords: liquid cell, liquid transmission electron microscopy, nanofluidics, nanofluidic cell, thin films
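A rough analytic counterpart to the FEM study is small-deflection plate theory for a clamped circular window: w_max = p * a^4 / (64 * D), with flexural rigidity D = E * t^3 / (12 * (1 - nu^2)). The sketch below uses this linear estimate with typical Si3N4 literature values; real membranes under about 1 bar enter the large-deflection, stress-dominated regime, so only the strong a^4 / t^3 scaling that the microstructuring exploits should be read from the numbers.

```python
# Order-of-magnitude sketch, not the FEM study: clamped circular plate under
# uniform pressure, w_max = p*a^4 / (64*D), D = E*t^3 / (12*(1 - nu^2)).
# Material values are typical literature numbers, assumed here.
import numpy as np

E = 250e9    # Young's modulus of Si3N4 (Pa), assumed typical value
nu = 0.23    # Poisson's ratio of Si3N4, assumed typical value
p = 1.0e5    # pressure difference (Pa): liquid at ~1 bar vs. TEM vacuum

def center_bulge(radius_m, thickness_m):
    D = E * thickness_m**3 / (12 * (1 - nu**2))  # flexural rigidity
    return p * radius_m**4 / (64 * D)            # center deflection

# Smaller sub-windows (as created by the supporting microstructure) bulge far
# less than one large free-standing window, thanks to the a^4 dependence.
for a_um in (2, 5, 10):
    w = center_bulge(a_um * 1e-6, 50e-9)         # 50 nm membrane, assumed
    print(f"window radius {a_um:>2} um -> center bulge ~ {w*1e9:,.0f} nm")
```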
Procedia PDF Downloads 255
107 Integrated Management System Applied in Dismantling and Waste Management of the Primary Cooling System from the VVR-S Nuclear Reactor Magurele, Bucharest
Authors: Radu Deju, Carmen Mustata
Abstract:
The VVR-S nuclear research reactor owned by the Horia Hulubei National Institute of Physics and Nuclear Engineering (IFIN-HH) was designed for research and radioisotope production and was permanently shut down in 2002, after 40 years of operation. All of the S-36 and EK-10 type spent nuclear fuel was returned to the Russian Federation (the first shipment in 2009 and the last in 2012), and the radioactive waste resulting from its reprocessing will remain permanently in the Russian Federation. The decommissioning strategy chosen is immediate dismantling. At this moment, the radionuclides with half-lives shorter than 1 year make only a minor contribution to the contamination of materials and equipment used in the reactor department. The decommissioning of the reactor started in 2010 and is planned to be finalized in 2020, this being the first nuclear research reactor in South-East Europe to enter decommissioning. The management system applied in the decommissioning of the VVR-S research reactor integrates all common elements of management: nuclear safety, occupational health and safety, environment, quality (compliance with the requirements for decommissioning activities), physical protection, and economic elements. This paper presents the application of the integrated management system in the decommissioning of systems, structures, equipment, and components (SSEC) from the pumps room, including the management of the resulting radioactive waste. The primary cooling system of this type of reactor includes circulation pumps, heat exchangers, a degasser, filter ion exchangers, piping connections, a drainage system, and collection of radioactive leaks. All the decommissioning activities of the primary circuit were performed in stage 2 (year 2014), and they were developed and recorded according to the applicable documents, within the requirements of the Regulatory Body licenses. The presentation emphasizes how the provisions of the integrated management system are applied in the dismantling of the primary cooling system, in the elaboration, approval, and application of the necessary documentation, and in record keeping before, during, and after the dismantling activities. Radiation protection and economics are the key factors for the selection of the proper technology; dedicated and advanced technologies were chosen to perform specific tasks, and safety aspects have been taken into consideration. Resource constraints have also been an important issue considered in defining the decommissioning strategy. Important aspects like radiological monitoring of personnel and areas, decontamination, waste management, and final characterization of the released site are demonstrated and documented.
Keywords: decommissioning, integrated management system, nuclear reactor, waste management
Procedia PDF Downloads 289
106 Dosimetric Comparison among Different Head and Neck Radiotherapy Techniques Using PRESAGE™ Dosimeter
Authors: Jalil ur Rehman, Ramesh C. Tailor, Muhammad Isa Khan, Jahnzeeb Ashraf, Muhammad Afzal, Geofferry S. Ibbott
Abstract:
Purpose: The purpose of this analysis was to investigate the dose distributions of different head and neck radiotherapy techniques (3D-CRT, IMRT, and VMAT) using a 3-dimensional dosimeter called the PRESAGE™ dosimeter. Materials and Methods: Computed tomography (CT) scans of the Radiological Physics Center (RPC) head and neck anthropomorphic phantom, with both the RPC standard insert and the PRESAGE™ insert, were acquired separately with a Philips CT scanner, and both CT scans were exported via DICOM to the Pinnacle version 9.4 treatment planning system (TPS). Each plan was delivered twice to the RPC phantom using a Varian TrueBeam linear accelerator: first with the phantom containing the RPC standard insert, which holds TLD and film dosimeters, and then again with the phantom containing the PRESAGE™ insert, which holds the 3-D dosimeter. After irradiation, the point dose (TLD) and planar Gafchromic® EBT film measurements of the standard insert were read using the RPC standard procedure. The 3D dose distribution from the PRESAGE™ was read out with the Duke Midsized Optical Scanner dedicated to the RPC (DMOS-RPC). Dose-volume histograms (DVH) and mean and maximal doses for organs at risk were calculated and compared among the head and neck techniques. The prescription dose was the same for all head and neck radiotherapy techniques, 6.60 Gy/fraction. Beam profile comparison and gamma analysis were used to quantify agreement among the film measurement, the PRESAGE™ measurement, and the calculated dose distribution. Quality assurance of all plans was performed using the ArcCHECK method. Results: VMAT delivered the lowest mean and maximum doses to the organs at risk (spinal cord, parotid) of the three techniques; this dose distribution was verified by absolute point dose measurements using the thermoluminescent dosimeter (TLD) system. The central axial, sagittal, and coronal planes were evaluated using 2D gamma map criteria (±5%/3 mm); the results were 99.82% (axial), 99.78% (sagittal), and 98.38% (coronal) for the VMAT plan, excluding a 7 mm rim at the edge of the dosimeter, and the agreement between PRESAGE™ and Pinnacle was better than for the IMRT and 3D-CRT plans. Profiles showed good agreement among film, PRESAGE™, and Pinnacle for all plans, and 3D gamma analysis performed for the PTV and OARs showed that VMAT and 3D-CRT gave better agreement than IMRT. Conclusion: VMAT delivered lower mean and maximal doses to organs at risk and better PTV coverage during head and neck radiotherapy. The TLD, EBT film, and PRESAGE™ dosimeters suggest that VMAT is better for the treatment of head and neck cancer than IMRT and 3D-CRT.
Keywords: RPC, 3DCRT, IMRT, VMAT, EBT2 film, TLD, PRESAGE™
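The 2D gamma evaluation quoted above (criteria of ±5%/3 mm) can be sketched with a brute-force implementation. Grid spacing, global dose normalization, and the search radius below are illustrative choices, not the analysis software actually used in the study.

```python
# Simple brute-force 2D gamma analysis (global 5%/3 mm), for illustration only.
import numpy as np

def gamma_pass_rate(ref, evalu, spacing_mm=1.0, dd=0.05, dta_mm=3.0):
    """Fraction of reference points with gamma <= 1 (global dose normalization)."""
    d_norm = dd * ref.max()                    # 5% of the global maximum dose
    r = int(np.ceil(dta_mm / spacing_mm)) + 1  # search window in pixels
    ny, nx = ref.shape
    gamma_sq = np.full(ref.shape, np.inf)
    for i in range(ny):
        for j in range(nx):
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < ny and 0 <= jj < nx:
                        dist2 = (di**2 + dj**2) * spacing_mm**2
                        dose2 = (evalu[ii, jj] - ref[i, j])**2
                        g2 = dist2 / dta_mm**2 + dose2 / d_norm**2
                        gamma_sq[i, j] = min(gamma_sq[i, j], g2)
    return float(np.mean(np.sqrt(gamma_sq) <= 1.0))

# Usage sketch: gamma_pass_rate(film_plane, presage_plane) returning 0.998
# would correspond to a 99.8% passing rate like those reported above.
```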
Procedia PDF Downloads 395