Search results for: linear congruential algorithm
458 Optical and Near-UV Spectroscopic Properties of Low-Redshift Jetted Quasars in the Main Sequence Context
Authors: Shimeles Terefe Mengistue, Ascensión Del Olmo, Paola Marziani, Mirjana Pović, María Angeles Martínez-Carballo, Jaime Perea, Isabel Márquez
Abstract:
Quasars have historically been classified into two distinct classes, radio-loud (RL) and radio-quiet (RQ), according to the presence or absence of relativistic radio jets, respectively. The absence of spectra with a high S/N ratio led to the impression that all quasars (QSOs) are spectroscopically similar. Although different attempts have been made to unify these two classes, there is a long-standing open debate involving the possibility of a real physical dichotomy between RL and RQ quasars. In this work, we present new high-S/N spectra of 11 extremely powerful jetted quasars with a radio-to-optical flux density ratio > 1000 that concomitantly cover the low-ionization emission of Mg II λ2800 and Hβ as well as the Fe II blends in the redshift range 0.35 < z < 1, observed at Calar Alto Observatory (Spain). This work aims to quantify broad emission line differences between RL and RQ quasars by using the four-dimensional eigenvector 1 (4DE1) parameter space and its main sequence (MS), and to check the effect of powerful radio ejection on the low-ionization broad emission lines. Emission lines are analysed using two complementary approaches: a multicomponent non-linear fitting to account for the individual components of the broad emission lines, and an analysis of the full line profiles through parameters such as total widths, centroid velocities at different fractional intensities, asymmetry, and kurtosis indices. It is found that the broad emission lines show large redward asymmetry in both Hβ and Mg II λ2800. The location of our RL sources in a UV plane looks similar to the optical one, with weak Fe II UV emission and broad Mg II λ2800. We supplement the 11 sources with large samples from previous work to draw some general inferences. The results show that, compared to RQ quasars, our extreme RL quasars show a larger median Hβ full width at half maximum (FWHM), weaker Fe II emission, larger M_BH, lower L_bol/L_Edd, and a restricted space occupation in the optical and UV MS planes. The differences are more elusive when the comparison is carried out by restricting the RQ population to the region of the MS occupied by RL quasars, albeit an unbiased comparison matching M_BH and L_bol/L_Edd suggests that the most powerful RL quasars show the highest redward asymmetries in Hβ.
Keywords: galaxies: active, line: profiles, quasars: emission lines, supermassive black holes
Procedia PDF Downloads 60
457 Coupling Random Demand and Route Selection in the Transportation Network Design Problem
Authors: Shabnam Najafi, Metin Turkay
Abstract:
The network design problem (NDP) is used to determine the set of optimal values for certain pre-specified decision variables, such as the capacity expansion of nodes and links, by optimizing various system performance measures including safety, congestion, and accessibility. The designed transportation network should improve the objective functions defined for the system while accounting for the route choice behavior of network users. NDP studies have mostly investigated the random demand and route selection constraints separately, due to computational challenges. In this work, we consider both random demand and route selection constraints simultaneously. We present a nonlinear stochastic model for the land use and road network design problem to address the development of different functional zones in urban areas by considering both a cost function and air pollution. This model minimizes the cost function and air pollution simultaneously, with random demand and a stochastic route selection constraint, and aims to optimize network performance via road capacity expansion. The Bureau of Public Roads (BPR) link impedance function is used to determine the travel time on each link. We consider a city with origin and destination nodes, which can be residential, employment, or both, and a set of existing paths between origin-destination (O-D) pairs. The case of an increasing employed population is analyzed to determine road capacities and origin zones simultaneously. Minimizing the travel and expansion costs of routes and origin zones on the one hand, and minimizing CO emissions on the other, are considered in this analysis at the same time. In this work, demand between O-D pairs is random, and the network flow pattern is subject to stochastic user equilibrium, specifically a logit route choice model. Treating both demand and route choice as random is more applicable to the design of urban networks. The epsilon-constraint method is one of the methods for solving both linear and nonlinear multi-objective problems, and it is used to solve the problem in this work. The problem was solved by keeping the first objective (the cost function) as the objective function of the problem and turning the second objective into a constraint that should be less than an epsilon, where epsilon is an upper bound of the emission function. The value of epsilon is varied from the worst to the best value of the emission function to generate the family of solutions representing the Pareto set. A numerical example with 2 origin zones, 2 destination zones, and 7 links is solved in GAMS, and the set of Pareto points is obtained. There are 15 efficient solutions. According to these solutions, as the cost function value increases, the emission function value decreases, and vice versa.
Keywords: epsilon-constraint, multi-objective, network design, stochastic
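The two computational ingredients named above, the BPR link impedance function and the epsilon-constraint sweep, can be sketched in a few lines. The toy model below is not the paper's GAMS formulation: the two-link network, cost coefficients, and emission function are invented placeholders used only to show the mechanics of sweeping epsilon from the worst to the best emission value.

```python
# Hedged sketch: epsilon-constraint sweep for a toy bi-objective network
# design problem with BPR travel times. All numbers are illustrative.
import numpy as np
from scipy.optimize import minimize

def bpr_time(flow, t0, capacity, alpha=0.15, beta=4.0):
    """Bureau of Public Roads link impedance function."""
    return t0 * (1.0 + alpha * (flow / capacity) ** beta)

DEMAND = 100.0  # trips between one O-D pair (assumed)

def cost(x):
    """Objective 1: travel + expansion cost on two parallel links.
    x = [flow on link 1, capacity expansion on link 1]."""
    f1, y1 = x
    f2 = DEMAND - f1
    t1 = bpr_time(f1, t0=1.0, capacity=60.0 + y1)
    t2 = bpr_time(f2, t0=1.5, capacity=80.0)
    return f1 * t1 + f2 * t2 + 0.5 * y1  # 0.5 = assumed unit expansion cost

def emission(x):
    """Objective 2: emission, assumed to grow with congested travel time."""
    f1, y1 = x
    f2 = DEMAND - f1
    return 0.2 * f1 * bpr_time(f1, 1.0, 60.0 + y1) + \
           0.3 * f2 * bpr_time(f2, 1.5, 80.0)

pareto = []
for eps in np.linspace(40.0, 25.0, 15):  # sweep the emission bound worst -> best
    res = minimize(cost, x0=[50.0, 10.0],
                   bounds=[(0.0, DEMAND), (0.0, 100.0)],
                   constraints=[{"type": "ineq",
                                 "fun": lambda x, e=eps: e - emission(x)}])
    if res.success:
        pareto.append((round(res.fun, 2), round(emission(res.x), 2)))
print(pareto)  # cost rises as the emission bound tightens, tracing the Pareto set
```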
Procedia PDF Downloads 647
456 Homeostatic Analysis of the Integrated Insulin and Glucagon Signaling Network: Demonstration of Bistable Response in Catabolic and Anabolic States
Authors: Pramod Somvanshi, Manu Tomar, K. V. Venkatesh
Abstract:
Insulin and glucagon are responsible for the homeostasis of key metabolites, such as glucose, amino acids, and fatty acids, in the blood plasma. These hormones act antagonistically to each other during the secretion and signaling stages. In the present work, we analyze the effect of macronutrients on the response of the integrated insulin and glucagon signaling pathways. The insulin and glucagon pathways are connected by DAG (a calcium signaling component that is part of the glucagon signaling module), which activates PKC and inhibits IRS (an insulin signaling component), constituting one crosstalk. AKT (an insulin signaling component) inhibits cAMP (a glucagon signaling component) through PDE3, forming the other crosstalk between the two signaling pathways. The physiological level of anabolism and catabolism is captured through a metric quantified by the activity levels of AKT and PKA in their phosphorylated states, which represent the insulin and glucagon signaling endpoints, respectively. Under resting and starving conditions, the phosphorylation metric represents homeostasis, indicating a balance between the anabolic and catabolic activities in the tissues. Steady state analysis of the integrated network demonstrates the presence of a bistable response in the phosphorylation metric with respect to input plasma glucose levels. This indicates that two steady state conditions (one in the homeostatic zone and the other in the anabolic zone) are possible for a given glucose concentration, depending on the ON or OFF path. When glucose levels rise above normal, during post-meal conditions, the bistability is observed in the anabolic space, denoting the dominance of glycogenesis in the liver. For glucose concentrations lower than physiological levels, as during exercise, the metabolic response lies in the catabolic space, denoting the prevalence of glycogenolysis in the liver. The non-linear positive feedback of AKT on IRS in the insulin signaling module of the network is the main cause of the bistable response. The span of bistability in the phosphorylation metric increases as plasma fatty acid and amino acid levels rise, and eventually the response turns monostable and catabolic, representing diabetic conditions. In the case of a high-fat or high-protein diet, fatty acids and amino acids have an inhibitory effect on the insulin signaling pathway by increasing the serine phosphorylation of IRS protein via the activation of PKC and S6K, respectively. A similar analysis was also performed with respect to input amino acid and fatty acid levels. This emergent property of bistability in the integrated network helps us understand why it becomes extremely difficult to treat obesity and diabetes when the blood glucose level rises beyond a certain value.
Keywords: bistability, diabetes, feedback and crosstalk, obesity
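The ON/OFF-path behavior described above can be reproduced with a minimal positive-feedback model. The single-variable Hill equation below is a hedged stand-in for the full integrated network, not the authors' model; it only illustrates how an AKT-IRS-type positive feedback yields two coexisting steady states and a hysteresis loop under a glucose sweep.

```python
# Hedged sketch: hysteresis from a positive-feedback loop, a minimal
# stand-in for the AKT -> IRS feedback in the integrated network above.
# The one-variable Hill model and all rate constants are illustrative.
import numpy as np

def steady_state(glucose, a0, steps=5000, dt=0.01):
    """Relax dA/dt = glucose*(0.1 + A^4/(1+A^4)) - A from initial state a0."""
    a = a0
    for _ in range(steps):
        a += dt * (glucose * (0.1 + a**4 / (1.0 + a**4)) - a)
    return a

glc = np.linspace(0.5, 4.0, 25)
a, on_path = 0.05, []                 # start low: the catabolic branch
for g in glc:                         # ON path: glucose rising
    a = steady_state(g, a)
    on_path.append(a)
off_path = []
for g in glc[::-1]:                   # OFF path: glucose falling
    a = steady_state(g, a)
    off_path.append(a)
for g, up, down in zip(glc, on_path, off_path[::-1]):
    flag = "bistable" if abs(up - down) > 0.5 else ""
    print(f"glucose={g:.2f}  ON={up:.2f}  OFF={down:.2f}  {flag}")
# In the window where ON and OFF disagree, two stable steady states coexist
# for the same glucose level, mirroring the reported ON/OFF-path dependence.
```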
Procedia PDF Downloads 276
455 Detection of Abnormal Process Behavior in Copper Solvent Extraction by Principal Component Analysis
Authors: Kirill Filianin, Satu-Pia Reinikainen, Tuomo Sainio
Abstract:
Frequent measurements of product stream quality create a data overload that becomes more and more difficult to handle. In the current study, plant history data with multiple variables was successfully treated by principal component analysis to detect abnormal process behavior, particularly in copper solvent extraction. The multivariate model is based on the concentration levels of the main process metals recorded by an industrial on-stream x-ray fluorescence analyzer. After mean-centering and normalization of the concentration data set, a two-dimensional multivariate model based on the principal component analysis algorithm was constructed. Normal operating conditions were defined through control limits assigned to squared score values on the x-axis and to residual values on the y-axis. 80 percent of the data set was taken as the training set, and the multivariate model was tested with the remaining 20 percent. Model testing showed the successful application of control limits to detect abnormal behavior of the copper solvent extraction process as early warnings. Compared to the conventional technique of analyzing one variable at a time, the proposed model makes it possible to detect a process failure on-line using information from all process variables simultaneously. Complex industrial equipment combined with advanced mathematical tools may be used for on-line monitoring of both process stream composition and final product quality. Defining the normal operating conditions of the process supports reliable decision making in the process control room. Thus, industrial x-ray fluorescence analyzers equipped with an integrated data processing toolbox allow more flexibility in copper plant operation. Additional multivariate process control and monitoring procedures are recommended to be applied separately for the major components and for the impurities. Principal component analysis may be utilized not only for controlling the major elements' content in process streams, but also for continuous monitoring of the plant feed. The proposed approach has potential in on-line instrumentation, providing a fast, robust, and cheap application with automation abilities.
Keywords: abnormal process behavior, failure detection, principal component analysis, solvent extraction
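A hedged sketch of the described monitoring scheme follows: mean-centering and normalization, a two-component PCA model, and control limits on the squared scores (Hotelling T²) and residuals (Q), with an 80/20 train/test split. The synthetic data and the empirical 99% limits are assumptions; the study's plant data and actual limit definitions are not reproduced here.

```python
# Hedged sketch: PCA-based fault detection with Hotelling T^2 (squared
# scores) and Q (residual) control limits, mirroring the 80/20 split above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))           # stand-in for metal concentrations
X[450:] += 3.0                          # inject an abnormal episode

n_train = int(0.8 * len(X))
scaler = StandardScaler().fit(X[:n_train])          # mean-center + normalize
Z_train = scaler.transform(X[:n_train])
Z_test = scaler.transform(X[n_train:])

pca = PCA(n_components=2).fit(Z_train)              # two-dimensional model

def t2_and_q(Z):
    scores = pca.transform(Z)
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)  # squared scores
    resid = Z - pca.inverse_transform(scores)
    q = np.sum(resid**2, axis=1)                              # residuals
    return t2, q

t2_train, q_train = t2_and_q(Z_train)
t2_lim = np.percentile(t2_train, 99)    # empirical 99% control limits
q_lim = np.percentile(q_train, 99)      # (assumed; not the study's definition)

t2_test, q_test = t2_and_q(Z_test)
alarms = (t2_test > t2_lim) | (q_test > q_lim)
print(f"{alarms.sum()} of {len(alarms)} test samples flagged as abnormal")
```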
Procedia PDF Downloads 310
454 On-Ice Force-Velocity Modeling Technical Considerations
Authors: Dan Geneau, Mary Claire Geneau, Seth Lenetsky, Ming -Chang Tsai, Marc Klimstra
Abstract:
Introduction: Horizontal force-velocity profiling (HFVP) involves modeling an athlete's linear sprint kinematics to estimate valuable maximum force and velocity metrics. This approach to performance modeling has been used in field-based team sports and has recently been introduced to ice hockey as a forward skating performance assessment. While preliminary data has been collected on ice, the distance constraints of the on-ice test restrict the ability of athletes to reach their maximal velocity, which limits the model's ability to effectively estimate athlete performance. This is especially true of more elite athletes. This report explores whether athletes on ice are able to reach a velocity plateau similar to what has been seen in overground trials. Fourteen male Major Junior ice hockey players (BW = 83.87 ± 7.30 kg, height = 188 ± 3.4 cm, age = 18 ± 1.2 years, n = 14) were recruited. For on-ice sprints, participants completed a standardized warm-up consisting of skating and dynamic stretching and a progression of three skating efforts from 50% to 95%. Following the warm-up, participants completed three on-ice 45 m sprints, with three minutes of rest between each trial. For overground sprints, participants completed a dynamic warm-up similar to that of the on-ice trials, followed by three 40 m overground sprint trials. For each trial (on-ice and overground), a radar device (Stalker ATS II, Texas, USA) aimed at the participant's waist was used to collect instantaneous velocity. Sprint velocities were modeled with a custom Python (version 3.2) script using a mono-exponential function, similar to previous work. To determine whether on-ice trials achieved a maximum velocity (plateau), the minimum acceleration values of the modeled data at the end of the sprint were compared (using a paired t-test) between on-ice and overground trials. Significant differences (p < 0.001) between overground and on-ice minimum accelerations were observed. On-ice trials consistently reported higher final acceleration values, indicating that a maximum maintained velocity (plateau) had not been reached. Based on these preliminary findings, it is suggested that reliable HFVP metrics cannot yet be collected from all ice-hockey populations using current methods. Elite male populations were not able to achieve a velocity plateau similar to what has been seen in overground trials, indicating the absence of a maximum velocity measure. With current velocity and acceleration modeling techniques, which depend on a velocity plateau, these results indicate the potential for error in on-ice HFVP measures. These findings therefore suggest that a greater on-ice sprint distance may be required, or that other velocity modeling techniques, where maximal velocity is not required for a complete profile, are needed.
Keywords: ice-hockey, sprint, skating, power
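The mono-exponential velocity model and the end-of-sprint acceleration check can be illustrated as follows. The synthetic radar trace, parameter values, and the use of scipy's curve_fit are assumptions standing in for the authors' custom script.

```python
# Hedged sketch: mono-exponential sprint-velocity fit of the kind described
# above, with the end-of-sprint acceleration used as the plateau check.
# The synthetic radar trace and parameter values are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, v_max, tau):
    """v(t) = v_max * (1 - exp(-t/tau)); its derivative is the acceleration."""
    return v_max * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 6.0, 120)                    # ~6 s on-ice 45 m sprint
v_obs = mono_exp(t, 9.0, 1.3) + rng.normal(0, 0.15, t.size)  # noisy radar

(v_max, tau), _ = curve_fit(mono_exp, t, v_obs, p0=[8.0, 1.0])
a_final = (v_max / tau) * np.exp(-t[-1] / tau)    # modeled end acceleration
print(f"v_max={v_max:.2f} m/s, tau={tau:.2f} s, final accel={a_final:.3f} m/s^2")
# A final acceleration well above zero suggests no velocity plateau was
# reached, which is the concern raised for the 45 m on-ice condition.
```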
Procedia PDF Downloads 100
453 The Yield of Neuroimaging in Patients Presenting to the Emergency Department with Isolated Neuro-Ophthalmological Conditions
Authors: Dalia El Hadi, Alaa Bou Ghannam, Hala Mostafa, Hana Mansour, Ibrahim Hashim, Soubhi Tahhan, Tharwat El Zahran
Abstract:
Introduction: Neuro-ophthalmological emergencies require prompt assessment and management to avoid vision- or life-threatening sequelae. Some require neuroimaging, most commonly CT and MRI of the brain, which can be over-used when not indicated; their yield remains dependent on multiple factors relating to the clinical scenario. Methods: A retrospective cross-sectional study was conducted by reviewing the electronic medical records of patients presenting to the Emergency Department (ED) with isolated neuro-ophthalmologic complaints. For each patient, data were collected on the clinical presentation, whether neuroimaging was performed (and which type), and the result of neuroimaging. The performed neuroimaging was analyzed, and its yield was determined. Results: A total of 211 patients were reviewed. The complaints or symptoms at presentation were: blurry vision, change in the visual field, transient vision loss, floaters, double vision, eye pain, eyelid droop, headache, dizziness, and others such as nausea or vomiting. In the ED, a total of 126 neuroimaging procedures were performed. Ninety-four imaging studies (74.6%) were normal, while 32 (25.4%) had relevant abnormal findings. Only two symptoms were significantly associated with abnormal imaging: blurry vision (p-value = 0.038) and visual field change (p-value = 0.014). Four physical exam findings were significantly associated with abnormal imaging: visual field defect (p-value = 0.016), abnormal pupil reactivity (p-value = 0.028), afferent pupillary defect (p-value = 0.018), and abnormal optic disc exam (p-value = 0.009). Conclusion: Risk indicators for abnormal neuroimaging in the setting of neuro-ophthalmological emergencies are blurred vision or changes in the visual field on history taking, while visual field irregularities, abnormal pupil reactivity with or without an afferent pupillary defect, and abnormal optic discs are the risk factors related to the physical exam. These findings, when present, should sway the ED physician towards neuroimaging, but individualizing each case remains of utmost importance to prevent time-consuming, resource-draining, and sometimes unnecessary workup. Overall, the findings suggest a well-structured, patient-centered algorithm to be followed by ED physicians.
Keywords: emergency department, neuro-ophthalmology, neuroimaging, risk indicators
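For illustration, a single risk indicator's association with abnormal imaging can be tested from a 2x2 contingency table. The sketch below assumes a chi-square test of independence was used; the cell counts are invented, although their column totals match the 32 abnormal and 94 normal studies reported.

```python
# Hedged sketch: how a single risk indicator's p-value can be obtained from
# a 2x2 contingency table, assuming a chi-square test of independence.
# The cell counts are illustrative, not the study's data.
from scipy.stats import chi2_contingency

#                abnormal imaging   normal imaging
table = [[20, 24],   # symptom present (e.g. blurry vision) - assumed counts
         [12, 70]]   # symptom absent
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
# p < 0.05 would flag the symptom as a risk indicator for abnormal imaging.
```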
Procedia PDF Downloads 179
452 Family Cohesion, Social Networks, and Cultural Differences in Latino and Asian American Help Seeking Behaviors
Authors: Eileen Y. Wong, Katherine Jin, Anat Talmon
Abstract:
Background: Help-seeking behaviors are highly contingent on socio-cultural factors such as ethnicity. Both Latino and Asian Americans underutilize mental health services compared to their White American counterparts. This difference may be related to the composition of one's social support system, which includes family cohesion and social networks. Previous studies have found that Latino families are characterized by higher levels of family cohesion and social support, and that Asian American families with greater family cohesion exhibit lower levels of help-seeking behaviors. While both are broadly considered collectivist communities, within-culture variability is also significant. Therefore, this study aims to investigate the relationship between help-seeking behaviors in the two cultures and levels of family cohesion and strength of social network. We also consider such relationships in light of previous traumatic events and diagnoses, particularly post-traumatic stress disorder (PTSD), to understand whether clinically diagnosed individuals differ in their strength of network and help-seeking behaviors. Method: An adult sample (N = 2,990) from the National Latino and Asian American Study (NLAAS) provided data on participants' social network, family cohesion, likelihood of seeking professional help, and DSM-IV diagnoses. T-tests compared Latino American (n = 1,576) and Asian American respondents (n = 1,414) in strength of social network, level of family cohesion, and likelihood of seeking professional help. Linear regression models were used to identify the probability of help-seeking behavior based on ethnicity, PTSD diagnosis, and strength of social network. Results: Help-seeking behavior was significantly associated with family cohesion and strength of social network. A higher frequency of expressing one's feelings with family significantly predicted lower levels of help-seeking behaviors (β = -.072, p = .017), while a higher frequency of spending free time with family significantly predicted higher levels of help-seeking behaviors (β = .129, p = .002) in the Asian American sample. The subjective importance of family relations compared to that of one's peers also significantly predicted higher levels of help-seeking behaviors (β = .095, p = .011) in the Asian American sample. The frequency of sharing one's problems with relatives significantly predicted higher levels of help-seeking behaviors (β = .113, p < .01) in the Latino American sample. A PTSD diagnosis did not have any significant moderating effect. Conclusion: Considering the underutilization of mental health services in Latino and Asian American minority groups, it is crucial to understand ways in which help-seeking behavior can be encouraged. Our findings suggest that different dimensions within family cohesion and social networks have differential impacts on help-seeking behavior. Given the multifaceted nature of family cohesion and cultural relevance, the implications of our findings for theory and practice will be discussed.
Keywords: family cohesion, social networks, Asian American, Latino American, help-seeking behavior
Procedia PDF Downloads 68
451 Simultaneous Measurement of Wave Pressure and Wind Speed with the Specific Instrument and the Unit of Measurement Description
Authors: Branimir Jurun, Elza Jurun
Abstract:
The focus of this paper is the description of an instrument called 'Quattuor 45' and the definition of wave pressure measurement. Special attention is given to the measurement of the wave pressure created by increasing wind speed, obtained with the instrument 'Quattuor 45' in the investigated area. The study begins with theoretical considerations and numerous up-to-date investigations related to waves approaching the coast. A detailed schematic view of the instrument is enriched with ground-plan and side-view pictures. The horizontal stability of the instrument is achieved by a mooring that relies on two concrete blocks. Vertical wave peak monitoring is ensured by one float above the instrument. The synthesis of horizontal stability and vertical wave peak monitoring makes it possible to create a representative database for wave pressure measurement. The instrument 'Quattuor 45' is named after the way the database is acquired: the electronic part of the instrument consists of the main 'Arduino' chip, its memory, four load cells with the appropriate modules, and a wind speed sensor (anemometer). The 'Arduino' chip is programmed to store two readings from each load cell and two readings from the anemometer on an SD card each second. The next part of the research is dedicated to data processing. All measured results are stored automatically in the database, after which detailed processing is carried out in MS Excel. The wave pressure measurement results are expressed in the unit kN/m². The paper also suggests a graphical presentation of the results as a multi-line graph, with the wave pressure on the left vertical axis, the wind speed on the right vertical axis, and the time of measurement on the horizontal axis. The paper proposes an algorithm for wind speed measurement, showing the results for two characteristic winds in the Adriatic Sea, called 'Bura' and 'Jugo'. The first is a northern wind that reaches high speeds, causing low and extremely steep waves, where the wave pressure is relatively weak. On the other hand, the southern wind 'Jugo' has a lower speed than the northern wind, but due to its long duration and constant speed, it causes extremely long and high waves with extremely high wave pressure.
Keywords: instrument, measuring unit, wave pressure metering, wind speed measurement
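A hedged sketch of the data-processing step follows: per-second load-cell and anemometer records are converted to wave pressure in kN/m² and drawn as the proposed multi-line graph with two vertical axes. The CSV layout, file name, and exposed sensor area are assumptions, not the instrument's specification.

```python
# Hedged sketch: turning per-second load-cell and anemometer records into
# wave pressure (kN/m^2) and the dual-axis plot described above. The CSV
# layout, sensor area, and file name are assumptions, not the instrument spec.
import matplotlib.pyplot as plt
import numpy as np

SENSOR_AREA_M2 = 0.45 * 0.45   # assumed exposed plate area of 'Quattuor 45'

# columns: t [s], four load-cell forces [N], wind speed [m/s] (assumed layout)
data = np.genfromtxt("quattuor45_log.csv", delimiter=",",
                     names=["t", "f1", "f2", "f3", "f4", "wind"])

total_force_n = data["f1"] + data["f2"] + data["f3"] + data["f4"]
pressure_kn_m2 = total_force_n / SENSOR_AREA_M2 / 1000.0  # N/m^2 -> kN/m^2

fig, ax_left = plt.subplots()
ax_left.plot(data["t"], pressure_kn_m2, label="wave pressure")
ax_left.set_xlabel("time of measurement [s]")
ax_left.set_ylabel("wave pressure [kN/m$^2$]")        # left vertical axis
ax_right = ax_left.twinx()
ax_right.plot(data["t"], data["wind"], color="tab:orange", label="wind speed")
ax_right.set_ylabel("wind speed [m/s]")               # right vertical axis
plt.show()
```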
Procedia PDF Downloads 198
450 Laminar Periodic Vortex Shedding over a Square Cylinder in Pseudoplastic Fluid Flow
Authors: Shubham Kumar, Chaitanya Goswami, Sudipto Sarkar
Abstract:
Pseudoplastic fluid flow (n < 1, n being the power index) can be found in the food, pharmaceutical, and process industries and has a very complex flow nature. To our knowledge, inadequate research has been done on this kind of flow, even at very low Reynolds numbers. In the present computation, we consider unsteady laminar flow over a square cylinder in a pseudoplastic flow environment. For Newtonian fluid flow, the laminar vortex shedding range lies between Re = 47 and 180. In this problem, we consider Re = 100 (Re = U∞a/ν, where U∞ is the free stream velocity of the flow, a is the side of the cylinder, and ν is the kinematic viscosity of the fluid). The pseudoplastic range has been chosen from close to the Newtonian fluid (n = 0.8) to very high pseudoplasticity (n = 0.1). The flow domain is constructed using Gambit 2.2.30, and this software is also used to generate the mesh and to impose the boundary conditions. In all cases, the domain size is taken as 36a × 16a with 280 × 192 grid points in the streamwise and flow-normal directions, respectively. The domain and the grid points were selected after a thorough grid-independence study at n = 1.0. Fine, equal grid spacing is used close to the square cylinder to capture the upper and lower shear layers shed from the cylinder; away from the cylinder, the grid is unequal in size and stretched out in all directions. A velocity inlet (u = U∞), a pressure outlet (Neumann condition), and symmetry (free-slip boundary condition, du/dy = 0, v = 0) at the upper and lower domain boundaries are used for this simulation, with a wall boundary (u = v = 0) on the square cylinder surface. The fully conservative 2-D unsteady Navier-Stokes equations are discretized and then solved by Ansys Fluent 14.5 to understand the flow nature. The SIMPLE algorithm, within a finite volume formulation, is selected for this purpose, being the default solver in Fluent. The result obtained for Newtonian fluid flow agrees well with previous work, supporting Fluent's usefulness in academic research. A detailed analysis of the instantaneous and time-averaged flow fields is presented for both Newtonian and pseudoplastic fluid flow. It is observed that the drag coefficient increases continuously as n is reduced. Also, the vortex shedding phenomenon changes at n = 0.4 due to flow instability. These are some of the remarkable findings for the laminar periodic vortex shedding regime in a pseudoplastic flow environment.
Keywords: Ansys Fluent, CFD, periodic vortex shedding, pseudoplastic fluid flow
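The defining property of the pseudoplastic fluids studied here is captured by the power-law (Ostwald-de Waele) model, sketched below; the consistency index and shear rates are illustrative placeholders.

```python
# Hedged sketch: the power-law (Ostwald-de Waele) model that defines a
# pseudoplastic fluid, showing how apparent viscosity falls with shear rate
# as the power index n drops from 0.8 towards 0.1. The consistency index K
# is an illustrative placeholder.
import numpy as np

def apparent_viscosity(shear_rate, K=1.0, n=0.8):
    """mu_app = K * gamma_dot^(n-1); n < 1 gives shear-thinning behavior."""
    return K * shear_rate ** (n - 1.0)

shear_rates = np.array([0.1, 1.0, 10.0, 100.0])
for n in (0.8, 0.4, 0.1):
    mu = apparent_viscosity(shear_rates, n=n)
    print(f"n={n}: mu_app at gamma_dot={shear_rates} -> {np.round(mu, 3)}")
# The strongest thinning occurs in the high-shear regions, i.e. the shear
# layers shed from the cylinder, which is plausibly where the change in
# shedding behavior at low n originates.
```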
Procedia PDF Downloads 206
449 The Impact of HKUST-1 Metal-Organic Framework Pretreatment on Dynamic Acetaldehyde Adsorption
Authors: M. François, L. Sigot, C. Vallières
Abstract:
Volatile Organic Compounds (VOCs) are a real health issue, particularly in domestic indoor environments. Among these VOCs, acetaldehyde is frequently monitored in dwellings' air, especially due to smoking and spontaneous emissions from new wall and floor coverings. It is responsible for respiratory complaints and is classified as possibly carcinogenic to humans. Adsorption processes are commonly used to remove VOCs from air. Metal-Organic Frameworks (MOFs) are a promising type of material for high adsorption performance. These hybrid porous materials, composed of inorganic metal clusters and organic ligands, are interesting thanks to their high porosity and surface area. HKUST-1 (also referred to as MOF-199) is a copper-based MOF with the formula [Cu₃(BTC)₂(H₂O)₃]n (BTC = benzene-1,3,5-tricarboxylate) and exhibits unsaturated metal sites that can act as attractive sites for adsorption. The objective of this study is to investigate the impact of HKUST-1 pretreatment on acetaldehyde adsorption. Dynamic adsorption experiments were conducted in a glass column of 1 cm diameter packed to a 2 cm MOF bed height; the MOF was sieved to 630 µm-1 mm. The feed gas (Co = 460 ppmv ± 5 ppmv) was obtained by diluting a 1000 ppmv acetaldehyde gas cylinder in air. The gas flow rate was set to 0.7 L/min (to guarantee a suitable linear velocity). The acetaldehyde concentration was monitored online by gas chromatography coupled with a flame ionization detector (GC-FID). The breakthrough curves should make it possible to understand the interactions between the MOF and the pollutant, as well as the impact of the HKUST-1 humidity on the adsorption process. Consequently, different MOF water content conditions were tested, from a dry material with 7% water content (dark blue color) to a water-saturated state with approximately 35% water content (turquoise color). The raw material (without any pretreatment), containing 30% water, serves as a reference. First, conclusions can be drawn from the comparison of the evolution of the ratio of the column outlet concentration (C) to the inlet concentration (Co) as a function of time for the different HKUST-1 pretreatments. The shapes of the breakthrough curves are significantly different. The saturation of the raw material is slower (20 h to reach saturation) than that of the dried material (2 h). However, the breakthrough time defined at C/Co = 10% appears earlier for the raw material (0.75 h) than for the dried HKUST-1 (1.4 h). Another notable difference is the shape of the curve before the breakthrough at 10%: an abrupt increase of the outlet concentration is observed for the material with the lower humidity, in comparison to a smooth increase for the raw material. Thus, the water content plays a significant role in the breakthrough kinetics. This study aims to understand what can explain the shape of the breakthrough curves associated with the pretreatments of HKUST-1, and which mechanisms take place in the adsorption process between the MOF, the pollutant, and water.
Keywords: acetaldehyde, dynamic adsorption, HKUST-1, pretreatment influence
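The 10% breakthrough time used above to compare the raw and dried material can be extracted from a monitored C/Co trace as follows; the logistic-shaped synthetic curves are illustrative stand-ins tuned to echo the reported 1.4 h and 0.75 h values, not measured data.

```python
# Hedged sketch: extracting the 10% breakthrough time from a monitored
# C/Co trace, as used above to compare raw and dried HKUST-1. The
# logistic-shaped synthetic traces are illustrative, not measured data.
import numpy as np

def breakthrough_time(t, c_ratio, threshold=0.10):
    """First time at which C/Co crosses the threshold (linear interpolation)."""
    idx = np.argmax(c_ratio >= threshold)             # first crossing index
    if c_ratio[idx] < threshold:
        return None                                   # never breaks through
    t0, t1 = t[idx - 1], t[idx]
    c0, c1 = c_ratio[idx - 1], c_ratio[idx]
    return t0 + (threshold - c0) * (t1 - t0) / (c1 - c0)

t = np.linspace(0.0, 20.0, 400)                       # hours
dried = 1.0 / (1.0 + np.exp(-(t - 1.6) / 0.1))        # abrupt front, saturates ~2 h
raw = 1.0 / (1.0 + np.exp(-(t - 6.0) / 2.39))         # smooth front, slow saturation
print("dried HKUST-1 breakthrough:", round(breakthrough_time(t, dried), 2), "h")
print("raw HKUST-1 breakthrough:  ", round(breakthrough_time(t, raw), 2), "h")
```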
Procedia PDF Downloads 238
448 Developing Optical Sensors with Application of Cancer Detection by Elastic Light Scattering Spectroscopy
Authors: May Fadheel Estephan, Richard Perks
Abstract:
Context: Cancer is a serious health concern that affects millions of people worldwide. Early detection and treatment are essential for improving patient outcomes. However, current methods for cancer detection have limitations, such as low sensitivity and specificity. Research Aim: The aim of this study was to develop an optical sensor for cancer detection using elastic light scattering spectroscopy (ELSS). ELSS is a noninvasive optical technique that can be used to characterize the size and concentration of particles in a solution. Methodology: An optical probe was fabricated with a 100-μm-diameter core and a 132-μm centre-to-centre separation. The probe was used to measure the ELSS spectra of polystyrene spheres with diameters of 2, 0.8, and 0.413 μm. The spectra were then analysed to determine the size and concentration of the spheres. Findings: The results showed that the optical probe was able to differentiate between the three different sizes of polystyrene spheres. The probe was also able to detect the presence of polystyrene spheres at suspension concentrations as low as 0.01%. Theoretical Importance: The results of this study demonstrate the potential of ELSS for cancer detection. ELSS is a noninvasive technique that can be used to characterize the size and concentration of cells in a tissue sample. This information can be used to identify cancer cells and assess the stage of the disease. Data Collection: The data for this study were collected by measuring the ELSS spectra of polystyrene spheres with different diameters. The spectra were collected using a spectrometer and a computer. Analysis Procedures: The ELSS spectra were analysed using a software program to determine the size and concentration of the spheres. The program used a mathematical algorithm to fit the spectra to a theoretical model. Question Addressed: The question addressed by this study was whether ELSS could be used to detect cancer cells. The results showed that ELSS could differentiate between different particle sizes, suggesting that it could be used to detect cancer cells. Conclusion: The findings of this research show the utility of ELSS in the early identification of cancer. ELSS is a noninvasive method for characterizing the number and size of cells in a tissue sample. This information can be employed to identify cancer cells and determine the disease's stage. Further research is needed to evaluate the clinical performance of ELSS for cancer detection.
Keywords: elastic light scattering spectroscopy, polystyrene spheres in suspension, optical probe, fibre optics
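The "fit the spectra to a theoretical model" step can be sketched generically. A real analysis would use a Mie-theory forward model for sphere scattering; the Gaussian-peak forward model and the assumed size-to-peak relation below are placeholders that only demonstrate the least-squares fitting algorithm.

```python
# Hedged sketch: least-squares fitting of a measured spectrum to a forward
# model, the generic shape of the analysis step described above. The
# Gaussian forward model and all parameter values are placeholders, not
# the study's Mie-based model.
import numpy as np
from scipy.optimize import curve_fit

wavelengths = np.linspace(400.0, 800.0, 200)          # nm

def forward_model(wl, diameter_um, concentration):
    """Placeholder scattering spectrum whose peak shifts with sphere size."""
    peak = 450.0 + 80.0 * diameter_um                 # assumed size-peak relation
    return concentration * np.exp(-((wl - peak) / 60.0) ** 2)

rng = np.random.default_rng(2)
measured = forward_model(wavelengths, 2.0, 0.8) + rng.normal(0, 0.02, wavelengths.size)

(d_fit, c_fit), _ = curve_fit(forward_model, wavelengths, measured, p0=[1.0, 0.5])
print(f"estimated sphere diameter: {d_fit:.2f} um, concentration: {c_fit:.2f}")
```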
Procedia PDF Downloads 82
447 Structuring Highly Iterative Product Development Projects by Using Agile-Indicators
Authors: Guenther Schuh, Michael Riesener, Frederic Diels
Abstract:
Nowadays, manufacturing companies are faced with the challenge of meeting heterogeneous customer requirements in short product life cycles with a variety of product functions, and some of the functional requirements remain unknown until late stages of the product development. A way to handle these uncertainties is the highly iterative product development (HIP) approach. By structuring the development project as a highly iterative process, this method provides customer-oriented and marketable products. There are first approaches for combined, hybrid models comprising deterministic-normative methods, like the Stage-Gate process, and empirical-adaptive development methods, like SCRUM, on a project management level. However, the question of which development scopes can preferably be realized with either empirical-adaptive or deterministic-normative approaches remains almost unconsidered. In this context, a development scope constitutes a self-contained section of the overall development objective. This paper therefore focuses on a methodology that deals with the uncertainty of requirements within the early development stages and the corresponding selection of the most appropriate development approach. For this purpose, internal influencing factors, like a company's technological ability, the prototype manufacturability, and the potential solution space, as well as external factors, like market accuracy, relevance, and volatility, are analyzed and combined into an Agile-Indicator. The Agile-Indicator is derived in three steps. First of all, each internal and external factor is rated in terms of its importance for the overall development task. Secondly, each requirement is evaluated against every single internal and external factor with respect to its suitability for empirical-adaptive development. Finally, the total sums of the internal and external sides are composed into the Agile-Indicator. Thus, the Agile-Indicator constitutes a company-specific and application-related criterion on which the allocation of empirical-adaptive and deterministic-normative development scopes can be based. In a last step, this indicator is used for a specific clustering of development scopes by application of the fuzzy c-means (FCM) clustering algorithm; the FCM method determines sub-clusters within functional clusters based on the empirical-adaptive environmental impact of the Agile-Indicator. By means of the methodology presented in this paper, it is possible to classify requirements that are subject to market uncertainty into empirical-adaptive or deterministic-normative development scopes.
Keywords: agile, highly iterative development, agile-indicator, product development
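A hedged sketch of the three-step Agile-Indicator computation and the subsequent fuzzy c-means clustering follows. Factor names, importance weights, and requirement scores are invented; the FCM update rules are the standard ones, implemented directly rather than taken from the authors' tooling.

```python
# Hedged sketch: a weighted-sum Agile-Indicator per requirement followed by
# fuzzy c-means clustering into two development scopes. Weights and scores
# are illustrative; the FCM update rules below are the textbook ones.
import numpy as np

rng = np.random.default_rng(3)
# rows = requirements; columns = internal + external factors, scored 1..5
# (e.g. technological ability, prototype manufacturability, market volatility)
scores = rng.integers(1, 6, size=(12, 6)).astype(float)
weights = np.array([0.2, 0.15, 0.15, 0.2, 0.2, 0.1])   # importance ratings
agile_indicator = scores @ weights                      # one value per requirement

def fuzzy_c_means(x, c=2, m=2.0, iters=100):
    """Standard FCM on 1-D data x: returns cluster centers and memberships."""
    u = rng.dirichlet(np.ones(c), size=len(x))          # random initial memberships
    for _ in range(iters):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)             # weighted cluster centers
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        p = 2.0 / (m - 1.0)
        u = 1.0 / (d ** p * np.sum(d ** (-p), axis=1, keepdims=True))
    return centers, u

centers, u = fuzzy_c_means(agile_indicator)
high = np.argmax(centers)  # higher indicator -> empirical-adaptive (assumed)
scope = np.where(u[:, high] > 0.5, "empirical-adaptive", "deterministic-normative")
for i, (ai, s) in enumerate(zip(agile_indicator, scope)):
    print(f"requirement {i}: Agile-Indicator={ai:.2f} -> {s}")
```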
Procedia PDF Downloads 246
446 Optimal Pricing Based on Real Estate Demand Data
Authors: Vanessa Kummer, Maik Meusel
Abstract:
Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information: for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete data is used. Usually, however, the proportion of complete data is rather small, which leads to most information being neglected. Also, the complete data might be strongly distorted. In addition, the reason that data is missing might itself contain information, which is ignored with that approach. An interesting question is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with the imputed missing values, compared to using the usually small percentage of complete data (baseline). It is also interesting to see how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural network techniques, are applied. By training the model iteratively on the imputed data and thereby including the information of all data in the model, the distortion of the first training set (the complete data) vanishes. In a next step, the performances of the algorithms are measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. After having found the optimal parameter set for each algorithm, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on imputed data sets do not differ significantly from each other; however, the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Also, demand estimates derived from the whole data set are much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning
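The evaluation loop described above (randomly creating missing values, re-estimating them, and comparing against the actual data) can be sketched as follows. The synthetic listing data and the choice of scikit-learn's IterativeImputer are assumptions; the study's own algorithms and parameter combinations are not reproduced.

```python
# Hedged sketch: mask random entries, re-estimate them with an imputation
# algorithm, and score against the held-back truth, as described above.
# The synthetic listing data stands in for real search-subscription records.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(4)
n = 1000
rooms = rng.integers(1, 7, n).astype(float)
area = rooms * 25 + rng.normal(0, 10, n)             # m^2, correlated with rooms
price = 2000 * rooms + 15 * area + rng.normal(0, 500, n)
X_true = np.column_stack([rooms, area, price])

X = X_true.copy()
mask = rng.random(X.shape) < 0.3                     # 30% of entries go missing
X[mask] = np.nan

X_imp = IterativeImputer(random_state=0).fit_transform(X)
rmse = np.sqrt(np.mean((X_imp[mask] - X_true[mask]) ** 2))
print(f"RMSE on the held-back entries: {rmse:.1f}")
# Repeating this for several algorithms / parameter sets selects the best
# imputer before the final willingness-to-pay estimation.
```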
Procedia PDF Downloads 285
445 Characterization of New Sources of Maize (Zea mays L.) Resistance to Sitophilus zeamais (Coleoptera: Curculionidae) Infestation in Stored Maize
Authors: L. C. Nwosu, C. O. Adedire, M. O. Ashamo, E. O. Ogunwolu
Abstract:
The maize weevil, Sitophilus zeamais Motschulsky, is a notorious pest of stored maize (Zea mays L.), and the development of resistant maize varieties to manage weevils is a major breeding objective. The study investigated the parameters and mechanisms that confer resistance to S. zeamais infestation on a maize variety, using twenty elite maize varieties. Detailed morphological, physical, and chemical studies were conducted on whole maize grain and on the grain pericarp. Resistance was assessed at 33, 56, and 90 days post-infestation using weevil mortality rate, weevil survival rate, percent grain damage, percent grain weight loss, weight of grain powder, oviposition rate, and index of susceptibility as indices, rated on a scale developed in the present study and on Dobie's modified scale. Linear regression models that can predict maize grain damage in relation to the duration of storage were developed and applied. The resistant varieties identified, particularly 2000 SYNEE-WSTR and TZBRELD3C5 with their very high degree of resistance, should be used singly, or preferably in an integrated pest management system, for the control of S. zeamais infestation in stored maize. Though increases in the physical properties of grain hardness, weight, length, and width increased varietal resistance, it was found that the bases of resistance were increased chemical attributes of phenolic acid, trypsin inhibitor, and crude fibre, while the bases of susceptibility were increased protein, starch, magnesium, calcium, sodium, phosphorus, manganese, iron, cobalt, and zinc, with the role of potassium requiring further investigation. Characters that conferred resistance on the test varieties were found distributed in both the pericarp and the endosperm of the grains. On further assessment, increases in grain phenolic acid, crude fibre, and trypsin inhibitor adversely and significantly affected the bionomics of the weevil. The flat side of a maize grain was significantly preferred by the weevil as the point of penetration. Why the south area of the flattened side of a maize grain was significantly preferred by the weevil remains unknown, even though grain face type seemed to be a contributor in the study. The preference shown for the south area of the grain's flat side has implications for seed viability. The study identified antibiosis, preference, antixenosis, and host evasion as the mechanisms of maize post-harvest resistance to Sitophilus zeamais infestation.
Keywords: maize weevil, resistant, parameters, mechanisms, preference
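Two of the quantitative tools named above can be sketched directly: Dobie's index of susceptibility and a linear damage-versus-storage-time regression. The counts, development periods, and damage percentages below are illustrative, not the study's data.

```python
# Hedged sketch: Dobie's index of susceptibility and a linear damage-vs-
# storage-time model of the kind the study develops. All numbers are
# illustrative, not the paper's measurements.
import numpy as np

def dobie_index(adults_emerged, median_dev_days):
    """Dobie's susceptibility index: 100 * ln(F1) / D (lower = more resistant)."""
    return 100.0 * np.log(adults_emerged) / median_dev_days

print(f"susceptible check:  {dobie_index(120, 30):.1f}")  # high index
print(f"resistant variety:  {dobie_index(8, 42):.1f}")    # low index

# percent grain damage observed at the three assessment points (assumed)
days = np.array([33.0, 56.0, 90.0])
damage = np.array([4.0, 9.5, 18.0])
slope, intercept = np.polyfit(days, damage, 1)            # least-squares line
print(f"damage ~= {intercept:.2f} + {slope:.3f} * storage_days")
print(f"predicted damage at 120 days: {intercept + slope * 120:.1f} %")
```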
Procedia PDF Downloads 307
444 Intracranial Hypotension: A Brief Review of the Pathophysiology and Diagnostic Algorithm
Authors: Ana Bermudez de Castro Muela, Xiomara Santos Salas, Silvia Cayon Somacarrera
Abstract:
The aim of this review is to explain what intracranial hypotension is and what its main causes are, and to approach the diagnostic management in the different clinical situations, understanding the radiological findings and the physiopathological substrate. An approach to the diagnostic management is presented: the guidelines to follow, the different tests available, and the typical findings. We review the myelo-CT and myelo-MRI studies in patients with suspected CSF fistula or hypotension of unknown cause during the last 10 years in three centers. Signs of intracranial hypotension (subdural hygromas/hematomas, pachymeningeal enhancement, venous sinus engorgement, pituitary hyperemia, and lowering of the brain) that are evident on baseline CT and MRI are also sought. Intracranial hypotension is defined as an opening pressure lower than 6 cmH₂O. It is a relatively rare disorder, with an annual incidence of 5 per 100,000 and a female-to-male ratio of 2:1. The clinical hallmark is an orthostatic headache, defined as the development or aggravation of headache when patients move from a supine to an upright position, which typically disappears or is relieved on lying down. The etiology is a decrease in the amount of cerebrospinal fluid (CSF), usually through loss of it, either spontaneous or secondary (post-traumatic, post-surgical, systemic disease, post-lumbar puncture, etc.), and rhinorrhea and/or otorrhea may exist. The pathophysiological mechanisms of CSF hypotension and hypertension are interrelated, as a situation of hypertension may lead to hypotension secondary to spontaneous CSF leakage. The diagnostic management of intracranial hypotension in our center includes, in the case of spontaneous hypotension without rhinorrhea and/or otorrhea, and according to need, a range of available tests performed from less to more complex: cerebral CT; cerebral and spine MRI without contrast; and CT/MRI with intrathecal contrast. In a situation of intracranial hypotension with the presence of rhinorrhea/otorrhea, a sample can be obtained for the detection of β2-transferrin, which is found physiologically in the CSF, along with sinus CT and cerebral MRI including constructive interference in steady state (CISS) sequences. If necessary, cisternography studies are performed to locate the exact point of leakage. It is important to emphasize the significance of myelo-CT/MRI in establishing the diagnosis and location of the CSF leak, which is indispensable for therapeutic planning (whether surgical or not) in patients with more than one lesion or with doubts in the baseline tests.
Keywords: cerebrospinal fluid, neuroradiology brain, magnetic resonance imaging, fistula
Procedia PDF Downloads 127
443 A Text in Movement in the Totonac Flyers' Dance: A Performance-Linguistic Theory
Authors: Luisa Villani
Abstract:
The proposal aims to express concerns about the connection between mind, body, society, and environment in the Flyers' dance, a very well-known rotatory dance in Mexico, in creating meanings and making the apprehension of the world possible. The interaction among brain, mind, body, and environment, and the intersubjective relation among them, means that the world creates and recreates social interaction. The purpose of this methodology, based on embodied cognition theory and named 'A Performance-Embodied Theory', is to find the principles and patterns that organize the culture and the rules of the apprehension of the environment while the dance is being performed. The analysis started by questioning how anthropologists can interpret how Totonacs transform their unconscious knowledge into conscious knowledge, and how the scheme formation of imagination and their collective imagery are understood in the context of public-facing rituals such as the Flyers' dance. The problem is that, most of the time, researchers interpret elements separately and not as a complex ritual dancing whole, which is the original contribution of this study. This theory, which accepts that people are body-mind agents, interprets the dance as a whole, where the different elements are joined in an integral interpretation. To understand incorporation, data were collected in prolonged periods of fieldwork, with participant observation and linguistic and extralinguistic data analysis. Laban's notation was first used for the description and analysis of gestures and movements in space, but the analysis later transformed and went beyond this method, which is still a linear and compositional one. Performance in a ritual is the actualization of a potential complex of meanings or cognitive domains among many others in a culture: one potential dimension becomes probable and then real because of the activation of specific meanings in a context. One can only think what language permits thinking, and the lexicon that is used depends on the individual culture. Only some parts of this knowledge can be activated at once, and these parts of knowledge are connected; only in this way can the world be understood. It can be recognized that, as languages geometrize the physical world thanks to the body, so does ritual. In conclusion, the ritual behaves as an embodied grammar or a text in movement, which, depending on the ritual phases and the words and sentences pronounced in the ritual, activates bits of the encyclopedic knowledge that people have about the world. Gestures are not given by the performer but emerge from the intentional perception in which gestures are 'understood' by the audio-spectator in an inter-corporeal way. The impact of this study lies in the possibility not only of disseminating knowledge effectively but also of generating a balance between different parts of the world where knowledge is shared, rather than being received by academic institutions alone. This knowledge can be exchanged, so that indigenous communities and academies could come together in the activation and sharing of this knowledge with the world.
Keywords: dance, flyers, performance, embodied, cognition
Procedia PDF Downloads 58
442 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment
Authors: Ella Sèdé Maforikan
Abstract:
Accurate land cover mapping is essential for effective environmental monitoring and natural resources management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen due to its high-performance computing capabilities, mitigating the computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training on remote sensing data. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the prowess of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. The study emphasizes the synergy of different input sources in achieving superior accuracy. As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment
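The sample-train-classify workflow on GEE can be sketched with the Python API as follows. The area of interest, band list, training-polygon asset ID, and tree count are placeholders, and the resubstitution confusion matrix shown is a weaker check than the study's proper accuracy assessment.

```python
# Hedged sketch: a minimal Random Forest land-cover classification in the
# GEE Python API. Asset IDs, bands, and parameters are placeholders; the
# workflow (sample -> train -> classify) follows the abstract's description.
import ee
ee.Initialize()  # assumes prior `earthengine authenticate`

aoi = ee.Geometry.Rectangle([2.0, 9.0, 2.8, 9.8])      # stand-in for Beterou
bands = ["B2", "B3", "B4", "B8", "B11", "B12"]         # Sentinel-2 reflectance

image = (ee.ImageCollection("COPERNICUS/S2_SR")
         .filterBounds(aoi)
         .filterDate("2020-06-01", "2021-03-31")
         .median()
         .select(bands)
         .clip(aoi))

# training polygons with a 'class' property 0..4 (forest, savanna, cropland,
# settlement, water) -- assumed to exist as a user asset
training_polygons = ee.FeatureCollection("users/example/beterou_training")
samples = image.sampleRegions(collection=training_polygons,
                              properties=["class"], scale=10)

classifier = (ee.Classifier.smileRandomForest(numberOfTrees=100)
              .train(features=samples, classProperty="class",
                     inputProperties=bands))
classified = image.classify(classifier)

# resubstitution confusion matrix (a held-out split is the proper check)
train_accuracy = classifier.confusionMatrix()
print("OA:", train_accuracy.accuracy().getInfo())
print("Kappa:", train_accuracy.kappa().getInfo())
```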
Procedia PDF Downloads 63
441 Sea Surface Trend over the Arabian Sea and Its Influence on the South West Monsoon Rainfall Variability over Sri Lanka
Authors: Sherly Shelton, Zhaohui Lin
Abstract:
In recent decades, the inter-annual variability of summer precipitation over India and Sri Lanka has intensified significantly, with an increased frequency of both abnormally dry and wet summers. Prediction of the inter-annual variability of summer precipitation is therefore crucial and urgent for water management and local agricultural scheduling. However, none of the hypotheses put forward so far has explained the monsoon variability and the related factors that affect South West Monsoon (SWM) variability in Sri Lanka. This study focused on identifying the spatial and temporal variability of SWM rainfall events from June to September (JJAS) over Sri Lanka and the associated trend. Monthly rainfall records covering 1980-2013 at 19 stations over Sri Lanka are used to investigate long-term trends in SWM rainfall. Linear trends of atmospheric variables are calculated to understand the drivers behind the changes, based on observed precipitation, sea surface temperature, and atmospheric reanalysis products for 34 years (1980-2013). Empirical orthogonal function (EOF) analysis was applied to understand the spatial and temporal behaviour of seasonal SWM rainfall variability and to investigate whether the trend pattern is the dominant mode that explains SWM rainfall variability. The spatial and station-based precipitation over the country shows statistically insignificant decreasing trends, except at a few stations. The first two EOFs of the seasonal (JJAS) mean rainfall explain 52% and 23% of the total variance, and the first PC shows positive loadings of the SWM rainfall for the whole landmass, with the strongest positive loadings in the western/south-western part of Sri Lanka. There is a negative correlation (r ≤ -0.3) between SMRI and SST in the Arabian Sea and Central Indian Ocean, which indicates that lower temperatures in the Arabian Sea and Central Indian Ocean are associated with greater rainfall over the country. This study also shows consistent warming throughout the Indian Ocean. The results show that the precipitable water over the country is decreasing with time, which contributes to the reduction of precipitation over the area by weakening the updraft. In addition, evaporation is getting weaker over the Arabian Sea, the Bay of Bengal, and the Sri Lankan landmass, which leads to a reduction of the moisture availability required for SWM rainfall over Sri Lanka. At the same time, a weakening of the SST gradients between the Arabian Sea and the Bay of Bengal can deteriorate the monsoon circulation, which ultimately diminishes the SWM over Sri Lanka. The decreasing trends of moisture, moisture transport, zonal wind, and moisture divergence, together with weakening evaporation over the Arabian Sea during the past decade, have an aggravating influence on the decreasing trends of monsoon rainfall over Sri Lanka.
Keywords: Arabian Sea, moisture flux convergence, South West Monsoon, Sri Lanka, sea surface temperature
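The EOF decomposition used above can be computed from an anomaly matrix with a singular value decomposition; the random field below stands in for the gridded JJAS rainfall, so the variance fractions will not match the reported 52% and 23%.

```python
# Hedged sketch: EOF analysis of a seasonal rainfall anomaly field via SVD,
# the technique used above to isolate the dominant modes. The random field
# is a stand-in for the 19-station JJAS rainfall record, 1980-2013.
import numpy as np

rng = np.random.default_rng(5)
n_years, n_stations = 34, 19
rain = rng.gamma(shape=2.0, scale=50.0, size=(n_years, n_stations))

anom = rain - rain.mean(axis=0)              # remove station climatology
u, s, vt = np.linalg.svd(anom, full_matrices=False)

explained = s**2 / np.sum(s**2)              # variance fraction per mode
eofs = vt                                    # spatial patterns (loadings)
pcs = u * s                                  # principal component time series

print("variance explained by first two EOFs:",
      np.round(explained[:2] * 100, 1), "%")
print("EOF1 loadings per station:", np.round(eofs[0], 2))
# The sign convention of SVD is arbitrary; flip an EOF/PC pair together
# if a positive-loading convention is wanted.
```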
Procedia PDF Downloads 132
440 Insights into Child Malnutrition Dynamics with the Lens of Women's Empowerment in India
Authors: Bharti Singh, Shri K. Singh
Abstract:
Child malnutrition is a multifaceted issue that transcends geographical boundaries. Malnutrition not only stunts physical growth but also leads to a spectrum of morbidities and child mortality; it is one of the leading causes of death (~50%) among children under age five. Despite economic progress and advancements in healthcare, child malnutrition remains a formidable challenge for India. The objective is to investigate the impact of women's empowerment on child nutrition outcomes in India from 2006 to 2021. First, a composite index of women's empowerment was constructed using Confirmatory Factor Analysis (CFA), a rigorous technique that validates the measurement model by assessing how well observed variables represent latent constructs; this approach ensures the reliability and validity of the empowerment index. Secondly, kernel density plots were utilised to visualise the distribution of key nutritional indicators, such as stunting, wasting, and overweight. These plots offer insights into the shape and spread of the data distributions, aiding in understanding the prevalence and severity of malnutrition. Thirdly, linear polynomial graphs were employed to analyse how nutritional parameters evolve with the child's age. This technique enables the visualisation of trends and patterns over time, allowing for a deeper understanding of nutritional dynamics during different stages of childhood. Lastly, multilevel analysis was conducted to identify vulnerable levels, including state-level, PSU-level, and household-level factors impacting undernutrition. This approach accounts for hierarchical data structures and allows for the examination of factors at multiple levels, providing a comprehensive understanding of the determinants of child malnutrition. Overall, the utilisation of these statistical methodologies enhances the transparency and replicability of the study by providing clear and robust analytical frameworks for data analysis and interpretation. Our study reveals that NFHS-4 and NFHS-5 exhibit an equal density of severely stunted cases. NFHS-5 indicates a limited decline in wasting among children under age five, while the density of severely wasted children remains consistent across NFHS-3, 4, and 5. In 2019-21, women with higher empowerment had a lower risk of their children being undernourished (regression coefficient = -0.10***; confidence interval [-0.18, -0.04]). Gender dynamics also play a significant role, with male children exhibiting a higher susceptibility to undernourishment. The multilevel analysis suggests household-level vulnerability (intra-class correlation = 0.21), highlighting the need to address child undernutrition at the household level.
Keywords: child nutrition, India, NFHS, women's empowerment
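The household-level intra-class correlation reported above can be obtained from a two-level random-intercept model; the sketch below uses simulated survey-like data and statsmodels' mixedlm, with an assumed positive empowerment effect, and is not the NFHS analysis itself.

```python
# Hedged sketch: a two-level random-intercept model and the intraclass
# correlation (ICC) of the kind reported above. The simulated data is
# illustrative, not NFHS microdata; variable names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_households, kids_per_hh = 300, 3
hh = np.repeat(np.arange(n_households), kids_per_hh)
hh_effect = rng.normal(0, 0.5, n_households)[hh]        # household-level variance
empowerment = rng.normal(0, 1, hh.size)
# child outcome: higher empowerment -> better nutrition (assumed sign)
haz = -0.5 + 0.10 * empowerment + hh_effect + rng.normal(0, 1.0, hh.size)

df = pd.DataFrame({"haz": haz, "empowerment": empowerment, "household": hh})
model = smf.mixedlm("haz ~ empowerment", df, groups=df["household"]).fit()
print(model.summary())

var_household = model.cov_re.iloc[0, 0]                 # random-intercept variance
var_residual = model.scale
icc = var_household / (var_household + var_residual)
print(f"household-level ICC: {icc:.2f}")                # ~0.2 by construction
```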
Procedia PDF Downloads 34
439 Direct Assessment of Cellular Immune Responses to Ovalbumin with a Secreted Luciferase Transgenic Reporter Mouse Strain IFNγ-Lucia
Authors: Martyna Chotomska, Aleksandra Studzinska, Marta Lisowska, Justyna Szubert, Aleksandra Tabis, Jacek Bania, Arkadiusz Miazek
Abstract:
Objectives: Assessing antigen-specific T cell responses is of utmost importance for the pre-clinical testing of prototype vaccines against intracellular pathogens and tumor antigens. Mainly two types of in vitro assays are used for this purpose: 1) the enzyme-linked immunospot (ELISpot) assay and 2) intracellular cytokine staining (ICS). Both are time-consuming, relatively expensive, and require manual dexterity. Here, we assess whether the straightforward detection of luciferase activity in blood samples of transgenic reporter mice expressing a secreted Lucia luciferase under the transcriptional control of the IFN-γ promoter parallels the sensitivity of the IFN-γ ELISpot assay. Methods: The IFN-γ-Lucia mouse strain, carrying multiple copies of a Lucia luciferase transgene under the transcriptional control of the IFN-γ minimal promoter, was generated by pronuclear injection of linear DNA. The specificity of transgene expression and mobilization was assessed in vitro using transgenic splenocytes exposed to various mitogens. The IFN-γ-Lucia mice were immunized with 50 mg of ovalbumin (OVA) emulsified in incomplete Freund's adjuvant three times, every two weeks, by subcutaneous injection. Blood samples were collected before and five days after each immunization, luciferase activity was assessed in blood serum, and peripheral blood mononuclear cells were separated and assessed for frequencies of OVA-specific IFN-γ-secreting T cells. Results: We show that in vitro cultured splenocytes of IFN-γ-Lucia mice respond with 2- and 3-fold increases in secreted luciferase activity to the T cell mitogens concanavalin A and phorbol myristate acetate, respectively, but fail to respond to B cell-stimulating E. coli lipopolysaccharide. Immunization of IFN-γ-Lucia mice with OVA leads to an over 4-fold increase in luciferase activity in blood serum five days post-immunization, with a barely detectable increase in OVA-specific, IFN-γ-secreting T cells by ELISpot. Second and third immunizations further increase the luciferase activity and coincidently also increase the frequencies of OVA-specific T cells by ELISpot. Conclusions: We conclude that minimally invasive monitoring of luciferase secretion in the blood serum of IFN-γ-Lucia mice constitutes a sensitive method for evaluating primary and memory Th1 responses to protein antigens. As such, this method may complement existing methods for rapid immunogenicity assessment of prototype vaccines.
Keywords: ELISpot, immunogenicity, interferon-gamma, reporter mice, vaccines
Procedia PDF Downloads 172
438 A Geochemical Perspective on A-Type Granites of Khanak and Devsar Areas, Haryana, India: Implications for Petrogenesis
Authors: Naresh Kumar, Radhika Sharma, A. K. Singh
Abstract:
Granites from the Khanak and Devsar areas, part of the Malani Igneous Suite (MIS), were investigated for their geochemical characteristics to understand the petrogenesis of the study area. Neoproterozoic rocks of the MIS are well exposed in the Jhunjhunu, Jodhpur, Pali, Barmer, Jalor, and Jaisalmer districts of Rajasthan and the Bhiwani district of Haryana, and also occur in the Kirana hills of Pakistan. The MIS predominantly consists of acid volcanic rocks with acid plutonic rocks (granites of various types), mafic volcanic and mafic intrusive rocks, and a minor amount of pyroclasts. Based on field and petrographic studies, 28 samples were selected and analyzed for major, trace, and rare earth elements at the Wadia Institute of Himalayan Geology, Dehradun, by X-Ray Fluorescence spectrometry (XRF) and Inductively Coupled Plasma Mass Spectrometry (ICP-MS). Granites from the studied areas are categorized as grey, green, and pink. Khanak granites consist of quartz, K-feldspar, plagioclase, and biotite as essential minerals, and hematite, zircon, annite, monazite, and rutile as accessory minerals. In the Devsar granites, plagioclase is replaced by perthite, which occurs as the dominant feldspar. Geochemically, granites from the Khanak and Devsar areas exhibit typical A-type characteristics, with enrichment in SiO2, Na2O+K2O, Fe/Mg, Rb, Zr, Y, Th, U, and REE (except Eu) and significant depletion in MgO, CaO, Sr, P, Ti, Ni, Cr, V, and Eu, suggesting A-type affinities in Northwestern Peninsular India. The radiogenic heat production (HP) of the green and grey granites of the Devsar area reaches 9.68 and 11.70 μW m⁻³, corresponding to total heat generation of 23.04 and 27.86 heat generation units (HGU), respectively. Pink granites of the Khanak area display higher HP (16.53 μW m⁻³) and HGU (39.37) than the granites from the Devsar area. Overall, they have much higher values of HP and HGU than the average continental crust (3.8 HGU), which implies a possible linear relationship between surface heat flow and crustal heat generation in the rocks of the MIS. Chondrite-normalized REE patterns show enriched LREE, moderate to strong negative Eu anomalies, and more or less flat heavy REE. In primitive-mantle-normalized multi-element variation diagrams, the granites show pronounced depletion in the high-field-strength elements (HFSE) Nb, Zr, Sr, P, and Ti. The geochemical characteristics (major, trace, and REE), together with various discrimination schemes, reveal their probable correspondence to magma derived from a crustal source by different degrees of partial melting.
Keywords: A-type granite, neoproterozoic, Malani igneous suite, Khanak, Devsar
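Heat production values of this kind are conventionally computed from U, Th, and K concentrations. The sketch below uses Rybach's (1976) relation and the approximate conversion 1 HGU ≈ 0.4184 μW m⁻³, which is consistent with the HP/HGU ratios quoted above; the density and element concentrations are hypothetical placeholders.

```python
def heat_production(rho, c_u, c_th, c_k):
    """Radiogenic heat production A in microW/m^3 (Rybach, 1976).

    rho  : density in kg/m^3
    c_u  : uranium concentration in ppm
    c_th : thorium concentration in ppm
    c_k  : potassium concentration in wt%
    """
    return 1e-5 * rho * (9.52 * c_u + 2.56 * c_th + 3.48 * c_k)

# Hypothetical pink-granite composition, chosen only to illustrate magnitudes
hp = heat_production(rho=2650.0, c_u=30.0, c_th=120.0, c_k=4.2)
hgu = hp / 0.4184  # 1 HGU is roughly 0.4184 microW/m^3
print(f"HP = {hp:.2f} microW/m^3 -> {hgu:.2f} HGU")
```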
Procedia PDF Downloads 272
437 Computational Homogenization of Thin Walled Structures: On the Influence of the Global vs Local Applied Plane Stress Condition
Authors: M. Beusink, E. W. C. Coenen
Abstract:
The increased application of novel structural materials, such as high-grade asphalt, concrete, and laminated composites, has sparked the need for a better understanding of the often complex, non-linear mechanical behavior of such materials. The effective macroscopic mechanical response is generally dependent on the applied load path. Moreover, it is also significantly influenced by the microstructure of the material, e.g., embedded fibers, voids, and/or grain morphology. At present, multiscale techniques are widely adopted to assess micro-macro interactions in a numerically efficient way. Computational homogenization techniques have been successfully applied over a wide range of engineering cases, e.g., cases involving first-order and second-order continua, thin shells, and cohesive zone models. Most of these homogenization methods rely on Representative Volume Elements (RVE), which model the relevant microstructural details in a confined volume. Through imposed kinematical constraints or boundary conditions, an RVE can be subjected to a microscopic load sequence. This provides the RVE's effective stress-strain response, which can serve as constitutive input for macroscale analyses. Simultaneously, such a study of an RVE gives insight into fine-scale phenomena such as microstructural damage and its evolution. It has been reported by several authors that the type of boundary conditions applied to the RVE affects the resulting homogenized stress-strain response. As a consequence, dedicated boundary conditions have been proposed to deal appropriately with this concern. For the specific case of a planar assumption for the analyzed structure, e.g., plane strain, axisymmetry, or plane stress, this assumption needs to be addressed consistently at all considered scales. Although a planar condition has been employed in many multiscale studies, its impact on the multiscale solution has not been explicitly investigated. This work therefore focuses on the influence of the planar assumption in multiscale modeling. In particular, the plane stress case is highlighted by proposing three different implementation strategies that are compatible with a first-order computational homogenization framework. The first method consists of applying classical plane stress theory at the microscale, whereas with the second method a generalized plane stress condition is assumed at the RVE level. For the third method, the plane stress condition is applied at the macroscale by requiring that the resulting macroscopic out-of-plane forces are equal to zero. These strategies are assessed through a numerical study of a thin-walled structure, and the resulting effective macroscale stress-strain responses are compared. It is shown that there is a clear influence of the length scale at which the planar condition is applied.
Keywords: first-order computational homogenization, planar analysis, multiscale, microstructures
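A minimal numpy sketch of the first strategy, classical plane stress applied at a material point: the out-of-plane stress components are statically condensed out of a 3-D stiffness matrix in Voigt notation. The isotropic moduli are placeholder values; for an isotropic material the condensation recovers the familiar E/(1-ν²) term, which serves as a check.

```python
import numpy as np

def plane_stress_condense(C):
    """Condense a 6x6 Voigt stiffness (order 11,22,33,23,13,12) to the
    3x3 in-plane plane-stress stiffness by enforcing s33 = s23 = s13 = 0."""
    a = [0, 1, 5]            # in-plane components
    b = [2, 3, 4]            # out-of-plane components with zero stress
    Caa = C[np.ix_(a, a)]
    Cab = C[np.ix_(a, b)]
    Cba = C[np.ix_(b, a)]
    Cbb = C[np.ix_(b, b)]
    return Caa - Cab @ np.linalg.solve(Cbb, Cba)

# Isotropic elasticity with placeholder moduli (E in MPa)
E, nu = 30000.0, 0.2
lam = E * nu / ((1 + nu) * (1 - 2 * nu))
mu = E / (2 * (1 + nu))
C = np.zeros((6, 6))
C[:3, :3] = lam
C[np.diag_indices(3)] = lam + 2 * mu
C[3:, 3:] = np.diag([mu, mu, mu])   # engineering shear strains

Cps = plane_stress_condense(C)
# For an isotropic material this recovers E/(1 - nu^2) on the diagonal
print(Cps[0, 0], E / (1 - nu**2))
```

The same condensation idea carries over to the homogenized tangent of an RVE, which is where the choice between micro- and macro-level enforcement of plane stress starts to matter.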
Procedia PDF Downloads 233
436 Suggestions to the Legislation about Medical Ethics and Ethics Review in the Age of Medical Artificial Intelligence
Authors: Xiaoyu Sun
Abstract:
In recent years, the rapid development of Artificial Intelligence (AI) has extensively advanced medicine, pharmaceuticals, and other related fields. The medical research and development of artificial intelligence by scientific and commercial organizations is on the fast track. Ethics review is one of the critical registration procedures for getting such products approved and launched. However, the current SOPs for ethics review are not sufficient to guide the healthy and rapid development of artificial intelligence in healthcare in China. The Ethical Review Measures for Biomedical Research Involving Human Beings was enacted by the National Health Commission of the People's Republic of China (NHC) on December 1st, 2016. From a legislative design perspective, however, it was neither updated in a timely manner nor in line with international trends in AI development. It is therefore welcome that the NHC published a consultation paper on an updated version on March 16th, 2021. Based on the most recent laws and regulations in the United States and the EU, and on in-depth interviews with 11 subject matter experts in China, including lawmakers, regulators, key members of ethics review committees, heads of Regulatory Affairs in the SaMD industry, and data scientists, several suggestions are proposed on top of the updated version. Although the new version indicates that Ethics Review Committees are to be created at the national, provincial, and individual-institute levels, the review authority of each level is not clarified. The suggestion is that the precise scope of review authority for each level should be identified based on a risk analysis and management model: complicated leading-edge technologies such as gene editing should be reviewed by the national Ethics Review Committee, while it should be the job of individual-institute Ethics Review Committees to review and approve lower-risk clinical studies, such as an innovative cream to treat acne. Furthermore, to standardize the research and development of artificial intelligence in healthcare in the age of AI, clearer guidance should be given on data security at the data, algorithm, and application layers in the ethics review process. In addition, transparency and responsibility, two of the six principles in the Rome Call for AI Ethics, could be further strengthened in the updated version. It is the shared goal among all countries to manage and develop AI well to benefit human beings. By learning from other countries with more experience, China could become one of the most advanced countries in artificial intelligence in healthcare.
Keywords: biomedical research involving human beings, data security, ethics committees, ethical review, medical artificial intelligence
Procedia PDF Downloads 168
435 Towards a Strategic Framework for State-Level Epistemological Functions
Authors: Mark Darius Juszczak
Abstract:
While epistemology, as a sub-field of philosophy, is generally concerned with theoretical questions about the nature of knowledge, the explosion in digital media technologies has resulted in an exponential increase in the storage and transmission of human information. That increase has produced a particular non-linear dynamic: digital epistemological functions are radically altering how and what we know. Neither the rate of that change nor its consequences have been well studied or taken into account in developing state-level strategies for epistemological functions. At the current time, US federal policy, like that of virtually all other countries, maintains, at the national level, clearly defined boundaries between various epistemological agencies - agencies that, in one way or another, mediate the functional use of knowledge. These agencies take the form of patent and trademark offices, national library and archive systems, departments of education, departments such as the FTC, university systems and regulations, military research agencies such as DARPA, federal scientific research agencies, medical and pharmaceutical accreditation agencies, federal funding for scientific research, and legislative committees and subcommittees that attempt to alter the laws governing epistemological functions. All of these agencies are constantly creating, analyzing, and regulating knowledge. Those processes are, at the most general level, epistemological functions - they act upon and define what knowledge is. At the same time, however, there are no high-level strategic epistemological directives or frameworks that define those functions. The only time in US history when a proxy state-level epistemological strategy existed was between 1961 and 1969, when the Kennedy Administration committed the United States to the Apollo program. While that program had a singular technical objective as its outcome, the objective was so technologically advanced for its day and so complex that it required a massive redirection of state-level epistemological functions - in essence, a broad and diverse set of state-level agencies suddenly found themselves working together towards a common epistemological goal. This paper does not call for a repeat of the Apollo program. Rather, its purpose is to investigate the minimum structural requirements for a national state-level epistemological strategy in the United States. In addition, this paper seeks to analyze how the epistemological work of the multitude of national agencies within the United States would be affected by such a high-level framework. This paper is an exploratory study of this type of framework. The primary hypothesis of the author is that such a function is possible but would require extensive re-framing and reclassification of traditional epistemological functions at the respective agency level. In much the same way that, for example, the DHS (Department of Homeland Security) evolved to respond to a new type of security threat to the United States, it is theorized that a lack of coordination and alignment in epistemological functions will equally result in a strategic threat to the United States.
Keywords: strategic security, epistemological functions, epistemological agencies, Apollo program
Procedia PDF Downloads 77
434 Geoinformation Technology of Agricultural Monitoring Using Multi-Temporal Satellite Imagery
Authors: Olena Kavats, Dmitry Khramov, Kateryna Sergieieva, Vladimir Vasyliev, Iurii Kavats
Abstract:
Geoinformation technologies for space-based agromonitoring are a means of supporting operative decision making in managing the agricultural sector of the economy. Existing technologies use satellite images in the optical range of the electromagnetic spectrum. Time series of optical images often contain gaps due to the presence of clouds and haze. A geoinformation technology has been created that fills gaps in time series of optical images (Sentinel-2, Landsat-8, PROBA-V, MODIS) with radar survey data (Sentinel-1) and uses information about the agrometeorological conditions of the growing season for individual monitoring years. The technology performs crop classification and mapping for the spring-summer (winter and spring crops) and autumn-winter (winter crops) vegetation periods, monitors the dynamics of seasonal changes in crop state, and forecasts crop yield. Crop classification is based on supervised classification algorithms and takes into account the peculiarities of crop growth at different vegetation stages (dates of sowing, emergence, active vegetation, and harvesting) and agricultural land state characteristics (row spacing, seedling density, etc.). A catalog of samples of the main agricultural crops (Ukraine) has been created, and crop spectral signatures are calculated with the preliminary removal of row spacing, cloud cover, and cloud shadows in order to construct time series of crop growth characteristics. The obtained data are used in grain crop growth tracking and in the timely detection of deviations of growth trends from reference samples of a given crop for a selected date. Statistical models of crop yield forecasting are created in the form of linear and nonlinear relationships between crop yield indicators and crop state characteristics (temperature, precipitation, vegetation indices, etc.). Predicted values of grain crop yield are estimated with an accuracy of up to 95%. The developed technology was used for monitoring agricultural areas in a number of regions of Great Britain and Ukraine using the EOS Crop Monitoring Platform (https://crop-monitoring.eos.com). The obtained results allow us to conclude that the joint use of Sentinel-1 and Sentinel-2 images improves the separation of winter crops (rapeseed, wheat, barley) in the early stages of vegetation (October-December). It also makes it possible to successfully separate the soybean, corn, and sunflower sowing areas, which are quite similar in their spectral characteristics.
Keywords: geoinformation technology, crop classification, crop yield prediction, agricultural monitoring, EOS Crop Monitoring Platform
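A minimal sketch of the gap-filling idea: on cloud-free dates, relate optical NDVI to coincident Sentinel-1 backscatter with a simple linear regression, then predict NDVI for the cloudy dates. The time series and the linear model are synthetic illustrations, not the operational algorithm.

```python
import numpy as np

# Synthetic per-field time series over a growing season (illustrative only)
rng = np.random.default_rng(1)
dates = np.arange(20)                                # acquisition index
ndvi = 0.2 + 0.6 * np.sin(np.pi * dates / 19)        # optical NDVI, smooth season
sigma0 = -15 + 8 * np.sin(np.pi * dates / 19) + rng.normal(0, 0.3, 20)  # radar dB

cloudy = np.zeros(20, dtype=bool)
cloudy[[4, 5, 11]] = True                            # optical gaps due to clouds
ndvi_obs = np.where(cloudy, np.nan, ndvi)

# Fit NDVI ~ sigma0 on clear dates, then predict the cloudy ones
clear = ~cloudy
slope, intercept = np.polyfit(sigma0[clear], ndvi_obs[clear], 1)
ndvi_filled = ndvi_obs.copy()
ndvi_filled[cloudy] = intercept + slope * sigma0[cloudy]
print(ndvi_filled[[4, 5, 11]])
```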
Procedia PDF Downloads 457
433 Relationship of Macro-Concepts in Educational Technologies
Authors: L. R. Valencia Pérez, A. Morita Alexander, Peña A. Juan Manuel, A. Lamadrid Álvarez
Abstract:
This research reflects on and identifies the explanatory variables, and the relationships among them, that are involved in educational technology, all encompassed in four macro-concepts: cognitive inequality, economy, food, and language. These provide the guideline for a more detailed knowledge of educational systems, communication and equipment, physical space, and teachers; all of them interacting with each other give rise to what is called educational technology management. These elements contribute to a very specific knowledge of communications equipment, networks and computer equipment, systems, and content repositories. The intention is to establish the importance of knowing the global environment in the transfer of knowledge to poor countries, so that it does not diminish their capacity to be authentic and to preserve their cultures, their languages or dialects, their hierarchies, and their real needs; in short, to respect the customs of the different towns, villages, or cities that are intended to be reached through the use of internationally agreed professional educational technologies. The methodology used in this research is analytical-descriptive, which allows each of the variables that, in our opinion, must be taken into account to be explained, in order to achieve an optimal incorporation of educational technology in a model that yields results in the medium term. The idea is that the concepts will be progressively integrated with others of greater coverage until reaching macro-concepts of national coverage that serve as elements of conciliation in the different federal and international reforms. At the center of the model is educational technology, which is directly related to the concepts contained in factors such as the educational system, communication and equipment, spaces, and teachers, which are globally immersed in the macro-concepts of cognitive inequality, economy, food, and language. One of the major contributions of this article is to express this idea as an algorithm that is as unbiased as possible when evaluating this indicator, drawing the other indicators from international reference entities such as the OECD in the area of the education systems studied, so that they are not influenced by particular political or interest-group pressures. This work opens the way for a relationship between the entities involved, conceptual, procedural, and human, to clearly identify the convergence of their impact on the problem of education and how this relationship can contribute to an improvement; it also shows possibilities for reaching a comprehensive education reform for all.
Keywords: relationships of macro-concepts, cognitive inequality, economics, alimentation and language
Procedia PDF Downloads 199
432 Estimation of the Effect of Initial Damping Model and Hysteretic Model on Dynamic Characteristics of Structure
Authors: Shinji Ukita, Naohiro Nakamura, Yuji Miyazu
Abstract:
In considering the dynamic characteristics of a structure, the natural frequency and damping ratio are useful indicators. When performing dynamic design, it is necessary to select an appropriate initial damping model and hysteretic model. In the linear region, the setting of the initial damping model influences the response, and in the nonlinear region, the combination of the initial damping model and the hysteretic model influences the response. However, the dynamic characteristics of structures in the nonlinear region remain unclear. In this paper, we studied the effect of the initial damping model and the hysteretic model on the dynamic characteristics of a structure. For the initial damping model, initial-stiffness-proportional, tangent-stiffness-proportional, and Rayleigh-type damping were used. For the hysteretic model, the TAKEDA model and the Normal-trilinear model were used. As a study method, dynamic analysis was performed using a base-fixed lumped-mass model. During the analysis, the maximum acceleration of the input earthquake motion was gradually increased from 1 to 600 gal. The dynamic characteristics were calculated using the ARX model. Then, the characteristics of the 1st and 2nd natural frequencies and the 1st damping ratio were evaluated. The input earthquake motion was a simulated wave published by the Building Center of Japan. For the building model, an RC building with a 30×30 m floor plan on each story was assumed. The story height was 3 m and the total height was 18 m. The unit weight of each floor was 1.0 t/m². The building's natural period was set to 0.36 s, and the initial stiffness of each story was calculated by assuming the 1st mode to be an inverted triangle. First, we investigated how the dynamic characteristics differ depending on the initial damping model. With the increase in the maximum acceleration of the input earthquake motions, the 1st and 2nd natural frequencies decreased, and the 1st damping ratio increased. In the natural frequencies, the difference due to the initial damping model was small, but in the damping ratio, a significant difference was observed (initial stiffness proportional ≒ Rayleigh type > tangent stiffness proportional). The acceleration and displacement of the earthquake response were largest with the tangent-stiffness-proportional model. In the range where the acceleration response increased, the damping ratio was constant; in the range where the acceleration response was constant, the damping ratio increased. Next, we investigated how the dynamic characteristics differ depending on the hysteretic model. With the increase in the maximum acceleration of the input earthquake motions, the natural frequency decreased with the TAKEDA model, but with the Normal-trilinear model the natural frequency did not change. The damping ratio with the TAKEDA model was higher than that with the Normal-trilinear model, although the damping ratio increased in both cases. In conclusion, among the initial damping models, the tangent-stiffness-proportional model was evaluated most highly, and among the hysteretic models, the TAKEDA model was more appropriate than the Normal-trilinear model in the nonlinear region. Our results provide useful indicators for dynamic design.
Keywords: initial damping model, damping ratio, dynamic analysis, hysteretic model, natural frequency
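As a worked complement, Rayleigh-type damping sets C = αM + βK, with α and β fixed by a target damping ratio at two modal frequencies, while stiffness-proportional damping uses C = βK anchored at the first mode. The sketch below uses the 0.36 s fundamental period of the model above; the 5% damping ratio and the second-mode frequency are assumed for illustration only.

```python
import numpy as np

def rayleigh_coefficients(f1, f2, zeta):
    """Return (alpha, beta) for C = alpha*M + beta*K such that the damping
    ratio equals zeta at the two frequencies f1, f2 (Hz).

    Derived from zeta = alpha/(2*w) + beta*w/2 evaluated at w1 and w2."""
    w1, w2 = 2 * np.pi * f1, 2 * np.pi * f2
    alpha = 2 * zeta * w1 * w2 / (w1 + w2)
    beta = 2 * zeta / (w1 + w2)
    return alpha, beta

# Building period 0.36 s -> f1 ~ 2.78 Hz; 2nd-mode frequency is assumed
f1 = 1 / 0.36
f2 = 3 * f1                     # assumption for illustration
alpha, beta = rayleigh_coefficients(f1, f2, zeta=0.05)

# Stiffness-proportional damping (initial or tangent stiffness): C = beta_k*K
beta_k = 2 * 0.05 / (2 * np.pi * f1)
print(alpha, beta, beta_k)
```

With tangent-stiffness-proportional damping, beta_k multiplies the instantaneous (degraded) stiffness, which is why the effective damping drops as the structure softens, consistent with the trends reported above.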
Procedia PDF Downloads 178
431 Analyzing Electromagnetic and Geometric Characterization of Building Insulation Materials Using the Transient Radar Method (TRM)
Authors: Ali Pourkazemi
Abstract:
The transient radar method (TRM) is a non-destructive method introduced by the authors a few years ago. TRM can be classified as a wave-based non-destructive testing (NDT) method that can be used over a wide frequency range; nevertheless, any given application requires only a narrow band, ranging from a few GHz to a few THz, depending on the application. As a time-of-flight and real-time method, TRM can measure the electromagnetic properties of the sample under test not only quickly and accurately but also blindly. This means that it requires no prior knowledge of the sample under test. For multi-layer structures, TRM is not only able to detect changes related to any parameter within the multi-layer structure, but can also measure the electromagnetic properties and thickness of each layer individually. Although temperature, humidity, and general environmental conditions may affect the sample under test, they do not affect the accuracy of the blind TRM algorithm. In this paper, the electromagnetic properties and the thickness of individual building insulation materials - as single-layer structures - are measured experimentally. Finally, the correlation between the reflection coefficients and other technical parameters such as sound insulation, thermal resistance, thermal conductivity, compressive strength, and density is investigated. The samples under study are 30 cm x 50 cm, and their thickness varies from a few millimeters to 6 centimeters. The experiment is performed with both bistatic and differential hardware at 10 GHz. Since TRM is a narrow-band, free-space, real-time sensing system with high-speed computation for analysis, it has a wide range of potential applications, e.g., in the construction industry, rubber industry, piping industry, wind energy industry, automotive industry, biotechnology, food industry, pharmaceuticals, etc. The detection of metallic or plastic pipes, wires, etc. through or behind walls is a specific application for the construction industry.
Keywords: transient radar method, blind electromagnetic geometrical parameter extraction technique, ultrafast nondestructive multilayer dielectric structure characterization, electronic measurement systems, illumination, data acquisition performance, submillimeter depth resolution, time-dependent reflected electromagnetic signal blind analysis method, EM signal blind analysis method, time domain reflectometer, microwave, millimeter wave frequencies
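A minimal sketch of the time-of-flight relation that underlies such measurements: for a single dielectric slab, the delay between the front- and back-face echoes ties thickness to permittivity. This is a textbook relation offered for orientation, not the authors' blind TRM algorithm; the numbers are illustrative.

```python
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def layer_thickness(delta_t_s, eps_r):
    """Thickness of a single dielectric slab from the round-trip delay
    between front- and back-face echoes: d = c0 * dt / (2 * sqrt(eps_r))."""
    return C0 * delta_t_s / (2.0 * eps_r ** 0.5)

def relative_permittivity(delta_t_s, thickness_m):
    """Inverse relation: eps_r = (c0 * dt / (2 * d))**2."""
    return (C0 * delta_t_s / (2.0 * thickness_m)) ** 2

# Illustrative numbers: a 6 cm insulation panel with eps_r ~ 1.5
dt = 2 * 0.06 * 1.5 ** 0.5 / C0          # expected round-trip delay (~0.49 ns)
print(layer_thickness(dt, 1.5), relative_permittivity(dt, 0.06))
```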
Procedia PDF Downloads 69
430 Active Vibration Reduction for a Flexible Structure Bonded with Sensor/Actuator Pairs on Efficient Locations Using a Developed Methodology
Authors: Ali H. Daraji, Jack M. Hale, Ye Jianqiao
Abstract:
With the extensive use of high-specific-strength structures to optimise loading capacity and material cost in aerospace and most engineering applications, much effort has been expended on developing intelligent structures for active vibration reduction and structural health monitoring. These structures are highly flexible, have inherently low internal damping, and are associated with large vibrations and long decay times. Modifying such structures by adding lightweight piezoelectric sensors and actuators at efficient locations, integrated with an optimal control scheme, is considered an effective solution for structural vibration monitoring and control. The size and location of the sensors and actuators are important research topics, given their effects on the level of vibration detection and reduction and the amount of energy required by the controller. Several methodologies have been presented to determine the optimal locations of a limited number of sensors and actuators for small-scale structures. However, these studies have tackled the problem directly, measuring a fitness function based on eigenvalues and eigenvectors obtained from numerous combinations of sensor/actuator pair locations and converging on an optimal set using heuristic optimisation techniques such as genetic algorithms. This is computationally expensive for both small- and large-scale structures when a number of s/a pairs must be optimised to suppress multiple vibration modes. This paper proposes an efficient method to determine optimal locations for a limited number of sensor/actuator pairs for active vibration reduction of a flexible structure, based on the finite element method and Hamilton's principle. The current work takes the simplified approach of modelling a structure with sensors at all locations, subjecting it to an external force to excite the various modes of interest, and noting the locations of the sensors giving the largest average percentage sensor effectiveness, measured by dividing each sensor's output voltage by the maximum for each mode. The methodology was implemented for a cantilever plate under external force excitation to find the optimal distribution of six sensor/actuator pairs to suppress the first six modes of vibration. It is shown that the resulting optimal sensor locations agree well with published optimal locations, but with very much reduced computational effort and higher effectiveness. Furthermore, it is shown that collocated sensor/actuator pairs placed at these locations give very effective active vibration reduction using an optimal linear quadratic control scheme.
Keywords: optimisation, plate, sensor effectiveness, vibration control
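A numpy sketch of the selection rule described above: with sensors modelled at all candidate locations, express each sensor's output voltage as a percentage of the per-mode maximum, average over the modes of interest, and keep the highest-ranked locations. The voltage matrix here is random placeholder data standing in for finite element results.

```python
import numpy as np

rng = np.random.default_rng(42)
n_locations, n_modes, n_pairs = 100, 6, 6

# Placeholder: |output voltage| of a sensor at each candidate location for
# each excited mode (in practice obtained from an FE model with sensors
# at all locations under external force excitation).
V = np.abs(rng.normal(size=(n_locations, n_modes)))

# Percentage effectiveness per mode: voltage over the per-mode maximum
effectiveness = 100.0 * V / V.max(axis=0, keepdims=True)

# Average over the modes to suppress, then take the best-ranked locations
avg_eff = effectiveness.mean(axis=1)
best = np.argsort(avg_eff)[::-1][:n_pairs]
print("chosen s/a pair locations:", best)
```

This single ranking pass avoids evaluating eigen-solutions for every combination of pair placements, which is where the claimed reduction in computational effort comes from.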
Procedia PDF Downloads 232
429 6-Degree-Of-Freedom Spacecraft Motion Planning via Model Predictive Control and Dual Quaternions
Authors: Omer Burak Iskender, Keck Voon Ling, Vincent Dubanchet, Luca Simonini
Abstract:
This paper presents a Guidance and Control (G&C) strategy to approach and synchronize with potentially rotating targets. The proposed strategy generates and tracks a safe trajectory for space servicing missions, including tasks like approaching, inspecting, and capturing. The main objective of this paper is to validate the G&C laws using a Hardware-In-the-Loop (HIL) setup with realistic rendezvous and docking equipment. Throughout this work, the assumption of full relative state feedback is relaxed by using onboard sensors that introduce realistic errors and delays, while the proposed closed-loop approach demonstrates robustness to these challenges. Moreover, the G&C blocks are unified via the Model Predictive Control (MPC) paradigm, and the coupling between translational and rotational motion is addressed via a dual-quaternion-based kinematic description. In this work, G&C is formulated as a convex optimization problem where constraints such as thruster limits and output constraints are explicitly handled. Furthermore, the Monte Carlo method is used to evaluate the robustness of the proposed method to initial condition errors, uncertainty in the target's motion and attitude, and actuator errors. A capture scenario is tested on a robotic test bench with onboard sensors that estimate the position and orientation of a drifting satellite through camera imagery. Finally, the approach is compared with currently used robust H-infinity controllers and a guidance profile provided by the industrial partner. The HIL experiments demonstrate that the proposed strategy is a potential candidate for future space servicing missions because 1) the algorithm is real-time implementable, as convex programming offers deterministic convergence properties and guarantees a finite-time solution, 2) critical physical and output constraints are respected, 3) robustness to sensor errors and uncertainties in the system is proven, and 4) it couples translational motion with rotational motion.
Keywords: dual quaternion, model predictive control, real-time experimental test, rendezvous and docking, spacecraft autonomy, space servicing
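A small sketch of the dual-quaternion bookkeeping mentioned above: a pose is q_r + ε q_d with q_d = ½ t ⊗ q_r, and poses compose by dual-quaternion multiplication, which is what couples translation to rotation in one algebra. This illustrates the kinematic description only, not the authors' MPC formulation; the composed poses are arbitrary examples.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions in [w, x, y, z] convention."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def dq_from_pose(q_rot, t):
    """Unit dual quaternion (q_r, q_d) for rotation q_rot and translation t,
    with the dual part q_d = 0.5 * t_quat * q_r."""
    t_quat = np.array([0.0, *t])
    return q_rot, 0.5 * qmul(t_quat, q_rot)

def dq_mul(a, b):
    """Dual quaternion product: (ar + e*ad)(br + e*bd) = ar*br + e*(ar*bd + ad*br)."""
    ar, ad = a
    br, bd = b
    return qmul(ar, br), qmul(ar, bd) + qmul(ad, br)

# Compose a 90-degree yaw plus 1 m x-translation with a pure 2 m y-translation
q90 = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
pose1 = dq_from_pose(q90, [1.0, 0.0, 0.0])
pose2 = dq_from_pose(np.array([1.0, 0.0, 0.0, 0.0]), [0.0, 2.0, 0.0])
print(dq_mul(pose1, pose2))
```

Because a single eight-parameter object carries both attitude and position, the prediction model in an MPC loop can propagate the coupled 6-DOF kinematics without maintaining separate rotation and translation states.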
Procedia PDF Downloads 146