Search results for: seize capability
137 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series
Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold
Abstract:
To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. The global eddy covariance flux tower network (FLUXNET) and its regional counterparts (e.g., OzFlux, AmeriFlux, ChinaFLUX) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance to validate process modelling analyses, field surveys, and remote sensing assessments, there are serious concerns about the challenges associated with the technique, e.g., data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux, avoiding the limitations of any single algorithm and thereby reducing the error and uncertainty associated with the gap-filling process. In this study, data from five towers in the OzFlux network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, combining five feedforward neural networks (FFNNs) with different structures and an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹ respectively (3.54 being the best single FFNN). The most significant improvement occurred in the estimation of the extreme diurnal values (during midday and sunrise), as well as the nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling.
The towers, as well as the seasons, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. In addition, the performance difference between the ensemble model and its individual components was larger during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep), due to higher rates of photosynthesis, which lead to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy and robustness of CO₂ flux gap-filling. Ensemble machine learning models can therefore potentially improve data estimation and regression outcomes when a single algorithm appears to leave no further room for improvement.
Keywords: carbon flux, eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network
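The two-layer stacking idea described above (first-layer learners whose outputs feed a second-layer combiner) can be illustrated with a minimal numpy sketch. This is not the authors' model: the five FFNNs are reduced to two toy linear learners that each see only part of a synthetic signal, and the XGB meta-learner is replaced by a least-squares combiner, purely to show why the second layer can beat any single first-layer model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "flux" data: the target depends on two drivers, but each
# first-layer model sees only one of them (a stand-in for base learners
# with different structures capturing different parts of the signal).
X = rng.normal(size=(500, 2))
y = X[:, 0] + X[:, 1]

def fit_linear(features, target):
    """Least-squares fit with an intercept; returns the coefficient vector."""
    A = np.column_stack([features, np.ones(len(target))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coef

def predict_linear(coef, features):
    A = np.column_stack([features, np.ones(len(features))])
    return A @ coef

# First layer: two weak base learners, each using only one driver.
base_a = fit_linear(X[:, :1], y)
base_b = fit_linear(X[:, 1:], y)
pred_a = predict_linear(base_a, X[:, :1])
pred_b = predict_linear(base_b, X[:, 1:])

# Second layer: a meta-learner trained on the first-layer outputs
# (plain least squares here, standing in for the XGB stage).
meta_in = np.column_stack([pred_a, pred_b])
meta = fit_linear(meta_in, y)
pred_ens = predict_linear(meta, meta_in)

rmse = lambda p: float(np.sqrt(np.mean((y - p) ** 2)))
print(rmse(pred_a), rmse(pred_b), rmse(pred_ens))
```

Because each base learner captures a complementary part of the signal, the stacked prediction has a far lower RMSE than either base model, which is the effect the abstract reports at a much smaller scale.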
Procedia PDF Downloads 139
136 Advancing Food System Resilience by Pseudocereals Utilization
Authors: Yevheniia Varyvoda, Douglas Taren
Abstract:
At the aggregate level, climate variability, the rising number of active violent conflicts, the globalization and industrialization of agriculture, the loss of diversity in crop species, the increasing demand for agricultural production, and the adoption of healthy and sustainable dietary patterns are exacerbating factors of food system destabilization. The importance of pseudocereals in fuelling and sustaining resilient food systems is recognized by leading organizations working to end hunger, particularly for their critical capability to diversify livelihood portfolios and provide plant-sourced healthy nutrition in the face of systemic shocks and stresses. Amaranth, buckwheat, and quinoa are the most promising and most widely used pseudocereals for ensuring food system resilience under climate change, owing to their high nutritional profile, good digestibility, palatability, medicinal value, abiotic stress tolerance, pest and disease resistance, rapid growth rate, adaptability to marginal and degraded lands, high genetic variability, low input requirements, and income-generation capacity. The study provides the rationale, and examples, for advancing the resilience of local and regional food systems by scaling up the utilization of amaranth, buckwheat, and quinoa along all components of food systems to architect indirect nutrition interventions and climate-smart approaches. Thus, this study aims to explore the drivers of ancient pseudocereal utilization, the potential resilience benefits that can be derived from using them, and the challenges and opportunities for pseudocereal utilization within the food system components. The PSALSAR framework for conducting systematic reviews and meta-analyses in environmental science research was used to answer these research questions.
Nevertheless, the utilization of pseudocereals has been slow for a number of reasons, namely the increased production of commercial and major staples such as maize, rice, wheat, soybean, and potato; displacement due to pressure from imported crops; lack of knowledge about value-adding practices in the food supply chain; limited technical knowledge and awareness of nutritional and health benefits; and the absence of marketing channels together with limited access to extension services and information about resilient crops. The success of climate-resilient pathways based on pseudocereal utilization underlines the importance of co-designed activities that use modern technologies and the high-value traditional knowledge of underutilized crops, with a strong acknowledgment of cultural norms, to increase community-level economic and food system resilience.
Keywords: resilience, pseudocereals, food system, climate change
Procedia PDF Downloads 79
135 Examining the Relationship Between Green Procurement Practices and Firm’s Performance in Ghana
Authors: Clement Yeboah
Abstract:
Prior research concludes that environmental commitment positively drives organisational performance. Nonetheless, the nexus, and the conditions under which environmental commitment capabilities contribute to a firm’s performance, are less understood. The purpose of this quantitative relational study was to examine the relationship between environmental commitment and the performance of 500 firms in Ghana. The researchers drew on the resource-based view to conceptualize environmental commitment and green procurement practices as resource capabilities that enhance firm performance, and on the contingent resource-based view to examine the green leadership orientation conditions under which environmental commitment capability contributes to firm performance through green procurement practices. The study’s conceptual framework was tested on primary data from firms in the Ghanaian market, and PROCESS Macro was used to test the study’s hypotheses. Green procurement practices mediated the association between environmental commitment capabilities and firm performance. The study further examined whether green leadership orientation positively moderates this indirect relationship. While conventional wisdom suggests that improved environmental commitment capabilities help improve a firm’s performance, this study tested that presumed relationship and provides theoretical arguments and empirical evidence for how green procurement practices, uniquely and in synergy with green leadership orientation, transform it. The results indicated a positive correlation between environmental commitment and firm performance.
This result suggests that firms that prioritize environmental sustainability and demonstrate a strong commitment to environmentally responsible practices tend to experience better overall performance, including financial gains, operational efficiency, an enhanced reputation, and improved relationships with stakeholders. The findings can inform policy formulation in Ghana related to environmental regulations, incentives, and support mechanisms: policymakers can use the insights to design policies that encourage and reward firms for their environmental commitments, thereby fostering a more sustainable and environmentally responsible business environment. The findings can also influence the design and development of educational programs in Ghana, specifically in fields related to sustainability, environmental management, and corporate social responsibility (CSR); institutions may consider integrating environmental and sustainability topics into their business and management courses to create awareness and promote responsible practices among future business professionals. Finally, the results can promote the adoption of environmental accounting practices in Ghana: by recognizing and measuring the environmental impacts and costs associated with business activities, firms can better understand the financial implications of their environmental commitments and develop strategies for improved performance.
Keywords: firm’s performance, green procurement practice, environmental commitment, environmental management, sustainability
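The mediation logic in this abstract (environmental commitment → green procurement practices → firm performance) can be sketched numerically. This is a hedged illustration, not the study's analysis: the construct scores are simulated, and plain OLS via numpy stands in for PROCESS Macro; the indirect effect is the product of the X→M path (a) and the M→Y path controlling for X (b).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical standardised construct scores: environmental commitment (X),
# green procurement practices (M, the mediator), firm performance (Y).
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(scale=0.5, size=n)            # true path a = 0.6
Y = 0.5 * M + 0.2 * X + rng.normal(scale=0.5, size=n)  # true b = 0.5, direct effect 0.2

def ols(design, target):
    """OLS with intercept; returns slope coefficients first."""
    A = np.column_stack([design, np.ones(len(target))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coef

a = ols(X[:, None], M)[0]                # effect of X on the mediator
b = ols(np.column_stack([M, X]), Y)[0]   # effect of M on Y, controlling for X
indirect = a * b                         # mediated (indirect) effect, ~0.6 * 0.5
print(a, b, indirect)
```

In a real analysis the significance of `indirect` would be judged with a bootstrap confidence interval, which is essentially what PROCESS Macro automates.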
Procedia PDF Downloads 85
134 Validation of Global Ratings in Clinical Performance Assessment
Authors: S. J. Yune, S. Y. Lee, S. J. Im, B. S. Kam, S. Y. Baek
Abstract:
This study aimed to determine the reliability of clinical performance assessments, which have been emphasized by ability-based education, and of professors’ overall assessment methods. We addressed the following questions. First, is there a difference in what evaluators consider the main variables affecting the clinical performance test, according to the length of their service and the number of evaluations they have administered? Second, what is the relationship among the overall global rating score (G), the analytic global rating score (Gc), and the sum of the task-specific analytic checklist scores (C)? In particular, how do the analytic global rating (Gc), with six components in the OSCE and four in the CPX sub-domains (aseptic practice, precision, systemic approach, proficiency, successfulness, and attitude), and the task-specific analytic checklist score sum (C) affect the professor’s overall global rating score (G)? We studied 75 professors who evaluated third- and fourth-year medical students at the 2016 Bugyeoung Consortium clinical skills performance test at Pusan National University Medical School in South Korea (39 professors in the OSCE, 36 in the CPX; all consented to participate in our study). Each evaluator used three forms: a task-specific analytic checklist, a subsequent analytic global rating scale with six sub-domains, and an overall global scale. After the evaluation, the professors responded to a questionnaire on the important factors in clinical performance assessment. The data were analyzed by frequency analysis, correlation analysis, and hierarchical regression analysis using SPSS 21.0.
Their understanding of the overall assessment was analyzed by dividing the subjects into groups based on experience. The evaluators considered ‘precision’ most important in the overall OSCE assessment, and ‘precise and accurate physical examination’, ‘systemic approach to taking patient history’, and ‘diagnostic skill capability’ in the overall CPX assessment. For the OSCE there was no clear difference of opinion about the main factors, but for the CPX there was. The analytic global rating scale score, the overall rating scale score, and the analytic checklist score showed meaningful mutual correlations. According to the regression analysis, the task-specific checklist score sum had the greatest effect on the overall global rating. Professors regarded the task-specific analytic checklist total score as best reflecting the overall OSCE test score, followed by aseptic practice, precision, systemic approach, proficiency, successfulness, and attitude on the subsequent analytic global rating scale. For the CPX, the subsequent analytic global rating scale score, the overall global rating scale score, and the task-specific checklist score showed meaningful mutual correlations. These findings support the validity of professors’ global ratings in clinical performance assessment.
Keywords: global rating, clinical performance assessment, medical education, analytic checklist
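The correlation-plus-regression design above can be sketched in a few lines of numpy. This is an illustrative sketch on simulated scores, not the study's data: three correlated score series stand in for G, Gc, and C, and a least-squares fit of G on the other two mirrors the finding that the checklist sum carries the largest weight.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300

# Hypothetical scores for one station: task-specific checklist sum (C),
# analytic global rating (Gc), and overall global rating (G).
C = rng.normal(70.0, 10.0, size=n)
Gc = 0.8 * C + rng.normal(scale=5.0, size=n)
G = 0.6 * C + 0.3 * Gc + rng.normal(scale=4.0, size=n)

# Mutual correlations among the three scores (rows: G, Gc, C).
r = np.corrcoef(np.vstack([G, Gc, C]))

# Regression of G on C and Gc: in this simulation the checklist sum
# carries the larger coefficient, as the abstract reports.
A = np.column_stack([C, Gc, np.ones(n)])
beta, *_ = np.linalg.lstsq(A, G, rcond=None)
print(r[0, 1], r[0, 2], beta[:2])
```

A hierarchical version would simply enter C first and Gc second and compare the change in explained variance between the two steps.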
Procedia PDF Downloads 235
133 Investigating the Flow Physics within Vortex-Shockwave Interactions
Authors: Frederick Ferguson, Dehua Feng, Yang Gao
Abstract:
No doubt, current CFD tools have a great many technical limitations, and active research is being done to overcome them. Current areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions to the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One commonly implemented approach is known as direct numerical simulation, DNS. This approach requires a spatial grid fine enough to capture the smallest length scale of the turbulent fluid motion, known as the Kolmogorov scale. It is of interest to note that the Kolmogorov scale must be resolved throughout the domain of interest, and at a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large. As a result, the available computational resources are usually inadequate for DNS-related tasks, and at this time in its development, DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique that is capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible, and both high and low Reynolds number flow fields. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the nonstationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error capability will be described.
Further, the IDS will be used to solve the inviscid and viscous Burgers equations, with the goal of analyzing their solutions over a considerable length of time, thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave-vortex interaction problem for low supersonic conditions, and the reflected oblique shock-vortex interaction problem. The IDS solutions obtained in each of these cases will be explored further in an effort to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effect of the Mach number on the intensity of vortex-shockwave interactions.
Keywords: vortex-dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme
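The viscous Burgers equation named above, u_t + u u_x = ν u_xx, is a standard unsteady benchmark, and a minimal explicit finite-difference sketch shows why: it combines nonlinear steepening with viscous dissipation in one dimension. This sketch uses upwind convection and central diffusion on a periodic domain; it illustrates the benchmark problem, not the IDS itself.

```python
import numpy as np

# Viscous Burgers equation u_t + u u_x = nu u_xx on a periodic domain,
# advanced with upwind convection and central (FTCS) diffusion.
nx, nu = 200, 0.05
dx = 2.0 * np.pi / nx
dt = 0.2 * dx**2 / nu            # step chosen well inside the stability limit
x = np.arange(nx) * dx
u = np.sin(x) + 0.5              # smooth initial profile, max |u| = 1.5

for _ in range(2000):
    up = np.roll(u, -1)          # u[i+1] (periodic wrap)
    um = np.roll(u, 1)           # u[i-1]
    # Upwind convection: difference in the direction the wave comes from.
    conv = np.where(u > 0, u * (u - um) / dx, u * (up - u) / dx)
    diff = nu * (up - 2 * u + um) / dx**2
    u = u + dt * (diff - conv)

print(float(np.max(np.abs(u))))
```

Under the chosen time step the scheme is monotone, so the solution obeys a discrete maximum principle: the wave steepens into a viscous shock and then decays without ever exceeding its initial amplitude.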
Procedia PDF Downloads 137
132 Environmental Literacy of Teacher Educators in Colleges of Teacher Education in Israel
Authors: Tzipi Eshet
Abstract:
The importance of environmental education as part of a national strategy to promote the environment is recognized around the world. Lecturers at colleges of teacher education bear considerable responsibility, directly and indirectly, for the environmental literacy of students who will end up teaching in the school system. This study examined whether lecturers in colleges of teacher education and teacher training in Israel are able and willing to develop environmental literacy among their students. Capability and readiness are assessed by evaluating the dimensions of environmental literacy, which include knowledge of environmental issues, attitudes related to the environmental agenda, and 'green' patterns of behavior in everyday life. The survey included 230 lecturers from 22 state colleges, drawn from various sectors (secular, religious, and Arab), different academic fields, and different personal backgrounds. Firstly, the results show that the higher the commitment to environmental issues, the lower the satisfaction with the current situation. In general, the respondents show positive environmental attitudes in all categories examined; they feel that they can personally influence the responsible environmental behavior of others and internalize environmental education in schools and colleges, and they also report positive environmental behavior. There are no significant differences between lecturers of different background characteristics when it comes to behavior patterns that generate personal income (e.g., returning bottles for deposit). Women show more responsible environmental behavior than men. Jewish lecturers, in most categories, show more responsible behavior than Druze and Arab lecturers; however, with regard to attitudes, Arab and Druze lecturers have a stronger sense of their ability to influence the environmental agenda. The knowledge test, which included 15 questions, was mostly based on basic environmental issues.
The average score was adequate (83.6). The environmental literacy of science lecturers is significantly higher than that of the other lecturers. The larger their environmental knowledge base, the more pro-environmental their attitudes, and the more responsible they feel toward the environment. It can be concluded from the research findings that knowledge is a fundamental basis for developing environmental literacy: environmental knowledge has a positive effect on the development of environmental commitment, which is reflected in attitudes and behavior. This conclusion probably also applies to the general public. Hence, it is of great importance to expand knowledge of environmental issues among the general public, and among teacher educators in particular. From the open questions in the survey, it is evident that most of the lecturers are interested in the subject and understand the need to integrate environmental issues in the colleges, either directly, by teaching courses on the environment, or indirectly, by integrating environmental issues into different professions and by asking students to set an example (for instance, avoiding unnecessary printing and keeping the environment clean). The curriculum at colleges should include a variety of options for developing and enhancing the environmental literacy of student teachers, but first there must be a focus on bringing their teachers to a high level of literacy so that they can meet the difficult and important task they face.
Keywords: colleges of teacher education, environmental literacy, environmental education, teacher educators
Procedia PDF Downloads 284
131 Analysis of Digital Transformation in Banking: The Hungarian Case
Authors: Éva Pintér, Péter Bagó, Nikolett Deutsch, Miklós Hetényi
Abstract:
The process of digital transformation has a profound influence on all sectors of the worldwide economy and the business environment. The influence of blockchain technology can be observed in the digital economy and e-government, rendering it an essential element of a nation's growth strategy. The banking industry is experiencing a significant expansion of financial technology firms. Utilizing developing technologies such as artificial intelligence (AI), machine learning (ML), and big data (BD), these entrants offer more streamlined financial solutions, promptly address client demands, and present a challenge to incumbent institutions. The advantages of digital transformation are evident in the corporate realm, and firms that resist its adoption put their survival at risk. The advent of digital technologies has revolutionized the business environment, streamlining processes and creating opportunities for enhanced communication and collaboration. With the aid of digital technologies, businesses can now swiftly and effortlessly retrieve vast quantities of information while accelerating the creation of new and improved products and services. Big data analytics is generally recognized as a transformative force in business, considered the fourth paradigm of science, and seen as the next frontier for innovation, competition, and productivity. Big data, an emerging technology that is shaping the future of the banking sector, offers numerous advantages to banks. It enables them to track consumer behavior effectively and make informed decisions, thereby enhancing their operational efficiency. Banks may embrace big data technologies to identify fraud promptly and efficiently, as well as to gain insights into client preferences, which can then be leveraged to create better-tailored products and services.
Moreover, the utilization of big data technology empowers banks to develop more intelligent and streamlined models for accurately recognizing and targeting the suitable clientele with pertinent offers. There is a scarcity of research on big data analytics in the banking industry, with the majority of existing studies examining only the advantages and prospects associated with big data. Although big data technologies are crucial, there is a dearth of empirical evidence about the role of big data analytics (BDA) capabilities in bank performance. This research addresses a gap in the existing literature by introducing a model that combines the resource-based view (RBV), the technology-organization-environment framework (TOE), and dynamic capability theory (DC). The study investigates the influence of BDA utilization on the performance of market and risk management, supported by a comparative examination of Hungarian mobile banking services.
Keywords: big data, digital transformation, dynamic capabilities, mobile banking
Procedia PDF Downloads 64
130 Nursing Preceptors' Perspectives of Assessment Competency
Authors: Watin Alkhelaiwi, Iseult Wilson, Marian Traynor, Katherine Rogers
Abstract:
Clinical nursing education allows nursing students to gain essential knowledge from practice experience and to develop nursing skills in a variety of clinical environments. Providing opportunities for practice in a clinical environment makes it easier for nursing students to integrate theoretical knowledge and practical skills. Nursing competency is an essential capability required to fulfill nursing responsibilities, and effective mentoring in clinical settings helps nursing students develop the necessary competence and promotes the integration of theory and practice. Preceptors play a considerable role in clinical nursing education, including the supervision of nursing students undergoing a rigorous clinical practicum, and they are also involved in the clinical assessment of nursing students' competency. The assessment of nursing students' competence by professional practitioners is essential to establish whether nurses have developed an adequate level of competence to deliver safe nursing care. Competency assessment remains challenging for nursing educators and preceptors, particularly owing to the complexity of the process: consistency in assessment methods and tools is lacking, as are valid and reliable tools for measuring competence in clinical practice. Nurse preceptors must assess students' competencies to prepare them for future professional responsibilities, yet they encounter difficulties in doing so owing to the nature of the assessment process, the lack of standardised assessment tools, and a demanding clinical environment. The purpose of this study is to examine nursing preceptors' experiences of assessing nursing interns' competency in Saudi Arabia. The study has three objectives. The first is to examine the preceptors' view of the Saudi assessment tool in relation to preceptorship, assessment, the assessment tool, the nursing curriculum, and the grading system.
The second and third objectives are to examine preceptors' views of "competency" in nursing and their interpretations of the concept, and to assess the implications of the research in relation to Saudi Vision 2030. The study uses an exploratory sequential mixed-methods design involving a two-phase project: a qualitative focus group study in phase 1, and a quantitative study, a descriptive cross-sectional design (online survey), in phase 2. The results will inform the preceptors' view of the Saudi assessment tool in relation to specific areas, including preceptorship (how the preceptors are prepared to be assessors) and assessment and assessment tools (identifying the appropriateness of the instrument for clinical practice). The results will also identify the challenges and difficulties that preceptors face. The focus group interview data will be analysed thematically, and SPSS software will be used for the analysis of the online survey data.
Keywords: clinical assessment tools, clinical competence, competency assessment, mentor, nursing, nurses, preceptor
Procedia PDF Downloads 66
129 Applying Miniaturized Near-Infrared Technology for Commingled and Microplastic Waste Analysis
Authors: Monika Rani, Claudio Marchesi, Stefania Federici, Laura E. Depero
Abstract:
Degradation of the aquatic environment by plastic litter, especially microplastics (MPs), i.e., any water-insoluble solid plastic particle with its longest dimension in the range of 1 µm to 1000 µm (1 mm), is an unfortunate indication of the advancement of the Anthropocene on Earth. Microplastics formed by natural weathering processes are termed secondary microplastics, while those synthesized in industry are called primary microplastics. Their presence from the highest peaks to the deepest explored points of the oceans, and their resistance to biological and chemical decay, have adversely affected the environment, especially marine life. Even though the presence of MPs in the marine environment is well reported, a legitimate and authentic analytical technique to sample, analyze, and quantify MPs is still in the development and testing stages. Among characterization techniques, vibrational spectroscopic techniques are widely adopted in the field of polymers, and their ongoing miniaturization is poised to revolutionize the plastic recycling industry. In this scenario, the capability and feasibility of miniaturized near-infrared (MicroNIR) spectroscopy combined with chemometric tools were investigated for the qualitative and quantitative analysis of urban plastic waste collected from a recycling plant and of microplastic mixtures fragmented in the lab. Based on the Resin Identification Code, 250 plastic samples were used for the macroplastic analysis and to set up a library of polymers. Subsequently, the MicroNIR spectra were analysed through multivariate modelling. Principal Component Analysis (PCA) was used as an unsupervised tool to find trends within the data. After the exploratory PCA analysis, a supervised classification tool was applied to distinguish the different plastic classes, and a database containing the NIR spectra of the polymers was built.
For the microplastic analysis, the three most abundant polymers in plastic litter, PE, PP, and PS, were mechanically fragmented in the laboratory to micron size. Blends of these three microplastics were prepared according to a designed ternary composition plot. After exploratory PCA analysis, a quantitative Partial Least Squares Regression (PLSR) model allowed prediction of the percentage of microplastics in the mixtures. From a complete dataset of 63 compositions, the PLS model was calibrated with 42 data points and used to predict the composition of the 21 unknown mixtures of the test set. The advantage of the consolidated NIR chemometric approach lies in the quick evaluation of whether a sample is macro- or microplastic, contaminated or not, and coloured or not, with no sample pre-treatment. The technique can be applied to larger sample volumes and even allows on-site evaluation, thereby satisfying the need for a high-throughput strategy.
Keywords: chemometrics, microNIR, microplastics, urban plastic waste
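The calibrate-then-predict workflow above can be sketched with numpy. This is a hedged toy version, not the study's model: the "pure" spectra are synthetic Gaussian bands, the ternary grid is my own (66 points rather than the study's 63), and principal component regression (PCA scores plus least squares) stands in for PLSR, since both exploit the same low-rank structure of linearly mixed spectra.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "pure" NIR-like spectra for three polymers (Gaussian bands).
wl = np.linspace(0.0, 1.0, 120)
def band(center, width):
    return np.exp(-((wl - center) / width) ** 2)
pure = np.vstack([band(0.2, 0.05), band(0.5, 0.07), band(0.8, 0.05)])

# Calibration mixtures on a ternary grid: each row of fractions sums to 1.
fracs = [(i / 10, j / 10, (10 - i - j) / 10)
         for i in range(11) for j in range(11 - i)]
comp = np.array(fracs)
spectra = comp @ pure + rng.normal(scale=0.005, size=(len(comp), wl.size))

# PCA scores via SVD of the mean-centred spectra, then least squares
# from scores to composition (principal component regression).
mean_s = spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(spectra - mean_s, full_matrices=False)
scores = (spectra - mean_s) @ Vt[:3].T
A = np.column_stack([scores, np.ones(len(scores))])
B, *_ = np.linalg.lstsq(A, comp, rcond=None)

# Predict an "unknown" 40/30/30 blend from its spectrum alone.
unknown = np.array([0.4, 0.3, 0.3]) @ pure
sc = (unknown - mean_s) @ Vt[:3].T
pred = np.concatenate([sc, [1.0]]) @ B
print(pred)
```

Because the mixture spectra are linear in composition, the recovered fractions land close to the true 0.4/0.3/0.3 blend; PLSR plays the same role when the signal-to-noise structure is less favourable.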
Procedia PDF Downloads 165
128 In vitro Regeneration of Neural Cells Using Human Umbilical Cord Derived Mesenchymal Stem Cells
Authors: Urvi Panwar, Kanchan Mishra, Kanjaksha Ghosh, ShankerLal Kothari
Abstract:
Background: The increasing prevalence of neurodegenerative diseases has become a global challenge for the medical sciences. Adult neural stem cells are rare, and obtaining them from the central nervous system requires an invasive and painful procedure. Mesenchymal stem cell (MSC) therapies have shown remarkable application in the treatment of various cell injuries and cell loss. MSCs can be derived from various sources, such as adult tissues, human bone marrow, umbilical cord blood, and cord tissue. MSCs from these sources have similar proliferation and differentiation capabilities, but human umbilical cord-derived mesenchymal stem cells (hUCMSCs) have proved more beneficial with respect to cell procurement, differentiation into other cells, preservation, and transplantation. Material and method: The human umbilical cord is easily obtainable and non-controversial compared with bone marrow and other adult tissues. The umbilical cord can be collected after the delivery of the baby, and its tissue can be cultured using the explant culture method. Cell culture media such as DMEM/F12 + 10% FBS and DMEM/F12 + neural growth factors (bFGF, human noggin, B27) with antibiotics (streptomycin/gentamycin) were used to culture the mesenchymal stem cells and to differentiate them into neural cells, respectively. The MSCs were characterised by flow cytometry for the surface markers CD90, CD73, and CD105, and by a colony-forming unit assay. The differentiated neural cells will be characterised by fluorescence markers for neurons, astrocytes, and oligodendrocytes; by quantitative PCR for the genes Nestin and NeuroD1; and by Western blotting for the GAP43 protein. Result and discussion: High-quality MSCs were isolated in large numbers from the human umbilical cord via the explant culture method. The MSCs obtained were differentiated into neural cells such as neurons, astrocytes, and oligodendrocytes.
The differentiated neural cells can be used to treat neural injuries and neural cell loss, delivered non-invasively via the cerebrospinal fluid (CSF) or the blood. Moreover, the MSCs can also be delivered directly to injured sites, where they differentiate into neural cells. The human umbilical cord is therefore demonstrated to be an inexpensive and easily available source of MSCs, and hUCMSCs are a potential source for neural cell therapies and for neural regeneration after neural cell injury and loss. This line of research will help in treating and managing neural cell damage and neurodegenerative diseases such as Alzheimer's and Parkinson's. The study still has a long way to go, but it is a promising approach for many neural disorders for which no satisfactory management is currently available.
Keywords: bone marrow, cell therapy, explant culture method, flow cytometry, human umbilical cord, mesenchymal stem cells, neurodegenerative diseases, neuroprotective, regeneration
Procedia PDF Downloads 202
127 An Absolute Femtosecond Rangefinder for Metrological Support in Coordinate Measurements
Authors: Denis A. Sokolov, Andrey V. Mazurkevich
Abstract:
In the modern world, there is an increasing demand for highly precise measurements in various fields, such as aircraft, shipbuilding, and rocket engineering. This has resulted in the development of measuring instruments capable of measuring the coordinates of objects within a range of up to 100 meters, with an accuracy of up to one micron. The calibration process for such optoelectronic measuring devices (trackers and total stations) involves comparing their measurement results to a reference measurement based on a linear or spatial basis. The reference used in such measurements could be a reference base or a reference range finder (EDM) with the capability to measure angle increments; the base would serve as a set of reference points for this purpose. The concept of the EDM for replicating the unit of measurement has been implemented on a mobile platform, which allows for angular changes in the direction of the laser radiation in two planes. To determine the distance to an object, a high-precision interferometer of in-house design is employed. The laser radiation travels to corner reflectors, which form a spatial reference with precisely known positions. When the femtosecond pulses from the reference arm and the measuring arm coincide, an interference signal is created, repeating at the frequency of the laser pulses. The distance between reference points determined by the interference signals is calculated in accordance with recommendations from the International Bureau of Weights and Measures for the indirect measurement of the time of passage of light, according to the definition of the meter. This distance is D/2 = c/(2nF), approximately 2.5 meters, where c is the speed of light in a vacuum, n is the refractive index of the medium, and F is the repetition frequency of the femtosecond pulses.
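The quoted pulse repetition frequency can be recovered from the abstract's own numbers; a reconstruction, taking n ≈ 1 for air:

```latex
\[
\frac{D}{2} = \frac{c}{2nF} \approx 2.5\,\mathrm{m}
\quad\Longrightarrow\quad
F = \frac{c}{2n \cdot 2.5\,\mathrm{m}}
  \approx \frac{3\times 10^{8}\,\mathrm{m/s}}{5\,\mathrm{m}}
  = 60\,\mathrm{MHz}.
\]
```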
The achieved type A uncertainty of distance measurements to reflectors 64 m away (N·D/2, where N is an integer), spaced 1 m apart relative to each other, does not exceed 5 microns. The angular uncertainty is calculated theoretically, since standard high-precision ring encoders will be used and are not a focus of this study. The type B uncertainty components are not taken into account either, as the components that contribute most do not depend on the selected coordinate measuring method. This technology is being explored in the context of laboratory applications under controlled environmental conditions, where an advantage in accuracy can be achieved. In general, the EDM tests showed high accuracy, and theoretical calculations and experimental studies on an EDM prototype have shown that the type A uncertainty of distance measurements to reflectors can be less than 1 micrometer. The results of this research will be used to develop a highly accurate mobile absolute range finder for the calibration of high-precision laser trackers and laser rangefinders, as well as other equipment, using a 64-meter laboratory comparator as a reference.
Keywords: femtosecond laser, pulse correlation, interferometer, laser absolute range finder, coordinate measurement
Procedia PDF Downloads 59
126 Effect of Silica Nanoparticles on Three-Point Flexural Properties of Isogrid E-Glass Fiber/Epoxy Composite Structures
Authors: Hamed Khosravi, Reza Eslami-Farsani
Abstract:
Increased interest in lightweight and efficient structural components has created the need for materials with improved mechanical properties. Composite materials are therefore widely used in many applications owing to their durability, high strength and modulus, and low weight. Among the various composite structures, grid-stiffened structures are extensively used in aerospace and aircraft applications because of their higher specific strength and stiffness, higher impact resistance, superior load-bearing capacity, ease of repair, and excellent energy absorption capability. Although there are a good number of publications on the design aspects and fabrication of grid structures, to our knowledge little systematic work has been reported on modifying their constituent materials to improve their properties. The aim of this research is therefore to study the reinforcing effect of silica nanoparticles on the flexural properties of epoxy/E-glass isogrid panels under a three-point bending test. Samples containing 0, 1, 3, and 5 wt.% silica nanoparticles, with 44 and 48 vol.% glass fibers in the rib and skin components respectively, were fabricated using a manual filament winding method. Ultrasonic and mechanical routes were employed to disperse the nanoparticles in the epoxy resin. To fabricate the ribs, unidirectional fiber rovings were impregnated with the matrix mixture (epoxy + nanoparticles) and then laid up layer by layer into the grooves of a silicone mold. Subsequently, four plies of woven fabric, after impregnation with the same matrix mixture, were layered on top of the ribs to produce the skin. To ensure complete curing and maximum strength, the samples were tested after 7 days of holding at room temperature.
According to the load-displacement graphs, the following trend was observed for all samples loaded from the skin side: after an initial linear region and a load peak, the curve dropped abruptly and then showed a typical energy-absorption region. It is worth mentioning that in these structures, considerable energy absorption was observed after the primary failure at the load peak. The results showed that the flexural properties of the nanocomposite samples were always higher than those of the nanoparticle-free sample. The maximum enhancement in flexural peak load and energy absorption was found at 3 wt.% nanoparticle loading. Furthermore, the flexural stiffness increased continually with increasing silica loading. In conclusion, this study suggests that the addition of nanoparticles is a promising method to improve the flexural properties of grid-stiffened fibrous composite structures.
Keywords: grid-stiffened composite structures, nanocomposite, three-point flexural test, energy absorption
Procedia PDF Downloads 341
125 Effectiveness, Safety, and Tolerability Profile of Stribild® in HIV-1-infected Patients in the Clinical Setting
Authors: Heiko Jessen, Laura Tanus, Slobodan Ruzicic
Abstract:
Objectives: The efficacy of Stribild®, an integrase strand transfer inhibitor (INSTI)-based single-tablet regimen (STR), has been evaluated in randomized clinical trials, where it demonstrated a durable capability to achieve sustained suppression of HIV-1 RNA levels. However, differences in monitoring frequency, selection bias, and the profile of patients enrolled in the trials may all result in divergent efficacy of this regimen in routine clinical settings. The aim of this study was to assess the virologic outcomes, safety, and tolerability profile of Stribild® in a routine clinical setting. Methods: This was a retrospective monocentric analysis of HIV-1-infected patients who started with or were switched to Stribild®. Virological failure (VF) was defined as confirmed HIV-RNA > 50 copies/ml. The minimum time of follow-up was 24 weeks. The percentage of patients remaining free of therapeutic failure was estimated using the time-to-loss-of-virologic-response (TLOVR) algorithm, by intent-to-treat analysis. Results: We analyzed the data of 197 patients (56 ART-naïve and 141 treatment-experienced) who fulfilled the inclusion criteria. The majority (95.9%) of patients were male. The median time of HIV infection at baseline was 2 months in treatment-naïve and 70 months in treatment-experienced patients. The median time [IQR] on ART in treatment-experienced patients was 37 months. Among the treatment-experienced patients, 27.0% had already been treated with a regimen consisting of two NRTIs and one INSTI, and 18.4% of them had experienced a VF. The median time [IQR] of virological suppression prior to therapy with Stribild® in the treatment-experienced patients was 10 months [0-27]. At the end of follow-up (median 33 months), 87.3% (95% CI, 83.5-91.2) of treatment-naïve and 80.3% (95% CI, 75.8-84.8) of treatment-experienced patients remained free of therapeutic failure.
Considering only treatment-experienced patients with baseline VL < 50 copies/ml, 83.0% (95% CI, 78.5-87.5) remained free of therapeutic failure. A total of 17 patients stopped treatment with Stribild®: 5.4% (3/56) of the treatment-naïve and 9.9% (14/141) of the treatment-experienced patients. Stribild® therapy was discontinued because of VF in 2 patients (1.0%), loss to follow-up in 4 (2.0%), and drug-drug interactions in 2 (1.0%). Adverse events were the reason for switching from Stribild® in 7 patients (3.6%), and a further 2 patients (1.0%) decided personally to switch. The most frequently observed adverse events were gastrointestinal side effects (20.0%), headache (8%), rash (7%), and dizziness (6%). In two patients, we observed the emergence of novel resistance mutations in the integrase gene: N155H evolved in one patient and resulted in VF, and in another patient S119R evolved either during or shortly after the switch from therapy with Stribild®. In one further patient with VF, two novel mutations in the RT gene were observed compared with a historical genotypic test result (V106I/M and M184V), although it is not clear whether they evolved during or before the switch to Stribild®. Conclusions: The effectiveness of Stribild® for treatment-naïve patients was consistent with data obtained in clinical trials. The safety and tolerability profile, as well as the resistance development, confirmed the clinical efficacy of Stribild® in a daily practice setting.
Keywords: ART, HIV, integrase inhibitor, stribild
Procedia PDF Downloads 285
124 p-Type Multilayer MoS₂ Enabled by Plasma Doping for Ultraviolet Photodetectors Application
Authors: Xiao-Mei Zhang, Sian-Hong Tseng, Ming-Yen Lu
Abstract:
Two-dimensional (2D) transition metal dichalcogenides (TMDCs), such as MoS₂, have attracted considerable attention owing to the unique optical and electronic properties related to their 2D ultrathin atomic layer structure. MoS₂ is becoming prevalent in post-silicon digital electronics and in highly efficient optoelectronics due to its extremely low thickness and its tunable band gap (Eg = 1-2 eV). For low-power, high-performance complementary logic applications, both p- and n-type MoS₂ FETs (NFETs and PFETs) must be developed. NFETs with an electron accumulation channel can be obtained using unintentionally doped n-type MoS₂. However, the fabrication of MoS₂ FETs with complementary p-type characteristics is challenging due to the significant difficulty of injecting holes into the inversion channel. Plasma treatments with different species (including CF₄, SF₆, O₂, and CHF₃) have been found to achieve the desired property modifications of MoS₂. In this work, we demonstrate p-type multilayer MoS₂ enabled by selective-area doping using CHF₃ plasma treatment. Compared with single-layer MoS₂, multilayer MoS₂ can carry a higher drive current due to its lower bandgap and multiple conduction channels; moreover, it has three times the density of states at its conduction band minimum. Large-area growth of MoS₂ films on a 300 nm thick SiO₂/Si substrate is carried out by thermal decomposition of ammonium tetrathiomolybdate, (NH₄)₂MoS₄, in a tube furnace. A two-step annealing process is conducted to synthesize the MoS₂ films. In the first step, the temperature is set to 280 °C for 30 min in an N₂-rich environment at 1.8 Torr to transform (NH₄)₂MoS₄ into MoS₃. In the second step, to further reduce MoS₃ into MoS₂, the temperature is set to 750 °C for 30 min in a reducing atmosphere consisting of 90% Ar and 10% H₂ at 1.8 Torr.
The grown MoS₂ films are subjected to out-of-plane doping by CHF₃ plasma treatment using a dry-etching system (ULVAC NLD-570). The radio-frequency power of the system is set to 100 W and the pressure to 7.5 mTorr. The final thickness of the treated samples is obtained after etching for 30 s. The back-gated MoS₂ PFETs exhibited an on/off current ratio on the order of 10³ and a field-effect mobility of 65.2 cm²V⁻¹s⁻¹. The MoS₂ PFET photodetector exhibited ultraviolet (UV) photodetection capability with a rapid response time of 37 ms, and the generated photocurrent could be modulated by the back-gate voltage. This work suggests the potential application of mildly plasma-doped p-type multilayer MoS₂ in UV photodetectors for environmental monitoring, human health monitoring, and biological analysis.
Keywords: photodetection, p-type doping, multilayers, MoS₂
Procedia PDF Downloads 104
123 Sorption Properties of Hemp Cellulosic Byproducts for Petroleum Spills and Water
Authors: M. Soleimani, D. Cree, C. Chafe, L. Bates
Abstract:
The accidental release of petroleum products into the environment can have harmful consequences for our ecosystem. Different techniques, such as mechanical separation, membrane filtration, incineration, treatment processes using enzymes and dispersants, bioremediation, and sorption using sorbents, have been applied for oil spill remediation. Most of the techniques investigated are too costly or not efficient enough. This study was conducted to determine the sorption performance of hemp byproducts (cellulosic materials), in terms of sorption capacity and kinetics, for hydrophobic and hydrophilic fluids. Heavy oil, light oil, diesel fuel, and water/water vapor were used as sorbate fluids. Hemp stalk in different forms, including loose material (hammer-milled (HM) and shredded (Sh), with low bulk densities) and densified forms (pellets (P) and crumbled pellets (CP), with high bulk densities), was used as sorbent. The sorption/retention tests were conducted according to the ASTM F726 standard. For quick-response applications of the sorbents, the sorption tests were conducted for 15 min, and to determine the ultimate sorption capacity of the materials, the tests were carried out for 24 h. During the test, the sorbent material was exposed to the fluid by immersion, followed by filtration through a stainless-steel wire screen. Water vapor adsorption was carried out in a controlled-environment chamber with the capability of controlling relative humidity (RH) and temperature. To determine the kinetics of sorption for each fluid and sorbent, the retention capacity was also determined at intervals for up to 24 h. To analyze the kinetics of sorption, pseudo-first-order, pseudo-second-order, and intraparticle diffusion models were employed, with the objective of minimal deviation of the experimental results from the models.
The results indicated that the HM and Sh materials had the highest sorption capacity for the hydrophobic fluids, approximately 6 times that of the P and CP materials. For example, the average retention of heavy oil on HM and Sh was 560% and 470% of the sorbent mass, respectively, whereas the retention of heavy oil on P and CP was up to 85% of the sorbent mass. The lower sorption capacity of P and CP can be attributed to the smaller exposed surface area of these materials and to compacted voids or capillary tubes in their structures. For water uptake, HM and Sh showed at least 40% higher sorption capacity than P and CP. On average, sorbate uptake ranked from high to low as follows: water, heavy oil, light oil, diesel fuel. The kinetic analysis indicated that the pseudo-second-order model described the sorption of the oils and diesel better than the other models, whereas the kinetics of water absorption were better described by the pseudo-first-order model. Acetylation of the HM material improved its oil and diesel sorption to some extent. Water vapor adsorption of hemp fiber was a function of temperature and RH, and among the models studied, the modified Oswin model best described this phenomenon.
Keywords: environment, fiber, petroleum, sorption
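The pseudo-second-order model used in the kinetic analysis above is commonly fitted through its linearized form, t/qₜ = 1/(k·qe²) + t/qe, regressing t/qₜ on t. A minimal sketch (the data below are synthetic, generated from hypothetical parameters, not the study's measurements):

```python
# Pseudo-second-order sorption kinetics fit via the linearized form:
#   t/q(t) = 1/(k*qe^2) + t/qe
# so a straight-line fit of t/q against t gives qe (slope) and k (intercept).
def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def pseudo_second_order_fit(t, q):
    """Return (qe, k) from sorption-vs-time data."""
    slope, intercept = linfit(t, [ti / qi for ti, qi in zip(t, q)])
    qe = 1.0 / slope               # equilibrium sorption capacity
    k = slope ** 2 / intercept     # rate constant, since intercept = 1/(k*qe^2)
    return qe, k

# Synthetic data from qe = 560 (% of sorbent mass) and k = 0.01 (made-up units)
times = [0.25, 0.5, 1, 2, 4, 8, 24]
qe_true, k_true = 560.0, 0.01
qs = [qe_true**2 * k_true * t / (1 + qe_true * k_true * t) for t in times]
qe_fit, k_fit = pseudo_second_order_fit(times, qs)
print(round(qe_fit, 1), round(k_fit, 4))  # → 560.0 0.01
```

Because the synthetic data follow the model exactly, the fit recovers the generating parameters; with real retention measurements the regression residuals indicate how well the model applies.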
Procedia PDF Downloads 124
122 Using Balanced Scorecard Performance Metrics in Gauging the Delivery of Stakeholder Value in Higher Education: The Assimilation of Industry Certifications within a Business Program Curriculum
Authors: Thomas J. Bell III
Abstract:
This paper explores the value of assimilating certification training within a traditional course curriculum. This innovative approach is believed to increase stakeholder value within the Computer Information Systems (CIS) program at Texas Wesleyan University. Stakeholder value is obtained from increased job marketability and the critical thinking skills that create employment-ready graduates. This paper views value as first developing the capability to earn an industry-recognized certification, which gives the student greater job placement compatibility while exercising critical thinking skills in a liberal arts business program. Graduates with industry-based credentials are often given preference in the hiring process, particularly in the information technology sector, and without a pioneering curriculum that better prepares students for an ever-changing employment market, a program's educational value is open to question. Since certifications are trending in the hiring process, academic programs should explore the viability of incorporating certification training into their teaching pedagogy and course curricula. This study will examine the use of the balanced scorecard across four performance dimensions (financial, customer, internal process, and innovation) to measure the stakeholder value of certification training within a traditional course curriculum. The balanced scorecard, as a strategic management tool, may provide insight for prioritizing resources and making the decisions needed to achieve various curriculum objectives and long-term value while meeting the needs of multiple stakeholders, such as students, universities, faculty, and administrators.
The research methodology will consist of quantitative analysis that includes (1) surveying over one hundred students in the CIS program to learn what factors contributed to their certification exam success or failure, (2) interviewing representatives from the Texas Workforce Commission to identify employment needs and trends in the North Texas (Dallas/Fort Worth) area, (3) reviewing notable Workforce Innovation and Opportunity Act publications on training trends across several local business sectors, and (4) analyzing control variables to determine whether a correlation exists between industry alignment and job placement. These findings may provide helpful insight into impactful pedagogical techniques and curricula that positively contribute to certification success. And should these industry-certified students land industry-related jobs consistent with the value of their certification credential, stakeholder value has arguably been realized.
Keywords: certification exam teaching pedagogy, exam preparation, testing techniques, exam study tips, passing certification exams, embedding industry certification and curriculum alignment, balanced scorecard performance evaluation
Procedia PDF Downloads 108
121 Molecular Dynamics Simulation Study of the Influence of Potassium Salts on the Adsorption and Surface Hydration Inhibition Performance of Hexane-1,6-Diamine Clay Mineral Inhibitor onto Sodium Montmorillonite
Authors: Justine Kiiza, Xu Jiafang
Abstract:
The world's demand for energy is increasing rapidly due to population growth and the depletion of shallow conventional oil and gas reservoirs, forcing a shift to deeper and mostly unconventional reserves such as shale oil and gas. Most shale formations contain a large amount of expansive sodium montmorillonite (Na-Mnt). Because of its high water adsorption and hydration, when the drilling fluid filtrate enters a formation with high Mnt content, the wellbore wall can become unstable due to hydration and swelling, resulting in shrinkage, sticking, balling, lost time, etc., and in extreme cases well collapse, causing complex downhole accidents and high well costs. Recently, polyamines such as hexane-1,6-diamine (HEDA) have been used as typical drilling fluid shale inhibitors to minimize and/or curb clay mineral swelling and maintain wellbore stability. However, their application is limited to shallow drilling due to their sensitivity to elevated temperature and pressure. Inorganic potassium salts, i.e., KCl, have long been applied to restrict shale formation hydration expansion in deep wells, but their use is limited due to toxicity. Understanding the adsorption behaviour of HEDA on Na-Mnt surfaces in the presence of organic K-salts, e.g., HCO₂K, the main component of organo-salt drilling fluids, is of great significance in explaining the inhibitory performance of polyamine inhibitors. Molecular dynamics (MD) simulations were applied to investigate the influence of HCO₂K and KCl on the adsorption mechanism of HEDA on the Na-Mnt surface. The simulation results showed that HEDA adsorbs mainly through its terminal amine groups, with the hydrophobic alkyl chain lying flat. Its interaction with the clay surface decreased the number of H-bonds between H₂O and the clay and neutralized the negative charge of the Mnt surface, thus weakening the surface hydration ability of Na-Mnt.
The introduction of HCO₂K greatly improved the inhibition ability: the interlayer ions coordinated with H₂O were replaced by K⁺, H₂O-HCOO⁻ coordination reduced H₂O-Mnt interactions, and the mobility and transport capability of the H₂O molecules were further decreased. KCl, by contrast, showed little inhibition ability and also caused more hydration with time. HCO₂K can therefore be used as an alternative to toxic KCl for offshore drilling, with a maximum concentration noted in this study of 1.65 wt%. This study provides a theoretical elucidation of the inhibition mechanism and adsorption characteristics of the HEDA inhibitor on Na-Mnt surfaces in the presence of K⁺ salts, and may provide more insight into the evaluation, selection, and molecular design of new high-performance clay-swelling-inhibiting WBDF systems used in complex offshore oil and gas well sections.
Keywords: shale, hydration, inhibition, polyamines, organo-salts, simulation
Procedia PDF Downloads 47
120 Experimental and Numerical Investigations on the Vulnerability of Flying Structures to High-Energy Laser Irradiations
Authors: Vadim Allheily, Rudiger Schmitt, Lionel Merlat, Gildas L'Hostis
Abstract:
In-flight devices are nowadays major actors in both the military and civilian landscapes. Missiles, mortars, rockets, and, over the last decade, drones have become increasingly sophisticated, and it is today a priority to develop ever more efficient defensive systems against all these potential threats. In this frame, recent high-energy laser (HEL) weapon prototypes have demonstrated extremely good operational ability to shoot down flying targets several kilometers away within seconds. Whereas test outcomes are promising from both experimental and cost-related perspectives, the deterioration process still needs to be explored in order to closely predict the effects of a high-energy laser irradiation on typical structures, leading finally to an effective design of laser sources and protective countermeasures. Laser-matter interaction research has a history of more than 40 years at the French-German Research Institute (ISL). Those studies were tied to laser source development in the mid-60s, mainly for specialized metrology of fast phenomena. Nowadays, laser-matter interaction can be viewed as the terminal ballistics of conventional weapons, with the unique capability of laser beams to carry energy at the velocity of light over large ranges. In recent years, a strong focus was placed at ISL on the interaction of laser radiation with metal targets such as artillery shells. Due to the absorbed laser radiation and the resulting heating process, an encased explosive charge can be initiated, resulting in deflagration or even detonation of the projectile in flight. Drones and unmanned aerial vehicles (UAVs) are of utmost interest in modern warfare. These aerial systems are usually made of polymer-based composite materials, whose complexity raises new scientific challenges.
Alongside this main laser-matter interaction activity, a great deal of experimental and numerical knowledge has been gathered at ISL in domains such as spectrometry, thermodynamics, and mechanics. Techniques and devices were developed to study each aspect of this topic separately; optical characterization, thermal investigations, chemical reaction analysis, and mechanical examinations are carried out to estimate the essential key values precisely. Results from these diverse tasks are then incorporated into the analytic and FE numerical models that were elaborated, for example, to predict the thermal repercussions on explosive charges or the mechanical failure of structures. These simulations highlight the influence of each phenomenon during the laser irradiation and forecast experimental observations with good accuracy.
Keywords: composite materials, countermeasure, experimental work, high-energy laser, laser-matter interaction, modeling
Procedia PDF Downloads 262
119 Machine Learning Assisted Selective Emitter Design for Solar Thermophotovoltaic Systems
Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko
Abstract:
Solar thermophotovoltaic (STPV) systems have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. An STPV system comprises essential components such as an optical concentrator, a selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four layered materials (SiC/W/SiO₂/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike the conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The use of machine learning brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods.
This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that outperform those found by traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.
Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic
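The genetic-algorithm stage described above can be sketched as follows: a population of candidate layer-thickness vectors for the four-layer stack evolves under selection, crossover, and mutation against a figure of merit. Here the figure of merit is a toy stand-in for the surrogate-predicted spectral selectivity (in the paper that role is played by the trained random forest model); the target thicknesses, GA settings, and function names are invented for illustration:

```python
import random

random.seed(0)

# Toy figure of merit: negative squared distance to a hypothetical optimal
# set of layer thicknesses (nm) for a SiC/W/SiO2/W stack. Purely
# illustrative; a real run would score candidates with the trained surrogate.
def figure_of_merit(thicknesses):
    target = [100.0, 10.0, 80.0, 200.0]   # hypothetical optimum (nm)
    return -sum((t - g) ** 2 for t, g in zip(thicknesses, target))

def mutate(ind, scale=5.0):
    """Gaussian perturbation of each thickness, floored at 1 nm."""
    return [max(1.0, t + random.gauss(0, scale)) for t in ind]

def crossover(a, b):
    """Uniform crossover: pick each gene from either parent."""
    return [random.choice(pair) for pair in zip(a, b)]

def genetic_search(pop_size=30, generations=60):
    pop = [[random.uniform(1, 300) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=figure_of_merit, reverse=True)
        parents = pop[: pop_size // 3]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                # elitism: parents survive
    return max(pop, key=figure_of_merit)

best = genetic_search()
```

After a few dozen generations the best individual converges toward the hypothetical target, illustrating why GA-based search adapts well when the objective is only available as a black-box model.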
Procedia PDF Downloads 61
118 Research Project on Learning Rationality in Strategic Behaviors: Interdisciplinary Educational Activities in Italian High Schools
Authors: Giovanna Bimonte, Luigi Senatore, Francesco Saverio Tortoriello, Ilaria Veronesi
Abstract:
The education process considers capabilities not merely as a means to a certain end but as an effective purpose in themselves. Sen's capability approach challenges human capital theory, which sees education as an ordinary investment undertaken by individuals. A complex reality requires complex thinking capable of interpreting the dynamics of societal change, in order to make decisions that are rational in private, ethical, and social contexts. Education is not removed from its cultural and social context; it exists and is structured within it. In Italy, the "Mathematical High School Project" is a didactic research project based on additional laboratory courses in extracurricular hours, in which mathematics enters into a dialectical relationship with other disciplines as a cultural bridge between the two cultures, the humanistic and the scientific, through interdisciplinary educational modules on themes with a strong impact on young people's lives. This interdisciplinary mathematics presents topics related to the most advanced technologies and contemporary socio-economic frameworks to demonstrate that mathematics is not only a key to reading the world but also a key to resolving complex problems. Recent developments in mathematics provide the potential for profound and highly beneficial changes in mathematics education at all levels, for instance in socio-economic decision-making. The research project is built to investigate whether repeated interactions can successfully promote cooperation among students as a rational choice, and whether skill, context, and school background influence the choice of strategies and their rationality. A laboratory on game theory as a mathematical theory was conducted in the 4th year of a Mathematical High School and of an ordinary scientific high school.
Students played two simultaneous games of repeated Prisoner's Dilemma with an indefinite horizon, against a different competitor in each game; the competitor in each game remained the same for the duration of that game. The results highlight that most of the students in the two classes used the two games as an immunization strategy against the risk of losing: in one of the games they started by playing Cooperate, and in the other by playing Compete. In the literature, theoretical models and experiments show that in the case of repeated interactions with the same adversary, the optimal cooperation strategy can be achieved by tit-for-tat mechanisms. In higher education, individual capacities cannot be examined independently, as the conceptual framework presupposes a social construction of individuals interacting and competing, making individual and collective choices. The paper will outline all the results of the experiment and the future development of the research.
Keywords: game theory, interdisciplinarity, mathematics education, mathematical high school
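The repeated game with an indefinite horizon and the tit-for-tat benchmark cited from the literature can be sketched as follows. The payoff matrix (3/3 for mutual cooperation, 1/1 for mutual defection, 5/0 for a lone defector) and the continuation probability are standard textbook assumptions, not values taken from the study:

```python
import random

random.seed(1)

# Standard Prisoner's Dilemma payoffs (assumed, not from the abstract):
# (my move, opponent's move) -> (my payoff, opponent's payoff)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strat_a, strat_b, continue_prob=0.9):
    """Indefinite-horizon repeated game: after each round the match
    continues with probability continue_prob."""
    hist_a, hist_b = [], []   # each player's record of the opponent's moves
    score_a = score_b = 0
    while True:
        a, b = strat_a(hist_a), strat_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(b)
        hist_b.append(a)
        if random.random() >= continue_prob:
            return score_a, score_b

sa, sb = play(tit_for_tat, tit_for_tat)
```

Two tit-for-tat players cooperate in every round (3 points each per round), whereas pairing `tit_for_tat` against `always_defect` yields at most one round of exploitation before mutual defection sets in, which is the sense in which tit-for-tat sustains cooperation under repeated interaction.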
Procedia PDF Downloads 74
117 An Overview of Bioinformatics Methods to Detect Novel Riboswitches Highlighting the Importance of Structure Consideration
Authors: Danny Barash
Abstract:
Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems on the evolutionary timescale. One of the biggest challenges in riboswitch research is that many are found in prokaryotes, but only a small percentage of known riboswitches have been found in certain eukaryotic organisms. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods that include some slight structural considerations. These pattern-matching methods were the first to be applied for the purpose of riboswitch detection, and they can also be programmed very efficiently using a data structure called affix arrays, making them suitable for genome-wide searches of riboswitch patterns. However, they are limited in their ability to detect harder-to-find riboswitches that deviate from the known patterns. Several methods have been developed since then to tackle this problem. The one most commonly used by practitioners is Infernal, which relies on Hidden Markov Models (HMMs) and Covariance Models (CMs). Profile Hidden Markov Models were also implemented in the pHMM Riboswitch Scanner web application, independently of Infernal. Other computational approaches that have been developed include RMDetect, which uses 3D structural modules, and RNAbor, which utilizes the Boltzmann probability of structural neighbors. We have tried to incorporate more sophisticated secondary structure considerations based on RNA folding prediction using several strategies. The first idea was to utilize window-based methods in conjunction with folding predictions by energy minimization. The moving-window approach is heavily geared towards secondary structure consideration, relative to sequence, which is treated as a constraint.
However, the method cannot be used genome-wide due to its high cost: each folding prediction by energy minimization in the moving window is computationally expensive, so it can only scan the vicinity of genes of interest. The second idea was to remedy the inefficiency of the previous approach by constructing a pipeline that consists of inverse RNA folding, which considers RNA secondary structure, followed by a BLAST search, which is sequence-based and highly efficient. This approach, which relies on inverse RNA folding in general and our own in-house fragment-based inverse RNA folding program RNAfbinv in particular, shows the capability to find attractive candidates that are missed by Infernal and other standard methods used for riboswitch detection. We demonstrate attractive candidates found both by the moving-window approach and by the inverse RNA folding approach performed together with BLAST. We conclude that structure-based methods like the two strategies outlined above hold considerable promise in detecting riboswitches and other conserved RNAs of functional importance in a variety of organisms.
Keywords: riboswitches, RNA folding prediction, RNA structure, structure-based methods
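The moving-window idea described above can be sketched as a sliding scan with a pluggable folding routine. This is not the authors' implementation: `toy_fold_energy` below is a deliberately crude stand-in (it merely rewards GC content) for a real energy-minimization folder such as the ones the abstract refers to, and the window, step, and threshold values are illustrative assumptions.

```python
# A hedged sketch of a moving-window structure scan. In real use, `fold`
# would be an energy-minimization predictor; `toy_fold_energy` is a
# placeholder that returns a lower pseudo-energy for GC-rich windows.
def toy_fold_energy(seq):
    # Placeholder scoring: GC-rich windows look "more stable" (more negative).
    return -sum(seq.count(base) for base in 'GC')

def scan(genome, window=120, step=10, fold=toy_fold_energy, threshold=-60):
    """Slide a window along the sequence; report windows whose predicted
    folding energy falls below a stability threshold (candidate structured RNA).
    Each fold call is where the real computational cost would lie, which is
    why the genuine method is restricted to the vicinity of genes of interest."""
    hits = []
    for start in range(0, len(genome) - window + 1, step):
        energy = fold(genome[start:start + window])
        if energy <= threshold:
            hits.append((start, energy))
    return hits

# A toy sequence with a GC-rich (pseudo-stable) middle region:
genome = 'AT' * 60 + 'GC' * 60 + 'AT' * 60
print(scan(genome))   # hits cluster over the GC-rich middle
```

The per-window folding call is the bottleneck the abstract describes; the inverse-folding-plus-BLAST pipeline avoids it by doing the expensive structural step once, up front.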
Procedia PDF Downloads 234
116 Corporate Governance and Disclosure Practices of Listed Companies in the ASEAN: A Conceptual Overview
Authors: Chen Shuwen, Nunthapin Chantachaimongkol
Abstract:
Since the world has moved into a transitional period known as globalization, the business environment is now more complicated than ever before. Corporate information has become a matter of great importance for stakeholders in order to understand the current situation. As a result, the concept of corporate governance has been broadly introduced to manage and control the affairs of corporations, while businesses are required to disclose both financial and non-financial information to the public via various communication channels such as the annual report, the financial report, the company’s website, etc. However, there are currently several other issues related to asymmetric information, such as moral hazard or adverse selection, that still occur intensively in workplaces. To prevent such problems in business, an understanding is required of what factors strengthen transparency, accountability, fairness, and responsibility. Under the aforementioned arguments, this paper aims to propose a conceptual framework that enables an investigation of how corporate governance mechanisms influence the disclosure efficiency of listed companies in the Association of Southeast Asian Nations (ASEAN) and the factors that should be considered for further development of good behaviors, particularly in regard to voluntary disclosure practices. To achieve this purpose, extensive reviews of the literature are applied as the research methodology, divided into three main steps. Firstly, the theories involved in both corporate governance and disclosure practices, such as agency theory, contract theory, signaling theory, moral hazard theory, and information asymmetry theory, are examined to provide theoretical backgrounds.
Secondly, the relevant literature based on multiple perspectives of corporate governance, its attributes and their roles in business processes, the influence of corporate governance mechanisms on business performance, and the factors determining corporate governance characteristics as well as capability is reviewed to outline the parameters that should be included in the proposed model. Thirdly, the well-known regulatory document, the OECD principles, and previous empirical studies on corporate disclosure procedures are evaluated to identify similarities and differences with the disclosure patterns in the ASEAN. Following the processes and consequences of the literature review, abundant factors and variables are found. Further to the methodology, additional critical factors that also have an impact on disclosure behaviors are addressed in two groups: in the first group, the factors linked to national characteristics, such as the quality of the national code, legal origin, culture, and the level of economic development; in the second group, those that refer to the firm’s characteristics, such as ownership concentration, owners’ rights, and the controlling group. However, because of research limitations, only selected literature is chosen and summarized to form part of the conceptual framework that explores the relationship between corporate governance and the disclosure practices of listed companies in ASEAN.
Keywords: corporate governance, disclosure practice, ASEAN, listed company
Procedia PDF Downloads 192
115 Interoperability of 505th Search and Rescue Group and the 205th Tactical Helicopter Wing of the Philippine Air Force in Search and Rescue Operations: An Assessment
Authors: Ryan C. Igama
Abstract:
The complexity of disaster risk reduction management paved the way for various innovations and approaches to mitigate the loss of lives and casualties during disaster-related situations. The efficiency of response operations during disasters relies on the timely and organized deployment of search, rescue and retrieval teams. Indeed, the assistance provided by search, rescue, and retrieval teams during disaster operations is a critical service needed to further minimize the loss of lives and casualties. The Armed Forces of the Philippines is mandated to provide humanitarian assistance and disaster relief operations during calamities and disasters. Thus, this study, “Interoperability of 505th Search and Rescue Group and the 205th Tactical Helicopter Wing of the Philippine Air Force in Search and Rescue Operations: An Assessment”, was intended to provide substantial information to further strengthen and promote the capabilities of search and rescue operations in the Philippines. Further, this study also aims to assess the interoperability of the 505th Search and Rescue Group and the 205th Tactical Helicopter Wing of the Philippine Air Force. This study was undertaken covering the component units in the Philippine Air Force of the Armed Forces of the Philippines, specifically the 505th SRG and the 205th THW, as the involved units, which also acted as the respondents of the study. A qualitative approach was utilized, in the form of focus group discussions, key informant interviews, and documentary analysis, as the primary means to obtain the data needed for the study. Essentially, this study was geared towards the evaluation of the effectiveness of the interoperability of the two (2) involved PAF units during search and rescue operations.
Further, it also delved into the identification of the impacts, gaps, and challenges confronted regarding interoperability as to training, equipment, and coordination mechanisms, vis-à-vis the measures needed for improvement. The result of the study regarding the interoperability of the two (2) PAF units during search and rescue operations showed that there was a duplication of functions or tasks in HADR activities, specifically during the conduct of air rescue operations in situations like calamities. In addition, it was revealed that there was a lack of equipment and training for the personnel involved in search and rescue operations, which is a vital element during calamity response activities. Based on the findings of the study, it was recommended that a strategic planning workshop must be conducted regarding the duties and responsibilities of the personnel involved in search and rescue operations to address the command and control and interoperability issues of these units. Additionally, intensive HADR-related training for the personnel involved in the search and rescue operations of the two (2) PAF units must also be conducted so they can become more proficient in their skills and sustainably increase their knowledge of search and rescue scenarios, including the capabilities of the respective units. Lastly, existing doctrines and policies must be updated to adapt to evolving situations in search and rescue operations.
Keywords: interoperability, search and rescue capability, humanitarian assistance, disaster response
Procedia PDF Downloads 93
114 Exploring Nature and Pattern of Mentoring Practices: A Study on Mentees' Perspectives
Authors: Nahid Parween Anwar, Sadia Muzaffar Bhutta, Takbir Ali
Abstract:
Mentoring is a structured activity designed to facilitate engagement between mentor and mentee to enhance the mentee’s professional capability as an effective teacher. Both mentor and mentee are important elements of the ‘mentoring equation’ and play important roles in nourishing this dynamic, collaborative and reciprocal relationship. The Cluster-Based Mentoring Programme (CBMP) provides an indigenous example of a project which focused on the development of primary school teachers in selected clusters, with a particular focus on their classroom practice. A study was designed to examine the efficacy of the CBMP as part of the Strengthening Teacher Education in Pakistan (STEP) project. This paper presents the results of one of the components of this study. As part of the larger study, a cross-sectional survey was employed to explore the nature and patterns of the mentoring process from mentees’ perspectives in the selected districts of Sindh and Balochistan. This paper focuses on the results of the study related to the question: What are mentees’ perceptions of their mentors’ support for enhancing their classroom practice during the mentoring process? Data were collected from mentees (n=1148) using a 5-point scale, ‘Mentoring for Effective Primary Teaching’ (MEPT). The MEPT focuses on seven factors of mentoring: personal attributes, pedagogical knowledge, modelling, feedback, system requirement, development and use of material, and gender equality. Data were analysed using SPSS 20. Mentees’ perceptions of the mentoring practice of their mentors were summarized using means and standard deviations. Results showed that mean scale scores on mentees’ perceptions of their mentors’ practices fell between 3.58 (system requirement) and 4.55 (personal attributes). Mentees perceived the personal attributes of the mentor as the most significant factor (M=4.55) in streamlining the mentoring process by building a good relationship between mentor and mentees.
Furthermore, mentees shared positive views about their mentors’ efforts towards promoting gender impartiality (M=4.54) during workshops and follow-up visits. In contrast, mentees felt that more could have been done by their mentors in sharing knowledge about system requirements (e.g. school policies, the national curriculum). Furthermore, some of the aspects in high-scoring factors were highlighted by the mentees as areas for further improvement (e.g. assistance in timetabling, written feedback, encouragement to develop learning corners). Mentees’ perceptions of their mentors’ practices may assist in determining mentoring needs. The results may prove useful for professional development programmes for mentors and mentees in specific mentoring programmes, in order to enhance practices in primary classrooms in Pakistan. The results would also contribute to the body of much-needed knowledge from a developing context.
Keywords: cluster-based mentoring programme, mentoring for effective primary teaching (MEPT), professional development, survey
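For readers unfamiliar with how such scale scores are produced, the per-factor summary amounts to a mean and standard deviation over mentees' 5-point ratings (the study itself used SPSS 20 over n=1148 responses). The factor names below come from the abstract; the rating values are invented purely for illustration.

```python
# Illustrative only: per-factor M and SD for a 5-point scale like the MEPT.
# Factor names are from the abstract; the ratings are hypothetical, not data
# from the study (the real analysis used SPSS 20, n=1148).
from statistics import mean, stdev

responses = {
    'personal attributes': [5, 4, 5, 5, 4],
    'system requirement':  [4, 3, 4, 3, 4],
}

for factor, scores in responses.items():
    print(f'{factor}: M={mean(scores):.2f}, SD={stdev(scores):.2f}')
```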
Procedia PDF Downloads 233
113 Methodological Deficiencies in Knowledge Representation Conceptual Theories of Artificial Intelligence
Authors: Nasser Salah Eldin Mohammed Salih Shebka
Abstract:
Current problematic issues in AI fields are mainly due to those of knowledge representation conceptual theories, which in turn are reflected across the entire scope of the cognitive sciences. Knowledge representation methods and tools are derived from theoretical concepts regarding human scientific perception of the conception, nature, and process of knowledge acquisition, knowledge engineering and knowledge generation. Although these theoretical conceptions were themselves derived from the study of the human knowledge representation process and related theories, some essential factors were overlooked or underestimated, thus causing critical methodological deficiencies in the conceptual theories of human knowledge and knowledge representation conceptions. The evaluation criteria of human cumulative knowledge, from the perspectives of the nature and theoretical aspects of knowledge representation conceptions, are affected greatly by the very materialistic nature of the cognitive sciences. This nature caused what we define as methodological deficiencies in the theoretical aspects of knowledge representation concepts in AI. These methodological deficiencies are not confined to the applications of knowledge representation theories throughout AI fields, but also extend to the scientific nature of the cognitive sciences. The methodological deficiencies investigated in our work are: the segregation between cognitive abilities in knowledge-driven models; the insufficiency of the two-valued logic used to represent knowledge, particularly at the machine-language level, in relation to the problematic issues of semantics and meaning theories; and the deficient consideration of the parameters of existence and time in the structure of knowledge. The latter requires that we present a more detailed introduction of the manner in which the meanings of existence and time are to be considered in the structure of knowledge.
This does not imply that such parameters are easy to apply in the structures of knowledge representation systems; rather, outlining a deficiency caused by the absence of such essential parameters can be considered an attempt to redefine knowledge representation conceptual approaches or, if that proves impossible, to construct a perspective on the possibility of simulating human cognition on machines. Furthermore, a redirection of the aforementioned expressions is required in order to formulate the exact meaning under discussion. This redirection of meaning alters the role of the existence and time factors in the framework environment of the knowledge structure, and therefore in knowledge representation conceptual theories. The findings of our work indicate the necessity of differentiating between two comparative concepts when addressing the relation between the existence and time parameters and the structure of human knowledge. The topics presented throughout the paper can also be viewed as an evaluation criterion to determine AI’s capability to achieve its ultimate objectives. Ultimately, we argue some of the implications of our findings: even if scientific progress has not reached its peak, and even if human scientific evolution has not yet reached a point where evolutionary facts about the human brain and detailed descriptions of how it represents knowledge can be discovered, unless these methodological deficiencies are properly addressed, the future of AI’s qualitative progress remains questionable.
Keywords: cognitive sciences, knowledge representation, ontological reasoning, temporal logic
Procedia PDF Downloads 112
112 A Greener Approach towards the Synthesis of an Antimalarial Drug Lumefantrine
Authors: Luphumlo Ncanywa, Paul Watts
Abstract:
Malaria is a disease that kills approximately one million people annually; children and pregnant women in sub-Saharan Africa are particularly likely to lose their lives to it. Malaria continues to be one of the major causes of death, especially in poor countries in Africa, and decreasing the burden of malaria to save lives is essential. There is a major concern about malaria parasites developing resistance towards antimalarial drugs, and people are still dying due to the lack of affordable medicine in less well-off countries. If more people could receive treatment through a reduction in the cost of drugs, the number of deaths in Africa could be massively reduced. There is a shortage of pharmaceutical manufacturing capability within many of the countries in Africa; one therefore has to question how Africa would actually manufacture the drugs, active pharmaceutical ingredients or medicines developed within these research programs. It is quite likely that such manufacturing would be outsourced overseas, hence increasing the cost of production and potentially limiting the full benefit of the original research. As a result, the last few years have seen major interest in developing more effective and cheaper technology for manufacturing generic pharmaceutical products. Micro-reactor technology (MRT) is an emerging technique that enables those working in research and development to rapidly screen reactions utilizing continuous flow, leading to the identification of reaction conditions that are suitable for use at the production level. This emerging technique will be used to develop antimalarial drugs. It is this system flexibility that has the potential to reduce both the time taken and the risk associated with transferring reaction methodology from research to production.
Using an approach referred to as scale-out or numbering up, a reaction is first optimized within the laboratory using a single micro-reactor; in order to increase production volume, the number of reactors employed is then simply increased. The overall aim of this research project is to develop and optimize the synthetic process for antimalarial drugs in continuous processing. This will provide a step change in pharmaceutical manufacturing technology that will increase the availability and affordability of antimalarial drugs on a worldwide scale, with a particular emphasis on Africa in the first instance. The research will determine the best chemistry and technology to define the lowest-cost manufacturing route to pharmaceutical products. We are currently developing a method to synthesize lumefantrine in continuous flow, using the batch process as a benchmark. Lumefantrine is a dichlorobenzylidine derivative effective for the treatment of various types of malaria; it is used with artemether for the treatment of uncomplicated malaria. The results obtained when synthesizing lumefantrine in a batch process are transferred into a continuous flow process in order to develop an even better and reproducible process. The development of an appropriate synthetic route for lumefantrine is therefore significant for the pharmaceutical industry. Consequently, if better (and cheaper) manufacturing routes to antimalarial drugs could be developed and implemented where needed, antimalarial drugs would be far more likely to be available to those in need.
Keywords: antimalarial, flow, lumefantrine, synthesis
Procedia PDF Downloads 202
111 Communication Skills for Physicians: Adaptation to the Third Gender and Language Cross Cultural Influences
Authors: Virginia Guillén Cañas, Miren Agurtzane Ortiz-Jauregi, Sonia Ruiz De Azua, Naiara Ozamiz
Abstract:
We want to focus on the relationship of communication skills to several key aspects of medicine. The most relevant competencies of a health professional include an adequate capacity for communication, which influences the satisfaction of professionals and patients, therapeutic compliance, conflict prevention, the improvement of clinical outcomes and the efficiency of health services. We define empathy as sympathy and connection to others and the capability to communicate this understanding. Some factors favoring empathy are female gender, younger age, and specialty choice. Third gender, or third sex, is a concept which allows a person not to be categorized in a binary way but along a continuum, giving the choice of moving along it; this point of view recognizes three or more genders. The subject of Ethics and Clinical Communication is dedicated to sensitizing students to the importance and effectiveness of a good therapeutic relationship. We are also interested in other communicational aspects related to empathy, such as active listening, assertiveness and basic and advanced social skills. Objectives: 1. To facilitate the approach of students in the Medicine Degree to the reality of the medical profession. 2. To analyze interesting outcome variables in communication. 3. To detect, through an interactive process, the areas of improvement the physician needs in the learning process throughout his or her professional career. Design: A comparative study with a cross-sectional approach was conducted in successive academic year cohorts of health professional students at a public Basque university. Four communicational aspects were evaluated through questionnaires in Basque, Spanish and English: the active listening questionnaire, the TECA empathy questionnaire, the ACDA questionnaire and the EHS Social Skills Scale.
Types of interventions for improving skills: interpersonal skills training, empathy interventions, writing about experiential learning, drama through role plays, communication skills training, problem-based learning, videos of patient interviews, empathy-focused training, and discussion. Results: The study identified the need for a cross-cultural adaptation and no gender distinction. The students enjoyed all the techniques in comparison to the usual master class. There was medium participation, but these participative methodologies are not very common in the university. According to the empathy results, men have a greater empathic capacity to fully understand women (p < 0.05). With regard to assertiveness, there were no differences between men and women in self-assertiveness; nevertheless, women are more heteroassertive than men. Conclusions: These findings suggest that educational interventions with adequate feedback can be effective in maintaining and enhancing empathy in undergraduate medical students.
Keywords: physician's communicational skills, patient satisfaction, third gender, cross cultural adaptation
Procedia PDF Downloads 192
110 Application of the State of the Art of Hydraulic Models to Manage Coastal Problems, Case Study: The Egyptian Mediterranean Coast Model
Authors: Al. I. Diwedar, Moheb Iskander, Mohamed Yossef, Ahmed ElKut, Noha Fouad, Radwa Fathy, Mustafa M. Almaghraby, Amira Samir, Ahmed Romya, Nourhan Hassan, Asmaa Abo Zed, Bas Reijmerink, Julien Groenenboom
Abstract:
Coastal problems are stressing the coastal environment due to its complexity. The dynamic interaction between the sea and the land, in addition to human interventions and activities, results in serious problems that threaten coastal areas worldwide. This makes the coastal environment highly vulnerable to natural processes like flooding and erosion, and to the impact of human activities such as pollution. Protecting and preserving this vulnerable coastal zone, with its valuable ecosystems, calls for addressing these coastal problems; this, in the end, will support the sustainability of coastal communities and sustain current and future generations. Consequently, applying suitable management strategies and sustainable development that consider the unique characteristics of the coastal system is a must. The coastal management philosophy aims to resolve the conflicts of interest between human development activities and this dynamic nature. Modeling emerges as a successful tool that provides support to decision-makers, engineers, and researchers for better management practices. Modeling tools have proved accurate and reliable in prediction. With their capability to integrate data from various sources, such as bathymetric surveys, satellite images, and meteorological data, they offer engineers and scientists the possibility to understand this complex dynamic system and gain in-depth insight into the interaction between natural and human-induced factors. This enables decision-makers to make informed choices and develop effective strategies for sustainable development and risk mitigation in the coastal zone. The application of modeling tools supports the evaluation of various scenarios by affording the possibility to simulate and forecast different coastal processes, from hydrodynamic and wave actions to the resulting flooding and erosion.
The state-of-the-art application of modeling tools in coastal management allows for better understanding and prediction of coastal processes, optimizing infrastructure planning and design, supporting ecosystem-based approaches, assessing climate change impacts, managing hazards, and finally facilitating stakeholder engagement. This paper emphasizes the role of hydraulic models in enhancing the management of coastal problems by discussing the diverse applications of modeling in coastal management. It highlights the role of modelling in understanding complex coastal processes and predicting outcomes, and the importance of informing decision-makers with modeling results, which give technical and scientific support for achieving sustainable coastal development and protection.
Keywords: coastal problems, coastal management, hydraulic model, numerical model, physical model
Procedia PDF Downloads 27
109 The Enquiry of Food Culture Products, Practices and Perspectives: An Action Research on Teaching and Learning Food Culture from International Food Documentary Films
Authors: Tsuiping Chen
Abstract:
It has long been an international consensus that food forms a big part of any culture. However, this idea was not globally concretized until UNESCO's recognition of food and cuisine as intangible cultural heritage in 2010. This announcement strengthened the value of food culture, which is receiving more and more notice from every country. Although Taiwan is not a member of the United Nations, we cannot detach ourselves from this important global trend, especially when we have many culinary students expected to join the world culinary job market. These students should be well educated in the knowledge of world food culture, so that they have the sensibility and perspectives for emerging global food issues before entering culinary jobs. Under the premise of the above concern, the researcher, who was also the instructor, undertook action research with one class of students in the 'Food Culture' course, watching, discussing, and analyzing 12 culinary documentary films selected from one decade (2007-2016) of the Berlin Culinary Cinema during one semester of class hours. In addition, after class, the students separated themselves into six groups and joined 12 one-hour focus group discussions on the 12 films conducted by the researcher. Furthermore, during the semester, the students submitted their reflection reports on each film to the university e-portfolio system. All the focus discussions and reflection reports were recorded and collected for further analysis by the researcher and one invited film researcher. Glaser and Strauss' (1967) Grounded Theory constant comparison method was employed to analyze the collected data. Finally, the results of the findings were audited by all participants of the research.
The participants and the researchers generated 200 items of food culture products, 74 items of food culture practices, and 50 items of food culture perspectives from the action research journey of watching culinary documentaries. The journey broadened students' points of view on world food culture and enhanced their capability for perspective construction on food culture. Four aspects of significant findings were demonstrated. First, learning food culture through watching Berlin culinary films helps students link themselves to current global food issues such as food security, food poverty, and food sovereignty, which direct them to rethink how people should grow, share and consume food. Second, watching different categories of documentary food films enhances students' strong sense of responsibility for ensuring healthy lives and promoting well-being for all people in every corner of the world. Third, watching these documentary films encourages students to consider whether the culinary education they have received on this island is inclusive, and the importance of quality education, which can promote lifelong learning. Last but not least, the journey of culinary documentary film watching in the 'Food Culture' course inspires students to take pride in their profession. It is hoped the model of teaching food culture with culinary documentary films will inspire more food culture educators, researchers, and culinary curriculum designers.
Keywords: food culture, action research, culinary documentary films, food culture products, practices, perspectives
Procedia PDF Downloads 110
108 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator
Authors: Yildiz Stella Dak, Jale Tezcan
Abstract:
Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical or hybrid approaches. Regardless of the manner in which the database was developed, ground motion relations are developed using regression analysis. The development of a ground motion relation is a challenging process which inevitably requires the modeler to make subjective decisions regarding the inclusion criteria for the recordings, the functional form of the model and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and the applicability of the model, there is continuous interest in procedures that facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability for variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection.
Given a set of candidate input variables and the output variable of interest, the LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, the selection of a compact set of variables is important in cases where a small number of recordings are available. In addition, the identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the Next Generation Attenuation (NGA) database, in which the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The candidate predictors considered are magnitude, Rrup, and Vs30. Using the LASSO, the relative importance of the candidate predictors has been ranked. Regression models with increasing levels of complexity were constructed using one, two, three, and four best predictors, and the models' ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed.
Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection
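The shrink-to-exactly-zero behaviour that distinguishes the LASSO from ridge regression can be sketched with a small coordinate-descent implementation on synthetic data. This is an illustration only: the study used recordings from the NGA database, whereas the four predictors below are simulated, and the penalty level `alpha=0.2` is an arbitrary assumption.

```python
# A sketch of LASSO variable selection via cyclic coordinate descent.
# Synthetic data stands in for the real seismological predictors.
import numpy as np

def lasso_cd(X, y, alpha, iters=200):
    """Minimize (1/2n)||y - Xb||^2 + alpha*||b||_1 by coordinate descent.
    Assumes the columns of X are standardized (zero mean, unit variance)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        for j in range(p):
            # Partial residual: remove every predictor's contribution except j's.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            # Soft-thresholding: correlations weaker than alpha collapse to
            # exactly zero, which is how the LASSO performs variable selection.
            beta[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0)
    return beta

rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 4))
X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize for a fair penalty
# y depends strongly on predictors 0 and 1, weakly on 2, and not at all on 3.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.5, size=n)

beta = lasso_cd(X, y, alpha=0.2)
order = np.argsort(-np.abs(beta))              # rank predictors by |coefficient|
print(beta, order)
```

The irrelevant predictor receives a coefficient of exactly zero rather than a small nonzero value, and sorting the surviving coefficients by magnitude yields the kind of importance ranking the abstract describes for magnitude, Rrup, and Vs30.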
Procedia PDF Downloads 330