Search results for: iterative hard thresholding
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1468

328 Effect of Geometric Imperfections on the Vibration Response of Hexagonal Lattices

Authors: P. Caimmi, E. Bele, A. Abolfathi

Abstract:

Lattice materials are cellular structures composed of a periodic network of beams. They offer high weight-specific mechanical properties and lend themselves to numerous weight-sensitive applications. The periodic internal structure responds to external vibrations through characteristic frequency bandgaps, making these materials suitable for the reduction of noise and vibration. However, deviations from architectural homogeneity, due to, e.g., manufacturing imperfections, have a strong influence on the mechanical properties and vibration response of these materials. In this work, we present results on the influence of geometric imperfections on the vibration response of hexagonal lattices. Three classes of geometrical variables are used: the characteristics of the architecture (relative density, ligament length/cell size ratio), imperfection type (degree of non-periodicity, cracks, hard inclusions), and defect morphology (size, distribution). Test specimens with controlled size and distribution of imperfections are manufactured through selective laser sintering. The Frequency Response Functions (FRFs) in the form of accelerance are measured, and the modal shapes are captured through a high-speed camera. The finite element method is used to provide insight into the extension of these results to semi-infinite lattices. A model updating procedure, adjusting the boundary conditions and material stiffness, is conducted to improve the agreement of numerical simulation results with experimental measurements. Variations in the FRFs of periodic structures due to changes in the relative density of the constituent unit cell are analysed, and the effects of geometric imperfections on the dynamic response of periodic structures are investigated. The findings open up opportunities for tailoring these lattice materials to achieve optimal amplitude attenuation at specific frequency ranges.
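As a hedged illustration of the accelerance FRFs measured above: a single-degree-of-freedom sketch. Real lattice specimens have many modes, but each resonance behaves locally like this; all parameter values here are assumed, not taken from the study.

```python
import numpy as np

# Accelerance FRF of a mass-spring-damper under harmonic forcing:
# A(w) = a(w)/F(w) = -w^2 / (k - m*w^2 + i*c*w).  Values are illustrative.
def accelerance(omega, m=1.0, c=0.05, k=100.0):
    """Complex accelerance of a single-degree-of-freedom oscillator."""
    return -omega**2 / (k - m * omega**2 + 1j * c * omega)

omega = np.linspace(0.1, 30.0, 3000)  # frequency grid in rad/s
mag = np.abs(accelerance(omega))

# With light damping, the magnitude peaks near the undamped natural
# frequency sqrt(k/m) = 10 rad/s.
peak_omega = omega[np.argmax(mag)]
print(round(peak_omega, 1))
```

In a measured FRF the same peak-picking idea applies per resonance; imperfections shift and split such peaks.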

Keywords: lattice architectures, geometric imperfections, vibration attenuation, experimental modal analysis

Procedia PDF Downloads 99
327 Scheduling Building Projects: The Chronographical Modeling Concept

Authors: Adel Francis

Abstract:

Most scheduling methods and software apply critical path logic. This logic schedules activities, applies constraints between them, and tries to optimize and level the allocated resources. The extensive use of this logic produces a complex and error-prone network that is hard to present, follow, and update. Planning and managing building projects should tackle the coordination of works and the management of limited spaces, traffic, and supplies. Activities cannot be performed without the available resources, and resources cannot be used beyond the capacity of workplaces; otherwise, workspace congestion will negatively affect the flow of works. The objective of space planning is to link the spatial and temporal aspects, promote efficient use of the site, define optimal site occupancy rates, and ensure suitable rotation of the workforce in the different spaces. Chronographic scheduling modelling belongs to this category and models construction operations as well as their processes, logical constraints, and association and organizational models, which help to better illustrate the schedule information using multiple flexible approaches. The model defines three categories of areas (punctual, surface, and linear) and five different layers (space creation, systems, closing off space, finishing, and reduction of space). Chronographical modelling is a more complete communication method, having the ability to alternate from one visual approach to another by manipulating graphics via a set of parameters and their associated values. Each individual approach can help to schedule a certain project type or specialty. Visual communication can also be improved through layering, sheeting, juxtaposition, alterations, and permutations, allowing for groupings, hierarchies, and classification of project information.
In this way, graphic representation becomes a living, transformable image, showing valuable information in a clear and comprehensible manner, simplifying the site management while simultaneously utilizing the visual space as efficiently as possible.
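The critical-path logic this abstract critiques can be sketched in a few lines: a forward pass for earliest times, a backward pass for latest times, and zero total float marking the critical path. The four-activity network is hypothetical, not from the paper.

```python
# Minimal CPM sketch on a hypothetical network: A precedes B and C, which
# both precede D.  Durations are in days.
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]  # a topological order of the activities

# Forward pass: earliest start (ES) and earliest finish (EF).
ES, EF = {}, {}
for act in order:
    ES[act] = max((EF[p] for p in preds[act]), default=0)
    EF[act] = ES[act] + durations[act]

# Backward pass: latest finish (LF) and latest start (LS).
project_end = max(EF.values())
LF, LS = {}, {}
for act in reversed(order):
    succs = [a for a, ps in preds.items() if act in ps]
    LF[act] = min((LS[a] for a in succs), default=project_end)
    LS[act] = LF[act] - durations[act]

# Activities with zero total float form the critical path.
critical = [a for a in order if ES[a] == LS[a]]
print(project_end, critical)  # 9 ['A', 'C', 'D']
```

Note what this logic omits, which is exactly the abstract's point: no workspaces, no congestion, no site occupancy, only precedence and durations.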

Keywords: building projects, chronographic modelling, CPM, critical path, precedence diagram, scheduling

Procedia PDF Downloads 128
326 Building a Parametric Link between Mapping and Planning: A Sunlight-Adaptive Urban Green System Plan Formation Process

Authors: Chenhao Zhu

Abstract:

Quantitative mapping is playing a growing role in guiding urban planning, such as using a heat map created by CFX, CFD2000, or Envi-met to adjust the master plan. However, there is no effective quantitative link between such mappings and plan formation, so in many cases decision-making is still based on the planner's subjective interpretation and understanding of these mappings, which limits the gains in scientific rigour and accuracy that quantitative mapping could bring. Therefore, this paper proposes a methodology for building a parametric link between mapping and plan formation. A parametric planning process based on radiant mapping is proposed for creating an urban green system. In the first step, a script is written in Grasshopper to build a road network and form the blocks, while the Ladybug plug-in is used to conduct a radiant analysis in the form of mapping. The research then transforms the radiant mapping from a polygon into a data-point matrix, because a polygon is hard to engage in the design formation. Next, another script selects the main green spaces from the road network based on the criterion of radiant intensity and connects the green spaces' central points to generate a green corridor. After that, a control parameter is introduced to adjust the corridor's form based on radiant intensity. Finally, a green system containing green space and a green corridor is generated under the quantitative control of the data matrix. The designer only needs to modify the control parameter according to the relevant research results and actual conditions to optimize the green system. This method can also be applied to many other mapping-based analyses, such as wind environment analysis, thermal environment analysis, and even environmental sensitivity analysis.
The parametric link between mapping and planning will bring about more accurate, objective, and scientific planning.
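The polygon-to-matrix thresholding step described above might be sketched as follows. The grid values and threshold are hypothetical, and this stands in for, rather than reproduces, the actual Grasshopper/Ladybug pipeline.

```python
import numpy as np

# Sample the radiant map onto a grid of data points, then select candidate
# green-space cells where radiant intensity falls below a control parameter.
radiant = np.array([
    [0.9, 0.8, 0.4, 0.3],
    [0.7, 0.5, 0.2, 0.3],
    [0.6, 0.4, 0.1, 0.2],
])  # normalised radiant intensity per grid cell (assumed values)

threshold = 0.35  # the control parameter a designer would tune
green_mask = radiant < threshold

# Central points of the selected cells could then be connected into a
# green corridor in a later step.
green_cells = list(zip(*np.nonzero(green_mask)))
print(len(green_cells))  # number of candidate green-space cells
```

Raising or lowering `threshold` plays the role of the control parameter in the abstract: the green system regenerates automatically from the data matrix.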

Keywords: parametric link, mapping, urban green system, radiant intensity, planning strategy, grasshopper

Procedia PDF Downloads 111
325 Defence Industry in the Political Economy of State and Business Relations

Authors: Hatice Idil Gorgen

Abstract:

Turkey has been investing in its national defence industrial base since the 1980s. The state's role in the defence industry has varied over time, and in parallel, the ruling group's attitude toward companies in the defence sector has changed. These shifts in state policies and behavior have occurred at such milestones as political and economic turmoil at the domestic and international levels. Hence, it is argued that the state's role, its relations with private companies in the defence sector, and its policies toward the defence industry have varied with the international system, political institutions, ideas, and political coalitions in Turkey since the 1980s. Therefore, in order to trace changes in the role of the state in the defence sector, this paper aims, first, to outline the history of the state's role in production and the defence industry in the post-1980s era. Secondly, to comprehend these changes, Stephan Haggard's sources of policy change will provide the theoretical ground. Thirdly, state-cooperated and joint-venture defence firms, and the state's actions toward them, will be observed. The remaining part will explore the underlying reasons for the changes in the role of the state in the defence industry and their implicit or explicit impacts on state-business relations. Major findings illustrate that, through the targeted idea of a self-sufficient or autarkic Turkey, promoted to attract the domestic audience and to raise prestige through the defence system, ruling elites can regard the defence industry and the business groups involved as a means to their ends. The state-dominant value and sensitive perception, which have persisted since the Ottoman Empire, prioritize business groups in the defence industry over others and push the ruling elites to pursue hard power in defence sectors. Through the global structural transformation of the defence industry, Turkey's integration into the liberal bloc deepened and widened interdependence among states.
Although this is a qualitative study, it involves enumerated data and descriptive statistics. Data will be collected by searching secondary sources in the literature and examining official documents of the Ministry of Defence and other relevant ministries.

Keywords: defence industry, state and business relations, public-private relations, arms industry

Procedia PDF Downloads 284
324 Forensic Entomology in Algeria

Authors: Meriem Taleb, Ghania Tail, Fatma Zohra Kara, Brahim Djedouani, T. Moussa

Abstract:

Forensic entomology is the use of insects and their arthropod relatives as silent witnesses to aid legal investigations by interpreting information concerning a death. Its main purpose is to establish the postmortem interval (PMI), which is of crucial importance in investigations of homicide and other untimely deaths when the body is found more than three days after death. Forensic entomology has grown immensely as a discipline in the past thirty years. In Algeria, it was introduced in 2010 by the National Institute for Criminalistics and Criminology of the National Gendarmerie (NICC). However, the work done so far in this growing field in Algeria has remained unknown at both the national and international levels. In this context, the aim of this paper is to describe the state of forensic entomology in Algeria. The Laboratory of Entomology of the NICC is the only one of its kind in Algeria. It started its activities in 2010 with two specialists. The main missions of the laboratory are estimating the PMI through the analysis of entomological evidence and determining whether the body was moved. Currently, the laboratory performs the expert work required by investigators to estimate the PMI using insects. The estimation is performed by the accumulated degree days (ADD) method in most cases, except those in which the cadaver is in dry decay. To assure the quality of the entomological evidence, crime scene personnel are trained by the Laboratory of Entomology of the NICC. Recently, undergraduate and graduate students have been studying carrion ecology and insect activity in different geographic locations of Algeria using rabbit and wild boar cadavers as animal models. The Laboratory of Entomology of the NICC has also been involved in some of these research projects.
Entomotoxicology experiments are also conducted in collaboration with the Toxicology Department of the NICC. Thanks to the hard work of the Laboratory of Entomology of the NICC, official bodies have increasingly adopted entomological evidence in criminal investigations in Algeria, which is commendable. It is important, therefore, that steps are taken to fill the gaps in the knowledge necessary for entomological evidence to have a useful future in criminal investigations in Algeria.
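The accumulated degree days (ADD) method mentioned above can be sketched as a running sum of daily thermal units above a development threshold. The base temperature, required ADD, and temperatures below are assumed values for illustration, not validated development data for any real species.

```python
# Hypothetical ADD sketch: accumulate degree-days above the species' lower
# development threshold until the stage collected at the scene is reached.
base_temp = 10.0      # lower development threshold in deg C (assumed)
required_add = 60.0   # degree-days needed to reach the collected stage (assumed)
daily_mean_temps = [22.0, 24.0, 23.0, 25.0, 26.0, 24.0, 22.0]  # scene records

accumulated, days = 0.0, 0
for t in daily_mean_temps:
    accumulated += max(t - base_temp, 0.0)  # no development below the threshold
    days += 1
    if accumulated >= required_add:
        break

# The day count gives a minimum PMI estimate.
print(days, accumulated)
```

Real casework would use species-specific thresholds and development tables, and the dry-decay exception noted in the abstract is exactly where this temperature-driven model breaks down.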

Keywords: forensic entomology, corpse, insects, postmortem interval, expertise, Algeria

Procedia PDF Downloads 374
323 Field Prognostic Factors on Discharge Prediction of Traumatic Brain Injuries

Authors: Mohammad Javad Behzadnia, Amir Bahador Boroumand

Abstract:

Introduction: Situations with limited facilities require allocating the available resources to the greatest number of casualties. Traumatic Brain Injury (TBI) is one condition for which the patient may need to be transported as soon as possible, and in a mass casualty event such decisions are hard when facilities are restricted. The Extended Glasgow Outcome Score (GOSE) has been introduced to assess the global outcome after brain injuries; we therefore aimed to evaluate the prognostic factors associated with GOSE. Materials and Methods: This was a multicenter cross-sectional study conducted on 144 patients with TBI admitted to trauma emergency centers. All patients with isolated TBI who were mentally and physically healthy before the trauma entered the study. The patients' information was evaluated, including demographic characteristics, duration of hospital stay, mechanical ventilation on admission, laboratory measurements, and on-admission vital signs. We recorded the patients' TBI-related symptoms and brain computed tomography (CT) scan findings. Results: GOSE assessments showed an increasing trend across the on-discharge (7.47 ± 1.30), one-month (7.51 ± 1.30), and three-month (7.58 ± 1.21) evaluations (P < 0.001). On discharge, GOSE was positively correlated with the Glasgow Coma Scale (GCS) (r = 0.729, P < 0.001) and motor GCS (r = 0.812, P < 0.001), and inversely correlated with age (r = −0.261, P = 0.002), hospitalization period (r = −0.678, P < 0.001), pulse rate (r = −0.256, P = 0.002), and white blood cell (WBC) count. Among the imaging signs and trauma-related symptoms examined in univariate analysis, intracranial hemorrhage (ICH), intraventricular hemorrhage (IVH) (P = 0.006), subarachnoid hemorrhage (SAH) (P = 0.06; marginal at P < 0.1), subdural hemorrhage (SDH) (P = 0.032), and epidural hemorrhage (EDH) (P = 0.037) remained significantly associated with GOSE at discharge in multivariable analysis.
Conclusion: Our study identified predictive factors that could help decide which casualties should be transported to a trauma center first. According to the current findings, GCS, pulse rate, WBC count, and, among imaging signs and trauma-related symptoms, ICH, IVH, SAH, SDH, and EDH are significant independent predictors of GOSE at discharge in TBI patients.
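The correlation analysis reported above can be illustrated with a minimal sketch. The GCS/GOSE values below are made up for demonstration and are not the study data.

```python
import numpy as np

# Pearson correlation of the kind reported above (GCS vs. GOSE at discharge),
# computed on small made-up values.  Higher GCS tends to go with higher GOSE,
# so r should be strongly positive.
gcs = np.array([15, 14, 13, 12, 10, 8, 6, 4])
gose = np.array([8, 8, 7, 7, 6, 5, 4, 3])

r = np.corrcoef(gcs, gose)[0, 1]  # off-diagonal entry of the 2x2 matrix
print(round(r, 3))
```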

Keywords: field, Glasgow outcome score, prediction, traumatic brain injury

Procedia PDF Downloads 44
322 Teaching Academic Writing for Publication: A Liminal Threshold Experience Towards Development of Scholarly Identity

Authors: Belinda du Plooy, Ruth Albertyn, Christel Troskie-De Bruin, Ella Belcher

Abstract:

In the academy, scholarliness or intellectual craftsmanship is considered the highest level of achievement, culminating in consistent, successful publication in impactful, peer-reviewed journals and books. Scholarliness implies rigorous methods, systematic exposition, in-depth analysis and evaluation, and the highest level of critical engagement and reflexivity. However, being a scholar does not happen automatically when one becomes an academic or completes graduate studies. A graduate qualification is an indication of one's level of research competence but does not necessarily prepare one for the type of scholarly writing for publication required after a postgraduate qualification has been conferred. Scholarly writing for publication requires a high-level skillset and a specific mindset, which must be intentionally developed. The rite of passage to becoming a scholar is an iterative process with liminal spaces, thresholds, transitions, and transformations. The journey from researcher to published author is often fraught with rejection, insecurity, and disappointment and requires resilience and tenacity from those who eventually triumph. It cannot be achieved without support, guidance, and mentorship. In this article, the authors use collective auto-ethnography (CAE) to describe the phases and types of liminality encountered during the liminal journey toward scholarship. The authors speak as long-time facilitators of Writing for Academic Publication (WfAP) capacity development events (training workshops and writing retreats) presented at South African universities. Their WfAP facilitation practice is structured around experiential learning principles that allow them to act as critical reading partners and reflective witnesses for the writer-participants of their WfAP events.
They identify three essential facilitation features for the effective holding of a generative, liminal, and transformational writing space for novice academic writers in order to enable their safe passage through the various liminal spaces they encounter during their scholarly development journey. These features are that facilitators should be agents of disruption and liminality while also guiding writers through these liminal spaces; that there should be a sense of mutual trust and respect, shared responsibility and accountability in order for writers to produce publication-worthy scholarly work; and that this can only be accomplished with the continued application of high levels of sensitivity and discernment by WfAP facilitators. These are key features for successful WfAP scholarship training events, where focused, individual input triggers personal and professional transformational experiences, which in turn translate into high-quality scholarly outputs.

Keywords: academic writing, liminality, scholarship, scholarliness, threshold experience, writing for publication

Procedia PDF Downloads 19
321 Interrogation of the Role of First Year Student Experiences in Student Success at a University of Technology in South Africa

Authors: Livingstone Makondo

Abstract:

This ongoing research explores what the components of a comprehensive First-Year Student Experience (FYSE) at the Durban University of Technology (DUT) could be, and the preferred implementation modalities. In light of the Siyaphumelela project, this interrogation is premised on the need to glean institutional data that could be used to ascertain the role of FYSE in enhancing student success. The research proceeds by examining prevalent models from other South African universities and beyond in its quest to arrive at a pragmatic, comprehensive FYSE programme for DUT. As DUT is a student-centred institution operating amidst an ever-shrinking economy, this research would aid higher education practitioners in ascertaining whether hard-earned finances are being channelled to a worthy academic venture. This research seeks inputs from a) students who participated in FYSE and are now in their second and third years at DUT, b) students who are currently participating in FYSE, c) former and present tutors, d) departmental coordinators, and e) academics and support staff working with the participating students. This exploratory approach is preferred because, since 2010, DUT has grappled with how to implement an integrated institution-wide FYSE. The findings of this research could provide the much-needed data to ascertain whether the current FYSE package is pivotal to the attainment of DUT Strategic Focus Area 1: Building sustainable student communities of living and learning. The ideal is to have the DUT FYSE programme become an institution-wide programme that lays the foundation for consolidated and focused student development programmes at subsequent undergraduate and postgraduate levels of study. Also, armed with data from this research, DUT could develop the capacity and systems to ensure that all students get diverse, on-time support to enhance their retention and academic success in their tertiary studies.
In essence, the preferred FYSE curriculum, woven around DUT graduate attributes, should contribute to a reduction in first-year students' dropout rates and subsequently in undergraduate studies. Therefore, this ongoing research will feed into the Siyaphumelela project and help position the 2018-2020 FYSE initiatives at DUT.

Keywords: challenges, comprehensive, dropout, transition

Procedia PDF Downloads 132
320 Effect of Plant Density and Planting Pattern on Yield and Quality of Single Cross 704 Silage Corn (Zea mays L.) in Isfahan

Authors: Seyed Mohammad Ali Zahedi

Abstract:

This field experiment was conducted in Isfahan in 2011 to study the effect of plant density and planting pattern on the growth, yield, and quality of silage corn (SC 704), using a randomized complete block design with a split-plot layout and four replications. The main plots consisted of three planting patterns (60 and 75 cm single planting rows and 75 cm double planting rows, referred to as 60S, 75S, and 75T, respectively). The subplots consisted of four plant densities (65,000, 80,000, 95,000, and 110,000 plants per hectare). Each subplot consisted of 7 rows, each 10 m long. Vegetative and reproductive characteristics of plants at the silking and hard dough stages (when the plants were harvested for silage) were evaluated. Results of the analysis of variance showed that the effects of planting pattern and plant density were significant for leaf area per plant, leaf area index (at silking), plant height, stem diameter, dry weights of leaf, stem, and ear at the silking and harvest stages, and for fresh and dry yield, dry matter percentage, and crude protein percentage at harvest. There was no planting pattern × plant density interaction for these parameters. As row spacing increased from 60 cm with single planting to 75 cm with single planting, leaf area index and plant height increased, but leaf area per plant, stem diameter, dry weight of leaf, stem, and ear, dry matter percentage, dry matter yield, and crude protein percentage decreased: dry matter yield fell from 24.9 to 18.5 t/ha and crude protein from 6.11 to 5.60 percent. When plant density increased from 65,000 to 110,000 plants per hectare, leaf area index, plant height, dry weight of leaf, stem, and ear, and dry matter yield (from 19.2 to 23.3 t/ha) increased, whereas leaf area per plant, stem diameter, dry matter percentage, and crude protein percentage (from 6.30 to 5.25 percent) decreased. The best results were obtained with 60 cm row spacing, single planting, and 110,000 plants per hectare.
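The in-row spacing implied by each density and row-spacing combination follows from simple geometry; the calculation below is illustrative and not taken from the paper.

```python
# One hectare is 10,000 m^2, so for single rows the in-row spacing is
# 10,000 / (row_spacing_m * plants_per_ha) metres.
def in_row_spacing_cm(row_spacing_cm, plants_per_ha):
    """In-row plant spacing (cm) for a single-row pattern."""
    row_m = row_spacing_cm / 100.0
    return 10_000.0 / (row_m * plants_per_ha) * 100.0  # convert back to cm

# Densest treatment on the narrowest rows: 110,000 plants/ha in 60 cm rows.
spacing = in_row_spacing_cm(60, 110_000)
print(round(spacing, 1))  # about 15.2 cm between plants within a row
```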

Keywords: silage corn, plant density, planting pattern, yield

Procedia PDF Downloads 308
319 Training During Emergency Response to Build Resiliency in Water, Sanitation, and Hygiene

Authors: Lee Boudreau, Ash Kumar Khaitu, Laura A. S. MacDonald

Abstract:

In April 2015, a magnitude 7.8 earthquake struck Nepal, killing, injuring, and displacing thousands of people. The earthquake also damaged water and sanitation service networks, leading to a high risk of diarrheal disease and the associated negative health impacts. In response to the disaster, the Environment and Public Health Organization (ENPHO), a Kathmandu-based non-governmental organization, worked with the Centre for Affordable Water and Sanitation Technology (CAWST), a Canadian education, training and consulting organization, to develop two training programs to educate volunteers on water, sanitation, and hygiene (WASH) needs. The first training program was intended for acute response, with the second focusing on longer term recovery. A key focus was to equip the volunteers with the knowledge and skills to formulate useful WASH advice in the unanticipated circumstances they would encounter when working in affected areas. Within the first two weeks of the disaster, a two-day acute response training was developed, which focused on enabling volunteers to educate those affected by the disaster about local WASH issues, their link to health, and their increased importance immediately following emergency situations. Between March and October 2015, a total of 19 training events took place, with over 470 volunteers trained. The trained volunteers distributed hygiene kits and liquid chlorine for household water treatment. They also facilitated health messaging and WASH awareness activities in affected communities. A three-day recovery phase training was also developed and has been delivered to volunteers in Nepal since October 2015. This training focused on WASH issues during the recovery and reconstruction phases. The interventions and recommendations in the recovery phase training focus on long-term WASH solutions, and so form a link between emergency relief strategies and long-term development goals. 
ENPHO has trained 226 volunteers during the recovery phase, with training ongoing as of April 2016. In the aftermath of the earthquake, ENPHO found that its existing pool of volunteers was more than willing to help those in their communities who were most in need. By training these and new volunteers, ENPHO was able to reach many more communities in the immediate aftermath of the disaster; together they reached 11 of the 14 earthquake-affected districts. Developing the training materials together was a highly collaborative and iterative process, which enabled the materials to be produced within a short response time. By training volunteers on basic WASH topics during both the immediate response and the recovery phase, ENPHO and CAWST have been able to link immediate emergency relief to long-term developmental goals. While the recovery-phase training continues in Nepal, CAWST is planning to decontextualize the training used in both phases so that it can be applied to other emergency situations in the future. The training materials will become part of the open content materials available on CAWST's WASH Resources website.

Keywords: water and sanitation, emergency response, education and training, building resilience

Procedia PDF Downloads 283
318 Regularizing Software for Aerosol Particles

Authors: Christine Böckmann, Julia Rosemann

Abstract:

We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. Single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition (TSVD) as the regularization method. This method was adapted to the retrieval of the particle size distribution (PSD) function and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be hugely amplified during the solution process unless an appropriate regularization method is used. Even with regularization, the task remains difficult since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration.
Here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which can run even on a parallel-processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 nm, which does not only cover the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%; in more detail, 87% of all cases for 355 nm and 88% for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts 1.5 and 1.6 in all modes, the accuracy limit of ±0.03 is achieved. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
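The truncated-SVD idea described above can be sketched on a generic ill-conditioned model problem. The Hilbert matrix below stands in for the lidar kernel; the noise level and truncation index are assumed for illustration.

```python
import numpy as np

# TSVD regularization sketch: on an ill-conditioned system, a naive solve
# amplifies tiny data errors hugely, while truncating the SVD stabilises
# the solution.  The truncation index k is the regularization parameter.
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert matrix
x_true = np.ones(n)
noise = 1e-6 * np.sin(np.arange(n))  # small deterministic "measurement error"
b = A @ x_true + noise

U, s, Vt = np.linalg.svd(A)

def tsvd_solve(k):
    """Keep only the k largest singular values when inverting."""
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

x_naive = np.linalg.solve(A, b)  # tiny errors blown up by the condition number
x_reg = tsvd_solve(4)            # truncated solution stays near x_true
err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
print(err_reg < err_naive)
```

Choosing k here is the same dilemma the abstract describes for its triple of regularization parameters: truncate too little and noise dominates, too much and real structure is lost.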

Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization

Procedia PDF Downloads 320
317 Application of Recycled Tungsten Carbide Powder for Fabrication of Iron Based Powder Metallurgy Alloy

Authors: Yukinori Taniguchi, Kazuyoshi Kurita, Kohei Mizuta, Keigo Nishitani, Ryuichi Fukuda

Abstract:

Tungsten carbide is widely used as a tool material in metal manufacturing processes. Since tungsten is a typical rare metal, establishing a process to recycle tungsten carbide tools and restore them into cemented carbide material would have a great impact on the metal manufacturing industry. Recently, recycling processes for tungsten carbide have been gradually developed and established. However, the quality demands for cemented carbide tools are quite severe, because hardness, toughness, anti-wear ability, heat resistance, fatigue strength, and so on must be guaranteed for precision machining and tool life. Currently, it is hard to restore recycled tungsten carbide powder entirely as raw material for newly processed cemented carbide tools. In this study, to suggest a positive use of recycled tungsten carbide powder, we have tried to fabricate a carbon-based sintered steel whose mechanical properties are reinforced with recycled tungsten carbide powder. We made a set of newly designed sintered steels and conducted compression tests on sintered specimens at a density ratio of 0.85 (which means 15% porosity inside). As a result, a nominal strength at least 1.7 times higher was obtained at a recycled WC powder content of 7.0 wt.%; the strength reached over 600 MPa for the Fe-WC-Co-Cu sintered alloy. Wear tests were conducted with a ball-on-disk friction tester using a 5 mm diameter ball at a normal force of 2 N in dry conditions. The wear after a 1,000 m running distance shows about 1.5 times longer life for the designed sintered alloy. Since tensile test results showed the same tendency, it is concluded that the designed sintered alloy can be used for several mechanical parts requiring particular strength and anti-wear ability at relatively low cost, owing to the recycled tungsten carbide powder.
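For reference, the nominal (engineering) strength reported from such compression tests is simply load over initial cross-section. The specimen diameter and failure load below are hypothetical, chosen only to land near the 600 MPa figure quoted above.

```python
import math

# Nominal stress in a compression test: sigma = F / A0, where A0 is the
# initial cross-sectional area.  N/mm^2 is numerically equal to MPa.
def nominal_stress_mpa(force_n, diameter_mm):
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
    return force_n / area_mm2

# e.g. a hypothetical 10 mm diameter cylinder failing at 47.1 kN:
sigma = nominal_stress_mpa(47_100, 10.0)
print(round(sigma))  # about 600 MPa
```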

Keywords: tungsten carbide, recycle process, compression test, powder metallurgy, anti-wear ability

Procedia PDF Downloads 224
316 Understanding the Productivity Effect on Industrial Management: The Portuguese Wood Furniture Industry Case Study

Authors: Jonas A. R. H. Lima, Maria Antonia Carravilla

Abstract:

As productivity concepts are closely related to industrial savings, it is becoming particularly important, in an ever more competitive world, to understand how productivity can be put to use in industrial management techniques. Nowadays, consumers are no longer willing to pay for mistakes and inefficiencies. Therefore, one way for companies to stay competitive is to control and increase their productivity. This study aims to define the productivity concept clearly, understand how a company can affect productivity, and, if possible, identify the relations among the identified productivity factors. This will help managers by clarifying the main issues behind productivity concepts and by proposing a methodology to measure, control, and increase productivity. The main questions to be answered are: What is the importance of productivity for the Portuguese wood furniture industry? Is it possible to control productivity internally, or is it a phenomenon external to companies, hard or even impossible to control? How can productivity performance be understood, controlled, and adjusted? How can productivity become a main asset for maximizing the use of available resources? This essay follows a constructive approach mostly based on the research questions mentioned above. A literature review is being conducted to find the main conceptual frameworks and empirical studies that already exist and, by doing so, to highlight knowledge gaps or conflicting research to be addressed in this work. We expect to build theoretical explanations and test theoretical predictions from participants' understandings and experiences by conducting field surveys and interviews, selecting adjusted productivity indicators, and analysing the evolution of productivity according to adjustments in other variables. The intention is to conduct exploratory work that can simultaneously clarify productivity concepts and objectives and define frameworks.
This investigation intends to move from merely academic concepts to the day-to-day operational reality of companies in the Portuguese wood furniture industry, highlighting the increased importance of productivity within modern engineering and industrial management. The ambition is to clarify, systematize, and develop a management tool that may not only control but positively influence the way resources are used.

Keywords: industrial management, motivation, productivity, performance indicators, reward management, wood furniture industry

Procedia PDF Downloads 201
315 Sparse Representation Based Spatiotemporal Fusion Employing Additional Image Pairs to Improve Dictionary Training

Authors: Dacheng Li, Bo Huang, Qinjin Han, Ming Li

Abstract:

Remotely sensed imagery with high spatial and temporal resolution, which is hard to acquire with current land observation satellites, is considered a key resource for monitoring environmental changes at both global and local scales. Given the limited availability of high spatial-resolution observations, a line of research known as spatiotemporal fusion has developed methods for generating high spatiotemporal-resolution images by employing auxiliary low spatial-resolution data with high-frequency observations. However, a majority of spatiotemporal fusion approaches suffer from restrictive assumptions, empirical but unstable parameters, low accuracy or inefficient performance. Although spatiotemporal fusion based on sparse representation theory has advantages in capturing reflectance changes, stability and execution efficiency (even more so when overcomplete dictionaries have been pre-trained), the retrieval of a high-accuracy dictionary and its effect on fusion results are still open issues. In this paper, we introduce additional image pairs (each image-pair comprising a Landsat Operational Land Imager acquisition and a Moderate Resolution Imaging Spectroradiometer acquisition covering part of Baotou, China) only into the coupled dictionary training process based on the K-SVD (K-means Singular Value Decomposition) algorithm, and attempt to improve the fusion results of two existing sparse representation based fusion models (utilizing one and two available image-pairs, respectively). The results show that additional eligible image pairs tend to yield a more accurate overcomplete dictionary, which generally indicates a better image representation and in turn contributes to effective fusion performance, provided that the added image-pair has seasonal aspects and spatial structure features similar to those of the original image-pair. It is, therefore, reasonable to construct a multi-dictionary training pattern for generating a series of high spatial-resolution images from limited acquisitions.
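The K-SVD dictionary-update step named in the abstract can be sketched as follows; this is a minimal illustration with invented variable names and shapes, not the authors' implementation:

```python
import numpy as np

def ksvd_update(D, X, Y):
    """One K-SVD dictionary-update sweep (in place).

    D: dictionary, shape (n, K); X: sparse codes, shape (K, m);
    Y: training patches, shape (n, m). Each atom is refit by a rank-1
    SVD of the residual restricted to the signals that use it.
    """
    for k in range(D.shape[1]):
        users = np.nonzero(X[k])[0]            # signals that use atom k
        if users.size == 0:
            continue                           # unused atom: leave as-is
        # Residual with atom k's contribution removed, on those signals.
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]                      # new (unit-norm) atom
        X[k, users] = s[0] * Vt[0]             # updated coefficients
    return D, X
```

In the full algorithm this sweep alternates with a sparse-coding stage (e.g., orthogonal matching pursuit) until the dictionary converges.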

Keywords: spatiotemporal fusion, sparse representation, K-SVD algorithm, dictionary learning

Procedia PDF Downloads 232
314 Using Multiomic Plasma Profiling From Liquid Biopsies to Identify Potential Signatures for Disease Diagnostics in Late-Stage Non-small Cell Lung Cancer (NSCLC) in Trinidad and Tobago

Authors: Nicole Ramlachan, Samuel Mark West

Abstract:

Lung cancer is the leading cause of cancer-associated deaths in North America, and the vast majority of cases are non-small cell lung cancer (NSCLC), which has a five-year survival rate of only 24%. Non-invasive discovery of biomarkers associated with early diagnosis of NSCLC can enable precision oncology efforts using liquid biopsy-based multiomics profiling of plasma. Although tissue biopsies are currently the gold standard for tumor profiling, this method has many limitations: it is invasive, risky, sometimes hard to perform, and yields only a limited tumor profile. Blood-based tests provide a less invasive, more robust approach to interrogate both tumor- and non-tumor-derived signals. We intend to enroll 30 stage III-IV NSCLC patients pre-surgery and collect plasma samples. Cell-free DNA (cfDNA) will be extracted from plasma, and next-generation sequencing (NGS) performed. Through the analysis of tumor-specific alterations, including single nucleotide variants (SNVs), insertions, deletions, copy number variations (CNVs), and methylation alterations, we intend to identify tumor-derived DNA (ctDNA) among the total pool of cfDNA. This would generate data to be used as an accurate form of cancer genotyping for diagnostic purposes. Liquid biopsies offer opportunities to improve the surveillance of cancer patients during treatment and would supplement diagnosis and tumor profiling strategies previously not readily available in Trinidad and Tobago. They would be useful for diagnosis and tumor profiling as well as for monitoring cancer patients, providing early information on disease evolution and treatment efficacy and allowing treatment strategies to be reoriented in time, thereby improving clinical oncology outcomes.

Keywords: genomics, multiomics, clinical genetics, genotyping, oncology, diagnostics

Procedia PDF Downloads 116
313 The Integration of Geographical Information Systems and Capacitated Vehicle Routing Problem with Simulated Demand for Humanitarian Logistics in Tsunami-Prone Area: A Case Study of Phuket, Thailand

Authors: Kiatkulchai Jitt-Aer, Graham Wall, Dylan Jones

Abstract:

As a result of the Indian Ocean tsunami in 2004, logistics applied to disaster relief operations has received great attention in the humanitarian sector. As learned from that disaster, preparing for and managing the delivery of essential items from distribution centres to affected locations is critical for relief operations, since the scale of a disaster is uncertain, especially the casualty figures to which the quantity of supplies is normally proportional. Thus, this study proposes a spatial decision support system (SDSS) for humanitarian logistics by integrating Geographical Information Systems (GIS) and the capacitated vehicle routing problem (CVRP). The GIS is utilised for acquiring demands simulated from the tsunami flooding model of the affected area in the first stage, and for visualising the solutions in the last stage. The CVRP in this study involves designing the relief routes of a set of homogeneous vehicles from a relief centre to a set of geographically distributed evacuation points whose demands are estimated using both simulation and randomisation techniques. The CVRP is modelled as a multi-objective optimisation problem in which both total travelling distance and total transport resources used are minimised, while the demand-cost efficiency of each route is maximised in order to determine route priority. As the model is an NP-hard combinatorial optimisation problem, the Clarke and Wright savings heuristic is applied to obtain near-optimal solutions. Real-case instances in the coastal area of Phuket, Thailand are studied using the SDSS, which allows a decision maker to visually analyse the simulation scenarios through different decision factors.
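The Clarke and Wright savings heuristic mentioned in the abstract can be sketched roughly as below; this is a simplified parallel-savings variant with invented data structures and toy Euclidean distances, not the authors' implementation:

```python
import math

def clarke_wright(depot, points, demands, capacity):
    """Simplified parallel Clarke-Wright savings heuristic for the CVRP.

    depot: (x, y); points: dict id -> (x, y); demands: dict id -> demand.
    Returns a list of routes (lists of customer ids), each within capacity.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Start with one out-and-back route per customer.
    routes = {i: [i] for i in points}        # route id -> customer list
    route_of = {i: i for i in points}        # customer -> route id
    load = {i: demands[i] for i in points}

    # Savings s(i, j) = d(0, i) + d(0, j) - d(i, j), sorted descending.
    ids = list(points)
    savings = sorted(
        ((dist(depot, points[i]) + dist(depot, points[j])
          - dist(points[i], points[j]), i, j)
         for n, i in enumerate(ids) for j in ids[n + 1:]),
        reverse=True)

    for s, i, j in savings:
        ri, rj = route_of[i], route_of[j]
        if ri == rj or load[ri] + load[rj] > capacity:
            continue
        # Merge only if i and j are endpoints of their respective routes.
        a, b = routes[ri], routes[rj]
        if a[-1] == i and b[0] == j:
            merged = a + b
        elif b[-1] == j and a[0] == i:
            merged = b + a
        else:
            continue
        routes[ri] = merged
        load[ri] += load[rj]
        for c in b:
            route_of[c] = ri
        del routes[rj], load[rj]

    return list(routes.values())
```

The multi-objective and route-priority extensions described in the study would sit on top of a merge loop like this one, e.g. by reweighting or filtering the candidate merges.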

Keywords: demand simulation, humanitarian logistics, geographical information systems, relief operations, capacitated vehicle routing problem

Procedia PDF Downloads 223
312 Self-Assembled Laser-Activated Plasmonic Substrates for High-Throughput, High-Efficiency Intracellular Delivery

Authors: Marinna Madrid, Nabiha Saklayen, Marinus Huber, Nicolas Vogel, Christos Boutopoulos, Michel Meunier, Eric Mazur

Abstract:

Delivering material into cells is important for a diverse range of biological applications, including gene therapy, cellular engineering and imaging. We present a plasmonic substrate for delivering membrane-impermeable material into cells at high throughput and high efficiency while maintaining cell viability. The substrate fabrication is based on an affordable and fast colloidal self-assembly process. When illuminated with a femtosecond laser, the light interacts with the electrons at the surface of the metal substrate, creating localized surface plasmons that form bubbles via energy dissipation in the surrounding medium. These bubbles come into close contact with the cell membrane to form transient pores and enable entry of membrane-impermeable material via diffusion. We use fluorescence microscopy and flow cytometry to verify delivery of membrane-impermeable material into HeLa CCL-2 cells. We show delivery efficiency and cell viability data for a range of membrane-impermeable cargo, including dyes and biologically relevant material such as siRNA. We estimate the effective pore size by determining delivery efficiency for hard fluorescent spheres with diameters ranging from 20 nm to 2 µm. To provide insight into the cell poration mechanism, we relate the poration data to pump-probe measurements of micro- and nano-bubble formation on the plasmonic substrate. Finally, we investigate substrate stability and reusability by using scanning electron microscopy (SEM) to inspect the substrate for damage after laser treatment. SEM images show no visible damage. Our findings indicate that self-assembled plasmonic substrates are an affordable tool for high-throughput, high-efficiency delivery of material into mammalian cells.

Keywords: femtosecond laser, intracellular delivery, plasmonic, self-assembly

Procedia PDF Downloads 506
311 Searching for the ‘Why’ of Gendered News: Journalism Practices and Societal Contexts

Authors: R. Simões, M. Silveirinha

Abstract:

Driven by the need to understand the results of previous research, which clearly shows deep imbalances in media discourses about women and men in spite of the growing numbers of female journalists, our paper aims to progress from the 'what' to the 'why' of these unbalanced representations. Furthermore, it does so at a time when journalism is undergoing a dramatic change in terms of professional practices and in how media organizations are organized and run, affecting women in particular. While some feminist research points to the fact that female and male journalists evaluate the role of the news and production methods in similar ways, feminist theorizing also suggests that thought and knowledge are highly influenced by social identity, which is itself inherently affected by the experiences of gender. This is particularly important at a time of deep societal and professional changes. While there are persuasive discussions of gender identities at work in newsrooms in various countries, studies on the issue will benefit from cases that focus on the particularities of local contexts. In our paper, we present one such case: the case of Portugal, a country hit hard by austerity measures that have affected all cultural industries, including journalism organizations already feeling the broader impacts of the larger societal changes in the media landscape. Can we gender these changes? How are they felt and understood by female and male journalists? And how are these discourses framed by androcentric, feminist and post-feminist sensibilities? Foregrounding questions of gender, our paper seeks to explore some of the interactions of societal and professional forces, identifying their gendered character and outlining how they shape journalism work in general and the production of unbalanced gender representations in particular.
We do so grounded in feminist studies of journalism as well as feminist organizational and work studies, looking at a corpus of 20 in-depth interviews of female and male Portuguese journalists. The research findings illustrate how gender in journalism practices interacts with broader experiences of the cultural and economic contexts and show the ambivalences of these interactions in news organizations.

Keywords: gender, journalism, newsroom culture, Portuguese journalists

Procedia PDF Downloads 369
310 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images

Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir

Abstract:

The landing phase of a UAV is very critical, as there are many uncertainties in this phase that can easily lead to a hard landing or even a crash. In this paper, the estimation of relative distance and velocity to the ground, one of the most important processes during the landing phase, is studied. Accurate measurement sensors can be very expensive (e.g., LIDAR) or have a limited operational range (e.g., ultrasonic sensors). Additionally, absolute positioning systems like GPS or IMU cannot provide the distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured in the landing phase using only low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable feature in a series of images taken during the UAV landing. Two different approaches based on Extended Kalman Filters (EKF) are proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process model and the calculated optical flow as the measurement; the second approach uses the feature's projection on the camera plane (pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of the projected point as the process model, to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed has been used to compare the performance of the proposed algorithms. The case studies show that the quality of the images introduces considerable noise, which reduces the performance of the first approach. Using the projected feature position, on the other hand, is much less sensitive to this noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
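A projection-based estimator of this kind can be illustrated with a minimal one-dimensional EKF; the state, the measurement model (projected pixel size of a ground feature of known physical size) and all numeric values below are assumptions made for the sketch, not the authors' formulation:

```python
import numpy as np

def ekf_descent(z_meas, dt, f=800.0, S=1.0, q=0.05, r=1.0):
    """EKF for state x = [distance d, vertical velocity v].

    Process: d_{k+1} = d_k + v_k * dt (constant velocity).
    Measurement: z = f * S / d, the projected pixel size of a ground
    feature of known size S seen by a camera with focal length f (px).
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.eye(2)
    x = np.array([f * S / z_meas[0], 0.0])   # initialize from first frame
    P = np.diag([1.0, 1.0])
    for z in z_meas[1:]:
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with linearized measurement h(x) = f*S/d.
        d = x[0]
        H = np.array([[-f * S / d**2, 0.0]])  # Jacobian of h
        y = z - f * S / d                     # innovation
        Sk = H @ P @ H.T + r
        K = (P @ H.T) / Sk                    # Kalman gain
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x  # [estimated distance, estimated velocity]
```

Fed a sequence of projected feature sizes from a steady descent, the filter converges to the (negative) vertical velocity without any direct range sensor.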

Keywords: altitude estimation, drone, image processing, trajectory planning

Procedia PDF Downloads 86
309 Measurement and Simulation of Axial Neutron Flux Distribution in Dry Tube of KAMINI Reactor

Authors: Manish Chand, Subhrojit Bagchi, R. Kumar

Abstract:

A new dry tube (DT) has been installed in the tank of the KAMINI research reactor at Kalpakkam, India. This tube will be used for neutron activation analysis of small to large samples and for testing neutron detectors. The DT is 375 cm in height and 7.5 cm in diameter, located 35 cm away from the core centre. The experimental thermal flux at various axial positions inside the tube has been measured by irradiating a flux monitor (¹⁹⁷Au) at 20 kW reactor power. The measured activity of ¹⁹⁸Au and the thermal cross section of the ¹⁹⁷Au(n,γ)¹⁹⁸Au reaction were used to determine the experimental thermal flux. The flux inside the tube varies from 10⁹ to 10¹⁰ n cm⁻²s⁻¹, with a maximum of (1.02 ± 0.023) × 10¹⁰ n cm⁻²s⁻¹ at 36 cm from the bottom of the tube. Au and Zr foils, bare and under a cadmium cover of 1 mm thickness, were irradiated at the maximum-flux position in the DT to determine irradiation-specific input parameters such as the sub-cadmium to epithermal neutron flux ratio (f) and the epithermal neutron flux shape factor (α). The f value was 143 ± 5, indicating an approximately 99.3% thermal neutron component, and the α value was -0.2886 ± 0.0125, indicating a hard epithermal neutron spectrum due to insufficient moderation. The measured flux profile has been validated against a theoretical model of the KAMINI reactor using the Monte Carlo N-Particle code (MCNP). In MCNP, the complex geometry of the entire reactor is modelled in 3D, ensuring minimum approximations for all components. Continuous-energy cross-section data from ENDF/B-VII.1 as well as S(α, β) thermal neutron scattering functions are considered. The neutron flux has been estimated at the corresponding axial locations of the DT using a mesh tally. The thermal flux obtained from the experiment shows good agreement with the values predicted by MCNP, within ±10%. It can be concluded that this MCNP model can be utilized for calculating other important parameters such as neutron spectra and dose rates, and multi-elemental analysis can be carried out by irradiating samples at the maximum-flux position using the measured f and α parameters with k₀-NAA standardization.
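The standard activation relation behind such a gold-foil flux measurement can be sketched as below; the numeric inputs in the example are invented for illustration and are not the values measured in this work:

```python
import math

# Hedged sketch of the activation equation used to infer thermal flux
# from a gold-foil measurement:
#     phi = A0 / (N * sigma * (1 - exp(-lambda * t_irr)))
# where A0 is the 198Au activity at the end of irradiation.

BARN = 1e-24      # cm^2
N_A = 6.022e23    # atoms per mole

def thermal_flux(A0, m_g, t_irr_s, sigma_b=98.65, M=196.97,
                 t_half_s=2.7 * 86400):
    """Thermal flux (n cm^-2 s^-1) from end-of-irradiation activity A0 (Bq)
    of a 197Au foil of mass m_g (g) irradiated for t_irr_s seconds."""
    N = m_g / M * N_A                     # number of 197Au target atoms
    lam = math.log(2) / t_half_s          # decay constant of 198Au
    return A0 / (N * sigma_b * BARN * (1 - math.exp(-lam * t_irr_s)))
```

The factor (1 - exp(-λ t_irr)) corrects for decay of ¹⁹⁸Au during the irradiation itself; decay between the end of irradiation and counting would add a further exp(-λ t_d) correction.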

Keywords: neutron flux, neutron activation analysis, neutron flux shape factor, MCNP, Monte Carlo N-Particle Code

Procedia PDF Downloads 130
308 Field Study of Chlorinated Aliphatic Hydrocarbons Degradation in Contaminated Groundwater via Micron Zero-Valent Iron Coupled with Biostimulation

Authors: Naijin Wu, Peizhong Li, Haijian Wang, Wenxia Wei, Yun Song

Abstract:

Chlorinated aliphatic hydrocarbon (CAH) pollution poses a severe threat to human health and is persistent in groundwater. Although chemical reduction and bioremediation are effective, complete and rapid dechlorination is still hard to achieve. Recently, the combination of zero-valent iron and biostimulation has been considered one of the most promising strategies, but field studies of this technology are scarce. At a typical site contaminated by various types of CAHs, basic physicochemical parameters of the groundwater, CAH and product concentrations, and microbial abundance and diversity were monitored after a remediation slurry containing both micron zero-valent iron (mZVI) and biostimulation components was injected directly into the aquifer. Results showed that the groundwater attained and maintained low oxidation-reduction potential (ORP), a neutral pH, and anoxic conditions after fluctuations of varying degree, which favoured the reductive dechlorination of CAHs. The injection also caused an obvious increase in total organic carbon (TOC) concentration and in sulfate reduction. By 253 days post-injection, the mean concentration of total chlorinated ethylenes (CEE) in two monitoring wells had decreased from 304 μg/L to 8 μg/L, and total chlorinated ethanes (CEA) from 548 μg/L to 108 μg/L. The occurrence of chloroethane (CA) suggested that hydrogenolysis was one of the main degradation pathways for CEA and also hinted that biological dechlorination had been activated. A significant increase in ethylene at day 67 post-injection indicated that dechlorination was complete. Additionally, total bacterial counts increased by 2-3 orders of magnitude over the 253 days post-injection, while microbial species richness decreased and the community gradually shifted toward anaerobic/fermentative bacteria. The relative abundance of potential degraders increased in correspondence with the degradation of CAHs. This work demonstrates that mZVI and biostimulation can be combined to achieve efficient removal of various CAHs from contaminated groundwater.

Keywords: chlorinated aliphatic hydrocarbons, groundwater, field study, zero-valent iron, biostimulation

Procedia PDF Downloads 137
307 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration

Authors: Matthew Yeager, Christopher Willy, John Bischoff

Abstract:

The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that oftentimes incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally-, but not globally-, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (e.g., sensors, CPUs, modular/auxiliary access, etc.) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs.
Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas such previous work has been focused on aerospace systems and conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g. hardware components) as well as popular multi-sensor data fusion models and techniques. Furthermore, statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating a significant utility within the realm of formal systems decision-making.
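In the spirit of the tradespace sweep described above, a toy enumeration might look as follows; the design variables, attribute models and weights are invented purely for illustration and are not part of this study:

```python
from itertools import product

# Enumerate a tiny design tradespace, score two attributes per design
# (performance: higher is better; cost: lower is better), and keep the
# Pareto-efficient set. The attribute models below are invented.

designs = [dict(zip(("sensors", "cpu"), c))
           for c in product((1, 2, 4), ("slow", "fast"))]

def attributes(d):
    perf = d["sensors"] * (2.0 if d["cpu"] == "fast" else 1.0)
    cost = d["sensors"] * 10 + (15 if d["cpu"] == "fast" else 5)
    return perf, cost

def pareto(ds):
    """Keep designs not strictly dominated in (max perf, min cost)."""
    scored = [(d, *attributes(d)) for d in ds]
    return [d for d, p, c in scored
            if not any(p2 >= p and c2 <= c and (p2 > p or c2 < c)
                       for _, p2, c2 in scored)]

front = pareto(designs)
```

A non-deterministic MATE study would replace the point attribute values with distributions (e.g., Monte Carlo samples over uncertain parameters) before applying the same dominance filter.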

Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design

Procedia PDF Downloads 155
306 Development of a Novel Ankle-Foot Orthotic Using a User Centered Approach for Improved Satisfaction

Authors: Ahlad Neti, Elisa Arch, Martha Hall

Abstract:

Studies have shown that individuals who use Ankle-Foot-Orthoses (AFOs) have a high level of dissatisfaction regarding their current AFOs. Studies point to the focus on technical design with little attention given to the user perspective as a source of AFO designs that leave users dissatisfied. To design a new AFO that satisfies users and thereby improves their quality of life, the reasons for their dissatisfaction and their wants and needs for an improved AFO design must be identified. There has been little research into the user perspective on AFO use and desired improvements, so the relationship between AFO design and satisfaction in daily use must be assessed to develop appropriate metrics and constraints prior to designing a novel AFO. To assess the user perspective on AFO design, structured interviews were conducted with 7 individuals (average age of 64.29±8.81 years) who use AFOs. All interviews were transcribed and coded to identify common themes using Grounded Theory Method in NVivo 12. Qualitative analysis of these results identified sources of user dissatisfaction such as heaviness, bulk, and uncomfortable material and overall needs and wants for an AFO. Beyond the user perspective, certain objective factors must be considered in the construction of metrics and constraints to ensure that the AFO fulfills its medical purpose. These more objective metrics are rooted in a common medical device market and technical standards. Given the large body of research concerning these standards, these objective metrics and constraints were derived through a literature review. Through these two methods, a comprehensive list of metrics and constraints accounting for both the user perspective on AFO design and the AFO’s medical purpose was compiled. These metrics and constraints will establish the framework for designing a new AFO that carries out its medical purpose while also improving the user experience. 
The metrics can be categorized into several overarching areas for AFO improvement. Categories of user perspective related metrics include comfort, discreteness, aesthetics, ease of use, and compatibility with clothing. Categories of medical purpose related metrics include biomechanical functionality, durability, and affordability. These metrics were used to guide an iterative prototyping process. Six concepts were ideated and compared using system-level analysis. From these six concepts, two concepts – the piano wire model and the segmented model – were selected to move forward into prototyping. Evaluation of non-functional prototypes of the piano wire and segmented models determined that the piano wire model better fulfilled the metrics by offering increased stability, longer durability, fewer points for failure, and a strong enough core component to allow a sock to cover over the AFO while maintaining the overall structure. As such, the piano wire AFO has moved forward into the functional prototyping phase, and healthy subject testing is being designed and recruited to conduct design validation and verification.

Keywords: ankle-foot orthotic, assistive technology, human centered design, medical devices

Procedia PDF Downloads 124
305 On Cloud Computing: A Review of the Features

Authors: Assem Abdel Hamed Mousa

Abstract:

The Internet of Things probably already influences your life, and if it doesn't, it soon will, say computer scientists. Ubiquitous computing names the third wave in computing, just now beginning. First were mainframes, each shared by lots of people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives. Alan Kay of Apple calls this "Third Paradigm" computing. Ubiquitous computing is essentially the term for human interaction with computers in virtually everything. Ubiquitous computing is roughly the opposite of virtual reality: where virtual reality puts people inside a computer-generated world, ubiquitous computing forces the computer to live out here in the world with people. Virtual reality is primarily a horsepower problem; ubiquitous computing is a very difficult integration of human factors, computer science, engineering, and social sciences. The approach: activate the world. Provide hundreds of wireless computing devices per person per office, of all scales (from 1" displays to wall-sized). This has required new work in operating systems, user interfaces, networks, wireless, displays, and many other areas. We call our work "ubiquitous computing". This is different from PDAs, dynabooks, or information at your fingertips; it is invisible, everywhere computing that does not live on a personal device of any sort, but is in the woodwork everywhere. The initial incarnation of ubiquitous computing was in the form of "tabs", "pads", and "boards" built at Xerox PARC in 1988-1994. Several papers describe this work, and there are web pages for the tabs and for the boards (which are a commercial product now). Ubiquitous computing will drastically reduce the cost of digital devices and tasks for the average consumer. With labor-intensive components such as processors and hard drives stored in the remote data centers powering the cloud, and with pooled resources giving individual consumers the benefits of economies of scale, consumers may pay monthly fees, similar to a cable bill, for services that feed into their phones.

Keywords: internet, cloud computing, ubiquitous computing, big data

Procedia PDF Downloads 360
304 Documenting the 15th Century Prints with RTI

Authors: Peter Fornaro, Lothar Schmitt

Abstract:

The Digital Humanities Lab and the Institute of Art History at the University of Basel are collaborating in the SNSF research project ‘Digital Materiality’. Its goal is to develop and enhance existing methods for the digital reproduction of cultural heritage objects in order to support art historical research. One part of the project focuses on the visualization of a small eye-catching group of early prints that are noteworthy for their subtle reliefs and glossy surfaces. Additionally, this group of objects – known as ‘paste prints’ – is characterized by its fragile state of preservation. Because of the brittle substances that were used for their production, most paste prints are heavily damaged and thus very hard to examine. These specific material properties make a photographic reproduction extremely difficult. To obtain better results we are working with Reflectance Transformation Imaging (RTI), a computational photographic method that is already used in archaeological and cultural heritage research. This technique allows documenting how three-dimensional surfaces respond to changing lighting situations. Our first results show that RTI can capture the material properties of paste prints and their current state of preservation more accurately than conventional photographs, although there are limitations with glossy surfaces because the mathematical models that are included in RTI are kept simple in order to keep the software robust and easy to use. To improve the method, we are currently developing tools for a more detailed analysis and simulation of the reflectance behavior. An enhanced analytical model for the representation and visualization of gloss will increase the significance of digital representations of cultural heritage objects. For collaborative efforts, we are working on a web-based viewer application for RTI images based on WebGL in order to make acquired data accessible to a broader international research community. 
At the ICDH Conference, we would like to present unpublished results of our work and discuss the implications of our concept for art history, computational photography and heritage science.
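The per-pixel fitting step underlying classic RTI can be sketched as a least-squares fit of a polynomial texture map (PTM); the biquadratic basis is the standard PTM form, while the function names and synthetic data below are our own illustration:

```python
import numpy as np

def fit_ptm(light_dirs, luminances):
    """Fit the six PTM coefficients for one pixel.

    light_dirs: (n, 2) array of projected light directions (lu, lv);
    luminances: (n,) observed pixel values under those lights.
    Models L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5.
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])
    coeffs, *_ = np.linalg.lstsq(A, luminances, rcond=None)
    return coeffs

def relight(coeffs, lu, lv):
    """Evaluate the fitted model for a new light direction."""
    return coeffs @ np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
```

The smooth biquadratic model is exactly why simple RTI struggles with sharp gloss: specular peaks are far narrower than anything this basis can represent, which motivates the enhanced reflectance models discussed above.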

Keywords: art history, computational photography, paste prints, reflectance transformation imaging

Procedia PDF Downloads 256
303 Teaching Non-Euclidean Geometries to Learn Euclidean One: An Experimental Study

Authors: Silvia Benvenuti, Alessandra Cardinali

Abstract:

In recent years, for instance in relation to the Covid-19 pandemic and the evidence of climate change, it has become quite clear that the development of a young person into an adult citizen requires a solid scientific background. Citizens are required to exert logical thinking and to know the methods of science in order to adapt, understand, and develop as persons. Mathematics sits at the core of these required skills: learning the axiomatic method is fundamental to understanding how the hard sciences work and helps consolidate logical thinking, which will be useful throughout a student's life. At the same time, research shows that the axiomatic study of geometry is a problematic topic for students, even for those with an interest in mathematics. With this in mind, the main goals of the research work we will describe are: (1) to show whether non-Euclidean geometries can be a tool that allows students to consolidate their knowledge of Euclidean geometry by developing it in a critical way; (2) to promote understanding of the modern axiomatic method in geometry; (3) to give students a new perspective on mathematics, so that they can see it as a creative activity and a widely discussed topic with a historical background. One of the main issues related to the state of the art in this topic is the shortage of experimental studies with students. For this reason, our aim is to provide further experimental evidence of the potential benefits of teaching non-Euclidean geometries at high school, based on data collected from a study started in 2005 in the frame of the Italian national Piano Lauree Scientifiche, continued with a teacher training organized in September 2018, refined in a pilot study that involved 77 high school students during the school years 2018-2019 and 2019-2020, and finally implemented through an experimental study conducted in 2020-21 with 87 high school students.
Our study shows that there is potential for further research to challenge current conceptions of the school mathematics curriculum and of the capabilities of high school mathematics students.

Keywords: Non-Euclidean geometries, beliefs about mathematics, questionnaires, modern axiomatic method

Procedia PDF Downloads 50
302 Discerning Divergent Nodes in Social Networks

Authors: Mehran Asadi, Afrand Agah

Abstract:

In data mining, partitioning is used as a fundamental tool for classification. With the help of partitioning, we study the structure of data, which allows us to envision decision rules that can be applied to classification trees. In this research, we used an online social network dataset and all of its attributes (e.g., node features, labels, etc.) to determine what constitutes an above-average chance of being a divergent node. We used the R statistical computing language to conduct the analyses in this report. The data were found on the UC Irvine Machine Learning Repository. This research introduces the basic concepts of classification in online social networks. In this work, we address overfitting and describe different approaches for evaluating and comparing the performance of different classification methods. In classification, the main objective is to categorize different items and assign them to different groups based on their properties and similarities. Recursive partitioning is used to probe the structure of a data set, which allows us to envision decision rules and apply them to classify the data into several groups. Estimating densities is hard, especially in high dimensions with limited data; we do not know the densities, but we can estimate them using classical techniques. First, we calculated the correlation matrix of the dataset to see whether any predictors are highly correlated with one another. By calculating the correlation coefficients of the predictor variables, we see that density is strongly correlated with transitivity. We then initialized a data frame to easily compare the quality of the resulting classification methods, and fitted decision trees with k-fold cross-validation to prune the tree.
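The correlation screening described above can be sketched as follows. This is a minimal stdlib illustration, not the authors' R code; the density and transitivity values below are hypothetical stand-ins for the dataset's predictors.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical predictor values, for illustration only
density      = [0.10, 0.15, 0.22, 0.30, 0.41, 0.55]
transitivity = [0.12, 0.18, 0.25, 0.33, 0.45, 0.58]

# An |r| close to 1 would flag the pair as strongly correlated,
# as the study reports for density and transitivity.
r = pearson_r(density, transitivity)
```

Computing the full correlation matrix just repeats this calculation over every pair of predictor columns.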
A decision tree is a non-parametric classification method that uses a set of rules to assign each observation to the most commonly occurring class label among the training observations in its partition. Our method aggregates many decision trees to create an optimized model that is less susceptible to overfitting. When using a decision tree, however, it is important to use cross-validation to prune the tree in order to narrow it down to the most important variables.
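As a rough sketch of the recursive-partitioning idea (again, not the authors' R implementation), the core of a tree split is choosing the threshold on a feature that minimizes the impurity of the resulting partitions, with leaves predicting the majority class. The feature values and labels below are invented for illustration.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels (0 for a pure partition)."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(xs, ys):
    """Threshold on one numeric feature minimizing the weighted Gini
    impurity of the two resulting partitions; returns (threshold, score)."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left  = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

def majority(labels):
    """Predict the most commonly occurring class label, as at a tree leaf."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical feature (e.g., a node's transitivity) and divergence labels
xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
ys = ["normal", "normal", "normal", "divergent", "divergent", "divergent"]
threshold, impurity = best_split(xs, ys)  # a perfect split yields impurity 0.0
```

A full tree applies `best_split` recursively to each partition; pruning with k-fold cross-validation, as in the abstract, then removes splits that do not improve held-out accuracy.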

Keywords: online social networks, data mining, social cloud computing, interaction and collaboration

Procedia PDF Downloads 118
301 A Case Study on the Development and Application of Media Literacy Education Program Based on Circular Learning

Authors: Kim Hyekyoung, Au Yunkyung

Abstract:

As media plays an increasingly important role in our lives, the age at which media usage begins is getting younger worldwide. In particular, young children are exposed to media at an early age, making early childhood media literacy education an essential task. However, most existing early childhood media literacy education programs focus solely on teaching children how to use media, and practical implementation and application remain challenging. Therefore, this study aims to develop a play-based early childhood media literacy education program utilizing topic-based media content and to explore the potential application and impact of this program on young children's media literacy learning. Based on a theoretical and literature review of media literacy education, an analysis of existing educational programs, and a survey of the current status of, and teacher perceptions about, media literacy education for preschool children, this study developed a media literacy education program for preschool children, considering the components of media literacy (understanding media characteristics, self-regulation, self-expression, critical understanding, ethical norms, and social communication). To verify the effectiveness of the program, 20 five-year-old children from M Kindergarten in C City were chosen as participants, and the program was implemented from March 28th to July 4th, 2022, once a week, for a total of 7 sessions. The program was developed based on Gallenstain's (2003) iterative learning model (participation-exploration-explanation-extension-evaluation). To explore the quantitative changes before and after the program, a repeated measures analysis of variance was conducted, and qualitative analysis was employed to examine the observed process changes.
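For the simplest repeated-measures design, one within-subject factor with two levels (pre/post), the comparison reduces to a paired t-test on each child's score change. The sketch below uses invented scores, not the study's data, and omits the multi-component structure of the actual analysis.

```python
import math

def paired_t(pre, post):
    """t statistic for paired (repeated) measurements on the same subjects."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical media-literacy scores before and after the program
pre  = [2.1, 2.4, 2.0, 2.6, 2.3, 2.2]
post = [3.0, 3.2, 2.9, 3.4, 3.1, 2.8]

t_stat = paired_t(pre, post)  # a large positive t suggests improvement
```

With more than two measurement occasions or multiple literacy components, a full repeated-measures ANOVA (as the abstract describes) would be used instead.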
After the application of the education program, media literacy levels in areas such as understanding media characteristics, self-regulation, self-expression, critical understanding, ethical norms, and social communication improved significantly. The recursive learning-based early childhood media literacy education program developed in this study can therefore be applied effectively to young children's media literacy education and help enhance their media literacy levels. In terms of observed process changes, it was confirmed that children learned about various topics, expressed their thoughts, and improved their ability to communicate with others using media content. These findings emphasize the importance of developing and implementing media literacy education programs: such programs go beyond teaching children how to use media and can foster their ability to utilize media safely and effectively in their media environment. Finally, enhancing young children's media literacy and creating a safe media environment will require diverse content and methodologies, along with the continuous development and evaluation of education programs.

Keywords: young children, media literacy, recursive learning, education program

Procedia PDF Downloads 49
300 Distribution Patterns of the Renieramycin-M-Producing Blue Sponge, Xestospongia sp. (De Laubenfels, 1932) (Phylum: Porifera, Class: Demospongiae) in Puerto Galera, Oriental Mindoro, Philippines

Authors: Geminne Manzano, Clairecynth Yu, Lilibeth Salvador-Reyes, Viviene Santiago, Porfirio Aliño

Abstract:

The distribution and abundance patterns of many marine sessile organisms, such as sponges, vary among and within reefs. Determining the factors affecting their distribution is essential, especially for organisms that produce secondary metabolites of pharmaceutical importance. In this study, the small-scale distribution patterns of the Philippine blue sponge, Xestospongia sp., were examined in relation to some ecological factors. The relationship between renieramycin-M production and the benthic attributes was also determined. Ecological surveys were conducted at two stations of varying depth and exposure located in Oriental Mindoro, Philippines. Three 30 x 6 m belt transects were used to assess sponge abundance at each station. The substratum of the sponges was also characterized. Fish visual census observations were taken together with photo-transect benthic surveys. Sponge samples were also collected for the extraction of renieramycin-M and for further chemical analysis. The varying distribution patterns observed can be attributed to a combination of different ecological and environmental factors. The amount of renieramycin-M produced also varied at each station. Common substrata for blue sponges include hard and soft corals, as well as dead coral with algal patches. Blue sponges from the exposed habitat frequently grow associated with massive and branching corals (Porites sp.), while the most frequent substrate found in sheltered habitats is the coral Pavona sp. Exploring the influence of ecological and environmental parameters on the abundance and distribution of sponge assemblages provides ecological insights and suggests potential applications in pharmaceutical studies. The results of this study provide further impetus for pursuing studies into the patterns and processes of the distribution of the Philippine blue sponge, Xestospongia sp., in relation to the chemical ecology of its secondary metabolites.

Keywords: distribution patterns, Porifera, Renieramycin-M, sponge assemblages, Xestospongia sp.

Procedia PDF Downloads 246
299 Mechanical and Tribological Performances of (Nb: H-D: a-C) Thin Films for Biomedical Applications

Authors: Sara Khamseh, Kambiz Javanruee, Hamid Khorsand

Abstract:

Many metallic materials are used for biomedical applications such as hip joints and screws. However, it has been reported that metal platforms such as stainless steel deteriorate significantly because of wear and friction. To overcome these problems, the surfaces of metal substrates have been coated with a variety of multicomponent coatings. Carbon-based multicomponent coatings, such as metal-doped amorphous carbon and diamond coatings, are particularly important because of their remarkable tribological performance and chemical stability. In the current study, H-D-containing Nb: (a-C) multicomponent coatings (H-D: hexagonal diamond, a-C: amorphous carbon) were deposited on A 304 steel substrates using an unbalanced magnetron (UBM) sputtering system. The effects of the Nb and H-D content and of the ID/IG ratio on the microstructure and the mechanical and tribological characteristics of the (Nb: H-D: a-C) composite coatings were investigated. Raman spectroscopy showed that an a-C phase with a graphite-like structure (GLC, with a high fraction of sp² carbon bonding) is formed, and that its domain size increases with increasing Nb content. Moreover, Nb acted as a catalyst for the formation of the H-D phase. The nanoindentation hardness of the coatings ranged from ~17 to ~35 GPa, and the (Nb: H-D: a-C) composite coatings with higher H-D content exhibited higher hardness and a higher plasticity index. It appears that the presence of extra-hard H-D particles directly increased the hardness. The tribological performance of the coatings was evaluated using the pin-on-disc method in a wet environment of SBF (Simulated Body Fluid). The COF value of the (Nb: H-D: a-C) coatings decreased with increasing ID/IG ratio; the lower coefficient of friction is a result of the lamellar arrangement of the graphitic domains. The wear rate of the coatings also decreased with increasing H-D content.
Based on the literature, a-C coatings with a high hardness and a high H³/E² ratio exhibit lower wear rates and better tribological performance. According to the nanoindentation analysis, the hardness and the H³/E² ratio of the (Nb: H-D: a-C) multicomponent coatings increased with increasing H-D content, which in turn decreased the wear rate of the coatings. The mechanical and tribological performance of the (Nb: H-D: a-C) composite coatings on A 304 steel substrates paves the way for the development of innovative advanced coatings that ameliorate the performance of A 304 steel in biomedical applications.
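The figure of merit used above, hardness cubed over elastic modulus squared, is a common proxy for a coating's resistance to plastic deformation and hence to wear. It can be computed directly; the hardness and modulus values below are illustrative (the modulus values in particular are assumptions, since the abstract reports only the hardness range).

```python
def plasticity_resistance(hardness_gpa, modulus_gpa):
    """H^3 / E^2 ratio in GPa; higher values are associated with lower wear rates."""
    return hardness_gpa ** 3 / modulus_gpa ** 2

# Illustrative values spanning the reported hardness range (~17 to ~35 GPa),
# with an assumed elastic modulus for each coating
low_hd  = plasticity_resistance(17.0, 220.0)  # lower H-D content
high_hd = plasticity_resistance(35.0, 300.0)  # higher H-D content

# high_hd > low_hd is consistent with the reported trend:
# more H-D content -> higher H^3/E^2 -> lower wear rate
```

Because hardness enters cubed, even a modest hardness gain outweighs a proportionally similar rise in modulus, which is why hard H-D additions improve this metric.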

Keywords: COF, mechanical properties, (Nb: H-D: a-C) coatings, wear rate

Procedia PDF Downloads 68