gcc/reorg.c @ 0:a06113de4d67 (first commit)
author:   kent <kent@cr.ie.u-ryukyu.ac.jp>
date:     Fri, 17 Jul 2009 14:47:48 +0900
children: 77e2b8dfacca

/* Perform instruction reorganizations for delay slot filling.
   Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000,
   2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009
   Free Software Foundation, Inc.
   Contributed by Richard Kenner (kenner@vlsi1.ultra.nyu.edu).
   Hacked by Michael Tiemann (tiemann@cygnus.com).

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3.  If not see
<http://www.gnu.org/licenses/>.  */

/* Instruction reorganization pass.

   This pass runs after register allocation and final jump
   optimization.  It should be the last pass to run before peephole.
   It serves primarily to fill delay slots of insns, typically branch
   and call insns.  Other insns typically involve more complicated
   interactions of data dependencies and resource constraints, and
   are better handled by scheduling before register allocation (by the
   function `schedule_insns').

   The Branch Penalty is the number of extra cycles that are needed to
   execute a branch insn.  On an ideal machine, branches take a single
   cycle, and the Branch Penalty is 0.  Several RISC machines approach
   branch delays differently:

   The MIPS has a single branch delay slot.  Most insns
   (except other branches) can be used to fill this slot.  When the
   slot is filled, two insns execute in two cycles, reducing the
   branch penalty to zero.

   The SPARC always has a branch delay slot, but its effects can be
   annulled when the branch is not taken.  This means that failing to
   find other sources of insns, we can hoist an insn from the branch
   target that would only be safe to execute knowing that the branch
   is taken.

   The HP-PA always has a branch delay slot.  For unconditional branches
   its effects can be annulled when the branch is taken.  The effects
   of the delay slot in a conditional branch can be nullified for forward
   taken branches, or for untaken backward branches.  This means
   we can hoist insns from the fall-through path for forward branches or
   steal insns from the target of backward branches.

   The TMS320C3x and C4x have three branch delay slots.  When the three
   slots are filled, the branch penalty is zero.  Most insns can fill the
   delay slots except jump insns.

   Three techniques for filling delay slots have been implemented so far:

   (1) `fill_simple_delay_slots' is the simplest, most efficient way
   to fill delay slots.  This pass first looks for insns which come
   from before the branch and which are safe to execute after the
   branch.  Then it searches after the insn requiring delay slots or,
   in the case of a branch, for insns that are after the point at
   which the branch merges into the fallthrough code, if such a point
   exists.  When such insns are found, the branch penalty decreases
   and no code expansion takes place.

   (2) `fill_eager_delay_slots' is more complicated: it is used for
   scheduling conditional jumps, or for scheduling jumps which cannot
   be filled using (1).  A machine need not have annulled jumps to use
   this strategy, but it helps (by keeping more options open).
   `fill_eager_delay_slots' tries to guess the direction the branch
   will go; if it guesses right 100% of the time, it can reduce the
   branch penalty as much as `fill_simple_delay_slots' does.  If it
   guesses wrong 100% of the time, it might as well schedule nops.  When
   `fill_eager_delay_slots' takes insns from the fall-through path of
   the jump, usually there is no code expansion; when it takes insns
   from the branch target, there is code expansion if it is not the
   only way to reach that target.

   (3) `relax_delay_slots' uses a set of rules to simplify code that
   has been reorganized by (1) and (2).  It finds cases where a
   conditional test can be eliminated, jumps can be threaded, extra
   insns can be eliminated, etc.  It is the job of (1) and (2) to do a
   good job of scheduling locally; `relax_delay_slots' takes care of
   making the various individual schedules work well together.  It is
   especially tuned to handle the control flow interactions of branch
   insns.  It does nothing for insns with delay slots that do not
   branch.

   On machines that use CC0, we are very conservative.  We will not make
   a copy of an insn involving CC0 since we want to maintain a 1-1
   correspondence between the insn that sets and uses CC0.  The insns are
   allowed to be separated by placing an insn that sets CC0 (but not an insn
   that uses CC0; we could do this, but it doesn't seem worthwhile) in a
   delay slot.  In that case, we point each insn at the other with REG_CC_USER
   and REG_CC_SETTER notes.  Note that these restrictions affect very few
   machines because most RISC machines with delay slots will not use CC0
   (the RT is the only known exception at this point).

   Not yet implemented:

   The Acorn Risc Machine can conditionally execute most insns, so
   it is profitable to move single insns into a position to execute
   based on the condition code of the previous insn.

   The HP-PA can conditionally nullify insns, providing a similar
   effect to the ARM, differing mostly in which insn is "in charge".  */
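
/* For concreteness, an illustrative example (added for this copy, not
   part of the original comment): on a MIPS-style machine with one delay
   slot, the pass tries to turn

       addu  $4,$5,$6
       beq   $2,$0,L2
       nop              # unfilled slot costs a cycle

   into

       beq   $2,$0,L2
       addu  $4,$5,$6   # filler hoisted from before the branch

   which is safe because the addition neither feeds the comparison nor
   needs to be skipped on either path.  */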

#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tm.h"
#include "toplev.h"
#include "rtl.h"
#include "tm_p.h"
#include "expr.h"
#include "function.h"
#include "insn-config.h"
#include "conditions.h"
#include "hard-reg-set.h"
#include "basic-block.h"
#include "regs.h"
#include "recog.h"
#include "flags.h"
#include "output.h"
#include "obstack.h"
#include "insn-attr.h"
#include "resource.h"
#include "except.h"
#include "params.h"
#include "timevar.h"
#include "target.h"
#include "tree-pass.h"

#ifdef DELAY_SLOTS

#ifndef ANNUL_IFTRUE_SLOTS
#define eligible_for_annul_true(INSN, SLOTS, TRIAL, FLAGS) 0
#endif
#ifndef ANNUL_IFFALSE_SLOTS
#define eligible_for_annul_false(INSN, SLOTS, TRIAL, FLAGS) 0
#endif

/* Insns which have delay slots that have not yet been filled.  */

static struct obstack unfilled_slots_obstack;
static rtx *unfilled_firstobj;

/* Define macros to refer to the first and last slot containing unfilled
   insns.  These are used because the list may move and its address
   should be recomputed at each use.  */

#define unfilled_slots_base \
  ((rtx *) obstack_base (&unfilled_slots_obstack))

#define unfilled_slots_next \
  ((rtx *) obstack_next_free (&unfilled_slots_obstack))
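
/* Hypothetical usage sketch (added, not in the original source): the
   filling passes walk the list of unfilled slots through these macros,
   recomputing the base on each use in case the obstack was grown and
   its storage moved:

     rtx *slot;
     for (slot = unfilled_slots_base; slot < unfilled_slots_next; slot++)
       if (*slot != 0)
         ... consider filling the delay slots of *slot ...
*/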

/* Points to the label before the end of the function.  */
static rtx end_of_function_label;

/* Mapping between INSN_UID's and position in the code since INSN_UID's do
   not always monotonically increase.  */
static int *uid_to_ruid;

/* Highest valid index in `uid_to_ruid'.  */
static int max_uid;

static int stop_search_p (rtx, int);
static int resource_conflicts_p (struct resources *, struct resources *);
static int insn_references_resource_p (rtx, struct resources *, int);
static int insn_sets_resource_p (rtx, struct resources *, int);
static rtx find_end_label (void);
static rtx emit_delay_sequence (rtx, rtx, int);
static rtx add_to_delay_list (rtx, rtx);
static rtx delete_from_delay_slot (rtx);
static void delete_scheduled_jump (rtx);
static void note_delay_statistics (int, int);
#if defined(ANNUL_IFFALSE_SLOTS) || defined(ANNUL_IFTRUE_SLOTS)
static rtx optimize_skip (rtx);
#endif
static int get_jump_flags (rtx, rtx);
static int rare_destination (rtx);
static int mostly_true_jump (rtx, rtx);
static rtx get_branch_condition (rtx, rtx);
static int condition_dominates_p (rtx, rtx);
static int redirect_with_delay_slots_safe_p (rtx, rtx, rtx);
static int redirect_with_delay_list_safe_p (rtx, rtx, rtx);
static int check_annul_list_true_false (int, rtx);
static rtx steal_delay_list_from_target (rtx, rtx, rtx, rtx,
                                         struct resources *,
                                         struct resources *,
                                         struct resources *,
                                         int, int *, int *, rtx *);
static rtx steal_delay_list_from_fallthrough (rtx, rtx, rtx, rtx,
                                              struct resources *,
                                              struct resources *,
                                              struct resources *,
                                              int, int *, int *);
static void try_merge_delay_insns (rtx, rtx);
static rtx redundant_insn (rtx, rtx, rtx);
static int own_thread_p (rtx, rtx, int);
static void update_block (rtx, rtx);
static int reorg_redirect_jump (rtx, rtx);
static void update_reg_dead_notes (rtx, rtx);
static void fix_reg_dead_note (rtx, rtx);
static void update_reg_unused_notes (rtx, rtx);
static void fill_simple_delay_slots (int);
static rtx fill_slots_from_thread (rtx, rtx, rtx, rtx,
                                   int, int, int, int,
                                   int *, rtx);
static void fill_eager_delay_slots (void);
static void relax_delay_slots (rtx);
#ifdef HAVE_return
static void make_return_insns (rtx);
#endif

/* Return TRUE if this insn should stop the search for insns to fill delay
   slots.  LABELS_P indicates that labels should terminate the search.
   In all cases, jumps terminate the search.  */

static int
stop_search_p (rtx insn, int labels_p)
{
  if (insn == 0)
    return 1;

  /* If the insn can throw an exception that is caught within the function,
     it may effectively perform a jump from the viewpoint of the function.
     Therefore act like for a jump.  */
  if (can_throw_internal (insn))
    return 1;

  switch (GET_CODE (insn))
    {
    case NOTE:
    case CALL_INSN:
      return 0;

    case CODE_LABEL:
      return labels_p;

    case JUMP_INSN:
    case BARRIER:
      return 1;

    case INSN:
      /* OK unless it contains a delay slot or is an `asm' insn of some type.
         We don't know anything about these.  */
      return (GET_CODE (PATTERN (insn)) == SEQUENCE
              || GET_CODE (PATTERN (insn)) == ASM_INPUT
              || asm_noperands (PATTERN (insn)) >= 0);

    default:
      gcc_unreachable ();
    }
}

/* Return TRUE if any resources are marked in both RES1 and RES2 or if either
   resource set contains a volatile memory reference.  Otherwise, return FALSE.  */

static int
resource_conflicts_p (struct resources *res1, struct resources *res2)
{
  if ((res1->cc && res2->cc) || (res1->memory && res2->memory)
      || (res1->unch_memory && res2->unch_memory)
      || res1->volatil || res2->volatil)
    return 1;

#ifdef HARD_REG_SET
  return (res1->regs & res2->regs) != HARD_CONST (0);
#else
  {
    int i;

    for (i = 0; i < HARD_REG_SET_LONGS; i++)
      if ((res1->regs[i] & res2->regs[i]) != 0)
        return 1;
    return 0;
  }
#endif
}

/* Return TRUE if any resource marked in RES, a `struct resources', is
   referenced by INSN.  If INCLUDE_DELAYED_EFFECTS is set, resources used
   by the routine that INSN calls are counted as well.

   We compute this by computing all the resources referenced by INSN and
   seeing if this conflicts with RES.  It might be faster to directly check
   ourselves, and this is the way it used to work, but it means duplicating
   a large block of complex code.  */

static int
insn_references_resource_p (rtx insn, struct resources *res,
                            int include_delayed_effects)
{
  struct resources insn_res;

  CLEAR_RESOURCE (&insn_res);
  mark_referenced_resources (insn, &insn_res, include_delayed_effects);
  return resource_conflicts_p (&insn_res, res);
}

/* Return TRUE if INSN modifies resources that are marked in RES.
   INCLUDE_DELAYED_EFFECTS is set if the actions of that routine should be
   included.  CC0 is only modified if it is explicitly set; see comments
   in front of mark_set_resources for details.  */

static int
insn_sets_resource_p (rtx insn, struct resources *res,
                      int include_delayed_effects)
{
  struct resources insn_sets;

  CLEAR_RESOURCE (&insn_sets);
  mark_set_resources (insn, &insn_sets, 0, include_delayed_effects);
  return resource_conflicts_p (&insn_sets, res);
}

/* Find a label at the end of the function or before a RETURN.  If there
   is none, try to make one.  If that fails, returns 0.

   The property of such a label is that it is placed just before the
   epilogue or a bare RETURN insn, so that another bare RETURN can be
   turned into a jump to the label unconditionally.  In particular, the
   label cannot be placed before a RETURN insn with a filled delay slot.

   ??? There may be a problem with the current implementation.  Suppose
   we start with a bare RETURN insn and call find_end_label.  It may set
   end_of_function_label just before the RETURN.  Suppose the machinery
   is able to fill the delay slot of the RETURN insn afterwards.  Then
   end_of_function_label is no longer valid according to the property
   described above and find_end_label will still return it unmodified.
   Note that this is probably mitigated by the following observation:
   once end_of_function_label is made, it is very likely the target of
   a jump, so filling the delay slot of the RETURN will be much more
   difficult.  */

static rtx
find_end_label (void)
{
  rtx insn;

  /* If we found one previously, return it.  */
  if (end_of_function_label)
    return end_of_function_label;

  /* Otherwise, see if there is a label at the end of the function.  If there
     is, it must be that RETURN insns aren't needed, so that is our return
     label and we don't have to do anything else.  */

  insn = get_last_insn ();
  while (NOTE_P (insn)
         || (NONJUMP_INSN_P (insn)
             && (GET_CODE (PATTERN (insn)) == USE
                 || GET_CODE (PATTERN (insn)) == CLOBBER)))
    insn = PREV_INSN (insn);

  /* When a target threads its epilogue we might already have a
     suitable return insn.  If so put a label before it for the
     end_of_function_label.  */
  if (BARRIER_P (insn)
      && JUMP_P (PREV_INSN (insn))
      && GET_CODE (PATTERN (PREV_INSN (insn))) == RETURN)
    {
      rtx temp = PREV_INSN (PREV_INSN (insn));
      end_of_function_label = gen_label_rtx ();
      LABEL_NUSES (end_of_function_label) = 0;

      /* Put the label before any USE insns that may precede the
         RETURN insn.  */
      while (GET_CODE (temp) == USE)
        temp = PREV_INSN (temp);

      emit_label_after (end_of_function_label, temp);
    }

  else if (LABEL_P (insn))
    end_of_function_label = insn;
  else
    {
      end_of_function_label = gen_label_rtx ();
      LABEL_NUSES (end_of_function_label) = 0;
      /* If the basic block reorder pass moves the return insn to
         some other place try to locate it again and put our
         end_of_function_label there.  */
      while (insn && ! (JUMP_P (insn)
                        && (GET_CODE (PATTERN (insn)) == RETURN)))
        insn = PREV_INSN (insn);
      if (insn)
        {
          insn = PREV_INSN (insn);

          /* Put the label before any USE insns that may precede the
             RETURN insn.  */
          while (GET_CODE (insn) == USE)
            insn = PREV_INSN (insn);

          emit_label_after (end_of_function_label, insn);
        }
      else
        {
#ifdef HAVE_epilogue
          if (HAVE_epilogue
#ifdef HAVE_return
              && ! HAVE_return
#endif
              )
            {
              /* The RETURN insn has its delay slot filled so we cannot
                 emit the label just before it.  Since we already have
                 an epilogue and cannot emit a new RETURN, we cannot
                 emit the label at all.  */
              end_of_function_label = NULL_RTX;
              return end_of_function_label;
            }
#endif /* HAVE_epilogue */

          /* Otherwise, make a new label and emit a RETURN and BARRIER,
             if needed.  */
          emit_label (end_of_function_label);
#ifdef HAVE_return
          /* We don't bother trying to create a return insn if the
             epilogue has filled delay-slots; we would have to try and
             move the delay-slot fillers to the delay-slots for the new
             return insn or in front of the new return insn.  */
          if (crtl->epilogue_delay_list == NULL
              && HAVE_return)
            {
              /* The return we make may have delay slots too.  */
              rtx insn = gen_return ();
              insn = emit_jump_insn (insn);
              emit_barrier ();
              if (num_delay_slots (insn) > 0)
                obstack_ptr_grow (&unfilled_slots_obstack, insn);
            }
#endif
        }
    }

  /* Show one additional use for this label so it won't go away until
     we are done.  */
  ++LABEL_NUSES (end_of_function_label);

  return end_of_function_label;
}

/* Put INSN and LIST together in a SEQUENCE rtx of LENGTH, and replace
   the pattern of INSN with the SEQUENCE.

   Chain the insns so that NEXT_INSN of each insn in the sequence points to
   the next and NEXT_INSN of the last insn in the sequence points to
   the first insn after the sequence.  Similarly for PREV_INSN.  This makes
   it easier to scan all insns.

   Returns the SEQUENCE that replaces INSN.  */
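
/* Illustrative sketch (added, not part of the original comment): a
   filled insn is represented as a SEQUENCE whose element 0 is the branch
   (or other slot-requiring insn) and whose elements 1..LENGTH are the
   fillers, e.g. for one filled slot

       (insn (sequence [ (jump_insn ...)     ;; XVECEXP (seq, 0, 0)
                         (insn ...) ]))      ;; XVECEXP (seq, 0, 1)

   and the function below also rewires NEXT_INSN/PREV_INSN around this
   single SEQUENCE insn.  */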

static rtx
emit_delay_sequence (rtx insn, rtx list, int length)
{
  int i = 1;
  rtx li;
  int had_barrier = 0;

  /* Allocate the rtvec to hold the insns and the SEQUENCE.  */
  rtvec seqv = rtvec_alloc (length + 1);
  rtx seq = gen_rtx_SEQUENCE (VOIDmode, seqv);
  rtx seq_insn = make_insn_raw (seq);
  rtx first = get_insns ();
  rtx last = get_last_insn ();

  /* Make a copy of the insn having delay slots.  */
  rtx delay_insn = copy_rtx (insn);

  /* If INSN is followed by a BARRIER, delete the BARRIER since it will only
     confuse further processing.  Update LAST in case it was the last insn.
     We will put the BARRIER back in later.  */
  if (NEXT_INSN (insn) && BARRIER_P (NEXT_INSN (insn)))
    {
      delete_related_insns (NEXT_INSN (insn));
      last = get_last_insn ();
      had_barrier = 1;
    }

  /* Splice our SEQUENCE into the insn stream where INSN used to be.  */
  NEXT_INSN (seq_insn) = NEXT_INSN (insn);
  PREV_INSN (seq_insn) = PREV_INSN (insn);

  if (insn != last)
    PREV_INSN (NEXT_INSN (seq_insn)) = seq_insn;

  if (insn != first)
    NEXT_INSN (PREV_INSN (seq_insn)) = seq_insn;

  /* Note the calls to set_new_first_and_last_insn must occur after
     SEQ_INSN has been completely spliced into the insn stream.

     Otherwise CUR_INSN_UID will get set to an incorrect value because
     set_new_first_and_last_insn will not find SEQ_INSN in the chain.  */
  if (insn == last)
    set_new_first_and_last_insn (first, seq_insn);

  if (insn == first)
    set_new_first_and_last_insn (seq_insn, last);

  /* Build our SEQUENCE and rebuild the insn chain.  */
  XVECEXP (seq, 0, 0) = delay_insn;
  INSN_DELETED_P (delay_insn) = 0;
  PREV_INSN (delay_insn) = PREV_INSN (seq_insn);

  INSN_LOCATOR (seq_insn) = INSN_LOCATOR (delay_insn);

  for (li = list; li; li = XEXP (li, 1), i++)
    {
      rtx tem = XEXP (li, 0);
      rtx note, next;

      /* Show that this copy of the insn isn't deleted.  */
      INSN_DELETED_P (tem) = 0;

      XVECEXP (seq, 0, i) = tem;
      PREV_INSN (tem) = XVECEXP (seq, 0, i - 1);
      NEXT_INSN (XVECEXP (seq, 0, i - 1)) = tem;

      /* The SPARC assembler, for instance, emits a warning when debug
         info is output into the delay slot.  */
      if (INSN_LOCATOR (tem) && !INSN_LOCATOR (seq_insn))
        INSN_LOCATOR (seq_insn) = INSN_LOCATOR (tem);
      INSN_LOCATOR (tem) = 0;

      for (note = REG_NOTES (tem); note; note = next)
        {
          next = XEXP (note, 1);
          switch (REG_NOTE_KIND (note))
            {
            case REG_DEAD:
              /* Remove any REG_DEAD notes because we can't rely on them now
                 that the insn has been moved.  */
              remove_note (tem, note);
              break;

            case REG_LABEL_OPERAND:
            case REG_LABEL_TARGET:
              /* Keep the label reference count up to date.  */
              if (LABEL_P (XEXP (note, 0)))
                LABEL_NUSES (XEXP (note, 0)) ++;
              break;

            default:
              break;
            }
        }
    }

  NEXT_INSN (XVECEXP (seq, 0, length)) = NEXT_INSN (seq_insn);

  /* If the previous insn is a SEQUENCE, update the NEXT_INSN pointer on the
     last insn in that SEQUENCE to point to us.  Similarly for the first
     insn in the following insn if it is a SEQUENCE.  */

  if (PREV_INSN (seq_insn) && NONJUMP_INSN_P (PREV_INSN (seq_insn))
      && GET_CODE (PATTERN (PREV_INSN (seq_insn))) == SEQUENCE)
    NEXT_INSN (XVECEXP (PATTERN (PREV_INSN (seq_insn)), 0,
                        XVECLEN (PATTERN (PREV_INSN (seq_insn)), 0) - 1))
      = seq_insn;

  if (NEXT_INSN (seq_insn) && NONJUMP_INSN_P (NEXT_INSN (seq_insn))
      && GET_CODE (PATTERN (NEXT_INSN (seq_insn))) == SEQUENCE)
    PREV_INSN (XVECEXP (PATTERN (NEXT_INSN (seq_insn)), 0, 0)) = seq_insn;

  /* If there used to be a BARRIER, put it back.  */
  if (had_barrier)
    emit_barrier_after (seq_insn);

  gcc_assert (i == length + 1);

  return seq_insn;
}

/* Add INSN to DELAY_LIST and return the head of the new list.  The list must
   be in the order in which the insns are to be executed.  */

static rtx
add_to_delay_list (rtx insn, rtx delay_list)
{
  /* If we have an empty list, just make a new list element.  If
     INSN has its block number recorded, clear it since we may
     be moving the insn to a new block.  */

  if (delay_list == 0)
    {
      clear_hashed_info_for_insn (insn);
      return gen_rtx_INSN_LIST (VOIDmode, insn, NULL_RTX);
    }

  /* Otherwise this must be an INSN_LIST.  Add INSN to the end of the
     list.  */
  XEXP (delay_list, 1) = add_to_delay_list (insn, XEXP (delay_list, 1));

  return delay_list;
}
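
/* Hypothetical usage sketch (added, not in the original source):
   building a two-insn delay list in execution order.

     rtx list = add_to_delay_list (first_filler, NULL_RTX);
     list = add_to_delay_list (second_filler, list);

   The result is an INSN_LIST chain in which XEXP (node, 0) is an insn
   and XEXP (node, 1) is the rest of the list, as consumed by
   emit_delay_sequence above.  */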

/* Delete INSN from the delay slot of the insn that it is in, which may
   produce an insn with no delay slots.  Return the new insn.  */

static rtx
delete_from_delay_slot (rtx insn)
{
  rtx trial, seq_insn, seq, prev;
  rtx delay_list = 0;
  int i;
  int had_barrier = 0;

  /* We first must find the insn containing the SEQUENCE with INSN in its
     delay slot.  Do this by finding an insn, TRIAL, where
     PREV_INSN (NEXT_INSN (TRIAL)) != TRIAL.  */

  for (trial = insn;
       PREV_INSN (NEXT_INSN (trial)) == trial;
       trial = NEXT_INSN (trial))
    ;

  seq_insn = PREV_INSN (NEXT_INSN (trial));
  seq = PATTERN (seq_insn);

  if (NEXT_INSN (seq_insn) && BARRIER_P (NEXT_INSN (seq_insn)))
    had_barrier = 1;

  /* Create a delay list consisting of all the insns other than the one
     we are deleting (unless we were the only one).  */
  if (XVECLEN (seq, 0) > 2)
    for (i = 1; i < XVECLEN (seq, 0); i++)
      if (XVECEXP (seq, 0, i) != insn)
        delay_list = add_to_delay_list (XVECEXP (seq, 0, i), delay_list);

  /* Delete the old SEQUENCE, re-emit the insn that used to have the delay
     list, and rebuild the delay list if non-empty.  */
  prev = PREV_INSN (seq_insn);
  trial = XVECEXP (seq, 0, 0);
  delete_related_insns (seq_insn);
  add_insn_after (trial, prev, NULL);

  /* If there was a barrier after the old SEQUENCE, re-emit it.  */
  if (had_barrier)
    emit_barrier_after (trial);

  /* If there are any delay insns, re-emit them.  Otherwise clear the
     annul flag.  */
  if (delay_list)
    trial = emit_delay_sequence (trial, delay_list, XVECLEN (seq, 0) - 2);
  else if (INSN_P (trial))
    INSN_ANNULLED_BRANCH_P (trial) = 0;

  INSN_FROM_TARGET_P (insn) = 0;

  /* Show we need to fill this insn again.  */
  obstack_ptr_grow (&unfilled_slots_obstack, trial);

  return trial;
}

/* Delete INSN, a JUMP_INSN.  If it is a conditional jump, we must track down
   the insn that sets CC0 for it and delete it too.  */

static void
delete_scheduled_jump (rtx insn)
{
  /* Delete the insn that sets cc0 for us.  On machines without cc0, we could
     delete the insn that sets the condition code, but it is hard to find it.
     Since this case is rare anyway, don't bother trying; there would likely
     be other insns that became dead anyway, which we wouldn't know to
     delete.  */

#ifdef HAVE_cc0
  if (reg_mentioned_p (cc0_rtx, insn))
    {
      rtx note = find_reg_note (insn, REG_CC_SETTER, NULL_RTX);

      /* If a reg-note was found, it points to an insn to set CC0.  This
         insn is in the delay list of some other insn.  So delete it from
         the delay list it was in.  */
      if (note)
        {
          if (! FIND_REG_INC_NOTE (XEXP (note, 0), NULL_RTX)
              && sets_cc0_p (PATTERN (XEXP (note, 0))) == 1)
            delete_from_delay_slot (XEXP (note, 0));
        }
      else
        {
          /* The insn setting CC0 is our previous insn, but it may be in
             a delay slot.  It will be the last insn in the delay slot, if
             it is.  */
          rtx trial = previous_insn (insn);
          if (NOTE_P (trial))
            trial = prev_nonnote_insn (trial);
          if (sets_cc0_p (PATTERN (trial)) != 1
              || FIND_REG_INC_NOTE (trial, NULL_RTX))
            return;
          if (PREV_INSN (NEXT_INSN (trial)) == trial)
            delete_related_insns (trial);
          else
            delete_from_delay_slot (trial);
        }
    }
#endif

  delete_related_insns (insn);
}

/* Counters for delay-slot filling.  */

#define NUM_REORG_FUNCTIONS 2
#define MAX_DELAY_HISTOGRAM 3
#define MAX_REORG_PASSES 2

static int num_insns_needing_delays[NUM_REORG_FUNCTIONS][MAX_REORG_PASSES];

static int num_filled_delays[NUM_REORG_FUNCTIONS][MAX_DELAY_HISTOGRAM+1][MAX_REORG_PASSES];

static int reorg_pass_number;

static void
note_delay_statistics (int slots_filled, int index)
{
  num_insns_needing_delays[index][reorg_pass_number]++;
  if (slots_filled > MAX_DELAY_HISTOGRAM)
    slots_filled = MAX_DELAY_HISTOGRAM;
  num_filled_delays[index][slots_filled][reorg_pass_number]++;
}

#if defined(ANNUL_IFFALSE_SLOTS) || defined(ANNUL_IFTRUE_SLOTS)

/* Optimize the following cases:

   1.  When a conditional branch skips over only one instruction,
       use an annulling branch and put that insn in the delay slot.
       Use either a branch that annuls when the condition is true, or
       invert the test and use a branch that annuls when the condition
       is false.  This saves insns, since otherwise we must copy an
       insn from the L1 target.

          (orig)           (skip)          (otherwise)
          Bcc.n L1         Bcc',a L1       Bcc,a L1'
          insn             insn            insn2
        L1:              L1:             L1:
          insn2            insn2           insn2
          insn3            insn3         L1':
                                           insn3

   2.  When a conditional branch skips over only one instruction,
       and after that, it unconditionally branches somewhere else,
       perform a similar optimization.  This saves executing the
       second branch in the case where the inverted condition is true.

          Bcc.n L1         Bcc',a L2
          insn             insn
        L1:              L1:
          Bra L2           Bra L2

   INSN is a JUMP_INSN.

   This should be expanded to skip over N insns, where N is the number
   of delay slots required.  */

static rtx
optimize_skip (rtx insn)
{
  rtx trial = next_nonnote_insn (insn);
  rtx next_trial = next_active_insn (trial);
  rtx delay_list = 0;
  int flags;

  flags = get_jump_flags (insn, JUMP_LABEL (insn));

  if (trial == 0
      || !NONJUMP_INSN_P (trial)
      || GET_CODE (PATTERN (trial)) == SEQUENCE
      || recog_memoized (trial) < 0
      || (! eligible_for_annul_false (insn, 0, trial, flags)
          && ! eligible_for_annul_true (insn, 0, trial, flags))
      || can_throw_internal (trial))
    return 0;

  /* There are two cases where we are just executing one insn (we assume
     here that a branch requires only one insn; this should be generalized
     at some point):  Where the branch goes around a single insn or where
     we have one insn followed by a branch to the same label we branch to.
     In both of these cases, inverting the jump and annulling the delay
     slot give the same effect in fewer insns.  */
  if ((next_trial == next_active_insn (JUMP_LABEL (insn))
       && ! (next_trial == 0 && crtl->epilogue_delay_list != 0))
      || (next_trial != 0
          && JUMP_P (next_trial)
          && JUMP_LABEL (insn) == JUMP_LABEL (next_trial)
          && (simplejump_p (next_trial)
              || GET_CODE (PATTERN (next_trial)) == RETURN)))
    {
      if (eligible_for_annul_false (insn, 0, trial, flags))
        {
          if (invert_jump (insn, JUMP_LABEL (insn), 1))
            INSN_FROM_TARGET_P (trial) = 1;
          else if (! eligible_for_annul_true (insn, 0, trial, flags))
            return 0;
        }

      delay_list = add_to_delay_list (trial, NULL_RTX);
      next_trial = next_active_insn (trial);
      update_block (trial, trial);
      delete_related_insns (trial);

      /* Also, if we are targeting an unconditional
         branch, thread our jump to the target of that branch.  Don't
         change this into a RETURN here, because it may not accept what
         we have in the delay slot.  We'll fix this up later.  */
      if (next_trial && JUMP_P (next_trial)
          && (simplejump_p (next_trial)
              || GET_CODE (PATTERN (next_trial)) == RETURN))
        {
          rtx target_label = JUMP_LABEL (next_trial);
          if (target_label == 0)
            target_label = find_end_label ();

          if (target_label)
            {
              /* Recompute the flags based on TARGET_LABEL since threading
                 the jump to TARGET_LABEL may change the direction of the
                 jump (which may change the circumstances in which the
                 delay slot is nullified).  */
              flags = get_jump_flags (insn, target_label);
              if (eligible_for_annul_true (insn, 0, trial, flags))
                reorg_redirect_jump (insn, target_label);
            }
        }

      INSN_ANNULLED_BRANCH_P (insn) = 1;
    }

  return delay_list;
}
#endif

/* Encode and return branch direction and prediction information for
   INSN assuming it will jump to LABEL.

   Unconditional branches return no direction information and are
   predicted as very likely taken.  */

static int
get_jump_flags (rtx insn, rtx label)
{
  int flags;

  /* get_jump_flags can be passed any insn with delay slots, these may
     be INSNs, CALL_INSNs, or JUMP_INSNs.  Only JUMP_INSNs have branch
     direction information, and only if they are conditional jumps.

     If LABEL is zero, then there is no way to determine the branch
     direction.  */
  if (JUMP_P (insn)
      && (condjump_p (insn) || condjump_in_parallel_p (insn))
      && INSN_UID (insn) <= max_uid
      && label != 0
      && INSN_UID (label) <= max_uid)
    flags
      = (uid_to_ruid[INSN_UID (label)] > uid_to_ruid[INSN_UID (insn)])
        ? ATTR_FLAG_forward : ATTR_FLAG_backward;
  /* No valid direction information.  */
  else
    flags = 0;

  /* If insn is a conditional branch, call mostly_true_jump to determine
     the branch prediction.

     Unconditional branches are predicted as very likely taken.  */
  if (JUMP_P (insn)
      && (condjump_p (insn) || condjump_in_parallel_p (insn)))
    {
      int prediction;

      prediction = mostly_true_jump (insn, get_branch_condition (insn, label));
      switch (prediction)
        {
        case 2:
          flags |= (ATTR_FLAG_very_likely | ATTR_FLAG_likely);
          break;
        case 1:
          flags |= ATTR_FLAG_likely;
          break;
        case 0:
          flags |= ATTR_FLAG_unlikely;
          break;
        case -1:
          flags |= (ATTR_FLAG_very_unlikely | ATTR_FLAG_unlikely);
          break;

        default:
          gcc_unreachable ();
        }
    }
  else
    flags |= (ATTR_FLAG_very_likely | ATTR_FLAG_likely);

  return flags;
}
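
/* For example (illustrative, added): a conditional jump to a label later
   in the insn stream that looks likely to be taken yields
   ATTR_FLAG_forward | ATTR_FLAG_likely; machine descriptions can test
   these bits in the conditions of their `define_delay' entries.  */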

/* Return 1 if INSN is a destination that will be branched to rarely (the
   return point of a function); return 2 if DEST will be branched to very
   rarely (a call to a function that doesn't return).  Otherwise,
   return 0.  */

static int
rare_destination (rtx insn)
{
  int jump_count = 0;
  rtx next;

  for (; insn; insn = next)
    {
      if (NONJUMP_INSN_P (insn) && GET_CODE (PATTERN (insn)) == SEQUENCE)
        insn = XVECEXP (PATTERN (insn), 0, 0);

      next = NEXT_INSN (insn);

      switch (GET_CODE (insn))
        {
        case CODE_LABEL:
          return 0;
        case BARRIER:
          /* A BARRIER can either be after a JUMP_INSN or a CALL_INSN.  We
             don't scan past JUMP_INSNs, so any barrier we find here must
             have been after a CALL_INSN and hence mean the call doesn't
             return.  */
          return 2;
        case JUMP_INSN:
          if (GET_CODE (PATTERN (insn)) == RETURN)
            return 1;
          else if (simplejump_p (insn)
                   && jump_count++ < 10)
            next = JUMP_LABEL (insn);
          else
            return 0;

        default:
          break;
        }
    }

  /* If we got here it means we hit the end of the function.  So this
     is an unlikely destination.  */

  return 1;
}

/* Return truth value of the statement that this branch
   is mostly taken.  If we think that the branch is extremely likely
   to be taken, we return 2.  If the branch is slightly more likely to be
   taken, return 1.  If the branch is slightly less likely to be taken,
   return 0 and if the branch is highly unlikely to be taken, return -1.

   CONDITION, if nonzero, is the condition that JUMP_INSN is testing.  */

static int
mostly_true_jump (rtx jump_insn, rtx condition)
{
  rtx target_label = JUMP_LABEL (jump_insn);
  rtx note;
  int rare_dest, rare_fallthrough;

  /* If branch probabilities are available, then use that number since it
     always gives a correct answer.  */
  note = find_reg_note (jump_insn, REG_BR_PROB, 0);
  if (note)
    {
      int prob = INTVAL (XEXP (note, 0));

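      /* Note (added): REG_BR_PROB_BASE is 10000 in GCC, so assuming that
         value the cutoffs below correspond to roughly 90%, 50%, and 10%
         probability of the branch being taken.  */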
      if (prob >= REG_BR_PROB_BASE * 9 / 10)
        return 2;
      else if (prob >= REG_BR_PROB_BASE / 2)
        return 1;
      else if (prob >= REG_BR_PROB_BASE / 10)
        return 0;
      else
        return -1;
    }

  /* Look at the relative rarities of the fallthrough and destination.  If
     they differ, we can predict the branch that way.  */
  rare_dest = rare_destination (target_label);
  rare_fallthrough = rare_destination (NEXT_INSN (jump_insn));

  switch (rare_fallthrough - rare_dest)
    {
    case -2:
      return -1;
    case -1:
      return 0;
    case 0:
      break;
    case 1:
      return 1;
    case 2:
      return 2;
    }

  /* If we couldn't figure out what this jump was, assume it won't be
     taken.  This should be rare.  */
  if (condition == 0)
    return 0;

  /* Predict backward branches to usually be taken, forward branches to
     usually not be.  If we don't know whether this is forward or backward,
     assume the branch will be taken, since most are.  */
  return (target_label == 0 || INSN_UID (jump_insn) > max_uid
          || INSN_UID (target_label) > max_uid
          || (uid_to_ruid[INSN_UID (jump_insn)]
              > uid_to_ruid[INSN_UID (target_label)]));
}

/* Return the condition under which INSN will branch to TARGET.  If TARGET
   is zero, return the condition under which INSN will return.  If INSN is
   an unconditional branch, return const_true_rtx.  If INSN isn't a simple
   type of jump, or it doesn't go to TARGET, return 0.  */

static rtx
get_branch_condition (rtx insn, rtx target)
{
  rtx pat = PATTERN (insn);
  rtx src;

  if (condjump_in_parallel_p (insn))
    pat = XVECEXP (pat, 0, 0);

  if (GET_CODE (pat) == RETURN)
    return target == 0 ? const_true_rtx : 0;

  else if (GET_CODE (pat) != SET || SET_DEST (pat) != pc_rtx)
    return 0;

  src = SET_SRC (pat);
  if (GET_CODE (src) == LABEL_REF && XEXP (src, 0) == target)
    return const_true_rtx;

  else if (GET_CODE (src) == IF_THEN_ELSE
           && ((target == 0 && GET_CODE (XEXP (src, 1)) == RETURN)
               || (GET_CODE (XEXP (src, 1)) == LABEL_REF
                   && XEXP (XEXP (src, 1), 0) == target))
           && XEXP (src, 2) == pc_rtx)
    return XEXP (src, 0);

  else if (GET_CODE (src) == IF_THEN_ELSE
           && ((target == 0 && GET_CODE (XEXP (src, 2)) == RETURN)
               || (GET_CODE (XEXP (src, 2)) == LABEL_REF
                   && XEXP (XEXP (src, 2), 0) == target))
           && XEXP (src, 1) == pc_rtx)
    {
      enum rtx_code rev;
      rev = reversed_comparison_code (XEXP (src, 0), insn);
      if (rev != UNKNOWN)
        return gen_rtx_fmt_ee (rev, GET_MODE (XEXP (src, 0)),
                               XEXP (XEXP (src, 0), 0),
                               XEXP (XEXP (src, 0), 1));
    }

  return 0;
}
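
/* Illustrative RTL shapes (added for clarity) that the function above
   recognizes, with TARGET being label L:

     (set (pc) (label_ref L))                               -> const_true_rtx
     (set (pc) (if_then_else (eq ...) (label_ref L) (pc)))  -> (eq ...)
     (set (pc) (if_then_else (eq ...) (pc) (label_ref L)))  -> reversed,
                                                               e.g. (ne ...)
*/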

/* Return nonzero if CONDITION is more strict than the condition of
   INSN, i.e., if INSN will always branch if CONDITION is true.  */

static int
condition_dominates_p (rtx condition, rtx insn)
{
  rtx other_condition = get_branch_condition (insn, JUMP_LABEL (insn));
  enum rtx_code code = GET_CODE (condition);
  enum rtx_code other_code;

  if (rtx_equal_p (condition, other_condition)
      || other_condition == const_true_rtx)
    return 1;

  else if (condition == const_true_rtx || other_condition == 0)
    return 0;

  other_code = GET_CODE (other_condition);
  if (GET_RTX_LENGTH (code) != 2 || GET_RTX_LENGTH (other_code) != 2
      || ! rtx_equal_p (XEXP (condition, 0), XEXP (other_condition, 0))
      || ! rtx_equal_p (XEXP (condition, 1), XEXP (other_condition, 1)))
    return 0;

  return comparison_dominates_p (code, other_code);
}
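
/* Example (added): if CONDITION is (lt x y) and INSN branches on
   (le x y), then whenever CONDITION holds the branch is taken, so this
   returns nonzero via comparison_dominates_p (LT, LE).  */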

/* Return nonzero if redirecting JUMP to NEWLABEL does not invalidate
   any insns already in the delay slot of JUMP.  */

static int
redirect_with_delay_slots_safe_p (rtx jump, rtx newlabel, rtx seq)
{
  int flags, i;
  rtx pat = PATTERN (seq);

  /* Make sure all the delay slots of this jump would still
     be valid after threading the jump.  If they are still
     valid, then return nonzero.  */

  flags = get_jump_flags (jump, newlabel);
  for (i = 1; i < XVECLEN (pat, 0); i++)
    if (! (
#ifdef ANNUL_IFFALSE_SLOTS
           (INSN_ANNULLED_BRANCH_P (jump)
            && INSN_FROM_TARGET_P (XVECEXP (pat, 0, i)))
           ? eligible_for_annul_false (jump, i - 1,
                                       XVECEXP (pat, 0, i), flags) :
#endif
#ifdef ANNUL_IFTRUE_SLOTS
           (INSN_ANNULLED_BRANCH_P (jump)
            && ! INSN_FROM_TARGET_P (XVECEXP (pat, 0, i)))
           ? eligible_for_annul_true (jump, i - 1,
                                      XVECEXP (pat, 0, i), flags) :
#endif
           eligible_for_delay (jump, i - 1, XVECEXP (pat, 0, i), flags)))
      break;

  return (i == XVECLEN (pat, 0));
}

/* Return nonzero if redirecting JUMP to NEWLABEL does not invalidate
   any insns we wish to place in the delay slot of JUMP.  */

static int
redirect_with_delay_list_safe_p (rtx jump, rtx newlabel, rtx delay_list)
{
  int flags, i;
  rtx li;

  /* Make sure all the insns in DELAY_LIST would still be
     valid after threading the jump.  If they are still
     valid, then return nonzero.  */

  flags = get_jump_flags (jump, newlabel);
  for (li = delay_list, i = 0; li; li = XEXP (li, 1), i++)
    if (! (
#ifdef ANNUL_IFFALSE_SLOTS
           (INSN_ANNULLED_BRANCH_P (jump)
            && INSN_FROM_TARGET_P (XEXP (li, 0)))
           ? eligible_for_annul_false (jump, i, XEXP (li, 0), flags) :
#endif
#ifdef ANNUL_IFTRUE_SLOTS
           (INSN_ANNULLED_BRANCH_P (jump)
            && ! INSN_FROM_TARGET_P (XEXP (li, 0)))
           ? eligible_for_annul_true (jump, i, XEXP (li, 0), flags) :
#endif
           eligible_for_delay (jump, i, XEXP (li, 0), flags)))
      break;

  return (li == NULL);
}

/* DELAY_LIST is a list of insns that have already been placed into delay
   slots.  See if all of them have the same annulling status as ANNUL_TRUE_P.
   If not, return 0; otherwise return 1.  */

static int
check_annul_list_true_false (int annul_true_p, rtx delay_list)
{
  rtx temp;

  if (delay_list)
    {
      for (temp = delay_list; temp; temp = XEXP (temp, 1))
        {
          rtx trial = XEXP (temp, 0);

          if ((annul_true_p && INSN_FROM_TARGET_P (trial))
              || (!annul_true_p && !INSN_FROM_TARGET_P (trial)))
            return 0;
        }
    }

  return 1;
}

/* INSN branches to an insn whose pattern SEQ is a SEQUENCE.  Given that
   the condition tested by INSN is CONDITION and the resources shown in
   OTHER_NEEDED are needed after INSN, see whether INSN can take all the insns
   from SEQ's delay list, in addition to whatever insns it may execute
   (in DELAY_LIST).  SETS and NEEDED denote resources already set and
   needed while searching for delay slot insns.  Return the concatenated
   delay list if possible, otherwise, return 0.

   SLOTS_TO_FILL is the total number of slots required by INSN, and
   PSLOTS_FILLED points to the number filled so far (also the number of
   insns in DELAY_LIST).  It is updated with the number that have been
   filled from the SEQUENCE, if any.

   PANNUL_P points to a nonzero value if we already know that we need
   to annul INSN.  If this routine determines that annulling is needed,
   it may set that value nonzero.

   PNEW_THREAD points to a location that is to receive the place at which
   execution should continue.  */

static rtx
steal_delay_list_from_target (rtx insn, rtx condition, rtx seq,
                              rtx delay_list, struct resources *sets,
                              struct resources *needed,
                              struct resources *other_needed,
                              int slots_to_fill, int *pslots_filled,
                              int *pannul_p, rtx *pnew_thread)
{
  rtx temp;
  int slots_remaining = slots_to_fill - *pslots_filled;
  int total_slots_filled = *pslots_filled;
  rtx new_delay_list = 0;
  int must_annul = *pannul_p;
  int used_annul = 0;
  int i;
  struct resources cc_set;

  /* We can't do anything if there are more delay slots in SEQ than we
     can handle, or if we don't know that it will be a taken branch.
     We know that it will be a taken branch if it is either an unconditional
     branch or a conditional branch with a stricter branch condition.

     Also, exit if the branch has more than one set, since then it is computing
     other results that can't be ignored, e.g. the HPPA mov&branch instruction.
     ??? It may be possible to move other sets into INSN in addition to
     moving the instructions in the delay slots.

     We cannot steal the delay list if one of the instructions in the
     current delay_list modifies the condition codes and the jump in the
     sequence is a conditional jump.  In that case we cannot change the
     direction of the jump, because the condition codes will affect the
     direction of the jump in the sequence.  */

  CLEAR_RESOURCE (&cc_set);
  for (temp = delay_list; temp; temp = XEXP (temp, 1))
    {
      rtx trial = XEXP (temp, 0);

      mark_set_resources (trial, &cc_set, 0, MARK_SRC_DEST_CALL);
      if (insn_references_resource_p (XVECEXP (seq, 0, 0), &cc_set, 0))
        return delay_list;
    }

  if (XVECLEN (seq, 0) - 1 > slots_remaining
      || ! condition_dominates_p (condition, XVECEXP (seq, 0, 0))
      || ! single_set (XVECEXP (seq, 0, 0)))
    return delay_list;

#ifdef MD_CAN_REDIRECT_BRANCH
  /* On some targets, branches with delay slots can have a limited
     displacement.  Give the back end a chance to tell us we can't do
     this.  */
  if (! MD_CAN_REDIRECT_BRANCH (insn, XVECEXP (seq, 0, 0)))
    return delay_list;
#endif

  for (i = 1; i < XVECLEN (seq, 0); i++)
    {
      rtx trial = XVECEXP (seq, 0, i);
      int flags;

      if (insn_references_resource_p (trial, sets, 0)
          || insn_sets_resource_p (trial, needed, 0)
          || insn_sets_resource_p (trial, sets, 0)
#ifdef HAVE_cc0
          /* If TRIAL sets CC0, we can't copy it, so we can't steal this
             delay list.  */
          || find_reg_note (trial, REG_CC_USER, NULL_RTX)
#endif
          /* If TRIAL is from the fallthrough code of an annulled branch insn
             in SEQ, we cannot use it.  */
          || (INSN_ANNULLED_BRANCH_P (XVECEXP (seq, 0, 0))
              && ! INSN_FROM_TARGET_P (trial)))
        return delay_list;

      /* If this insn was already done (usually in a previous delay slot),
         pretend we put it in our delay slot.  */
      if (redundant_insn (trial, insn, new_delay_list))
        continue;

      /* We will end up re-vectoring this branch, so compute flags
         based on jumping to the new label.  */
      flags = get_jump_flags (insn, JUMP_LABEL (XVECEXP (seq, 0, 0)));

      if (! must_annul
          && ((condition == const_true_rtx
               || (! insn_sets_resource_p (trial, other_needed, 0)
                   && ! may_trap_or_fault_p (PATTERN (trial)))))
          ? eligible_for_delay (insn, total_slots_filled, trial, flags)
          : (must_annul || (delay_list == NULL && new_delay_list == NULL))
            && (must_annul = 1,
                check_annul_list_true_false (0, delay_list)
                && check_annul_list_true_false (0, new_delay_list)
                && eligible_for_annul_false (insn, total_slots_filled,
                                             trial, flags)))
        {
          if (must_annul)
            used_annul = 1;
          temp = copy_rtx (trial);
          INSN_FROM_TARGET_P (temp) = 1;
          new_delay_list = add_to_delay_list (temp, new_delay_list);
          total_slots_filled++;

          if (--slots_remaining == 0)
            break;
        }
      else
        return delay_list;
    }

  /* Show the place to which we will be branching.  */
  *pnew_thread = next_active_insn (JUMP_LABEL (XVECEXP (seq, 0, 0)));

  /* Add any new insns to the delay list and update the count of the
     number of slots filled.  */
  *pslots_filled = total_slots_filled;
  if (used_annul)
    *pannul_p = 1;

  if (delay_list == 0)
    return new_delay_list;

  for (temp = new_delay_list; temp; temp = XEXP (temp, 1))
    delay_list = add_to_delay_list (XEXP (temp, 0), delay_list);

  return delay_list;
}
1334 | |
1335 /* Similar to steal_delay_list_from_target except that SEQ is on the | |
1336 fallthrough path of INSN. Here we only do something if the delay insn | |
1337 of SEQ is an unconditional branch. In that case we steal its delay slot | |
1338 for INSN since unconditional branches are much easier to fill. */ | |
1339 | |
1340 static rtx | |
1341 steal_delay_list_from_fallthrough (rtx insn, rtx condition, rtx seq, | |
1342 rtx delay_list, struct resources *sets, | |
1343 struct resources *needed, | |
1344 struct resources *other_needed, | |
1345 int slots_to_fill, int *pslots_filled, | |
1346 int *pannul_p) | |
1347 { | |
1348 int i; | |
1349 int flags; | |
1350 int must_annul = *pannul_p; | |
1351 int used_annul = 0; | |
1352 | |
1353 flags = get_jump_flags (insn, JUMP_LABEL (insn)); | |
1354 | |
1355 /* We can't do anything if SEQ's delay insn isn't an | |
1356 unconditional branch. */ | |
1357 | |
1358 if (! simplejump_p (XVECEXP (seq, 0, 0)) | |
1359 && GET_CODE (PATTERN (XVECEXP (seq, 0, 0))) != RETURN) | |
1360 return delay_list; | |
1361 | |
1362 for (i = 1; i < XVECLEN (seq, 0); i++) | |
1363 { | |
1364 rtx trial = XVECEXP (seq, 0, i); | |
1365 | |
1366 /* If TRIAL sets CC0, stealing it will move it too far from the use | |
1367 of CC0. */ | |
1368 if (insn_references_resource_p (trial, sets, 0) | |
1369 || insn_sets_resource_p (trial, needed, 0) | |
1370 || insn_sets_resource_p (trial, sets, 0) | |
1371 #ifdef HAVE_cc0 | |
1372 || sets_cc0_p (PATTERN (trial)) | |
1373 #endif | |
1374 ) | |
1375 | |
1376 break; | |
1377 | |
1378 /* If this insn was already done, we don't need it. */ | |
1379 if (redundant_insn (trial, insn, delay_list)) | |
1380 { | |
1381 delete_from_delay_slot (trial); | |
1382 continue; | |
1383 } | |
1384 | |
1385 if (! must_annul | |
1386 && ((condition == const_true_rtx | |
1387 || (! insn_sets_resource_p (trial, other_needed, 0) | |
1388 && ! may_trap_or_fault_p (PATTERN (trial))))) | |
1389 ? eligible_for_delay (insn, *pslots_filled, trial, flags) | |
1390 : (must_annul || delay_list == NULL) && (must_annul = 1, | |
1391 check_annul_list_true_false (1, delay_list) | |
1392 && eligible_for_annul_true (insn, *pslots_filled, trial, flags))) | |
1393 { | |
1394 if (must_annul) | |
1395 used_annul = 1; | |
1396 delete_from_delay_slot (trial); | |
1397 delay_list = add_to_delay_list (trial, delay_list); | |
1398 | |
1399 if (++(*pslots_filled) == slots_to_fill) | |
1400 break; | |
1401 } | |
1402 else | |
1403 break; | |
1404 } | |
1405 | |
1406 if (used_annul) | |
1407 *pannul_p = 1; | |
1408 return delay_list; | |
1409 } | |
1410 | |
1411 /* Try merging insns starting at THREAD which match exactly the insns in | |
1412 INSN's delay list. | |
1413 | |
1414 If all insns were matched and the insn was previously annulling, the | |
1415 annul bit will be cleared. | |
1416 | |
1417 For each insn that is merged, if the branch is or will be non-annulling, | |
1418 we delete the merged insn. */ | |
1419 | |
1420 static void | |
1421 try_merge_delay_insns (rtx insn, rtx thread) | |
1422 { | |
1423 rtx trial, next_trial; | |
1424 rtx delay_insn = XVECEXP (PATTERN (insn), 0, 0); | |
1425 int annul_p = INSN_ANNULLED_BRANCH_P (delay_insn); | |
1426 int slot_number = 1; | |
1427 int num_slots = XVECLEN (PATTERN (insn), 0); | |
1428 rtx next_to_match = XVECEXP (PATTERN (insn), 0, slot_number); | |
1429 struct resources set, needed; | |
1430 rtx merged_insns = 0; | |
1431 int i; | |
1432 int flags; | |
1433 | |
1434 flags = get_jump_flags (delay_insn, JUMP_LABEL (delay_insn)); | |
1435 | |
1436 CLEAR_RESOURCE (&needed); | |
1437 CLEAR_RESOURCE (&set); | |
1438 | |
1439 /* If this is not an annulling branch, take into account anything needed in | |
1440 INSN's delay slot. This prevents two increments from being incorrectly | |
1441 folded into one. If we are annulling, this would be the correct | |
1442 thing to do. (The alternative, looking at things set in NEXT_TO_MATCH | |
1443 will essentially disable this optimization. This method is somewhat of | |
1444 a kludge, but I don't see a better way.) */ | |
1445 if (! annul_p) | |
1446 for (i = 1 ; i < num_slots; i++) | |
1447 if (XVECEXP (PATTERN (insn), 0, i)) | |
1448 mark_referenced_resources (XVECEXP (PATTERN (insn), 0, i), &needed, 1); | |
1449 | |
1450 for (trial = thread; !stop_search_p (trial, 1); trial = next_trial) | |
1451 { | |
1452 rtx pat = PATTERN (trial); | |
1453 rtx oldtrial = trial; | |
1454 | |
1455 next_trial = next_nonnote_insn (trial); | |
1456 | |
1457 /* TRIAL must be a CALL_INSN or INSN. Skip USE and CLOBBER. */ | |
1458 if (NONJUMP_INSN_P (trial) | |
1459 && (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER)) | |
1460 continue; | |
1461 | |
1462 if (GET_CODE (next_to_match) == GET_CODE (trial) | |
1463 #ifdef HAVE_cc0 | |
1464 /* We can't share an insn that sets cc0. */ | |
1465 && ! sets_cc0_p (pat) | |
1466 #endif | |
1467 && ! insn_references_resource_p (trial, &set, 1) | |
1468 && ! insn_sets_resource_p (trial, &set, 1) | |
1469 && ! insn_sets_resource_p (trial, &needed, 1) | |
1470 && (trial = try_split (pat, trial, 0)) != 0 | |
1471 /* Update next_trial, in case try_split succeeded. */ | |
1472 && (next_trial = next_nonnote_insn (trial)) | |
1473 /* Likewise THREAD. */ | |
1474 && (thread = oldtrial == thread ? trial : thread) | |
1475 && rtx_equal_p (PATTERN (next_to_match), PATTERN (trial)) | |
1476	  /* Have to test this condition if the annul condition is different | |
1477	     from (and less restrictive than) the non-annulling one.  */ | |
1478 && eligible_for_delay (delay_insn, slot_number - 1, trial, flags)) | |
1479 { | |
1480 | |
1481 if (! annul_p) | |
1482 { | |
1483 update_block (trial, thread); | |
1484 if (trial == thread) | |
1485 thread = next_active_insn (thread); | |
1486 | |
1487 delete_related_insns (trial); | |
1488 INSN_FROM_TARGET_P (next_to_match) = 0; | |
1489 } | |
1490 else | |
1491 merged_insns = gen_rtx_INSN_LIST (VOIDmode, trial, merged_insns); | |
1492 | |
1493 if (++slot_number == num_slots) | |
1494 break; | |
1495 | |
1496 next_to_match = XVECEXP (PATTERN (insn), 0, slot_number); | |
1497 } | |
1498 | |
1499 mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL); | |
1500 mark_referenced_resources (trial, &needed, 1); | |
1501 } | |
1502 | |
1503 /* See if we stopped on a filled insn. If we did, try to see if its | |
1504 delay slots match. */ | |
1505 if (slot_number != num_slots | |
1506 && trial && NONJUMP_INSN_P (trial) | |
1507 && GET_CODE (PATTERN (trial)) == SEQUENCE | |
1508 && ! INSN_ANNULLED_BRANCH_P (XVECEXP (PATTERN (trial), 0, 0))) | |
1509 { | |
1510 rtx pat = PATTERN (trial); | |
1511 rtx filled_insn = XVECEXP (pat, 0, 0); | |
1512 | |
1513 /* Account for resources set/needed by the filled insn. */ | |
1514 mark_set_resources (filled_insn, &set, 0, MARK_SRC_DEST_CALL); | |
1515 mark_referenced_resources (filled_insn, &needed, 1); | |
1516 | |
1517 for (i = 1; i < XVECLEN (pat, 0); i++) | |
1518 { | |
1519 rtx dtrial = XVECEXP (pat, 0, i); | |
1520 | |
1521 if (! insn_references_resource_p (dtrial, &set, 1) | |
1522 && ! insn_sets_resource_p (dtrial, &set, 1) | |
1523 && ! insn_sets_resource_p (dtrial, &needed, 1) | |
1524 #ifdef HAVE_cc0 | |
1525 && ! sets_cc0_p (PATTERN (dtrial)) | |
1526 #endif | |
1527 && rtx_equal_p (PATTERN (next_to_match), PATTERN (dtrial)) | |
1528 && eligible_for_delay (delay_insn, slot_number - 1, dtrial, flags)) | |
1529 { | |
1530 if (! annul_p) | |
1531 { | |
1532 rtx new_rtx; | |
1533 | |
1534 update_block (dtrial, thread); | |
1535 new_rtx = delete_from_delay_slot (dtrial); | |
1536 if (INSN_DELETED_P (thread)) | |
1537 thread = new_rtx; | |
1538 INSN_FROM_TARGET_P (next_to_match) = 0; | |
1539 } | |
1540 else | |
1541 merged_insns = gen_rtx_INSN_LIST (SImode, dtrial, | |
1542 merged_insns); | |
1543 | |
1544 if (++slot_number == num_slots) | |
1545 break; | |
1546 | |
1547 next_to_match = XVECEXP (PATTERN (insn), 0, slot_number); | |
1548 } | |
1549 else | |
1550 { | |
1551 /* Keep track of the set/referenced resources for the delay | |
1552 slots of any trial insns we encounter. */ | |
1553 mark_set_resources (dtrial, &set, 0, MARK_SRC_DEST_CALL); | |
1554 mark_referenced_resources (dtrial, &needed, 1); | |
1555 } | |
1556 } | |
1557 } | |
1558 | |
1559 /* If all insns in the delay slot have been matched and we were previously | |
1560      annulling the branch, we no longer need to.  In that case delete all the | |
1561 merged insns. Also clear the INSN_FROM_TARGET_P bit of each insn in | |
1562 the delay list so that we know that it isn't only being used at the | |
1563 target. */ | |
1564 if (slot_number == num_slots && annul_p) | |
1565 { | |
1566 for (; merged_insns; merged_insns = XEXP (merged_insns, 1)) | |
1567 { | |
1568 if (GET_MODE (merged_insns) == SImode) | |
1569 { | |
1570 rtx new_rtx; | |
1571 | |
1572 update_block (XEXP (merged_insns, 0), thread); | |
1573 new_rtx = delete_from_delay_slot (XEXP (merged_insns, 0)); | |
1574 if (INSN_DELETED_P (thread)) | |
1575 thread = new_rtx; | |
1576 } | |
1577 else | |
1578 { | |
1579 update_block (XEXP (merged_insns, 0), thread); | |
1580 delete_related_insns (XEXP (merged_insns, 0)); | |
1581 } | |
1582 } | |
1583 | |
1584 INSN_ANNULLED_BRANCH_P (delay_insn) = 0; | |
1585 | |
1586 for (i = 0; i < XVECLEN (PATTERN (insn), 0); i++) | |
1587 INSN_FROM_TARGET_P (XVECEXP (PATTERN (insn), 0, i)) = 0; | |
1588 } | |
1589 } | |
1590 | |
1591 /* See if INSN is redundant with an insn in front of TARGET. Often this | |
1592 is called when INSN is a candidate for a delay slot of TARGET. | |
1593 DELAY_LIST are insns that will be placed in delay slots of TARGET in front | |
1594 of INSN. Often INSN will be redundant with an insn in a delay slot of | |
1595 some previous insn. This happens when we have a series of branches to the | |
1596 same label; in that case the first insn at the target might want to go | |
1597 into each of the delay slots. | |
1598 | |
1599 If we are not careful, this routine can take up a significant fraction | |
1600 of the total compilation time (4%), but only wins rarely. Hence we | |
1601 speed this routine up by making two passes. The first pass goes back | |
1602 until it hits a label and sees if it finds an insn with an identical | |
1603 pattern. Only in this (relatively rare) event does it check for | |
1604 data conflicts. | |
1605 | |
1606 We do not split insns we encounter. This could cause us not to find a | |
1607 redundant insn, but the cost of splitting seems greater than the possible | |
1608 gain in rare cases. */ | |
1609 | |
1610 static rtx | |
1611 redundant_insn (rtx insn, rtx target, rtx delay_list) | |
1612 { | |
1613 rtx target_main = target; | |
1614 rtx ipat = PATTERN (insn); | |
1615 rtx trial, pat; | |
1616 struct resources needed, set; | |
1617 int i; | |
1618 unsigned insns_to_search; | |
1619 | |
1620 /* If INSN has any REG_UNUSED notes, it can't match anything since we | |
1621 are allowed to not actually assign to such a register. */ | |
1622 if (find_reg_note (insn, REG_UNUSED, NULL_RTX) != 0) | |
1623 return 0; | |
1624 | |
1625 /* Scan backwards looking for a match. */ | |
1626 for (trial = PREV_INSN (target), | |
1627 insns_to_search = MAX_DELAY_SLOT_INSN_SEARCH; | |
1628 trial && insns_to_search > 0; | |
1629 trial = PREV_INSN (trial), --insns_to_search) | |
1630 { | |
1631 if (LABEL_P (trial)) | |
1632 return 0; | |
1633 | |
1634 if (! INSN_P (trial)) | |
1635 continue; | |
1636 | |
1637 pat = PATTERN (trial); | |
1638 if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER) | |
1639 continue; | |
1640 | |
1641 if (GET_CODE (pat) == SEQUENCE) | |
1642 { | |
1643 /* Stop for a CALL and its delay slots because it is difficult to | |
1644 track its resource needs correctly. */ | |
1645 if (CALL_P (XVECEXP (pat, 0, 0))) | |
1646 return 0; | |
1647 | |
1648 /* Stop for an INSN or JUMP_INSN with delayed effects and its delay | |
1649 slots because it is difficult to track its resource needs | |
1650 correctly. */ | |
1651 | |
1652 #ifdef INSN_SETS_ARE_DELAYED | |
1653 if (INSN_SETS_ARE_DELAYED (XVECEXP (pat, 0, 0))) | |
1654 return 0; | |
1655 #endif | |
1656 | |
1657 #ifdef INSN_REFERENCES_ARE_DELAYED | |
1658 if (INSN_REFERENCES_ARE_DELAYED (XVECEXP (pat, 0, 0))) | |
1659 return 0; | |
1660 #endif | |
1661 | |
1662 /* See if any of the insns in the delay slot match, updating | |
1663 resource requirements as we go. */ | |
1664 for (i = XVECLEN (pat, 0) - 1; i > 0; i--) | |
1665 if (GET_CODE (XVECEXP (pat, 0, i)) == GET_CODE (insn) | |
1666 && rtx_equal_p (PATTERN (XVECEXP (pat, 0, i)), ipat) | |
1667 && ! find_reg_note (XVECEXP (pat, 0, i), REG_UNUSED, NULL_RTX)) | |
1668 break; | |
1669 | |
1670	  /* If we found a match, exit this loop early.  */ | |
1671 if (i > 0) | |
1672 break; | |
1673 } | |
1674 | |
1675 else if (GET_CODE (trial) == GET_CODE (insn) && rtx_equal_p (pat, ipat) | |
1676 && ! find_reg_note (trial, REG_UNUSED, NULL_RTX)) | |
1677 break; | |
1678 } | |
1679 | |
1680 /* If we didn't find an insn that matches, return 0. */ | |
1681 if (trial == 0) | |
1682 return 0; | |
1683 | |
1684 /* See what resources this insn sets and needs. If they overlap, or | |
1685 if this insn references CC0, it can't be redundant. */ | |
1686 | |
1687 CLEAR_RESOURCE (&needed); | |
1688 CLEAR_RESOURCE (&set); | |
1689 mark_set_resources (insn, &set, 0, MARK_SRC_DEST_CALL); | |
1690 mark_referenced_resources (insn, &needed, 1); | |
1691 | |
1692 /* If TARGET is a SEQUENCE, get the main insn. */ | |
1693 if (NONJUMP_INSN_P (target) && GET_CODE (PATTERN (target)) == SEQUENCE) | |
1694 target_main = XVECEXP (PATTERN (target), 0, 0); | |
1695 | |
1696 if (resource_conflicts_p (&needed, &set) | |
1697 #ifdef HAVE_cc0 | |
1698 || reg_mentioned_p (cc0_rtx, ipat) | |
1699 #endif | |
1700 /* The insn requiring the delay may not set anything needed or set by | |
1701 INSN. */ | |
1702 || insn_sets_resource_p (target_main, &needed, 1) | |
1703 || insn_sets_resource_p (target_main, &set, 1)) | |
1704 return 0; | |
1705 | |
1706 /* Insns we pass may not set either NEEDED or SET, so merge them for | |
1707 simpler tests. */ | |
1708 needed.memory |= set.memory; | |
1709 needed.unch_memory |= set.unch_memory; | |
1710 IOR_HARD_REG_SET (needed.regs, set.regs); | |
1711 | |
1712 /* This insn isn't redundant if it conflicts with an insn that either is | |
1713 or will be in a delay slot of TARGET. */ | |
1714 | |
1715 while (delay_list) | |
1716 { | |
1717 if (insn_sets_resource_p (XEXP (delay_list, 0), &needed, 1)) | |
1718 return 0; | |
1719 delay_list = XEXP (delay_list, 1); | |
1720 } | |
1721 | |
1722 if (NONJUMP_INSN_P (target) && GET_CODE (PATTERN (target)) == SEQUENCE) | |
1723 for (i = 1; i < XVECLEN (PATTERN (target), 0); i++) | |
1724 if (insn_sets_resource_p (XVECEXP (PATTERN (target), 0, i), &needed, 1)) | |
1725 return 0; | |
1726 | |
1727 /* Scan backwards until we reach a label or an insn that uses something | |
1728    INSN sets, or sets something INSN uses or sets.  */ | |
1729 | |
1730 for (trial = PREV_INSN (target), | |
1731 insns_to_search = MAX_DELAY_SLOT_INSN_SEARCH; | |
1732 trial && !LABEL_P (trial) && insns_to_search > 0; | |
1733 trial = PREV_INSN (trial), --insns_to_search) | |
1734 { | |
1735 if (!INSN_P (trial)) | |
1736 continue; | |
1737 | |
1738 pat = PATTERN (trial); | |
1739 if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER) | |
1740 continue; | |
1741 | |
1742 if (GET_CODE (pat) == SEQUENCE) | |
1743 { | |
1744 /* If this is a CALL_INSN and its delay slots, it is hard to track | |
1745 the resource needs properly, so give up. */ | |
1746 if (CALL_P (XVECEXP (pat, 0, 0))) | |
1747 return 0; | |
1748 | |
1749 /* If this is an INSN or JUMP_INSN with delayed effects, it | |
1750 is hard to track the resource needs properly, so give up. */ | |
1751 | |
1752 #ifdef INSN_SETS_ARE_DELAYED | |
1753 if (INSN_SETS_ARE_DELAYED (XVECEXP (pat, 0, 0))) | |
1754 return 0; | |
1755 #endif | |
1756 | |
1757 #ifdef INSN_REFERENCES_ARE_DELAYED | |
1758 if (INSN_REFERENCES_ARE_DELAYED (XVECEXP (pat, 0, 0))) | |
1759 return 0; | |
1760 #endif | |
1761 | |
1762 /* See if any of the insns in the delay slot match, updating | |
1763 resource requirements as we go. */ | |
1764 for (i = XVECLEN (pat, 0) - 1; i > 0; i--) | |
1765 { | |
1766 rtx candidate = XVECEXP (pat, 0, i); | |
1767 | |
1768 /* If an insn will be annulled if the branch is false, it isn't | |
1769 considered as a possible duplicate insn. */ | |
1770 if (rtx_equal_p (PATTERN (candidate), ipat) | |
1771 && ! (INSN_ANNULLED_BRANCH_P (XVECEXP (pat, 0, 0)) | |
1772 && INSN_FROM_TARGET_P (candidate))) | |
1773 { | |
1774 /* Show that this insn will be used in the sequel. */ | |
1775 INSN_FROM_TARGET_P (candidate) = 0; | |
1776 return candidate; | |
1777 } | |
1778 | |
1779 /* Unless this is an annulled insn from the target of a branch, | |
1780 we must stop if it sets anything needed or set by INSN. */ | |
1781 if ((! INSN_ANNULLED_BRANCH_P (XVECEXP (pat, 0, 0)) | |
1782 || ! INSN_FROM_TARGET_P (candidate)) | |
1783 && insn_sets_resource_p (candidate, &needed, 1)) | |
1784 return 0; | |
1785 } | |
1786 | |
1787 /* If the insn requiring the delay slot conflicts with INSN, we | |
1788 must stop. */ | |
1789 if (insn_sets_resource_p (XVECEXP (pat, 0, 0), &needed, 1)) | |
1790 return 0; | |
1791 } | |
1792 else | |
1793 { | |
1794 /* See if TRIAL is the same as INSN. */ | |
1795 pat = PATTERN (trial); | |
1796 if (rtx_equal_p (pat, ipat)) | |
1797 return trial; | |
1798 | |
1799 /* Can't go any further if TRIAL conflicts with INSN. */ | |
1800 if (insn_sets_resource_p (trial, &needed, 1)) | |
1801 return 0; | |
1802 } | |
1803 } | |
1804 | |
1805 return 0; | |
1806 } | |
1807 | |
1808 /* Return 1 if THREAD can only be executed in one way. If LABEL is nonzero, | |
1809 it is the target of the branch insn being scanned. If ALLOW_FALLTHROUGH | |
1810 is nonzero, we are allowed to fall into this thread; otherwise, we are | |
1811 not. | |
1812 | |
1813    If LABEL is used more than once or we pass a label other than LABEL before | |
1814 finding an active insn, we do not own this thread. */ | |
1815 | |
1816 static int | |
1817 own_thread_p (rtx thread, rtx label, int allow_fallthrough) | |
1818 { | |
1819 rtx active_insn; | |
1820 rtx insn; | |
1821 | |
1822 /* We don't own the function end. */ | |
1823 if (thread == 0) | |
1824 return 0; | |
1825 | |
1826 /* Get the first active insn, or THREAD, if it is an active insn. */ | |
1827 active_insn = next_active_insn (PREV_INSN (thread)); | |
1828 | |
1829 for (insn = thread; insn != active_insn; insn = NEXT_INSN (insn)) | |
1830 if (LABEL_P (insn) | |
1831 && (insn != label || LABEL_NUSES (insn) != 1)) | |
1832 return 0; | |
1833 | |
1834 if (allow_fallthrough) | |
1835 return 1; | |
1836 | |
1837 /* Ensure that we reach a BARRIER before any insn or label. */ | |
1838 for (insn = prev_nonnote_insn (thread); | |
1839 insn == 0 || !BARRIER_P (insn); | |
1840 insn = prev_nonnote_insn (insn)) | |
1841 if (insn == 0 | |
1842 || LABEL_P (insn) | |
1843 || (NONJUMP_INSN_P (insn) | |
1844 && GET_CODE (PATTERN (insn)) != USE | |
1845 && GET_CODE (PATTERN (insn)) != CLOBBER)) | |
1846 return 0; | |
1847 | |
1848 return 1; | |
1849 } | |
1850 | |
1851 /* Called when INSN is being moved from a location near the target of a jump. | |
1852 We leave a marker of the form (use (INSN)) immediately in front | |
1853 of WHERE for mark_target_live_regs. These markers will be deleted when | |
1854 reorg finishes. | |
1855 | |
1856 We used to try to update the live status of registers if WHERE is at | |
1857 the start of a basic block, but that can't work since we may remove a | |
1858 BARRIER in relax_delay_slots. */ | |
1859 | |
1860 static void | |
1861 update_block (rtx insn, rtx where) | |
1862 { | |
1863 /* Ignore if this was in a delay slot and it came from the target of | |
1864 a branch. */ | |
1865 if (INSN_FROM_TARGET_P (insn)) | |
1866 return; | |
1867 | |
1868 emit_insn_before (gen_rtx_USE (VOIDmode, insn), where); | |
1869 | |
1870 /* INSN might be making a value live in a block where it didn't use to | |
1871 be. So recompute liveness information for this block. */ | |
1872 | |
1873 incr_ticks_for_insn (insn); | |
1874 } | |
1875 | |
1876 /* Similar to REDIRECT_JUMP except that we update the BB_TICKS entry for | |
1877 the basic block containing the jump. */ | |
1878 | |
1879 static int | |
1880 reorg_redirect_jump (rtx jump, rtx nlabel) | |
1881 { | |
1882 incr_ticks_for_insn (jump); | |
1883 return redirect_jump (jump, nlabel, 1); | |
1884 } | |
1885 | |
1886 /* Called when INSN is being moved forward into a delay slot of DELAYED_INSN. | |
1887 We check every instruction between INSN and DELAYED_INSN for REG_DEAD notes | |
1888 that reference values used in INSN. If we find one, then we move the | |
1889 REG_DEAD note to INSN. | |
1890 | |
1891 This is needed to handle the case where a later insn (after INSN) has a | |
1892 REG_DEAD note for a register used by INSN, and this later insn subsequently | |
1893 gets moved before a CODE_LABEL because it is a redundant insn. In this | |
1894 case, mark_target_live_regs may be confused into thinking the register | |
1895 is dead because it sees a REG_DEAD note immediately before a CODE_LABEL. */ | |
1896 | |
1897 static void | |
1898 update_reg_dead_notes (rtx insn, rtx delayed_insn) | |
1899 { | |
1900 rtx p, link, next; | |
1901 | |
1902 for (p = next_nonnote_insn (insn); p != delayed_insn; | |
1903 p = next_nonnote_insn (p)) | |
1904 for (link = REG_NOTES (p); link; link = next) | |
1905 { | |
1906 next = XEXP (link, 1); | |
1907 | |
1908 if (REG_NOTE_KIND (link) != REG_DEAD | |
1909 || !REG_P (XEXP (link, 0))) | |
1910 continue; | |
1911 | |
1912 if (reg_referenced_p (XEXP (link, 0), PATTERN (insn))) | |
1913 { | |
1914 /* Move the REG_DEAD note from P to INSN. */ | |
1915 remove_note (p, link); | |
1916 XEXP (link, 1) = REG_NOTES (insn); | |
1917 REG_NOTES (insn) = link; | |
1918 } | |
1919 } | |
1920 } | |
1921 | |
1922 /* Called when an insn redundant with start_insn is deleted. If there | |
1923 is a REG_DEAD note for the target of start_insn between start_insn | |
1924 and stop_insn, then the REG_DEAD note needs to be deleted since the | |
1925 value no longer dies there. | |
1926 | |
1927 If the REG_DEAD note isn't deleted, then mark_target_live_regs may be | |
1928 confused into thinking the register is dead. */ | |
1929 | |
1930 static void | |
1931 fix_reg_dead_note (rtx start_insn, rtx stop_insn) | |
1932 { | |
1933 rtx p, link, next; | |
1934 | |
1935 for (p = next_nonnote_insn (start_insn); p != stop_insn; | |
1936 p = next_nonnote_insn (p)) | |
1937 for (link = REG_NOTES (p); link; link = next) | |
1938 { | |
1939 next = XEXP (link, 1); | |
1940 | |
1941 if (REG_NOTE_KIND (link) != REG_DEAD | |
1942 || !REG_P (XEXP (link, 0))) | |
1943 continue; | |
1944 | |
1945 if (reg_set_p (XEXP (link, 0), PATTERN (start_insn))) | |
1946 { | |
1947 remove_note (p, link); | |
1948 return; | |
1949 } | |
1950 } | |
1951 } | |
1952 | |
1953 /* Delete any REG_UNUSED notes that exist on INSN but not on REDUNDANT_INSN. | |
1954 | |
1955    This handles the case of udivmodXi4 instructions, which optimize their | |
1956    output depending on whether any REG_UNUSED notes are present.  We | |
1957    must make sure that INSN calculates as many results as REDUNDANT_INSN | |
1958 does. */ | |
1959 | |
1960 static void | |
1961 update_reg_unused_notes (rtx insn, rtx redundant_insn) | |
1962 { | |
1963 rtx link, next; | |
1964 | |
1965 for (link = REG_NOTES (insn); link; link = next) | |
1966 { | |
1967 next = XEXP (link, 1); | |
1968 | |
1969 if (REG_NOTE_KIND (link) != REG_UNUSED | |
1970 || !REG_P (XEXP (link, 0))) | |
1971 continue; | |
1972 | |
1973 if (! find_regno_note (redundant_insn, REG_UNUSED, | |
1974 REGNO (XEXP (link, 0)))) | |
1975 remove_note (insn, link); | |
1976 } | |
1977 } | |
1978 | |
1979 /* Return the label before INSN, or put a new label there. */ | |
1980 | |
1981 static rtx | |
1982 get_label_before (rtx insn) | |
1983 { | |
1984 rtx label; | |
1985 | |
1986 /* Find an existing label at this point | |
1987 or make a new one if there is none. */ | |
1988 label = prev_nonnote_insn (insn); | |
1989 | |
1990 if (label == 0 || !LABEL_P (label)) | |
1991 { | |
1992 rtx prev = PREV_INSN (insn); | |
1993 | |
1994 label = gen_label_rtx (); | |
1995 emit_label_after (label, prev); | |
1996 LABEL_NUSES (label) = 0; | |
1997 } | |
1998 return label; | |
1999 } | |
2000 | |
2001 /* Scan a function looking for insns that need a delay slot and find insns to | |
2002 put into the delay slot. | |
2003 | |
2004 NON_JUMPS_P is nonzero if we are to only try to fill non-jump insns (such | |
2005 as calls). We do these first since we don't want jump insns (that are | |
2006 easier to fill) to get the only insns that could be used for non-jump insns. | |
2007 When it is zero, only try to fill JUMP_INSNs. | |
2008 | |
2009 When slots are filled in this manner, the insns (including the | |
2010 delay_insn) are put together in a SEQUENCE rtx. In this fashion, | |
2011 it is possible to tell whether a delay slot has really been filled | |
2012 or not. `final' knows how to deal with this, by communicating | |
2013 through FINAL_SEQUENCE. */ | |
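/* Concretely, a filled insn becomes a single insn whose pattern is a | |
   SEQUENCE, e.g. | |
 | |
	(insn (sequence [(call_insn ...) (insn <delay insn>)])) | |
 | |
   with the delay insns following the insn needing the slots.  */ | |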
2014 | |
2015 static void | |
2016 fill_simple_delay_slots (int non_jumps_p) | |
2017 { | |
2018 rtx insn, pat, trial, next_trial; | |
2019 int i; | |
2020 int num_unfilled_slots = unfilled_slots_next - unfilled_slots_base; | |
2021 struct resources needed, set; | |
2022 int slots_to_fill, slots_filled; | |
2023 rtx delay_list; | |
2024 | |
2025 for (i = 0; i < num_unfilled_slots; i++) | |
2026 { | |
2027 int flags; | |
2028 /* Get the next insn to fill. If it has already had any slots assigned, | |
2029 we can't do anything with it. Maybe we'll improve this later. */ | |
2030 | |
2031 insn = unfilled_slots_base[i]; | |
2032 if (insn == 0 | |
2033 || INSN_DELETED_P (insn) | |
2034 || (NONJUMP_INSN_P (insn) | |
2035 && GET_CODE (PATTERN (insn)) == SEQUENCE) | |
2036 || (JUMP_P (insn) && non_jumps_p) | |
2037 || (!JUMP_P (insn) && ! non_jumps_p)) | |
2038 continue; | |
2039 | |
2040 /* It may have been that this insn used to need delay slots, but | |
2041 now doesn't; ignore in that case. This can happen, for example, | |
2042 on the HP PA RISC, where the number of delay slots depends on | |
2043 what insns are nearby. */ | |
2044 slots_to_fill = num_delay_slots (insn); | |
2045 | |
2046      /* Some machine descriptions define instructions to have | |
2047 delay slots only in certain circumstances which may depend on | |
2048 nearby insns (which change due to reorg's actions). | |
2049 | |
2050 For example, the PA port normally has delay slots for unconditional | |
2051 jumps. | |
2052 | |
2053 However, the PA port claims such jumps do not have a delay slot | |
2054 if they are immediate successors of certain CALL_INSNs. This | |
2055 allows the port to favor filling the delay slot of the call with | |
2056 the unconditional jump. */ | |
2057 if (slots_to_fill == 0) | |
2058 continue; | |
2059 | |
2060 /* This insn needs, or can use, some delay slots. SLOTS_TO_FILL | |
2061 says how many. After initialization, first try optimizing | |
2062 | |
2063 call _foo call _foo | |
2064 nop add %o7,.-L1,%o7 | |
2065 b,a L1 | |
2066 nop | |
2067 | |
2068 If this case applies, the delay slot of the call is filled with | |
2069 the unconditional jump. This is done first to avoid having the | |
2070 delay slot of the call filled in the backward scan. Also, since | |
2071 the unconditional jump is likely to also have a delay slot, that | |
2072 insn must exist when it is subsequently scanned. | |
2073 | |
2074 This is tried on each insn with delay slots as some machines | |
2075 have insns which perform calls, but are not represented as | |
2076 CALL_INSNs. */ | |
2077 | |
2078 slots_filled = 0; | |
2079 delay_list = 0; | |
2080 | |
2081 if (JUMP_P (insn)) | |
2082 flags = get_jump_flags (insn, JUMP_LABEL (insn)); | |
2083 else | |
2084 flags = get_jump_flags (insn, NULL_RTX); | |
2085 | |
2086 if ((trial = next_active_insn (insn)) | |
2087 && JUMP_P (trial) | |
2088 && simplejump_p (trial) | |
2089 && eligible_for_delay (insn, slots_filled, trial, flags) | |
2090 && no_labels_between_p (insn, trial) | |
2091 && ! can_throw_internal (trial)) | |
2092 { | |
2093 rtx *tmp; | |
2094 slots_filled++; | |
2095 delay_list = add_to_delay_list (trial, delay_list); | |
2096 | |
2097 /* TRIAL may have had its delay slot filled, then unfilled. When | |
2098 the delay slot is unfilled, TRIAL is placed back on the unfilled | |
2099 slots obstack. Unfortunately, it is placed on the end of the | |
2100 obstack, not in its original location. Therefore, we must search | |
2101 from entry i + 1 to the end of the unfilled slots obstack to | |
2102 try and find TRIAL. */ | |
2103 tmp = &unfilled_slots_base[i + 1]; | |
2104 while (*tmp != trial && tmp != unfilled_slots_next) | |
2105 tmp++; | |
2106 | |
2107 /* Remove the unconditional jump from consideration for delay slot | |
2108 filling and unthread it. */ | |
2109 if (*tmp == trial) | |
2110 *tmp = 0; | |
2111 { | |
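	    /* Splice TRIAL out of the insn chain by hand instead of | |
	       deleting it; it lives on in INSN's delay list and will be | |
	       re-emitted as part of the SEQUENCE.  */ | |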
2112 rtx next = NEXT_INSN (trial); | |
2113 rtx prev = PREV_INSN (trial); | |
2114 if (prev) | |
2115 NEXT_INSN (prev) = next; | |
2116 if (next) | |
2117 PREV_INSN (next) = prev; | |
2118 } | |
2119 } | |
2120 | |
2121 /* Now, scan backwards from the insn to search for a potential | |
2122 delay-slot candidate. Stop searching when a label or jump is hit. | |
2123 | |
2124 For each candidate, if it is to go into the delay slot (moved | |
2125 forward in execution sequence), it must not need or set any resources | |
2126 that were set by later insns and must not set any resources that | |
2127 are needed for those insns. | |
2128 | |
2129 The delay slot insn itself sets resources unless it is a call | |
2130 (in which case the called routine, not the insn itself, is doing | |
2131 the setting). */ | |
2132 | |
2133 if (slots_filled < slots_to_fill) | |
2134 { | |
2135 CLEAR_RESOURCE (&needed); | |
2136 CLEAR_RESOURCE (&set); | |
2137 mark_set_resources (insn, &set, 0, MARK_SRC_DEST); | |
2138 mark_referenced_resources (insn, &needed, 0); | |
2139 | |
2140 for (trial = prev_nonnote_insn (insn); ! stop_search_p (trial, 1); | |
2141 trial = next_trial) | |
2142 { | |
2143 next_trial = prev_nonnote_insn (trial); | |
2144 | |
2145 /* This must be an INSN or CALL_INSN. */ | |
2146 pat = PATTERN (trial); | |
2147 | |
2148	      /* USE and CLOBBER at this level are just for flow; ignore them.  */ | |
2149 if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER) | |
2150 continue; | |
2151 | |
2152 /* Check for resource conflict first, to avoid unnecessary | |
2153 splitting. */ | |
2154 if (! insn_references_resource_p (trial, &set, 1) | |
2155 && ! insn_sets_resource_p (trial, &set, 1) | |
2156 && ! insn_sets_resource_p (trial, &needed, 1) | |
2157 #ifdef HAVE_cc0 | |
2158 /* Can't separate set of cc0 from its use. */ | |
2159 && ! (reg_mentioned_p (cc0_rtx, pat) && ! sets_cc0_p (pat)) | |
2160 #endif | |
2161 && ! can_throw_internal (trial)) | |
2162 { | |
2163 trial = try_split (pat, trial, 1); | |
2164 next_trial = prev_nonnote_insn (trial); | |
2165 if (eligible_for_delay (insn, slots_filled, trial, flags)) | |
2166 { | |
2167 /* In this case, we are searching backward, so if we | |
2168 find insns to put on the delay list, we want | |
2169 to put them at the head, rather than the | |
2170 tail, of the list. */ | |
2171 | |
2172 update_reg_dead_notes (trial, insn); | |
2173 delay_list = gen_rtx_INSN_LIST (VOIDmode, | |
2174 trial, delay_list); | |
2175 update_block (trial, trial); | |
2176 delete_related_insns (trial); | |
2177 if (slots_to_fill == ++slots_filled) | |
2178 break; | |
2179 continue; | |
2180 } | |
2181 } | |
2182 | |
2183 mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL); | |
2184 mark_referenced_resources (trial, &needed, 1); | |
2185 } | |
2186 } | |
2187 | |
2188 /* If all needed slots haven't been filled, we come here. */ | |
2189 | |
2190 /* Try to optimize case of jumping around a single insn. */ | |
2191 #if defined(ANNUL_IFFALSE_SLOTS) || defined(ANNUL_IFTRUE_SLOTS) | |
2192 if (slots_filled != slots_to_fill | |
2193 && delay_list == 0 | |
2194 && JUMP_P (insn) | |
2195 && (condjump_p (insn) || condjump_in_parallel_p (insn))) | |
2196 { | |
2197 delay_list = optimize_skip (insn); | |
2198 if (delay_list) | |
2199 slots_filled += 1; | |
2200 } | |
2201 #endif | |
2202 | |
2203 /* Try to get insns from beyond the insn needing the delay slot. | |
2204         These insns can neither set nor reference resources set in insns being | |
2205 skipped, cannot set resources in the insn being skipped, and, if this | |
2206 is a CALL_INSN (or a CALL_INSN is passed), cannot trap (because the | |
2207 call might not return). | |
2208 | |
2209 There used to be code which continued past the target label if | |
2210 we saw all uses of the target label. This code did not work, | |
2211 because it failed to account for some instructions which were | |
2212 both annulled and marked as from the target. This can happen as a | |
2213 result of optimize_skip. Since this code was redundant with | |
2214         fill_eager_delay_slots anyway, it was just deleted.  */ | |
2215 | |
2216 if (slots_filled != slots_to_fill | |
2217 /* If this instruction could throw an exception which is | |
2218 caught in the same function, then it's not safe to fill | |
2219 the delay slot with an instruction from beyond this | |
2220 point. For example, consider: | |
2221 | |
2222 int i = 2; | |
2223 | |
2224 try { | |
2225 f(); | |
2226 i = 3; | |
2227 } catch (...) {} | |
2228 | |
2229 return i; | |
2230 | |
2231 Even though `i' is a local variable, we must be sure not | |
2232 to put `i = 3' in the delay slot if `f' might throw an | |
2233 exception. | |
2234 | |
2235 Presumably, we should also check to see if we could get | |
2236 back to this function via `setjmp'. */ | |
2237 && ! can_throw_internal (insn) | |
2238 && (!JUMP_P (insn) | |
2239 || ((condjump_p (insn) || condjump_in_parallel_p (insn)) | |
2240 && ! simplejump_p (insn) | |
2241 && JUMP_LABEL (insn) != 0))) | |
2242 { | |
2243 /* Invariant: If insn is a JUMP_INSN, the insn's jump | |
2244 label. Otherwise, zero. */ | |
2245 rtx target = 0; | |
2246 int maybe_never = 0; | |
2247 rtx pat, trial_delay; | |
2248 | |
2249 CLEAR_RESOURCE (&needed); | |
2250 CLEAR_RESOURCE (&set); | |
2251 | |
2252 if (CALL_P (insn)) | |
2253 { | |
2254 mark_set_resources (insn, &set, 0, MARK_SRC_DEST_CALL); | |
2255 mark_referenced_resources (insn, &needed, 1); | |
2256 maybe_never = 1; | |
2257 } | |
2258 else | |
2259 { | |
2260 mark_set_resources (insn, &set, 0, MARK_SRC_DEST_CALL); | |
2261 mark_referenced_resources (insn, &needed, 1); | |
2262 if (JUMP_P (insn)) | |
2263 target = JUMP_LABEL (insn); | |
2264 } | |
2265 | |
2266 if (target == 0) | |
2267 for (trial = next_nonnote_insn (insn); trial; trial = next_trial) | |
2268 { | |
2269 next_trial = next_nonnote_insn (trial); | |
2270 | |
2271 if (LABEL_P (trial) | |
2272 || BARRIER_P (trial)) | |
2273 break; | |
2274 | |
2275 /* We must have an INSN, JUMP_INSN, or CALL_INSN. */ | |
2276 pat = PATTERN (trial); | |
2277 | |
2278 /* Stand-alone USE and CLOBBER are just for flow. */ | |
2279 if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER) | |
2280 continue; | |
2281 | |
2282 /* If this already has filled delay slots, get the insn needing | |
2283 the delay slots. */ | |
2284 if (GET_CODE (pat) == SEQUENCE) | |
2285 trial_delay = XVECEXP (pat, 0, 0); | |
2286 else | |
2287 trial_delay = trial; | |
2288 | |
2289 /* Stop our search when seeing an unconditional jump. */ | |
2290 if (JUMP_P (trial_delay)) | |
2291 break; | |
2292 | |
2293 /* See if we have a resource problem before we try to | |
2294 split. */ | |
2295 if (GET_CODE (pat) != SEQUENCE | |
2296 && ! insn_references_resource_p (trial, &set, 1) | |
2297 && ! insn_sets_resource_p (trial, &set, 1) | |
2298 && ! insn_sets_resource_p (trial, &needed, 1) | |
2299 #ifdef HAVE_cc0 | |
2300 && ! (reg_mentioned_p (cc0_rtx, pat) && ! sets_cc0_p (pat)) | |
2301 #endif | |
2302 && ! (maybe_never && may_trap_or_fault_p (pat)) | |
2303 && (trial = try_split (pat, trial, 0)) | |
2304 && eligible_for_delay (insn, slots_filled, trial, flags) | |
2305		  && ! can_throw_internal (trial)) | |
2306 { | |
2307 next_trial = next_nonnote_insn (trial); | |
2308 delay_list = add_to_delay_list (trial, delay_list); | |
2309 | |
2310 #ifdef HAVE_cc0 | |
2311 if (reg_mentioned_p (cc0_rtx, pat)) | |
2312 link_cc0_insns (trial); | |
2313 #endif | |
2314 | |
2315 delete_related_insns (trial); | |
2316 if (slots_to_fill == ++slots_filled) | |
2317 break; | |
2318 continue; | |
2319 } | |
2320 | |
2321 mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL); | |
2322 mark_referenced_resources (trial, &needed, 1); | |
2323 | |
2324 /* Ensure we don't put insns between the setting of cc and the | |
2325 comparison by moving a setting of cc into an earlier delay | |
2326 slot since these insns could clobber the condition code. */ | |
2327 set.cc = 1; | |
2328 | |
2329 /* If this is a call or jump, we might not get here. */ | |
2330 if (CALL_P (trial_delay) | |
2331 || JUMP_P (trial_delay)) | |
2332 maybe_never = 1; | |
2333 } | |
2334 | |
2335 /* If there are slots left to fill and our search was stopped by an | |
2336 unconditional branch, try the insn at the branch target. We can | |
2337 redirect the branch if it works. | |
2338 | |
2339 Don't do this if the insn at the branch target is a branch. */ | |
2340 if (slots_to_fill != slots_filled | |
2341 && trial | |
2342 && JUMP_P (trial) | |
2343 && simplejump_p (trial) | |
2344 && (target == 0 || JUMP_LABEL (trial) == target) | |
2345 && (next_trial = next_active_insn (JUMP_LABEL (trial))) != 0 | |
2346 && ! (NONJUMP_INSN_P (next_trial) | |
2347 && GET_CODE (PATTERN (next_trial)) == SEQUENCE) | |
2348 && !JUMP_P (next_trial) | |
2349 && ! insn_references_resource_p (next_trial, &set, 1) | |
2350 && ! insn_sets_resource_p (next_trial, &set, 1) | |
2351 && ! insn_sets_resource_p (next_trial, &needed, 1) | |
2352 #ifdef HAVE_cc0 | |
2353 && ! reg_mentioned_p (cc0_rtx, PATTERN (next_trial)) | |
2354 #endif | |
2355 && ! (maybe_never && may_trap_or_fault_p (PATTERN (next_trial))) | |
2356 && (next_trial = try_split (PATTERN (next_trial), next_trial, 0)) | |
2357 && eligible_for_delay (insn, slots_filled, next_trial, flags) | |
2358 && ! can_throw_internal (trial)) | |
2359 { | |
2360	      /* See comment in relax_delay_slots about the necessity of using | |
2361 next_real_insn here. */ | |
2362 rtx new_label = next_real_insn (next_trial); | |
2363 | |
2364 if (new_label != 0) | |
2365 new_label = get_label_before (new_label); | |
2366 else | |
2367 new_label = find_end_label (); | |
2368 | |
2369 if (new_label) | |
2370 { | |
2371 delay_list | |
2372 = add_to_delay_list (copy_rtx (next_trial), delay_list); | |
2373 slots_filled++; | |
2374 reorg_redirect_jump (trial, new_label); | |
2375 | |
2376 /* If we merged because we both jumped to the same place, | |
2377 redirect the original insn also. */ | |
2378 if (target) | |
2379 reorg_redirect_jump (insn, new_label); | |
2380 } | |
2381 } | |
2382 } | |
2383 | |
2384 /* If this is an unconditional jump, then try to get insns from the | |
2385 target of the jump. */ | |
2386 if (JUMP_P (insn) | |
2387 && simplejump_p (insn) | |
2388 && slots_filled != slots_to_fill) | |
2389 delay_list | |
2390 = fill_slots_from_thread (insn, const_true_rtx, | |
2391 next_active_insn (JUMP_LABEL (insn)), | |
2392 NULL, 1, 1, | |
2393 own_thread_p (JUMP_LABEL (insn), | |
2394 JUMP_LABEL (insn), 0), | |
2395 slots_to_fill, &slots_filled, | |
2396 delay_list); | |
2397 | |
2398 if (delay_list) | |
2399 unfilled_slots_base[i] | |
2400 = emit_delay_sequence (insn, delay_list, slots_filled); | |
2401 | |
2402 if (slots_to_fill == slots_filled) | |
2403 unfilled_slots_base[i] = 0; | |
2404 | |
2405 note_delay_statistics (slots_filled, 0); | |
2406 } | |
2407 | |
2408 #ifdef DELAY_SLOTS_FOR_EPILOGUE | |
2409 /* See if the epilogue needs any delay slots. Try to fill them if so. | |
2410 The only thing we can do is scan backwards from the end of the | |
2411 function. If we did this in a previous pass, it is incorrect to do it | |
2412 again. */ | |
2413 if (crtl->epilogue_delay_list) | |
2414 return; | |
2415 | |
2416 slots_to_fill = DELAY_SLOTS_FOR_EPILOGUE; | |
2417 if (slots_to_fill == 0) | |
2418 return; | |
2419 | |
2420 slots_filled = 0; | |
2421 CLEAR_RESOURCE (&set); | |
2422 | |
2423 /* The frame pointer and stack pointer are needed at the beginning of | |
2424 the epilogue, so instructions setting them can not be put in the | |
2425 epilogue delay slot. However, everything else needed at function | |
2426 end is safe, so we don't want to use end_of_function_needs here. */ | |
2427 CLEAR_RESOURCE (&needed); | |
2428 if (frame_pointer_needed) | |
2429 { | |
2430 SET_HARD_REG_BIT (needed.regs, FRAME_POINTER_REGNUM); | |
2431 #if HARD_FRAME_POINTER_REGNUM != FRAME_POINTER_REGNUM | |
2432 SET_HARD_REG_BIT (needed.regs, HARD_FRAME_POINTER_REGNUM); | |
2433 #endif | |
2434 if (! EXIT_IGNORE_STACK | |
2435 || current_function_sp_is_unchanging) | |
2436 SET_HARD_REG_BIT (needed.regs, STACK_POINTER_REGNUM); | |
2437 } | |
2438 else | |
2439 SET_HARD_REG_BIT (needed.regs, STACK_POINTER_REGNUM); | |
2440 | |
2441 #ifdef EPILOGUE_USES | |
2442 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++) | |
2443 { | |
2444 if (EPILOGUE_USES (i)) | |
2445 SET_HARD_REG_BIT (needed.regs, i); | |
2446 } | |
2447 #endif | |
2448 | |
2449 for (trial = get_last_insn (); ! stop_search_p (trial, 1); | |
2450 trial = PREV_INSN (trial)) | |
2451 { | |
2452 if (NOTE_P (trial)) | |
2453 continue; | |
2454 pat = PATTERN (trial); | |
2455 if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER) | |
2456 continue; | |
2457 | |
2458 if (! insn_references_resource_p (trial, &set, 1) | |
2459 && ! insn_sets_resource_p (trial, &needed, 1) | |
2460 && ! insn_sets_resource_p (trial, &set, 1) | |
2461 #ifdef HAVE_cc0 | |
2462 /* Don't want to mess with cc0 here. */ | |
2463 && ! reg_mentioned_p (cc0_rtx, pat) | |
2464 #endif | |
2465 && ! can_throw_internal (trial)) | |
2466 { | |
2467 trial = try_split (pat, trial, 1); | |
2468 if (ELIGIBLE_FOR_EPILOGUE_DELAY (trial, slots_filled)) | |
2469 { | |
2470 /* Here as well we are searching backward, so put the | |
2471 insns we find on the head of the list. */ | |
2472 | |
2473 crtl->epilogue_delay_list | |
2474 = gen_rtx_INSN_LIST (VOIDmode, trial, | |
2475 crtl->epilogue_delay_list); | |
2476 mark_end_of_function_resources (trial, 1); | |
2477 update_block (trial, trial); | |
2478 delete_related_insns (trial); | |
2479 | |
2480 /* Clear deleted bit so final.c will output the insn. */ | |
2481 INSN_DELETED_P (trial) = 0; | |
2482 | |
2483 if (slots_to_fill == ++slots_filled) | |
2484 break; | |
2485 continue; | |
2486 } | |
2487 } | |
2488 | |
2489 mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL); | |
2490 mark_referenced_resources (trial, &needed, 1); | |
2491 } | |
2492 | |
2493 note_delay_statistics (slots_filled, 0); | |
2494 #endif | |
2495 } | |
2496 | |
2497 /* Follow any unconditional jump at LABEL; | |
2498 return the ultimate label reached by any such chain of jumps. | |
2499 Return null if the chain ultimately leads to a return instruction. | |
2500 If LABEL is not followed by a jump, return LABEL. | |
2501    If the chain loops or we can't find the end, return LABEL, | |
2502 since that tells caller to avoid changing the insn. */ | |
2503 | |
2504 static rtx | |
2505 follow_jumps (rtx label) | |
2506 { | |
2507 rtx insn; | |
2508 rtx next; | |
2509 rtx value = label; | |
2510 int depth; | |
2511 | |
2512 for (depth = 0; | |
2513 (depth < 10 | |
2514 && (insn = next_active_insn (value)) != 0 | |
2515 && JUMP_P (insn) | |
2516 && ((JUMP_LABEL (insn) != 0 && any_uncondjump_p (insn) | |
2517 && onlyjump_p (insn)) | |
2518 || GET_CODE (PATTERN (insn)) == RETURN) | |
2519 && (next = NEXT_INSN (insn)) | |
2520 && BARRIER_P (next)); | |
2521 depth++) | |
2522 { | |
2523 rtx tem; | |
2524 | |
2525 /* If we have found a cycle, make the insn jump to itself. */ | |
2526 if (JUMP_LABEL (insn) == label) | |
2527 return label; | |
2528 | |
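      /* If the first active insn at the target is a jump table, this is | |
	 a tablejump; don't follow it.  */ | |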
2529 tem = next_active_insn (JUMP_LABEL (insn)); | |
2530 if (tem && (GET_CODE (PATTERN (tem)) == ADDR_VEC | |
2531 || GET_CODE (PATTERN (tem)) == ADDR_DIFF_VEC)) | |
2532 break; | |
2533 | |
2534 value = JUMP_LABEL (insn); | |
2535 } | |
2536 if (depth == 10) | |
2537 return label; | |
2538 return value; | |
2539 } | |
2540 | |
2541 /* Try to find insns to place in delay slots. | |
2542 | |
2543 INSN is the jump needing SLOTS_TO_FILL delay slots. It tests CONDITION | |
2544 or is an unconditional branch if CONDITION is const_true_rtx. | |
2545 *PSLOTS_FILLED is updated with the number of slots that we have filled. | |
2546 | |
2547    THREAD is a thread of control flow: the insns to be executed either if | |
2548    the branch is true or if it is false; THREAD_IF_TRUE says which. | |
2549 | |
2550 OPPOSITE_THREAD is the thread in the opposite direction. It is used | |
2551 to see if any potential delay slot insns set things needed there. | |
2552 | |
2553 LIKELY is nonzero if it is extremely likely that the branch will be | |
2554 taken and THREAD_IF_TRUE is set. This is used for the branch at the | |
2555 end of a loop back up to the top. | |
2556 | |
2557    OWN_THREAD is true if we are the only user of the thread, i.e., it is | |
2558    the fallthrough code of our jump or the target of the | |
2559 jump when we are the only jump going there. | |
2560 | |
2561 If OWN_THREAD is false, it must be the "true" thread of a jump. In that | |
2562 case, we can only take insns from the head of the thread for our delay | |
2563 slot. We then adjust the jump to point after the insns we have taken. */ | |
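/* Illustration, for a conditional branch: | |
 | |
	(jump_insn ... L1) | |
	<fallthrough insns>	<- THREAD when THREAD_IF_TRUE == 0 | |
     L1: | |
	<target insns>		<- THREAD when THREAD_IF_TRUE == 1  */ | |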
2564 | |
2565 static rtx | |
2566 fill_slots_from_thread (rtx insn, rtx condition, rtx thread, | |
2567 rtx opposite_thread, int likely, int thread_if_true, | |
2568 int own_thread, int slots_to_fill, | |
2569 int *pslots_filled, rtx delay_list) | |
2570 { | |
2571 rtx new_thread; | |
2572 struct resources opposite_needed, set, needed; | |
2573 rtx trial; | |
2574 int lose = 0; | |
2575 int must_annul = 0; | |
2576 int flags; | |
2577 | |
2578 /* Validate our arguments. */ | |
2579   gcc_assert (condition != const_true_rtx || thread_if_true); | |
2580   gcc_assert (own_thread || thread_if_true); | |
2581 | |
2582 flags = get_jump_flags (insn, JUMP_LABEL (insn)); | |
2583 | |
2584   /* If our thread is the end of the subroutine, we can't get any delay | |
2585 insns from that. */ | |
2586 if (thread == 0) | |
2587 return delay_list; | |
2588 | |
2589 /* If this is an unconditional branch, nothing is needed at the | |
2590 opposite thread. Otherwise, compute what is needed there. */ | |
2591 if (condition == const_true_rtx) | |
2592 CLEAR_RESOURCE (&opposite_needed); | |
2593 else | |
2594 mark_target_live_regs (get_insns (), opposite_thread, &opposite_needed); | |
2595 | |
2596 /* If the insn at THREAD can be split, do it here to avoid having to | |
2597 update THREAD and NEW_THREAD if it is done in the loop below. Also | |
2598 initialize NEW_THREAD. */ | |
2599 | |
2600 new_thread = thread = try_split (PATTERN (thread), thread, 0); | |
2601 | |
2602 /* Scan insns at THREAD. We are looking for an insn that can be removed | |
2603 from THREAD (it neither sets nor references resources that were set | |
2604      ahead of it and it doesn't set anything needed by the insns ahead of | |
2605      it) and that either can be placed in an annulling insn or isn't | |
2606 needed at OPPOSITE_THREAD. */ | |
2607 | |
2608 CLEAR_RESOURCE (&needed); | |
2609 CLEAR_RESOURCE (&set); | |
2610 | |
2611 /* If we do not own this thread, we must stop as soon as we find | |
2612 something that we can't put in a delay slot, since all we can do | |
2613 is branch into THREAD at a later point. Therefore, labels stop | |
2614 the search if this is not the `true' thread. */ | |
2615 | |
2616 for (trial = thread; | |
2617 ! stop_search_p (trial, ! thread_if_true) && (! lose || own_thread); | |
2618 trial = next_nonnote_insn (trial)) | |
2619 { | |
2620 rtx pat, old_trial; | |
2621 | |
2622 /* If we have passed a label, we no longer own this thread. */ | |
2623 if (LABEL_P (trial)) | |
2624 { | |
2625 own_thread = 0; | |
2626 continue; | |
2627 } | |
2628 | |
2629 pat = PATTERN (trial); | |
2630 if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER) | |
2631 continue; | |
2632 | |
2633 /* If TRIAL conflicts with the insns ahead of it, we lose. Also, | |
2634 don't separate or copy insns that set and use CC0. */ | |
2635 if (! insn_references_resource_p (trial, &set, 1) | |
2636 && ! insn_sets_resource_p (trial, &set, 1) | |
2637 && ! insn_sets_resource_p (trial, &needed, 1) | |
2638 #ifdef HAVE_cc0 | |
2639 && ! (reg_mentioned_p (cc0_rtx, pat) | |
2640 && (! own_thread || ! sets_cc0_p (pat))) | |
2641 #endif | |
2642 && ! can_throw_internal (trial)) | |
2643 { | |
2644 rtx prior_insn; | |
2645 | |
2646 /* If TRIAL is redundant with some insn before INSN, we don't | |
2647 actually need to add it to the delay list; we can merely pretend | |
2648 we did. */ | |
2649 if ((prior_insn = redundant_insn (trial, insn, delay_list))) | |
2650 { | |
2651 fix_reg_dead_note (prior_insn, insn); | |
2652 if (own_thread) | |
2653 { | |
2654 update_block (trial, thread); | |
2655 if (trial == thread) | |
2656 { | |
2657 thread = next_active_insn (thread); | |
2658 if (new_thread == trial) | |
2659 new_thread = thread; | |
2660 } | |
2661 | |
2662 delete_related_insns (trial); | |
2663 } | |
2664 else | |
2665 { | |
2666 update_reg_unused_notes (prior_insn, trial); | |
2667 new_thread = next_active_insn (trial); | |
2668 } | |
2669 | |
2670 continue; | |
2671 } | |
2672 | |
2673 /* There are two ways we can win: If TRIAL doesn't set anything | |
2674 needed at the opposite thread and can't trap, or if it can | |
2675 go into an annulled delay slot. */ | |
2676 if (!must_annul | |
2677 && (condition == const_true_rtx | |
2678 || (! insn_sets_resource_p (trial, &opposite_needed, 1) | |
2679 && ! may_trap_or_fault_p (pat)))) | |
2680 { | |
2681 old_trial = trial; | |
2682 trial = try_split (pat, trial, 0); | |
2683 if (new_thread == old_trial) | |
2684 new_thread = trial; | |
2685 if (thread == old_trial) | |
2686 thread = trial; | |
2687 pat = PATTERN (trial); | |
2688 if (eligible_for_delay (insn, *pslots_filled, trial, flags)) | |
2689 goto winner; | |
2690 } | |
2691 else if (0 | |
2692 #ifdef ANNUL_IFTRUE_SLOTS | |
2693 || ! thread_if_true | |
2694 #endif | |
2695 #ifdef ANNUL_IFFALSE_SLOTS | |
2696 || thread_if_true | |
2697 #endif | |
2698 ) | |
2699 { | |
2700 old_trial = trial; | |
2701 trial = try_split (pat, trial, 0); | |
2702 if (new_thread == old_trial) | |
2703 new_thread = trial; | |
2704 if (thread == old_trial) | |
2705 thread = trial; | |
2706 pat = PATTERN (trial); | |
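	      /* Annulling is possible only if every insn already in the | |
		 delay list came from the same direction; | |
		 check_annul_list_true_false verifies that.  */ | |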
2707 if ((must_annul || delay_list == NULL) && (thread_if_true | |
2708 ? check_annul_list_true_false (0, delay_list) | |
2709 && eligible_for_annul_false (insn, *pslots_filled, trial, flags) | |
2710 : check_annul_list_true_false (1, delay_list) | |
2711 && eligible_for_annul_true (insn, *pslots_filled, trial, flags))) | |
2712 { | |
2713 rtx temp; | |
2714 | |
2715 must_annul = 1; | |
2716 winner: | |
2717 | |
2718 #ifdef HAVE_cc0 | |
2719 if (reg_mentioned_p (cc0_rtx, pat)) | |
2720 link_cc0_insns (trial); | |
2721 #endif | |
2722 | |
2723 /* If we own this thread, delete the insn. If this is the | |
2724 destination of a branch, show that a basic block status | |
2725 may have been updated. In any case, mark the new | |
2726 starting point of this thread. */ | |
2727 if (own_thread) | |
2728 { | |
2729 rtx note; | |
2730 | |
2731 update_block (trial, thread); | |
2732 if (trial == thread) | |
2733 { | |
2734 thread = next_active_insn (thread); | |
2735 if (new_thread == trial) | |
2736 new_thread = thread; | |
2737 } | |
2738 | |
2739 /* We are moving this insn, not deleting it. We must | |
2740 temporarily increment the use count on any referenced | |
2741 label lest it be deleted by delete_related_insns. */ | |
2742 for (note = REG_NOTES (trial); | |
2743 note != NULL_RTX; | |
2744 note = XEXP (note, 1)) | |
2745 if (REG_NOTE_KIND (note) == REG_LABEL_OPERAND | |
2746 || REG_NOTE_KIND (note) == REG_LABEL_TARGET) | |
2747 { | |
2748 /* REG_LABEL_OPERAND could be | |
2749 NOTE_INSN_DELETED_LABEL too. */ | |
2750 if (LABEL_P (XEXP (note, 0))) | |
2751 LABEL_NUSES (XEXP (note, 0))++; | |
2752 else | |
2753 gcc_assert (REG_NOTE_KIND (note) | |
2754 == REG_LABEL_OPERAND); | |
2755 } | |
2756 if (JUMP_P (trial) && JUMP_LABEL (trial)) | |
2757 LABEL_NUSES (JUMP_LABEL (trial))++; | |
2758 | |
2759 delete_related_insns (trial); | |
2760 | |
2761 for (note = REG_NOTES (trial); | |
2762 note != NULL_RTX; | |
2763 note = XEXP (note, 1)) | |
2764 if (REG_NOTE_KIND (note) == REG_LABEL_OPERAND | |
2765 || REG_NOTE_KIND (note) == REG_LABEL_TARGET) | |
2766 { | |
2767 /* REG_LABEL_OPERAND could be | |
2768 NOTE_INSN_DELETED_LABEL too. */ | |
2769 if (LABEL_P (XEXP (note, 0))) | |
2770 LABEL_NUSES (XEXP (note, 0))--; | |
2771 else | |
2772 gcc_assert (REG_NOTE_KIND (note) | |
2773 == REG_LABEL_OPERAND); | |
2774 } | |
2775 if (JUMP_P (trial) && JUMP_LABEL (trial)) | |
2776 LABEL_NUSES (JUMP_LABEL (trial))--; | |
2777 } | |
2778 else | |
2779 new_thread = next_active_insn (trial); | |
2780 | |
2781 temp = own_thread ? trial : copy_rtx (trial); | |
2782 if (thread_if_true) | |
2783 INSN_FROM_TARGET_P (temp) = 1; | |
2784 | |
2785 delay_list = add_to_delay_list (temp, delay_list); | |
2786 | |
2787 if (slots_to_fill == ++(*pslots_filled)) | |
2788 { | |
2789 /* Even though we have filled all the slots, we | |
2790 may be branching to a location that has a | |
2791 redundant insn. Skip any if so. */ | |
2792 while (new_thread && ! own_thread | |
2793 && ! insn_sets_resource_p (new_thread, &set, 1) | |
2794 && ! insn_sets_resource_p (new_thread, &needed, 1) | |
2795 && ! insn_references_resource_p (new_thread, | |
2796 &set, 1) | |
2797 && (prior_insn | |
2798 = redundant_insn (new_thread, insn, | |
2799 delay_list))) | |
2800 { | |
2801 /* We know we do not own the thread, so no need | |
2802 to call update_block and delete_insn. */ | |
2803 fix_reg_dead_note (prior_insn, insn); | |
2804 update_reg_unused_notes (prior_insn, new_thread); | |
2805 new_thread = next_active_insn (new_thread); | |
2806 } | |
2807 break; | |
2808 } | |
2809 | |
2810 continue; | |
2811 } | |
2812 } | |
2813 } | |
2814 | |
2815 /* This insn can't go into a delay slot. */ | |
2816 lose = 1; | |
2817 mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL); | |
2818 mark_referenced_resources (trial, &needed, 1); | |
2819 | |
2820 /* Ensure we don't put insns between the setting of cc and the comparison | |
2821 by moving a setting of cc into an earlier delay slot since these insns | |
2822 could clobber the condition code. */ | |
2823 set.cc = 1; | |
2824 | |
2825 /* If this insn is a register-register copy and the next insn has | |
2826 a use of our destination, change it to use our source. That way, | |
2827 it will become a candidate for our delay slot the next time | |
2828 through this loop. This case occurs commonly in loops that | |
2829 scan a list. | |
2830 | |
2831 We could check for more complex cases than those tested below, | |
2832 but it doesn't seem worth it. It might also be a good idea to try | |
2833 to swap the two insns. That might do better. | |
2834 | |
2835 We can't do this if the next insn modifies our destination, because | |
2836 that would make the replacement into the insn invalid. We also can't | |
2837 do this if it modifies our source, because it might be an earlyclobber | |
2838 operand. This latter test also prevents updating the contents of | |
2839 a PRE_INC. We also can't do this if there's overlap of source and | |
2840 destination. Overlap may happen for larger-than-register-size modes. */ | |
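      /* For example (illustrative registers): given "r1 = r2" followed | |
	 by "r3 = r1 + 4", rewrite the second insn to "r3 = r2 + 4"; the | |
	 copy then no longer feeds it and becomes a slot candidate on the | |
	 next iteration.  */ | |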
2841 | |
2842 if (NONJUMP_INSN_P (trial) && GET_CODE (pat) == SET | |
2843 && REG_P (SET_SRC (pat)) | |
2844 && REG_P (SET_DEST (pat)) | |
2845 && !reg_overlap_mentioned_p (SET_DEST (pat), SET_SRC (pat))) | |
2846 { | |
2847 rtx next = next_nonnote_insn (trial); | |
2848 | |
2849 if (next && NONJUMP_INSN_P (next) | |
2850 && GET_CODE (PATTERN (next)) != USE | |
2851 && ! reg_set_p (SET_DEST (pat), next) | |
2852 && ! reg_set_p (SET_SRC (pat), next) | |
2853 && reg_referenced_p (SET_DEST (pat), PATTERN (next)) | |
2854 && ! modified_in_p (SET_DEST (pat), next)) | |
2855 validate_replace_rtx (SET_DEST (pat), SET_SRC (pat), next); | |
2856 } | |
2857 } | |
2858 | |
2859 /* If we stopped on a branch insn that has delay slots, see if we can | |
2860 steal some of the insns in those slots. */ | |
2861 if (trial && NONJUMP_INSN_P (trial) | |
2862 && GET_CODE (PATTERN (trial)) == SEQUENCE | |
2863 && JUMP_P (XVECEXP (PATTERN (trial), 0, 0))) | |
2864 { | |
2865 /* If this is the `true' thread, we will want to follow the jump, | |
2866 so we can only do this if we have taken everything up to here. */ | |
2867 if (thread_if_true && trial == new_thread) | |
2868 { | |
2869 delay_list | |
2870 = steal_delay_list_from_target (insn, condition, PATTERN (trial), | |
2871 delay_list, &set, &needed, | |
2872 &opposite_needed, slots_to_fill, | |
2873 pslots_filled, &must_annul, | |
2874 &new_thread); | |
2875 /* If we owned the thread and are told that it branched | |
2876 elsewhere, make sure we own the thread at the new location. */ | |
2877 if (own_thread && trial != new_thread) | |
2878 own_thread = own_thread_p (new_thread, new_thread, 0); | |
2879 } | |
2880 else if (! thread_if_true) | |
2881 delay_list | |
2882 = steal_delay_list_from_fallthrough (insn, condition, | |
2883 PATTERN (trial), | |
2884 delay_list, &set, &needed, | |
2885 &opposite_needed, slots_to_fill, | |
2886 pslots_filled, &must_annul); | |
2887 } | |
2888 | |
2889 /* If we haven't found anything for this delay slot and it is very | |
2890 likely that the branch will be taken, see if the insn at our target | |
2891 increments or decrements a register with an increment that does not | |
2892 depend on the destination register. If so, try to place the opposite | |
2893 arithmetic insn after the jump insn and put the arithmetic insn in the | |
2894 delay slot. If we can't do this, return. */ | |
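  /* For example, if the likely target begins with "d = d + 4", we put | |
     "d = d + 4" in the delay slot, emit "d = d - 4" right after the | |
     branch, and redirect the branch past the increment: the taken path | |
     executes the increment once, while the fallthrough path has it | |
     undone.  */ | |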
2895 if (delay_list == 0 && likely && new_thread | |
2896 && NONJUMP_INSN_P (new_thread) | |
2897 && GET_CODE (PATTERN (new_thread)) != ASM_INPUT | |
2898 && asm_noperands (PATTERN (new_thread)) < 0) | |
2899 { | |
2900 rtx pat = PATTERN (new_thread); | |
2901 rtx dest; | |
2902 rtx src; | |
2903 | |
2904 trial = new_thread; | |
2905 pat = PATTERN (trial); | |
2906 | |
2907 if (!NONJUMP_INSN_P (trial) | |
2908 || GET_CODE (pat) != SET | |
2909 || ! eligible_for_delay (insn, 0, trial, flags) | |
2910 || can_throw_internal (trial)) | |
2911 return 0; | |
2912 | |
2913 dest = SET_DEST (pat), src = SET_SRC (pat); | |
2914 if ((GET_CODE (src) == PLUS || GET_CODE (src) == MINUS) | |
2915 && rtx_equal_p (XEXP (src, 0), dest) | |
2916 && (!FLOAT_MODE_P (GET_MODE (src)) | |
2917 || flag_unsafe_math_optimizations) | |
2918 && ! reg_overlap_mentioned_p (dest, XEXP (src, 1)) | |
2919 && ! side_effects_p (pat)) | |
2920 { | |
2921 rtx other = XEXP (src, 1); | |
2922 rtx new_arith; | |
2923 rtx ninsn; | |
2924 | |
2925 /* If this is a constant adjustment, use the same code with | |
2926 the negated constant. Otherwise, reverse the sense of the | |
2927 arithmetic. */ | |
2928 if (GET_CODE (other) == CONST_INT) | |
2929 new_arith = gen_rtx_fmt_ee (GET_CODE (src), GET_MODE (src), dest, | |
2930 negate_rtx (GET_MODE (src), other)); | |
2931 else | |
2932 new_arith = gen_rtx_fmt_ee (GET_CODE (src) == PLUS ? MINUS : PLUS, | |
2933 GET_MODE (src), dest, other); | |
2934 | |
2935 ninsn = emit_insn_after (gen_rtx_SET (VOIDmode, dest, new_arith), | |
2936 insn); | |
2937 | |
2938 if (recog_memoized (ninsn) < 0 | |
2939 || (extract_insn (ninsn), ! constrain_operands (1))) | |
2940 { | |
2941 delete_related_insns (ninsn); | |
2942 return 0; | |
2943 } | |
2944 | |
2945 if (own_thread) | |
2946 { | |
2947 update_block (trial, thread); | |
2948 if (trial == thread) | |
2949 { | |
2950 thread = next_active_insn (thread); | |
2951 if (new_thread == trial) | |
2952 new_thread = thread; | |
2953 } | |
2954 delete_related_insns (trial); | |
2955 } | |
2956 else | |
2957 new_thread = next_active_insn (trial); | |
2958 | |
2959 ninsn = own_thread ? trial : copy_rtx (trial); | |
2960 if (thread_if_true) | |
2961 INSN_FROM_TARGET_P (ninsn) = 1; | |
2962 | |
2963 delay_list = add_to_delay_list (ninsn, NULL_RTX); | |
2964 (*pslots_filled)++; | |
2965 } | |
2966 } | |
2967 | |
2968 if (delay_list && must_annul) | |
2969 INSN_ANNULLED_BRANCH_P (insn) = 1; | |
2970 | |
2971 /* If we are to branch into the middle of this thread, find an appropriate | |
2972 label or make a new one if none, and redirect INSN to it. If we hit the | |
2973 end of the function, use the end-of-function label. */ | |
2974 if (new_thread != thread) | |
2975 { | |
2976 rtx label; | |
2977 | |
2978 gcc_assert (thread_if_true); | |
2979 | |
2980 if (new_thread && JUMP_P (new_thread) | |
2981 && (simplejump_p (new_thread) | |
2982 || GET_CODE (PATTERN (new_thread)) == RETURN) | |
2983 && redirect_with_delay_list_safe_p (insn, | |
2984 JUMP_LABEL (new_thread), | |
2985 delay_list)) | |
2986 new_thread = follow_jumps (JUMP_LABEL (new_thread)); | |
2987 | |
2988 if (new_thread == 0) | |
2989 label = find_end_label (); | |
2990 else if (LABEL_P (new_thread)) | |
2991 label = new_thread; | |
2992 else | |
2993 label = get_label_before (new_thread); | |
2994 | |
2995 if (label) | |
2996 reorg_redirect_jump (insn, label); | |
2997 } | |
2998 | |
2999 return delay_list; | |
3000 } | |
3001 | |
3002 /* Make another attempt to find insns to place in delay slots. | |
3003 | |
3004 We previously looked for insns located in front of the delay insn | |
3005 and, for non-jump delay insns, located behind the delay insn. | |
3006 | |
3007 Here we only try to schedule jump insns, trying to move insns from either | |
3008 the target or the following insns into the delay slot. If annulling is | |
3009 supported, we are likely to be able to do this. Otherwise, we can do | |
3010 this only when it is safe. | |
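/* As an illustrative sketch (not tied to any particular target), given

       if (cc) goto L; A; ... L: B;

   a branch predicted taken first tries to hoist B from the target into
   the delay slot (annulling it on the not-taken path if the target
   supports that), falling back to A; a branch predicted not taken
   tries the two threads in the opposite order.  */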
3011 | |
3012 static void | |
3013 fill_eager_delay_slots (void) | |
3014 { | |
3015 rtx insn; | |
3016 int i; | |
3017 int num_unfilled_slots = unfilled_slots_next - unfilled_slots_base; | |
3018 | |
3019 for (i = 0; i < num_unfilled_slots; i++) | |
3020 { | |
3021 rtx condition; | |
3022 rtx target_label, insn_at_target, fallthrough_insn; | |
3023 rtx delay_list = 0; | |
3024 int own_target; | |
3025 int own_fallthrough; | |
3026 int prediction, slots_to_fill, slots_filled; | |
3027 | |
3028 insn = unfilled_slots_base[i]; | |
3029 if (insn == 0 | |
3030 || INSN_DELETED_P (insn) | |
3031 || !JUMP_P (insn) | |
3032 || ! (condjump_p (insn) || condjump_in_parallel_p (insn))) | |
3033 continue; | |
3034 | |
3035 slots_to_fill = num_delay_slots (insn); | |
3036 /* Some machine descriptions define instructions as having | |
3037 delay slots only in certain circumstances, which may depend on | |
3038 nearby insns (which change due to reorg's actions). | |
3039 | |
3040 For example, the PA port normally has delay slots for unconditional | |
3041 jumps. | |
3042 | |
3043 However, the PA port claims such jumps do not have a delay slot | |
3044 if they are immediate successors of certain CALL_INSNs. This | |
3045 allows the port to favor filling the delay slot of the call with | |
3046 the unconditional jump. */ | |
3047 if (slots_to_fill == 0) | |
3048 continue; | |
3049 | |
3050 slots_filled = 0; | |
3051 target_label = JUMP_LABEL (insn); | |
3052 condition = get_branch_condition (insn, target_label); | |
3053 | |
3054 if (condition == 0) | |
3055 continue; | |
3056 | |
3057 /* Get the next active fallthrough and target insns and see if we own | |
3058 them. Then see whether the branch is likely true. We don't need | |
3059 to do a lot of this for unconditional branches. */ | |
3060 | |
3061 insn_at_target = next_active_insn (target_label); | |
3062 own_target = own_thread_p (target_label, target_label, 0); | |
3063 | |
3064 if (condition == const_true_rtx) | |
3065 { | |
3066 own_fallthrough = 0; | |
3067 fallthrough_insn = 0; | |
3068 prediction = 2; | |
3069 } | |
3070 else | |
3071 { | |
3072 fallthrough_insn = next_active_insn (insn); | |
3073 own_fallthrough = own_thread_p (NEXT_INSN (insn), NULL_RTX, 1); | |
3074 prediction = mostly_true_jump (insn, condition); | |
3075 } | |
3076 | |
3077 /* If this insn is expected to branch, first try to get insns from our | |
3078 target, then our fallthrough insns. If it is not expected to branch, | |
3079 try the other order. */ | |
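/* For example, a backward loop-closing branch is usually predicted
   taken, so insns from the loop head (its target) are preferred, while
   an unlikely error-handling branch is predicted not taken, so its
   fall-through insns are tried first.  */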
3080 | |
3081 if (prediction > 0) | |
3082 { | |
3083 delay_list | |
3084 = fill_slots_from_thread (insn, condition, insn_at_target, | |
3085 fallthrough_insn, prediction == 2, 1, | |
3086 own_target, | |
3087 slots_to_fill, &slots_filled, delay_list); | |
3088 | |
3089 if (delay_list == 0 && own_fallthrough) | |
3090 { | |
3091 /* Even though we didn't find anything for delay slots, | |
3092 we might have found a redundant insn which we deleted | |
3093 from the thread that was filled. So we have to recompute | |
3094 the next insn at the target. */ | |
3095 target_label = JUMP_LABEL (insn); | |
3096 insn_at_target = next_active_insn (target_label); | |
3097 | |
3098 delay_list | |
3099 = fill_slots_from_thread (insn, condition, fallthrough_insn, | |
3100 insn_at_target, 0, 0, | |
3101 own_fallthrough, | |
3102 slots_to_fill, &slots_filled, | |
3103 delay_list); | |
3104 } | |
3105 } | |
3106 else | |
3107 { | |
3108 if (own_fallthrough) | |
3109 delay_list | |
3110 = fill_slots_from_thread (insn, condition, fallthrough_insn, | |
3111 insn_at_target, 0, 0, | |
3112 own_fallthrough, | |
3113 slots_to_fill, &slots_filled, | |
3114 delay_list); | |
3115 | |
3116 if (delay_list == 0) | |
3117 delay_list | |
3118 = fill_slots_from_thread (insn, condition, insn_at_target, | |
3119 next_active_insn (insn), 0, 1, | |
3120 own_target, | |
3121 slots_to_fill, &slots_filled, | |
3122 delay_list); | |
3123 } | |
3124 | |
3125 if (delay_list) | |
3126 unfilled_slots_base[i] | |
3127 = emit_delay_sequence (insn, delay_list, slots_filled); | |
3128 | |
3129 if (slots_to_fill == slots_filled) | |
3130 unfilled_slots_base[i] = 0; | |
3131 | |
3132 note_delay_statistics (slots_filled, 1); | |
3133 } | |
3134 } | |
3135 | |
3136 static void delete_computation (rtx insn); | |
3137 | |
3138 /* Recursively delete prior insns that compute the value (used only by INSN, | |
3139 which the caller is deleting) stored in the register mentioned by NOTE, | |
3140 a REG_DEAD note associated with INSN. */ | |
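/* For instance (hypothetical registers): if INSN carries a REG_DEAD
   note for r5 and the nearest prior insn setting r5 is  r5 = r6 + r7
   with no intervening uses of r5, that insn can be deleted as well,
   after which the walk recurses on its own REG_DEAD notes.  */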
3141 | |
3142 static void | |
3143 delete_prior_computation (rtx note, rtx insn) | |
3144 { | |
3145 rtx our_prev; | |
3146 rtx reg = XEXP (note, 0); | |
3147 | |
3148 for (our_prev = prev_nonnote_insn (insn); | |
3149 our_prev && (NONJUMP_INSN_P (our_prev) | |
3150 || CALL_P (our_prev)); | |
3151 our_prev = prev_nonnote_insn (our_prev)) | |
3152 { | |
3153 rtx pat = PATTERN (our_prev); | |
3154 | |
3155 /* If we reach a CALL which is not calling a const function | |
3156 or the callee pops the arguments, then give up. */ | |
3157 if (CALL_P (our_prev) | |
3158 && (! RTL_CONST_CALL_P (our_prev) | |
3159 || GET_CODE (pat) != SET || GET_CODE (SET_SRC (pat)) != CALL)) | |
3160 break; | |
3161 | |
3162 /* If we reach a SEQUENCE, it is too complex to try to | |
3163 do anything with it, so give up. We can be run during | |
3164 and after reorg, so SEQUENCE rtl can legitimately show | |
3165 up here. */ | |
3166 if (GET_CODE (pat) == SEQUENCE) | |
3167 break; | |
3168 | |
3169 if (GET_CODE (pat) == USE | |
3170 && NONJUMP_INSN_P (XEXP (pat, 0))) | |
3171 /* reorg creates USEs that look like this. We leave them | |
3172 alone because reorg needs them for its own purposes. */ | |
3173 break; | |
3174 | |
3175 if (reg_set_p (reg, pat)) | |
3176 { | |
3177 if (side_effects_p (pat) && !CALL_P (our_prev)) | |
3178 break; | |
3179 | |
3180 if (GET_CODE (pat) == PARALLEL) | |
3181 { | |
3182 /* If we find a SET of something else, we can't | |
3183 delete the insn. */ | |
3184 | |
3185 int i; | |
3186 | |
3187 for (i = 0; i < XVECLEN (pat, 0); i++) | |
3188 { | |
3189 rtx part = XVECEXP (pat, 0, i); | |
3190 | |
3191 if (GET_CODE (part) == SET | |
3192 && SET_DEST (part) != reg) | |
3193 break; | |
3194 } | |
3195 | |
3196 if (i == XVECLEN (pat, 0)) | |
3197 delete_computation (our_prev); | |
3198 } | |
3199 else if (GET_CODE (pat) == SET | |
3200 && REG_P (SET_DEST (pat))) | |
3201 { | |
3202 int dest_regno = REGNO (SET_DEST (pat)); | |
3203 int dest_endregno = END_REGNO (SET_DEST (pat)); | |
3204 int regno = REGNO (reg); | |
3205 int endregno = END_REGNO (reg); | |
3206 | |
3207 if (dest_regno >= regno | |
3208 && dest_endregno <= endregno) | |
3209 delete_computation (our_prev); | |
3210 | |
3211 /* We may have a multi-word hard register and some, but not | |
3212 all, of the words of the register are needed in subsequent | |
3213 insns. Write REG_UNUSED notes for those parts that were not | |
3214 needed. */ | |
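/* E.g. (hypothetical): OUR_PREV sets the DImode pair (r4,r5) while
   only r4 (REG) dies here.  We record REG_UNUSED r4 on OUR_PREV and
   delete it only once every word it sets, r5 included, carries such
   a note.  */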
3215 else if (dest_regno <= regno | |
3216 && dest_endregno >= endregno) | |
3217 { | |
3218 int i; | |
3219 | |
3220 add_reg_note (our_prev, REG_UNUSED, reg); | |
3221 | |
3222 for (i = dest_regno; i < dest_endregno; i++) | |
3223 if (! find_regno_note (our_prev, REG_UNUSED, i)) | |
3224 break; | |
3225 | |
3226 if (i == dest_endregno) | |
3227 delete_computation (our_prev); | |
3228 } | |
3229 } | |
3230 | |
3231 break; | |
3232 } | |
3233 | |
3234 /* If PAT references the register that dies here, it is an | |
3235 additional use. Hence any prior SET isn't dead. However, this | |
3236 insn becomes the new place for the REG_DEAD note. */ | |
3237 if (reg_overlap_mentioned_p (reg, pat)) | |
3238 { | |
3239 XEXP (note, 1) = REG_NOTES (our_prev); | |
3240 REG_NOTES (our_prev) = note; | |
3241 break; | |
3242 } | |
3243 } | |
3244 } | |
3245 | |
3246 /* Delete INSN and recursively delete insns that compute values used only | |
3247 by INSN. This uses the REG_DEAD notes computed during flow analysis. | |
3248 If we are running before flow.c, we need do nothing since flow.c will | |
3249 delete dead code. We also can't know if the registers being used are | |
3250 dead or not at this point. | |
3251 | |
3252 Otherwise, look at all our REG_DEAD notes. If a previous insn does | |
3253 nothing other than set a register that dies in this insn, we can delete | |
3254 that insn as well. | |
3255 | |
3256 On machines with CC0, if CC0 is used in this insn, we may be able to | |
3257 delete the insn that set it. */ | |
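/* Sketch of the cc0 case below (hypothetical cc0 target): when INSN is
   the conditional branch in

       (set (cc0) (compare (reg r1) (reg r2)))
       (set (pc) (if_then_else (eq (cc0) (const_int 0)) ...))

   the compare exists only to feed the branch, so deleting the branch
   normally lets us delete the compare too.  */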
3258 | |
3259 static void | |
3260 delete_computation (rtx insn) | |
3261 { | |
3262 rtx note, next; | |
3263 | |
3264 #ifdef HAVE_cc0 | |
3265 if (reg_referenced_p (cc0_rtx, PATTERN (insn))) | |
3266 { | |
3267 rtx prev = prev_nonnote_insn (insn); | |
3268 /* We assume that at this stage | |
3269 CC's are always set explicitly | |
3270 and always immediately before the jump that | |
3271 will use them. So if the previous insn | |
3272 exists to set the CC's, delete it | |
3273 (unless it performs auto-increments, etc.). */ | |
3274 if (prev && NONJUMP_INSN_P (prev) | |
3275 && sets_cc0_p (PATTERN (prev))) | |
3276 { | |
3277 if (sets_cc0_p (PATTERN (prev)) > 0 | |
3278 && ! side_effects_p (PATTERN (prev))) | |
3279 delete_computation (prev); | |
3280 else | |
3281 /* Otherwise, show that cc0 won't be used. */ | |
3282 add_reg_note (prev, REG_UNUSED, cc0_rtx); | |
3283 } | |
3284 } | |
3285 #endif | |
3286 | |
3287 for (note = REG_NOTES (insn); note; note = next) | |
3288 { | |
3289 next = XEXP (note, 1); | |
3290 | |
3291 if (REG_NOTE_KIND (note) != REG_DEAD | |
3292 /* Verify that the REG_NOTE is legitimate. */ | |
3293 || !REG_P (XEXP (note, 0))) | |
3294 continue; | |
3295 | |
3296 delete_prior_computation (note, insn); | |
3297 } | |
3298 | |
3299 delete_related_insns (insn); | |
3300 } | |
3301 | |
3302 /* If all INSN does is set the pc, delete it, | |
3303 and delete the insn that set the condition codes for it | |
3304 if that is what the previous insn did. */ | |
3305 | |
3306 static void | |
3307 delete_jump (rtx insn) | |
3308 { | |
3309 rtx set = single_set (insn); | |
3310 | |
3311 if (set && GET_CODE (SET_DEST (set)) == PC) | |
3312 delete_computation (insn); | |
3313 } | |
3314 | |
3315 | |
3316 /* Once we have tried two ways to fill a delay slot, make a pass over the | |
3317 code to try to improve the results and to do such things as more jump | |
3318 threading. */ | |
3319 | |
3320 static void | |
3321 relax_delay_slots (rtx first) | |
3322 { | |
3323 rtx insn, next, pat; | |
3324 rtx trial, delay_insn, target_label; | |
3325 | |
3326 /* Look at every JUMP_INSN and see if we can improve it. */ | |
3327 for (insn = first; insn; insn = next) | |
3328 { | |
3329 rtx other; | |
3330 | |
3331 next = next_active_insn (insn); | |
3332 | |
3333 /* If this is a jump insn, see if it now jumps to a jump, jumps to | |
3334 the next insn, or jumps to a label that is not the last of a | |
3335 group of consecutive labels. */ | |
3336 if (JUMP_P (insn) | |
3337 && (condjump_p (insn) || condjump_in_parallel_p (insn)) | |
3338 && (target_label = JUMP_LABEL (insn)) != 0) | |
3339 { | |
3340 target_label = skip_consecutive_labels (follow_jumps (target_label)); | |
3341 if (target_label == 0) | |
3342 target_label = find_end_label (); | |
3343 | |
3344 if (target_label && next_active_insn (target_label) == next | |
3345 && ! condjump_in_parallel_p (insn)) | |
3346 { | |
3347 delete_jump (insn); | |
3348 continue; | |
3349 } | |
3350 | |
3351 if (target_label && target_label != JUMP_LABEL (insn)) | |
3352 reorg_redirect_jump (insn, target_label); | |
3353 | |
3354 /* See if this jump conditionally branches around an unconditional | |
3355 jump. If so, invert this jump and point it to the target of the | |
3356 second jump. */ | |
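/* I.e. (sketch):
       if (cc) goto L1;  goto L2;  L1: ...
   becomes
       if (!cc) goto L2;  L1: ...  */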
3357 if (next && JUMP_P (next) | |
3358 && any_condjump_p (insn) | |
3359 && (simplejump_p (next) || GET_CODE (PATTERN (next)) == RETURN) | |
3360 && target_label | |
3361 && next_active_insn (target_label) == next_active_insn (next) | |
3362 && no_labels_between_p (insn, next)) | |
3363 { | |
3364 rtx label = JUMP_LABEL (next); | |
3365 | |
3366 /* Be careful how we do this to avoid deleting code or | |
3367 labels that are momentarily dead. See similar optimization | |
3368 in jump.c. | |
3369 | |
3370 We also need to ensure we properly handle the case when | |
3371 invert_jump fails. */ | |
3372 | |
3373 ++LABEL_NUSES (target_label); | |
3374 if (label) | |
3375 ++LABEL_NUSES (label); | |
3376 | |
3377 if (invert_jump (insn, label, 1)) | |
3378 { | |
3379 delete_related_insns (next); | |
3380 next = insn; | |
3381 } | |
3382 | |
3383 if (label) | |
3384 --LABEL_NUSES (label); | |
3385 | |
3386 if (--LABEL_NUSES (target_label) == 0) | |
3387 delete_related_insns (target_label); | |
3388 | |
3389 continue; | |
3390 } | |
3391 } | |
3392 | |
3393 /* If this is an unconditional jump and the previous insn is a | |
3394 conditional jump, try reversing the condition of the previous | |
3395 insn and swapping our targets. The next pass might be able to | |
3396 fill the slots. | |
3397 | |
3398 Don't do this if we expect the conditional branch to be true, because | |
3399 we would then be making the more common case longer. */ | |
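/* Sketch:   if (cc) goto A;  goto B;   with the conditional branch
   predicted not taken becomes   if (!cc) goto B;  goto A;   so that a
   later pass may fill the delay slots of the remaining unconditional
   jump.  */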
3400 | |
3401 if (JUMP_P (insn) | |
3402 && (simplejump_p (insn) || GET_CODE (PATTERN (insn)) == RETURN) | |
3403 && (other = prev_active_insn (insn)) != 0 | |
3404 && any_condjump_p (other) | |
3405 && no_labels_between_p (other, insn) | |
3406 && 0 > mostly_true_jump (other, | |
3407 get_branch_condition (other, | |
3408 JUMP_LABEL (other)))) | |
3409 { | |
3410 rtx other_target = JUMP_LABEL (other); | |
3411 target_label = JUMP_LABEL (insn); | |
3412 | |
3413 if (invert_jump (other, target_label, 0)) | |
3414 reorg_redirect_jump (insn, other_target); | |
3415 } | |
3416 | |
3417 /* Now look only at cases where we have filled a delay slot. */ | |
3418 if (!NONJUMP_INSN_P (insn) | |
3419 || GET_CODE (PATTERN (insn)) != SEQUENCE) | |
3420 continue; | |
3421 | |
3422 pat = PATTERN (insn); | |
3423 delay_insn = XVECEXP (pat, 0, 0); | |
3424 | |
3425 /* See if the first insn in the delay slot is redundant with some | |
3426 previous insn. Remove it from the delay slot if so; then set up | |
3427 to reprocess this insn. */ | |
3428 if (redundant_insn (XVECEXP (pat, 0, 1), delay_insn, 0)) | |
3429 { | |
3430 delete_from_delay_slot (XVECEXP (pat, 0, 1)); | |
3431 next = prev_active_insn (next); | |
3432 continue; | |
3433 } | |
3434 | |
3435 /* See if we have a RETURN insn with a filled delay slot followed | |
3436 by a RETURN insn with an unfilled delay slot. If so, we can delete | |
3437 the first RETURN (but not its delay insn). This gives the same | |
3438 effect in fewer instructions. | |
3439 | |
3440 Only do so if optimizing for size since this results in slower, but | |
3441 smaller code. */ | |
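/* Sketch:
       RETURN (delay: X);  RETURN
   becomes
       X;  RETURN
   with X re-emitted inline so it falls through into the second
   RETURN.  */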
3442 if (optimize_function_for_size_p (cfun) | |
3443 && GET_CODE (PATTERN (delay_insn)) == RETURN | |
3444 && next | |
3445 && JUMP_P (next) | |
3446 && GET_CODE (PATTERN (next)) == RETURN) | |
3447 { | |
3448 rtx after; | |
3449 int i; | |
3450 | |
3451 /* Delete the RETURN and just execute the delay list insns. | |
3452 | |
3453 We do this by deleting the INSN containing the SEQUENCE, then | |
3454 re-emitting the insns separately, and then deleting the RETURN. | |
3455 This allows the count of the jump target to be properly | |
3456 decremented. */ | |
3457 | |
3458 /* Clear the from target bit, since these insns are no longer | |
3459 in delay slots. */ | |
3460 for (i = 0; i < XVECLEN (pat, 0); i++) | |
3461 INSN_FROM_TARGET_P (XVECEXP (pat, 0, i)) = 0; | |
3462 | |
3463 trial = PREV_INSN (insn); | |
3464 delete_related_insns (insn); | |
3465 gcc_assert (GET_CODE (pat) == SEQUENCE); | |
3466 after = trial; | |
3467 for (i = 0; i < XVECLEN (pat, 0); i++) | |
3468 { | |
3469 rtx this_insn = XVECEXP (pat, 0, i); | |
3470 add_insn_after (this_insn, after, NULL); | |
3471 after = this_insn; | |
3472 } | |
3473 delete_scheduled_jump (delay_insn); | |
3474 continue; | |
3475 } | |
3476 | |
3477 /* Now look only at the cases where we have a filled JUMP_INSN. */ | |
3478 if (!JUMP_P (XVECEXP (PATTERN (insn), 0, 0)) | |
3479 || ! (condjump_p (XVECEXP (PATTERN (insn), 0, 0)) | |
3480 || condjump_in_parallel_p (XVECEXP (PATTERN (insn), 0, 0)))) | |
3481 continue; | |
3482 | |
3483 target_label = JUMP_LABEL (delay_insn); | |
3484 | |
3485 if (target_label) | |
3486 { | |
3487 /* If this jump goes to another unconditional jump, thread it, but | |
3488 don't convert a jump into a RETURN here. */ | |
3489 trial = skip_consecutive_labels (follow_jumps (target_label)); | |
3490 if (trial == 0) | |
3491 trial = find_end_label (); | |
3492 | |
3493 if (trial && trial != target_label | |
3494 && redirect_with_delay_slots_safe_p (delay_insn, trial, insn)) | |
3495 { | |
3496 reorg_redirect_jump (delay_insn, trial); | |
3497 target_label = trial; | |
3498 } | |
3499 | |
3500 /* If the first insn at TARGET_LABEL is redundant with a previous | |
3501 insn, redirect the jump to the following insn and process again. */ | |
3502 trial = next_active_insn (target_label); | |
3503 if (trial && GET_CODE (PATTERN (trial)) != SEQUENCE | |
3504 && redundant_insn (trial, insn, 0) | |
3505 && ! can_throw_internal (trial)) | |
3506 { | |
3507 /* Figure out where to emit the special USE insn so we don't | |
3508 later incorrectly compute register liveness info. */ | |
3509 rtx tmp = next_active_insn (trial); | |
3510 if (tmp == 0) | |
3511 tmp = find_end_label (); | |
3512 | |
3513 if (tmp) | |
3514 { | |
3515 /* Insert the special USE insn and update dataflow info. */ | |
3516 update_block (trial, tmp); | |
3517 | |
3518 /* Now emit a label before the special USE insn, and | |
3519 redirect our jump to the new label. */ | |
3520 target_label = get_label_before (PREV_INSN (tmp)); | |
3521 reorg_redirect_jump (delay_insn, target_label); | |
3522 next = insn; | |
3523 continue; | |
3524 } | |
3525 } | |
3526 | |
3527 /* Similarly, if it is an unconditional jump with one insn in its | |
3528 delay list and that insn is redundant, thread the jump. */ | |
3529 if (trial && GET_CODE (PATTERN (trial)) == SEQUENCE | |
3530 && XVECLEN (PATTERN (trial), 0) == 2 | |
3531 && JUMP_P (XVECEXP (PATTERN (trial), 0, 0)) | |
3532 && (simplejump_p (XVECEXP (PATTERN (trial), 0, 0)) | |
3533 || GET_CODE (PATTERN (XVECEXP (PATTERN (trial), 0, 0))) == RETURN) | |
3534 && redundant_insn (XVECEXP (PATTERN (trial), 0, 1), insn, 0)) | |
3535 { | |
3536 target_label = JUMP_LABEL (XVECEXP (PATTERN (trial), 0, 0)); | |
3537 if (target_label == 0) | |
3538 target_label = find_end_label (); | |
3539 | |
3540 if (target_label | |
3541 && redirect_with_delay_slots_safe_p (delay_insn, target_label, | |
3542 insn)) | |
3543 { | |
3544 reorg_redirect_jump (delay_insn, target_label); | |
3545 next = insn; | |
3546 continue; | |
3547 } | |
3548 } | |
3549 } | |
3550 | |
3551 if (! INSN_ANNULLED_BRANCH_P (delay_insn) | |
3552 && prev_active_insn (target_label) == insn | |
3553 && ! condjump_in_parallel_p (delay_insn) | |
3554 #ifdef HAVE_cc0 | |
3555 /* If the last insn in the delay slot sets CC0 for some insn, | |
3556 various code assumes that it is in a delay slot. We could | |
3557 put it back where it belonged and delete the register notes, | |
3558 but it doesn't seem worthwhile in this uncommon case. */ | |
3559 && ! find_reg_note (XVECEXP (pat, 0, XVECLEN (pat, 0) - 1), | |
3560 REG_CC_USER, NULL_RTX) | |
3561 #endif | |
3562 ) | |
3563 { | |
3564 rtx after; | |
3565 int i; | |
3566 | |
3567 /* All this insn does is execute its delay list and jump to the | |
3568 following insn. So delete the jump and just execute the delay | |
3569 list insns. | |
3570 | |
3571 We do this by deleting the INSN containing the SEQUENCE, then | |
3572 re-emitting the insns separately, and then deleting the jump. | |
3573 This allows the count of the jump target to be properly | |
3574 decremented. */ | |
3575 | |
3576 /* Clear the from target bit, since these insns are no longer | |
3577 in delay slots. */ | |
3578 for (i = 0; i < XVECLEN (pat, 0); i++) | |
3579 INSN_FROM_TARGET_P (XVECEXP (pat, 0, i)) = 0; | |
3580 | |
3581 trial = PREV_INSN (insn); | |
3582 delete_related_insns (insn); | |
3583 gcc_assert (GET_CODE (pat) == SEQUENCE); | |
3584 after = trial; | |
3585 for (i = 0; i < XVECLEN (pat, 0); i++) | |
3586 { | |
3587 rtx this_insn = XVECEXP (pat, 0, i); | |
3588 add_insn_after (this_insn, after, NULL); | |
3589 after = this_insn; | |
3590 } | |
3591 delete_scheduled_jump (delay_insn); | |
3592 continue; | |
3593 } | |
3594 | |
3595 /* See if this is an unconditional jump around a single insn which is | |
3596 identical to the one in its delay slot. In this case, we can just | |
3597 delete the branch and the insn in its delay slot. */ | |
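/* Sketch:   jump L (delay: X);  X';  L: ...   where X' is identical to
   X -- the jump and its slot copy can go, and X' simply falls through
   into L with the same effect.  */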
3598 if (next && NONJUMP_INSN_P (next) | |
3599 && prev_label (next_active_insn (next)) == target_label | |
3600 && simplejump_p (insn) | |
3601 && XVECLEN (pat, 0) == 2 | |
3602 && rtx_equal_p (PATTERN (next), PATTERN (XVECEXP (pat, 0, 1)))) | |
3603 { | |
3604 delete_related_insns (insn); | |
3605 continue; | |
3606 } | |
3607 | |
3608 /* See if this jump (with its delay slots) conditionally branches | |
3609 around an unconditional jump (without delay slots). If so, invert | |
3610 this jump and point it to the target of the second jump. We cannot | |
3611 do this for annulled jumps, though. Again, don't convert a jump to | |
3612 a RETURN here. */ | |
3613 if (! INSN_ANNULLED_BRANCH_P (delay_insn) | |
3614 && any_condjump_p (delay_insn) | |
3615 && next && JUMP_P (next) | |
3616 && (simplejump_p (next) || GET_CODE (PATTERN (next)) == RETURN) | |
3617 && next_active_insn (target_label) == next_active_insn (next) | |
3618 && no_labels_between_p (insn, next)) | |
3619 { | |
3620 rtx label = JUMP_LABEL (next); | |
3621 rtx old_label = JUMP_LABEL (delay_insn); | |
3622 | |
3623 if (label == 0) | |
3624 label = find_end_label (); | |
3625 | |
3626 /* find_end_label can generate a new label. Check this first. */ | |
3627 if (label | |
3628 && no_labels_between_p (insn, next) | |
3629 && redirect_with_delay_slots_safe_p (delay_insn, label, insn)) | |
3630 { | |
3631 /* Be careful how we do this to avoid deleting code or labels | |
3632 that are momentarily dead. See similar optimization in | |
3633 jump.c */ | |
3634 if (old_label) | |
3635 ++LABEL_NUSES (old_label); | |
3636 | |
3637 if (invert_jump (delay_insn, label, 1)) | |
3638 { | |
3639 int i; | |
3640 | |
3641 /* Must update the INSN_FROM_TARGET_P bits now that | |
3642 the branch is reversed, so that mark_target_live_regs | |
3643 will handle the delay slot insn correctly. */ | |
3644 for (i = 1; i < XVECLEN (PATTERN (insn), 0); i++) | |
3645 { | |
3646 rtx slot = XVECEXP (PATTERN (insn), 0, i); | |
3647 INSN_FROM_TARGET_P (slot) = ! INSN_FROM_TARGET_P (slot); | |
3648 } | |
3649 | |
3650 delete_related_insns (next); | |
3651 next = insn; | |
3652 } | |
3653 | |
3654 if (old_label && --LABEL_NUSES (old_label) == 0) | |
3655 delete_related_insns (old_label); | |
3656 continue; | |
3657 } | |
3658 } | |
3659 | |
3660 /* If we own the thread opposite the way this insn branches, see if we | |
3661 can merge its delay slots with following insns. */ | |
3662 if (INSN_FROM_TARGET_P (XVECEXP (pat, 0, 1)) | |
3663 && own_thread_p (NEXT_INSN (insn), 0, 1)) | |
3664 try_merge_delay_insns (insn, next); | |
3665 else if (! INSN_FROM_TARGET_P (XVECEXP (pat, 0, 1)) | |
3666 && own_thread_p (target_label, target_label, 0)) | |
3667 try_merge_delay_insns (insn, next_active_insn (target_label)); | |
3668 | |
3669 /* If we get here, we haven't deleted INSN. But we may have deleted | |
3670 NEXT, so recompute it. */ | |
3671 next = next_active_insn (insn); | |
3672 } | |
3673 } | |
3674 | |
3675 #ifdef HAVE_return | |
3676 | |
3677 /* Look for filled jumps to the end of function label. We can try to convert | |
3678 them into RETURN insns if the insns in the delay slot are valid for the | |
3679 RETURN as well. */ | |
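/* E.g. a filled  jump end_of_function_label (delay: X)  can become
   RETURN (delay: X)  when X is also valid in a RETURN's delay slot,
   eliminating the jump to the final return sequence.  */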
3680 | |
3681 static void | |
3682 make_return_insns (rtx first) | |
3683 { | |
3684 rtx insn, jump_insn, pat; | |
3685 rtx real_return_label = end_of_function_label; | |
3686 int slots, i; | |
3687 | |
3688 #ifdef DELAY_SLOTS_FOR_EPILOGUE | |
3689 /* If a previous pass filled delay slots in the epilogue, things get a | |
3690 bit more complicated, as those filler insns would generally (without | |
3691 data flow analysis) have to be executed after any existing branch | |
3692 delay slot filler insns. It is also unknown whether such a | |
3693 transformation would actually be profitable. Note that the existing | |
3694 code only cares for branches with (some) filled delay slots. */ | |
3695 if (crtl->epilogue_delay_list != NULL) | |
3696 return; | |
3697 #endif | |
3698 | |
3699 /* See if there is a RETURN insn in the function other than the one we | |
3700 made for END_OF_FUNCTION_LABEL. If so, arrange for anything we can't | |
3701 change into a RETURN to jump to it instead. */ | |
3702 for (insn = first; insn; insn = NEXT_INSN (insn)) | |
3703 if (JUMP_P (insn) && GET_CODE (PATTERN (insn)) == RETURN) | |
3704 { | |
3705 real_return_label = get_label_before (insn); | |
3706 break; | |
3707 } | |
3708 | |
3709 /* Show an extra usage of REAL_RETURN_LABEL so it won't go away if it | |
3710 was equal to END_OF_FUNCTION_LABEL. */ | |
3711 LABEL_NUSES (real_return_label)++; | |
3712 | |
3713 /* Clear the list of insns to fill so we can use it. */ | |
3714 obstack_free (&unfilled_slots_obstack, unfilled_firstobj); | |
3715 | |
3716 for (insn = first; insn; insn = NEXT_INSN (insn)) | |
3717 { | |
3718 int flags; | |
3719 | |
3720 /* Only look at filled JUMP_INSNs that go to the end of function | |
3721 label. */ | |
3722 if (!NONJUMP_INSN_P (insn) | |
3723 || GET_CODE (PATTERN (insn)) != SEQUENCE | |
3724 || !JUMP_P (XVECEXP (PATTERN (insn), 0, 0)) | |
3725 || JUMP_LABEL (XVECEXP (PATTERN (insn), 0, 0)) != end_of_function_label) | |
3726 continue; | |
3727 | |
3728 pat = PATTERN (insn); | |
3729 jump_insn = XVECEXP (pat, 0, 0); | |
3730 | |
3731 /* If we can't make the jump into a RETURN, try to redirect it to the best | |
3732 RETURN and go on to the next insn. */ | |
3733 if (! reorg_redirect_jump (jump_insn, NULL_RTX)) | |
3734 { | |
3735 /* Make sure redirecting the jump will not invalidate the delay | |
3736 slot insns. */ | |
3737 if (redirect_with_delay_slots_safe_p (jump_insn, | |
3738 real_return_label, | |
3739 insn)) | |
3740 reorg_redirect_jump (jump_insn, real_return_label); | |
3741 continue; | |
3742 } | |
3743 | |
3744 /* See if this RETURN can accept the insns currently in its delay slot. | |
3745 It can if it has at least as many slots and the contents | |
3746 of each are valid. */ | |
3747 | |
3748 flags = get_jump_flags (jump_insn, JUMP_LABEL (jump_insn)); | |
3749 slots = num_delay_slots (jump_insn); | |
3750 if (slots >= XVECLEN (pat, 0) - 1) | |
3751 { | |
3752 for (i = 1; i < XVECLEN (pat, 0); i++) | |
3753 if (! ( | |
3754 #ifdef ANNUL_IFFALSE_SLOTS | |
3755 (INSN_ANNULLED_BRANCH_P (jump_insn) | |
3756 && INSN_FROM_TARGET_P (XVECEXP (pat, 0, i))) | |
3757 ? eligible_for_annul_false (jump_insn, i - 1, | |
3758 XVECEXP (pat, 0, i), flags) : | |
3759 #endif | |
3760 #ifdef ANNUL_IFTRUE_SLOTS | |
3761 (INSN_ANNULLED_BRANCH_P (jump_insn) | |
3762 && ! INSN_FROM_TARGET_P (XVECEXP (pat, 0, i))) | |
3763 ? eligible_for_annul_true (jump_insn, i - 1, | |
3764 XVECEXP (pat, 0, i), flags) : | |
3765 #endif | |
3766 eligible_for_delay (jump_insn, i - 1, | |
3767 XVECEXP (pat, 0, i), flags))) | |
3768 break; | |
3769 } | |
3770 else | |
3771 i = 0; | |
3772 | |
3773 if (i == XVECLEN (pat, 0)) | |
3774 continue; | |
3775 | |
3776 /* We have to do something with this insn. If it is an unconditional | |
3777 RETURN, delete the SEQUENCE and output the individual insns, | |
3778 followed by the RETURN. Then set things up so we try to find | |
3779 insns for its delay slots, if it needs some. */ | |
3780 if (GET_CODE (PATTERN (jump_insn)) == RETURN) | |
3781 { | |
3782 rtx prev = PREV_INSN (insn); | |
3783 | |
3784 delete_related_insns (insn); | |
3785 for (i = 1; i < XVECLEN (pat, 0); i++) | |
3786 prev = emit_insn_after (PATTERN (XVECEXP (pat, 0, i)), prev); | |
3787 | |
3788 insn = emit_jump_insn_after (PATTERN (jump_insn), prev); | |
3789 emit_barrier_after (insn); | |
3790 | |
3791 if (slots) | |
3792 obstack_ptr_grow (&unfilled_slots_obstack, insn); | |
3793 } | |
3794 else | |
3795 /* It is probably more efficient to keep this with its current | |
3796 delay slot as a branch to a RETURN. */ | |
3797 reorg_redirect_jump (jump_insn, real_return_label); | |
3798 } | |
3799 | |
3800 /* Now delete REAL_RETURN_LABEL if we never used it. Then try to fill any | |
3801 new delay slots we have created. */ | |
3802 if (--LABEL_NUSES (real_return_label) == 0) | |
3803 delete_related_insns (real_return_label); | |
3804 | |
3805 fill_simple_delay_slots (1); | |
3806 fill_simple_delay_slots (0); | |
3807 } | |
3808 #endif | |
3809 | |
3810 /* Try to find insns to place in delay slots. */ | |
3811 | |
3812 void | |
3813 dbr_schedule (rtx first) | |
3814 { | |
3815 rtx insn, next, epilogue_insn = 0; | |
3816 int i; | |
3817 | |
3818 /* If the current function has no insns other than the prologue and | |
3819 epilogue, then do not try to fill any delay slots. */ | |
3820 if (n_basic_blocks == NUM_FIXED_BLOCKS) | |
3821 return; | |
3822 | |
3823 /* Find the highest INSN_UID and allocate and initialize our map from | |
3824 INSN_UID's to position in code. */ | |
3825 for (max_uid = 0, insn = first; insn; insn = NEXT_INSN (insn)) | |
3826 { | |
3827 if (INSN_UID (insn) > max_uid) | |
3828 max_uid = INSN_UID (insn); | |
3829 if (NOTE_P (insn) | |
3830 && NOTE_KIND (insn) == NOTE_INSN_EPILOGUE_BEG) | |
3831 epilogue_insn = insn; | |
3832 } | |
3833 | |
3834 uid_to_ruid = XNEWVEC (int, max_uid + 1); | |
3835 for (i = 0, insn = first; insn; i++, insn = NEXT_INSN (insn)) | |
3836 uid_to_ruid[INSN_UID (insn)] = i; | |
3837 | |
3838 /* Initialize the list of insns that need filling. */ | |
3839 if (unfilled_firstobj == 0) | |
3840 { | |
3841 gcc_obstack_init (&unfilled_slots_obstack); | |
3842 unfilled_firstobj = XOBNEWVAR (&unfilled_slots_obstack, rtx, 0); | |
3843 } | |
3844 | |
3845 for (insn = next_active_insn (first); insn; insn = next_active_insn (insn)) | |
3846 { | |
3847 rtx target; | |
3848 | |
3849 INSN_ANNULLED_BRANCH_P (insn) = 0; | |
3850 INSN_FROM_TARGET_P (insn) = 0; | |
3851 | |
3852 /* Skip vector tables. We can't get attributes for them. */ | |
3853 if (JUMP_P (insn) | |
3854 && (GET_CODE (PATTERN (insn)) == ADDR_VEC | |
3855 || GET_CODE (PATTERN (insn)) == ADDR_DIFF_VEC)) | |
3856 continue; | |
3857 | |
3858 if (num_delay_slots (insn) > 0) | |
3859 obstack_ptr_grow (&unfilled_slots_obstack, insn); | |
3860 | |
3861 /* Ensure all jumps go to the last of a set of consecutive labels. */ | |
3862 if (JUMP_P (insn) | |
3863 && (condjump_p (insn) || condjump_in_parallel_p (insn)) | |
3864 && JUMP_LABEL (insn) != 0 | |
3865 && ((target = skip_consecutive_labels (JUMP_LABEL (insn))) | |
3866 != JUMP_LABEL (insn))) | |
3867 redirect_jump (insn, target, 1); | |
3868 } | |
3869 | |
3870 init_resource_info (epilogue_insn); | |
3871 | |
3872 /* Show we haven't computed an end-of-function label yet. */ | |
3873 end_of_function_label = 0; | |
3874 | |
3875 /* Initialize the statistics for this function. */ | |
3876 memset (num_insns_needing_delays, 0, sizeof num_insns_needing_delays); | |
3877 memset (num_filled_delays, 0, sizeof num_filled_delays); | |
3878 | |
3879 /* Now do the delay slot filling. Try everything twice in case earlier | |
3880 changes make more slots fillable. */ | |
3881 | |
3882 for (reorg_pass_number = 0; | |
3883 reorg_pass_number < MAX_REORG_PASSES; | |
3884 reorg_pass_number++) | |
3885 { | |
3886 fill_simple_delay_slots (1); | |
3887 fill_simple_delay_slots (0); | |
3888 fill_eager_delay_slots (); | |
3889 relax_delay_slots (first); | |
3890 } | |
3891 | |
3892 /* If we made an end of function label, indicate that it is now | |
3893 safe to delete it by undoing our prior adjustment to LABEL_NUSES. | |
3894 If it is now unused, delete it. */ | |
3895 if (end_of_function_label && --LABEL_NUSES (end_of_function_label) == 0) | |
3896 delete_related_insns (end_of_function_label); | |
3897 | |
3898 #ifdef HAVE_return | |
3899 if (HAVE_return && end_of_function_label != 0) | |
3900 make_return_insns (first); | |
3901 #endif | |
3902 | |
3903 /* Delete any USE insns made by update_block; subsequent passes don't need | |
3904 them or know how to deal with them. */ | |
3905 for (insn = first; insn; insn = next) | |
3906 { | |
3907 next = NEXT_INSN (insn); | |
3908 | |
3909 if (NONJUMP_INSN_P (insn) && GET_CODE (PATTERN (insn)) == USE | |
3910 && INSN_P (XEXP (PATTERN (insn), 0))) | |
3911 next = delete_related_insns (insn); | |
3912 } | |
3913 | |
3914 obstack_free (&unfilled_slots_obstack, unfilled_firstobj); | |
3915 | |
3916 /* It is not clear why the line below is needed, but it does seem to be. */ | |
3917 unfilled_firstobj = XOBNEWVAR (&unfilled_slots_obstack, rtx, 0); | |
3918 | |
3919 if (dump_file) | |
3920 { | |
3921 int i, j, need_comma; | |
3922 int total_delay_slots[MAX_DELAY_HISTOGRAM + 1]; | |
3923 int total_annul_slots[MAX_DELAY_HISTOGRAM + 1]; | |
3924 | |
3925 for (reorg_pass_number = 0; | |
3926 reorg_pass_number < MAX_REORG_PASSES; | |
3927 reorg_pass_number++) | |
3928 { | |
3929 fprintf (dump_file, ";; Reorg pass #%d:\n", reorg_pass_number + 1); | |
3930 for (i = 0; i < NUM_REORG_FUNCTIONS; i++) | |
3931 { | |
3932 need_comma = 0; | |
3933 fprintf (dump_file, ";; Reorg function #%d\n", i); | |
3934 | |
3935 fprintf (dump_file, ";; %d insns needing delay slots\n;; ", | |
3936 num_insns_needing_delays[i][reorg_pass_number]); | |
3937 | |
3938 for (j = 0; j < MAX_DELAY_HISTOGRAM + 1; j++) | |
3939 if (num_filled_delays[i][j][reorg_pass_number]) | |
3940 { | |
3941 if (need_comma) | |
3942 fprintf (dump_file, ", "); | |
3943 need_comma = 1; | |
3944 fprintf (dump_file, "%d got %d delays", | |
3945 num_filled_delays[i][j][reorg_pass_number], j); | |
3946 } | |
3947 fprintf (dump_file, "\n"); | |
3948 } | |
3949 } | |
3950 memset (total_delay_slots, 0, sizeof total_delay_slots); | |
3951 memset (total_annul_slots, 0, sizeof total_annul_slots); | |
3952 for (insn = first; insn; insn = NEXT_INSN (insn)) | |
3953 { | |
3954 if (! INSN_DELETED_P (insn) | |
3955 && NONJUMP_INSN_P (insn) | |
3956 && GET_CODE (PATTERN (insn)) != USE | |
3957 && GET_CODE (PATTERN (insn)) != CLOBBER) | |
3958 { | |
3959 if (GET_CODE (PATTERN (insn)) == SEQUENCE) | |
3960 { | |
3961 j = XVECLEN (PATTERN (insn), 0) - 1; | |
3962 if (j > MAX_DELAY_HISTOGRAM) | |
3963 j = MAX_DELAY_HISTOGRAM; | |
3964 if (INSN_ANNULLED_BRANCH_P (XVECEXP (PATTERN (insn), 0, 0))) | |
3965 total_annul_slots[j]++; | |
3966 else | |
3967 total_delay_slots[j]++; | |
3968 } | |
3969 else if (num_delay_slots (insn) > 0) | |
3970 total_delay_slots[0]++; | |
3971 } | |
3972 } | |
3973 fprintf (dump_file, ";; Reorg totals: "); | |
3974 need_comma = 0; | |
3975 for (j = 0; j < MAX_DELAY_HISTOGRAM + 1; j++) | |
3976 { | |
3977 if (total_delay_slots[j]) | |
3978 { | |
3979 if (need_comma) | |
3980 fprintf (dump_file, ", "); | |
3981 need_comma = 1; | |
3982 fprintf (dump_file, "%d got %d delays", total_delay_slots[j], j); | |
3983 } | |
3984 } | |
3985 fprintf (dump_file, "\n"); | |
3986 #if defined (ANNUL_IFTRUE_SLOTS) || defined (ANNUL_IFFALSE_SLOTS) | |
3987 fprintf (dump_file, ";; Reorg annuls: "); | |
3988 need_comma = 0; | |
3989 for (j = 0; j < MAX_DELAY_HISTOGRAM + 1; j++) | |
3990 { | |
3991 if (total_annul_slots[j]) | |
3992 { | |
3993 if (need_comma) | |
3994 fprintf (dump_file, ", "); | |
3995 need_comma = 1; | |
3996 fprintf (dump_file, "%d got %d delays", total_annul_slots[j], j); | |
3997 } | |
3998 } | |
3999 fprintf (dump_file, "\n"); | |
4000 #endif | |
4001 fprintf (dump_file, "\n"); | |
4002 } | |
4003 | |
4004 /* For all JUMP insns, fill in branch prediction notes, so that during | |
4005 assembler output a target can set branch prediction bits in the code. | |
4006 We have to do this now, as up until this point the destinations of | |
4007 JUMPs can be moved around and changed, but beyond this point that | |
4008 cannot happen. */ | |
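/* On SPARC V9, for example, such notes let the assembler-output code
   choose between the ",pt" (predict taken) and ",pn" (predict not
   taken) forms of conditional branches.  */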
4009 for (insn = first; insn; insn = NEXT_INSN (insn)) | |
4010 { | |
4011 int pred_flags; | |
4012 | |
4013 if (NONJUMP_INSN_P (insn)) | |
4014 { | |
4015 rtx pat = PATTERN (insn); | |
4016 | |
4017 if (GET_CODE (pat) == SEQUENCE) | |
4018 insn = XVECEXP (pat, 0, 0); | |
4019 } | |
4020 if (!JUMP_P (insn)) | |
4021 continue; | |
4022 | |
4023 pred_flags = get_jump_flags (insn, JUMP_LABEL (insn)); | |
4024 add_reg_note (insn, REG_BR_PRED, GEN_INT (pred_flags)); | |
4025 } | |
4026 free_resource_info (); | |
4027 free (uid_to_ruid); | |
4028 #ifdef DELAY_SLOTS_FOR_EPILOGUE | |
4029 /* The SPARC assembler, for instance, emits a warning when debug info | |
4030 is output into a delay slot. */ | |
4031 { | |
4032 rtx link; | |
4033 | |
4034 for (link = crtl->epilogue_delay_list; | |
4035 link; | |
4036 link = XEXP (link, 1)) | |
4037 INSN_LOCATOR (XEXP (link, 0)) = 0; | |
4038 } | |
4039 | |
4040 #endif | |
4041 crtl->dbr_scheduled_p = true; | |
4042 } | |
4043 #endif /* DELAY_SLOTS */ | |
4044 | |
4045 static bool | |
4046 gate_handle_delay_slots (void) | |
4047 { | |
4048 #ifdef DELAY_SLOTS | |
4049 /* At -O0 dataflow info isn't updated after RA. */ | |
4050 return optimize > 0 && flag_delayed_branch && !crtl->dbr_scheduled_p; | |
4051 #else | |
4052 return 0; | |
4053 #endif | |
4054 } | |
4055 | |
4056 /* Run delay slot optimization. */ | |
4057 static unsigned int | |
4058 rest_of_handle_delay_slots (void) | |
4059 { | |
4060 #ifdef DELAY_SLOTS | |
4061 dbr_schedule (get_insns ()); | |
4062 #endif | |
4063 return 0; | |
4064 } | |
4065 | |
4066 struct rtl_opt_pass pass_delay_slots = | |
4067 { | |
4068 { | |
4069 RTL_PASS, | |
4070 "dbr", /* name */ | |
4071 gate_handle_delay_slots, /* gate */ | |
4072 rest_of_handle_delay_slots, /* execute */ | |
4073 NULL, /* sub */ | |
4074 NULL, /* next */ | |
4075 0, /* static_pass_number */ | |
4076 TV_DBR_SCHED, /* tv_id */ | |
4077 0, /* properties_required */ | |
4078 0, /* properties_provided */ | |
4079 0, /* properties_destroyed */ | |
4080 0, /* todo_flags_start */ | |
4081 TODO_dump_func | | |
4082 TODO_ggc_collect /* todo_flags_finish */ | |
4083 } | |
4084 }; | |
4085 | |
4086 /* Machine dependent reorg pass. */ | |
4087 static bool | |
4088 gate_handle_machine_reorg (void) | |
4089 { | |
4090 return targetm.machine_dependent_reorg != 0; | |
4091 } | |
4092 | |
4093 | |
4094 static unsigned int | |
4095 rest_of_handle_machine_reorg (void) | |
4096 { | |
4097 targetm.machine_dependent_reorg (); | |
4098 return 0; | |
4099 } | |
4100 | |
4101 struct rtl_opt_pass pass_machine_reorg = | |
4102 { | |
4103 { | |
4104 RTL_PASS, | |
4105 "mach", /* name */ | |
4106 gate_handle_machine_reorg, /* gate */ | |
4107 rest_of_handle_machine_reorg, /* execute */ | |
4108 NULL, /* sub */ | |
4109 NULL, /* next */ | |
4110 0, /* static_pass_number */ | |
4111 TV_MACH_DEP, /* tv_id */ | |
4112 0, /* properties_required */ | |
4113 0, /* properties_provided */ | |
4114 0, /* properties_destroyed */ | |
4115 0, /* todo_flags_start */ | |
4116 TODO_dump_func | | |
4117 TODO_ggc_collect /* todo_flags_finish */ | |
4118 } | |
4119 }; |